Search (89 results, page 1 of 5)

  • × type_ss:"a"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
  1. Matylonek, J.C.; Ottow, C.; Reese, T.: Organizing ready reference and administrative information with the reference desk manager (2001) 0.02
    0.022577291 = product of:
      0.11288646 = sum of:
        0.03498863 = weight(_text_:web in 1156) [ClassicSimilarity], result of:
          0.03498863 = score(doc=1156,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.37471575 = fieldWeight in 1156, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1156)
        0.07789783 = weight(_text_:log in 1156) [ClassicSimilarity], result of:
          0.07789783 = score(doc=1156,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 1156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1156)
      0.2 = coord(2/10)
    
    Abstract
    Non-academic questions regarding special services, phone numbers, websites, library policies, current procedures, technical notices, and other pertinent local institutional information are often asked at the academic library reference desk. These frequent and urgent information requests require tools and resources to answer efficiently. Although ready reference collections at the desk provide a tool for academic information, specialized local information resources are more difficult to create and maintain. As reference desk responsibilities become increasingly complex and communication becomes more problematic, a web database to collect and manage this non-academic, local information can be very useful. At Oregon State University, librarians in the Reference Services Management group created a custom-designed web-log bulletin board to deal with this non-academic, local information. The resulting database provides reference librarians with a one-stop location for the information and makes it easier for them to update the information, via email, as conditions, procedures, and information needs change in their busy, highly computerized information commons.
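    The indented breakdowns above are Lucene explain trees for the ClassicSimilarity scoring model: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), the contributions are summed, and the sum is scaled by a coordination factor for the fraction of query clauses that matched. A minimal sketch that reproduces the numbers of entry 1 (the function name and structure are illustrative, not Lucene's API):

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          """One weight(_text_:term) node of a ClassicSimilarity explain tree."""
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.028611459                  # taken from the listing above
      web = term_weight(freq=6.0, idf=3.2635105, query_norm=QUERY_NORM, field_norm=0.046875)
      log = term_weight(freq=2.0, idf=6.4086204, query_norm=QUERY_NORM, field_norm=0.046875)
      score = (web + log) * (2 / 10)            # coord(2/10): 2 of 10 clauses matched
      print(web, log, score)                    # ~0.03498863  ~0.07789783  ~0.022577291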
  2. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.01
    0.0114609385 = product of:
      0.05730469 = sum of:
        0.0494814 = weight(_text_:web in 4640) [ClassicSimilarity], result of:
          0.0494814 = score(doc=4640,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5299281 = fieldWeight in 4640, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4640,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting WordNet, AAT and MeSH to RDF(S) and OWL. (A minimal conversion sketch follows this entry.)
    Date
    29. 7.2011 14:44:56
    Source
    Proceedings of the First European Semantic Web Symposium (ESWS2004), Eds.: C. Bussler, J. Davies, D. Fensel and R. Studer. 2004. S.299-311
    Theme
    Semantic Web
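    The porting method of entry 2 converts records from an existing thesaurus into an RDF(S)/OWL representation. A minimal sketch of one such conversion step using the rdflib library; the input records, the base URI, and the choice to model broader terms as subclass links are illustrative assumptions, not the paper's actual mapping rules:

      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      # Hypothetical thesaurus records: (id, preferred term, broader-term id).
      records = [
          ("t001", "Information retrieval", None),
          ("t002", "Query expansion", "t001"),
      ]

      EX = Namespace("http://example.org/thesaurus/")  # assumed base URI
      g = Graph()
      g.bind("ex", EX)

      for term_id, label, broader in records:
          concept = EX[term_id]
          g.add((concept, RDF.type, RDFS.Class))   # one possible RDF(S) target
          g.add((concept, RDFS.label, Literal(label, lang="en")))
          if broader:
              # Whether a thesaurus BT relation really is subsumption is exactly
              # the kind of modelling decision such conversions must make.
              g.add((concept, RDFS.subClassOf, EX[broader]))

      print(g.serialize(format="turtle"))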
  3. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.009973028 = product of:
      0.049865138 = sum of:
        0.04082007 = weight(_text_:web in 759) [ClassicSimilarity], result of:
          0.04082007 = score(doc=759,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43716836 = fieldWeight in 759, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.027135205 = score(doc=759,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  4. Beuth, P.: ¬Das Netz der Welt : Lobos Webciety (2009) 0.01
    0.0075891363 = product of:
      0.03794568 = sum of:
        0.029528726 = weight(_text_:kommunikation in 2136) [ClassicSimilarity], result of:
          0.029528726 = score(doc=2136,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.20078552 = fieldWeight in 2136, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2136)
        0.008416956 = weight(_text_:web in 2136) [ClassicSimilarity], result of:
          0.008416956 = score(doc=2136,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.09014259 = fieldWeight in 2136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2136)
      0.2 = coord(2/10)
    
    Content
    "Es gibt Menschen, für die ist "offline" keine Option. Sascha Lobo ist so jemand. Zwölf bis 14 Stunden täglich verbringt er im Internet. "Offline sein ist wie Luft anhalten", hat er mal geschrieben. Der Berliner ist eine große Nummer in der Internet-Gemeinde, er ist Blogger, Buchautor, Journalist und Werbetexter. Er ist Mitarbeiter der Firma "Zentrale Intelligenz-Agentur", hat für das Blog Riesenmaschine den Grimme-Online-Award bekommen, seine Bücher ("Dinge geregelt kriegen - ohne einen Funken Selbstdisziplin") haben Kultstatus. Und politisch aktiv ist er auch: Er sitzt im Online-Beirat der SPD. Für die Cebit 2009 hat er den Bereich Webciety konzipiert. Dazu gehört der "Messestand der Zukunft", wie er sagt. Alles, was der Aussteller mitbringen muss, ist ein Laptop. Youtube wird dort vertreten sein, die Macher des Social Bookmarking-Werkzeugs "Mister Wong", aber auch Vertreter von DNAdigital, einer Plattform, auf der sich Unternehmen und Jugendliche über die Entwicklung des Internets austauschen. Webciety ist ein Kunstbegriff, der sich aus Web und Society zusammensetzt, und die vernetzte Gesellschaft bedeutet. Ein Großteil der sozialen Kommunikation - vor allem innerhalb einer Altersstufe - findet inzwischen im Netz statt. Dabei sind es nicht nur die Teenager, die sich bei SchülerVZ anmelden, oder die BWL-Studenten, die bei Xing berufliche Kontakte knüpfen wollen. Laut der aktuellen Studie "Digitales Leben" der Ludwig-Maximilians-Universität München ist jeder zweite deutsche Internetnutzer in mindestens einem Online-Netzwerk registriert. "Da kann man schon sehen, dass ein gewisser Umschwung in der gesamten Gesellschaft zu bemerken ist. Diesen Umschwung kann man durchaus auch auf der Cebit würdigen", sagt Lobo. Er hat angeblich 80 Prozent seiner Freunde online kennen gelernt. "Das hätte ich nicht gemacht, wenn ich nichts von mir ins Netz gestellt hätte." Für ihn sind die Internet-Netzwerke aber keineswegs die Fortsetzung des Poesiealbums mit anderen Mitteln: "Wovor man sich hüten sollte, ist, für alles, was im Netz passiert, Entsprechungen in der Kohlenstoffwelt zu finden. Eine Email ist eben kein Brief, eine SMS ist keine Postkarte."
    Ambitious social projects can succeed as well: Refunite.org is a kind of search engine with which refugees worldwide can search for missing family members. Lobo cites the English site fixmystreet.co.uk as an example. There, people enter their postcode and point out road damage or missing signs, often illustrated with photos they have taken themselves. The reports are forwarded to the responsible authority, so that it knows where it has to fix potholes. What has been improved in a district - and what has not - can then be read online. 'That is a relatively simple tool, but one that uses the net's ability to re-sort information between people to actually make the world better,' says Lobo. In 2009, then, the Cebit celebrates the fact that we are all online. In ten years it will celebrate the fact that we no longer even notice, Lobo believes: 'I am convinced that we will be even more networked.' He calls this semi-automatic communication. 'For instance, my mobile phone constantly communicates where I am and makes this information available to a selected circle of people. My calendar becomes intelligent and reports that a friend is in town at the same time. Maybe it then suggests: don't you want to meet up? Such functions will be so normal that you are in principle always online, without it feeling that way.' Some of this already exists. Google has just introduced 'Latitude', a location service for mobile phones. The software lets selected people see via Google Maps where the phone's owner currently is. The technophile Obama would probably like the service. But the intelligence agency NSA even wanted to take his BlackBerry away from him - precisely so that the most powerful man in the world cannot be located at all times."
  5. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.01
    0.006538931 = product of:
      0.032694653 = sum of:
        0.023567477 = weight(_text_:web in 1155) [ClassicSimilarity], result of:
          0.023567477 = score(doc=1155,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 1155) [ClassicSimilarity], result of:
              0.027381519 = score(doc=1155,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1155)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
    Date
    26.12.2011 16:29:02
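    Entry 5 models a crosswalk as an association of three pieces of information, each of which may carry a machine-readable encoding and a human-readable description. A minimal sketch of that association as a data structure (field names and example values are invented; the paper's actual container is a METS record):

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Described:
          """A metadata standard or a crosswalk, in two possible forms."""
          machine_readable: Optional[str]  # e.g. URL of an XSD or XSLT file
          human_readable: Optional[str]    # e.g. prose documentation

      @dataclass
      class CrosswalkRecord:
          """The three-part association stored per crosswalk."""
          source_standard: Described
          target_standard: Described
          crosswalk: Described

      record = CrosswalkRecord(
          source_standard=Described("http://example.org/marcxml.xsd", "MARC 21 in XML"),
          target_standard=Described("http://example.org/oai_dc.xsd", "Simple Dublin Core"),
          crosswalk=Described("http://example.org/marc2dc.xsl", "Maps MARC fields to DC"),
      )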
  6. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.01
    0.006538931 = product of:
      0.032694653 = sum of:
        0.023567477 = weight(_text_:web in 4644) [ClassicSimilarity], result of:
          0.023567477 = score(doc=4644,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 4644) [ClassicSimilarity], result of:
              0.027381519 = score(doc=4644,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 4644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4644)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    29. 7.2011 14:44:56
    Source
    Proceedings of the 3rd International Semantic Web Conference (ISWC'04). Eds. D. Plexousakis and F. van Harmelen
  7. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.01
    0.0063880207 = product of:
      0.063880205 = sum of:
        0.063880205 = weight(_text_:web in 4260) [ClassicSimilarity], result of:
          0.063880205 = score(doc=4260,freq=20.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6841342 = fieldWeight in 4260, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4260)
      0.1 = coord(1/10)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. (A minimal query sketch follows this entry.)
    Source
    ¬The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings. Ed.: Karl Aberer et al
    Theme
    Semantic Web
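    Entry 7 describes asking structured queries against the DBpedia datasets. A minimal sketch of such a query against DBpedia's public SPARQL endpoint using the SPARQLWrapper library; the endpoint URL is DBpedia's published one, while the specific class and result size are arbitrary choices for illustration:

      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      # Ask for a few entities typed as cities, with their English names.
      sparql.setQuery("""
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          SELECT ?city ?name WHERE {
              ?city a <http://dbpedia.org/ontology/City> ;
                    rdfs:label ?name .
              FILTER (lang(?name) = "en")
          } LIMIT 5
      """)
      sparql.setReturnFormat(JSON)

      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["city"]["value"], "-", row["name"]["value"])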
  8. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    0.0062353685 = product of:
      0.062353685 = sum of:
        0.062353685 = weight(_text_:web in 4329) [ClassicSimilarity], result of:
          0.062353685 = score(doc=4329,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6677857 = fieldWeight in 4329, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4329)
      0.1 = coord(1/10)
    
    Abstract
    Plenty of contemporary search approaches exist that are associated with the area of the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist? To answer these questions we take a look at the nature of the Semantic Web and the Semantic Desktop and at definitions of information and data retrieval. We survey current approaches referred to by their authors as information retrieval for the Semantic Web, or that use Semantic Web technology for search.
    Theme
    Semantic Web
  9. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    0.006053502 = product of:
      0.03026751 = sum of:
        0.023806747 = weight(_text_:web in 2564) [ClassicSimilarity], result of:
          0.023806747 = score(doc=2564,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 2564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.019382289 = score(doc=2564,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank. (A power-iteration sketch follows this entry.)
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
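    Entry 9 studies PageRank as a function of the damping factor alpha. A minimal power-method sketch with alpha as an explicit parameter, run on a toy three-page graph (the graph, tolerance, and iteration cap are illustrative):

      import numpy as np

      def pagerank(adj, alpha=0.85, tol=1e-12, max_iter=1000):
          """Power method for PageRank with uniform teleportation."""
          n = adj.shape[0]
          out = adj.sum(axis=1, keepdims=True)
          # Row-normalise; dangling pages (no outlinks) jump uniformly.
          p = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
          rank = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              new = alpha * rank @ p + (1 - alpha) / n
              if np.abs(new - rank).sum() < tol:
                  break
              rank = new
          return rank

      adj = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)
      for alpha in (0.5, 0.85, 0.99):   # how the ranking reacts as alpha grows
          print(alpha, pagerank(adj, alpha).round(4))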
  10. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.01
    0.005583178 = product of:
      0.055831775 = sum of:
        0.055831775 = weight(_text_:web in 231) [ClassicSimilarity], result of:
          0.055831775 = score(doc=231,freq=22.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.59793836 = fieldWeight in 231, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
      0.1 = coord(1/10)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information. (A minimal hybrid-query sketch follows this entry.)
    Source
    Proceedings ISWC'07/ASWC'07 : the 6th International Semantic Web Conference and the 2nd Asian Semantic Web Conference. Ed.: K. Aberer et al
    Theme
    Semantic Web
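    The hybrid query capability of entry 10 evaluates structured constraints and keyword searches over one and the same inverted index, treating structure as additional "terms". A minimal sketch of that idea with plain posting sets (the index contents are invented; Semplore itself reuses a full IR engine):

      # Structural constraints (type:...) and text keywords share one index
      # of posting sets, so a hybrid query is just posting-list intersection.
      index = {
          "type:Professor": {1, 2, 5},
          "type:Paper":     {3, 4},
          "semantic":       {2, 3, 5},
          "search":         {1, 2, 3},
      }

      def hybrid_query(structured_term, keywords):
          result = set(index.get(structured_term, set()))
          for kw in keywords:
              result &= index.get(kw, set())
          return sorted(result)

      # "Professors whose descriptions mention 'semantic' and 'search'"
      print(hybrid_query("type:Professor", ["semantic", "search"]))  # -> [2]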
  11. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.01
    0.005344602 = product of:
      0.053446017 = sum of:
        0.053446017 = weight(_text_:web in 4704) [ClassicSimilarity], result of:
          0.053446017 = score(doc=4704,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.57238775 = fieldWeight in 4704, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4704)
      0.1 = coord(1/10)
    
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
    Content
    See: http://www.dblab.ntua.gr/~bikakis/LD/5.pdf. See also: http://swoogle.umbc.edu/. See also: http://ebiquity.umbc.edu/paper/html/id/183/. See also: Radhakrishnan, A.: Swoogle : An Engine for the Semantic Web, at: http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/.
    Theme
    Semantic Web
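    Entry 11 notes that Swoogle's retrieval component can index documents by character N-grams or URIrefs, so that relevant documents are found even from partial identifiers. A minimal sketch of character-N-gram tokenization (the N and the example URIref are arbitrary):

      def char_ngrams(text, n=4):
          """Overlapping character N-grams of a string."""
          return [text[i:i + n] for i in range(len(text) - n + 1)]

      uriref = "http://xmlns.com/foaf/0.1/Person"
      grams = char_ngrams(uriref)
      print(grams[:5])        # ['http', 'ttp:', 'tp:/', 'p://', '://x']
      print("rson" in grams)  # a partial match on "Person" hits: True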
  12. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.00
    0.004842802 = product of:
      0.024214009 = sum of:
        0.019045398 = weight(_text_:web in 3284) [ClassicSimilarity], result of:
          0.019045398 = score(doc=3284,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2039694 = fieldWeight in 3284, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.0051686107 = product of:
          0.015505832 = sum of:
            0.015505832 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.015505832 = score(doc=3284,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The number of publications to be classified has been growing faster than it can be subject-indexed intellectually, at the latest since the World Wide Web came into existence. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous operation. The VZG project Colibri/DDC has, among other things, also been concerned with automatic DDC classification since 2006. The investigations and developments in this regard serve to answer the research question: "Is it possible to achieve an automatically generated, substantively sound DDC classification of all GVK-PLUS title records?" (A minimal text-classification sketch follows this entry.)
    Date
    22. 1.2010 14:41:24
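    Entry 12 concerns automated text categorization (ATC) of bibliographic title records into DDC classes. A minimal supervised-classification sketch with scikit-learn; the toy titles, the two DDC classes, and the model choice are illustrative assumptions and say nothing about how the Colibri/DDC system actually works:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy title records labelled with DDC classes (invented examples).
      titles = [
          "Einführung in die Informatik",         # 004 (computer science)
          "Datenbanken und Informationssysteme",  # 004
          "Geschichte des Deutschen Reiches",     # 943 (history of Germany)
          "Deutschland im 19. Jahrhundert",       # 943
      ]
      ddc = ["004", "004", "943", "943"]

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(titles, ddc)
      print(clf.predict(["Einführung in Datenbanken"]))  # -> ['004']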
  13. Cross, P.: DESIRE: making the most of the Web (2000) 0.00
    0.0047134957 = product of:
      0.047134954 = sum of:
        0.047134954 = weight(_text_:web in 2146) [ClassicSimilarity], result of:
          0.047134954 = score(doc=2146,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.50479853 = fieldWeight in 2146, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=2146)
      0.1 = coord(1/10)
    
  14. Bradley, P.: ¬The relevance of underpants to searching the Web (2000) 0.00
    0.0047134957 = product of:
      0.047134954 = sum of:
        0.047134954 = weight(_text_:web in 3961) [ClassicSimilarity], result of:
          0.047134954 = score(doc=3961,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.50479853 = fieldWeight in 3961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=3961)
      0.1 = coord(1/10)
    
  15. Smith, A.G.: Web links as analogues of citations (2004) 0.00
    0.0047134957 = product of:
      0.047134954 = sum of:
        0.047134954 = weight(_text_:web in 4205) [ClassicSimilarity], result of:
          0.047134954 = score(doc=4205,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.50479853 = fieldWeight in 4205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=4205)
      0.1 = coord(1/10)
    
  16. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.00
    0.004665151 = product of:
      0.04665151 = sum of:
        0.04665151 = weight(_text_:web in 4088) [ClassicSimilarity], result of:
          0.04665151 = score(doc=4088,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.49962097 = fieldWeight in 4088, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4088)
      0.1 = coord(1/10)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  17. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.00
    0.004658935 = product of:
      0.023294676 = sum of:
        0.016833913 = weight(_text_:web in 2565) [ClassicSimilarity], result of:
          0.016833913 = score(doc=2565,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.18028519 = fieldWeight in 2565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.019382289 = score(doc=2565,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
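    Entry 17 separates link-based importance propagation from the damping that discounts longer paths: the rank is a sum over path lengths t of damping(t) times the t-step walk distribution. A minimal sketch comparing exponential decay (which yields PageRank) with the paper's linear and hyperbolic alternatives (the graph and truncation length are illustrative):

      import numpy as np

      def damped_rank(p, damping, t_max=50):
          """rank = sum_t damping(t) * (v P^t), for row-stochastic p."""
          n = p.shape[0]
          walk = np.full(n, 1.0 / n)   # uniform starting distribution v
          rank = np.zeros(n)
          for t in range(t_max):
              rank += damping(t) * walk
              walk = walk @ p          # one more step along all paths
          return rank

      p = np.array([[0.0, 0.5, 0.5],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
      alpha, L = 0.85, 50
      exponential = lambda t: (1 - alpha) * alpha ** t  # = PageRank
      linear = lambda t: 2 * (L - t) / (L * (L + 1))    # sums to 1 over t < L
      hyperbolic = lambda t: 1.0 / (t + 1) ** 2         # left unnormalised

      for name, f in [("exp", exponential), ("lin", linear), ("hyp", hyperbolic)]:
          print(name, damped_rank(p, f).round(4))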
  18. Bergman, M.K.: ¬The Deep Web : surfacing hidden value (2001) 0.00
    0.004040139 = product of:
      0.040401388 = sum of:
        0.040401388 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.040401388 = score(doc=39,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=39)
      0.1 = coord(1/10)
    
  19. Brooks, T.A.: Where is meaning when form is gone? : Knowledge representation and the Web (2001) 0.00
    0.004040139 = product of:
      0.040401388 = sum of:
        0.040401388 = weight(_text_:web in 3889) [ClassicSimilarity], result of:
          0.040401388 = score(doc=3889,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 3889, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3889)
      0.1 = coord(1/10)
    
  20. Baker, T.: ¬A grammar of Dublin Core (2000) 0.00
    0.003727148 = product of:
      0.01863574 = sum of:
        0.013467129 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.013467129 = score(doc=1236,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.0051686107 = product of:
          0.015505832 = sum of:
            0.015505832 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.015505832 = score(doc=1236,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole.
    This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern.
    Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries.
    This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry -- and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
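    The grammar of entry 20 treats Dublin Core as a small language of statements about a resource. A minimal sketch of a few such statements using rdflib's built-in Dublin Core namespace (the described resource URI and the values are invented):

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      g = Graph()
      g.bind("dc", DC)
      doc = URIRef("http://example.org/docs/grammar-of-dc")  # invented resource

      # Each triple is one Dublin Core "statement"; elements work like nouns
      # naming a property of the resource.
      g.add((doc, DC.title, Literal("A grammar of Dublin Core")))
      g.add((doc, DC.creator, Literal("Baker, T.")))
      g.add((doc, DC.date, Literal("2000")))
      g.add((doc, DC.language, Literal("en")))

      print(g.serialize(format="turtle"))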

Languages

  • e 83
  • d 5