Search (32 results, page 1 of 2)

  • type_ss:"el"
  • theme_ss:"Suchmaschinen"
  1. Schomburg, S.; Prante, J.: Search Engine Federation in Libraries - Suchmaschinenföderation in Bibliotheken (2009) 0.09
    0.08533767 = product of:
      0.11378356 = sum of:
        0.051698197 = weight(_text_:digital in 2809) [ClassicSimilarity], result of:
          0.051698197 = score(doc=2809,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 2809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=2809)
        0.032486375 = weight(_text_:library in 2809) [ClassicSimilarity], result of:
          0.032486375 = score(doc=2809,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.24650425 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2809)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 2809) [ClassicSimilarity], result of:
              0.059197973 = score(doc=2809,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 2809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2809)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The hbz (Academic Library Center, Cologne) has a strong focus on search engine applications: beyond the projected integration of the respective technologies into the new release of the Digital Library portal solution (DigiBib6), vascoda background services also apply and take advantage of search engine technology. Experience since 2003 has shown that building and updating search engine indexes involves a vast amount of resources. The use of search engine federations, however, promises major improvements: the total number of data records held in linked indexes can be almost unlimited, while still allowing for a joint output of all retrieved hits. A federation also comes with excellent response times, and retrieved hits can refer or link back into the original system's layout. Nonetheless, the major challenge these days lies in the differing search engine technologies, e.g. Lucene and FAST, their variations in ranking, and the implementation or non-implementation of so-called drill-downs. The lecture is designed to give a brief insight into the hbz search engine workshop, with an introduction to the current state of the project.
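The relevance breakdown shown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the score of this first hit can be recomputed directly from the listed termFreq, idf, queryNorm, fieldNorm and coord factors (Python; all numeric values are copied from the explain tree, and the per-term contribution is queryWeight x fieldWeight):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    # ClassicSimilarity: queryWeight = idf * queryNorm,
    # fieldWeight = sqrt(freq) * idf * fieldNorm,
    # per-term contribution = queryWeight * fieldWeight
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

QUERY_NORM = 0.050121464   # queryNorm from the explain output
FIELD_NORM = 0.046875      # fieldNorm(doc=2809)

contributions = [
    term_score(2.0, 3.944552, QUERY_NORM, FIELD_NORM),        # "digital"
    term_score(4.0, 2.6293786, QUERY_NORM, FIELD_NORM),       # "library"
    term_score(2.0, 4.220981, QUERY_NORM, FIELD_NORM) * 0.5,  # "project", nested coord(1/2)
]

score = 0.75 * sum(contributions)  # outer coord(3/4): 3 of 4 query clauses matched
print(round(score, 8))             # ~0.0853..., matching the 0.08533767 reported for hit 1
```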
  2. Summann, F.; Lossau, N.: Search engine technology and digital libraries : moving from theory to practice (2004) 0.07
    0.07432698 = product of:
      0.09910263 = sum of:
        0.048741527 = weight(_text_:digital in 1196) [ClassicSimilarity], result of:
          0.048741527 = score(doc=1196,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.2465345 = fieldWeight in 1196, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=1196)
        0.030628446 = weight(_text_:library in 1196) [ClassicSimilarity], result of:
          0.030628446 = score(doc=1196,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.23240642 = fieldWeight in 1196, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=1196)
        0.019732658 = product of:
          0.039465316 = sum of:
            0.039465316 = weight(_text_:project in 1196) [ClassicSimilarity], result of:
              0.039465316 = score(doc=1196,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.18654276 = fieldWeight in 1196, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1196)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This article describes the journey from the conception of and vision for a modern search-engine-based search environment to its technological realisation. In doing so, it takes up the thread of an earlier article on this subject, this time from a technical viewpoint. As well as presenting the conceptual considerations of the initial stages, this article will principally elucidate the technological aspects of this journey. The starting point for the deliberations about the development of an academic search engine was the experience we gained through the generally successful project "Digital Library NRW", in which from 1998 to 2000 - with Bielefeld University Library in overall charge - we designed a system model for an Internet-based library portal with an improved academic search environment at its core. At the heart of this system was a metasearch with an availability function, to which we added a user interface integrating all relevant source material for study and research. The deficiencies of this approach were felt soon after the system was launched in June 2001. There were problems with the stability and performance of the database retrieval system, with the integration of full-text documents and Internet pages, and with acceptance by users, who increasingly perform searches themselves using search engines rather than asking the library for help. Since a long list of problems is also encountered when using commercial search engines for academic purposes (in particular the retrieval of academic information and long-term availability), the idea was born for a search engine configured specifically for academic use. We also hoped that with one single access point founded on improved search engine technology, we could access the heterogeneous academic resources of subject-based bibliographic databases, catalogues, electronic newspapers, document servers and academic web pages.
  3. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.04
    0.042796366 = product of:
      0.08559273 = sum of:
        0.060926907 = weight(_text_:digital in 1084) [ClassicSimilarity], result of:
          0.060926907 = score(doc=1084,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 1084, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 1084) [ClassicSimilarity], result of:
              0.049331643 = score(doc=1084,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 1084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1084)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
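The 'Save Page Now' (SPN) tool studied here is driven over plain HTTP. A minimal sketch, assuming the public https://web.archive.org/save/<url> endpoint and Python's requests library; the use of the Content-Location response header is an assumption and may differ from the live service:

```python
import requests

def save_page_now(url: str) -> str:
    """Ask the Wayback Machine to capture a snapshot of `url`.

    Assumes the public Save Page Now endpoint; the Content-Location
    header (when present) points at the freshly created snapshot.
    """
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    location = resp.headers.get("Content-Location")
    return ("https://web.archive.org" + location) if location else resp.url

if __name__ == "__main__":
    print(save_page_now("https://example.org/"))
```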
  4. Kriewel, S.; Klas, C.P.; Schaefer, A.; Fuhr, N.: DAFFODIL : strategic support for user-oriented access to heterogeneous digital libraries (2004) 0.03
    0.030157281 = product of:
      0.120629124 = sum of:
        0.120629124 = weight(_text_:digital in 4838) [ClassicSimilarity], result of:
          0.120629124 = score(doc=4838,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.61014175 = fieldWeight in 4838, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4838)
      0.25 = coord(1/4)
    
    Abstract
    DAFFODIL is a search system for digital libraries aiming at strategic support during the information search process. From a user's point of view, this strategic support is mainly implemented by high-level search functions, so-called stratagems, which provide functionality beyond today's digital libraries. Through the tight integration of stratagems and the federation of heterogeneous digital libraries, DAFFODIL achieves strong synergy effects for information and services. These effects provide high-quality metadata for the searcher through an intuitively controllable user interface. The implementation of stratagems follows a tool-based model.
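The stratagem idea can be read as composing low-level search moves against federated libraries into one higher-level function. A minimal sketch with hypothetical moves; the function names and the author-chaining stratagem are illustrative, not DAFFODIL's actual API:

```python
def search_topic(term):                 # hypothetical low-level move
    return [{"title": f"{term}: a primer", "author": "Doe, J."}]

def search_by_author(author):           # hypothetical low-level move
    return [{"title": "Follow-up study", "author": author}]

def author_chaining_stratagem(term):
    """High-level stratagem: find works on a topic, then chase each author's other works."""
    seeds = search_topic(term)
    chained = [hit for seed in seeds for hit in search_by_author(seed["author"])]
    return seeds + chained

print(author_chaining_stratagem("digital libraries"))
```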
  5. Smith, A.G.: Search features of digital libraries (2000) 0.03
    0.028900167 = product of:
      0.11560067 = sum of:
        0.11560067 = weight(_text_:digital in 940) [ClassicSimilarity], result of:
          0.11560067 = score(doc=940,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.58470786 = fieldWeight in 940, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
      0.25 = coord(1/4)
    
    Abstract
    Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (Boolean and proximity operators, truncation, etc.). This paper discusses the use of these features for effective searching, and argues that these features are required regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken, identifying which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features. For instance: under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none of the digital libraries enabled searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries; for instance, by providing a full range of search operators, enabling browsing of search terms, enhancing records with controlled vocabulary, enabling the refining of initial searches, etc.
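To make the surveyed feature set concrete, here is a minimal, self-contained sketch (hypothetical three-document corpus) of two of the features discussed, Boolean AND and right-hand truncation, over a toy inverted index:

```python
from collections import defaultdict

docs = {
    1: "digital library search features",
    2: "search engine technology for libraries",
    3: "controlled vocabulary and proximity searching",
}

# Build a toy inverted index: term -> set of document ids
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def lookup(term):
    """Exact match, or right-hand truncation if the term ends with '*'."""
    if term.endswith("*"):
        prefix = term[:-1]
        return {d for t, ids in index.items() if t.startswith(prefix) for d in ids}
    return set(index.get(term, set()))

def boolean_and(*terms):
    """Intersect the postings of every term (Boolean AND)."""
    result = None
    for term in terms:
        hits = lookup(term)
        result = hits if result is None else result & hits
    return result or set()

print(boolean_and("search", "librar*"))  # {1, 2}
```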
  6. Hurz, S.: Google verfolgt Nutzer, auch wenn sie explizit widersprechen [Google tracks users even when they explicitly object] (2018) 0.02
    0.021540914 = product of:
      0.086163655 = sum of:
        0.086163655 = weight(_text_:digital in 4404) [ClassicSimilarity], result of:
          0.086163655 = score(doc=4404,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4358155 = fieldWeight in 4404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.078125 = fieldNorm(doc=4404)
      0.25 = coord(1/4)
    
    Source
    https://www.sueddeutsche.de/digital/2.220/standortverlauf-google-verfolgt-nutzer-auch-wenn-sie-explizit-widersprechen-1.4092772
  7. Lossau, N.: Search engine technology and digital libraries : libraries need to discover the academic internet (2004) 0.02
    0.015078641 = product of:
      0.060314562 = sum of:
        0.060314562 = weight(_text_:digital in 1161) [ClassicSimilarity], result of:
          0.060314562 = score(doc=1161,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1161)
      0.25 = coord(1/4)
    
  8. Matrix of WWW indices : a comparison of Internet indexing tools (1995) 0.01
    0.013535989 = product of:
      0.054143958 = sum of:
        0.054143958 = weight(_text_:library in 3165) [ClassicSimilarity], result of:
          0.054143958 = score(doc=3165,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.4108404 = fieldWeight in 3165, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=3165)
      0.25 = coord(1/4)
    
    Imprint
    Ann Arbor : University of Michigan School of Information and Library Studies
    Object
    Internet Public Library
  9. Warnick, W.L.; Leberman, A.; Scott, R.L.; Spence, K.J.; Johnsom, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.01
    0.012924549 = product of:
      0.051698197 = sum of:
        0.051698197 = weight(_text_:digital in 1215) [ClassicSimilarity], result of:
          0.051698197 = score(doc=1215,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 1215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1215)
      0.25 = coord(1/4)
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.
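The nesting of Directed Query Engines described above amounts to fanning a query out to several backends and merging the hits, with one engine usable as a backend of another. A minimal sketch with hypothetical backend functions; this is not the actual Distributed Explorer interface:

```python
from concurrent.futures import ThreadPoolExecutor

def physics_backend(query):       # hypothetical deep-web source
    return [f"physics:{query}:1", f"physics:{query}:2"]

def chemistry_backend(query):     # hypothetical deep-web source
    return [f"chemistry:{query}:1"]

def directed_query_engine(backends):
    """Return a callable that launches every backend for a query and merges the hits."""
    def run(query):
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(lambda backend: backend(query), backends))
        return [hit for hits in result_lists for hit in hits]
    return run

# Cascading: one engine's run function can itself be registered as a backend of another.
inner = directed_query_engine([physics_backend, chemistry_backend])
outer = directed_query_engine([inner])
print(outer("neutron flux"))
```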
  10. Dunning, A.: Do we still need search engines? (1999) 0.01
    0.011883841 = product of:
      0.047535364 = sum of:
        0.047535364 = product of:
          0.09507073 = sum of:
            0.09507073 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.09507073 = score(doc=6021,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Ariadne. 1999, no.22
  11. Hughes, T.; Acharya, A.: ¬An interview with Anurag Acharya, Google Scholar lead engineer 0.01
    0.011604695 = product of:
      0.04641878 = sum of:
        0.04641878 = weight(_text_:library in 94) [ClassicSimilarity], result of:
          0.04641878 = score(doc=94,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.3522223 = fieldWeight in 94, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=94)
      0.25 = coord(1/4)
    
    Abstract
    When I interned at Google last summer after getting my MSI degree, I worked on projects for the Book Search and Google Scholar teams. I didn't know it at the time, but in completing my research over the course of the summer, I would become the resident expert on how universities were approaching Google Scholar as a research tool and how they were implementing Scholar on their library websites. Now working at an academic library, I seized a recent opportunity to sit down with Anurag Acharya, Google Scholar's founding engineer, to delve a little deeper into how Scholar features are developed and prioritized, what Scholar's scope and aims are, and where the product is headed. -Tracey Hughes, GIS Coordinator, Social Sciences & Humanities Library, University of California San Diego
  12. Leighton, H.V.: Performance of four World Wide Web (WWW) index services : Infoseek, Lycos, WebCrawler and WWWWorm (1995) 0.01
    0.011485667 = product of:
      0.045942668 = sum of:
        0.045942668 = weight(_text_:library in 3168) [ClassicSimilarity], result of:
          0.045942668 = score(doc=3168,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.34860963 = fieldWeight in 3168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.09375 = fieldNorm(doc=3168)
      0.25 = coord(1/4)
    
    Source
    http://www.winona.msus.edu/services-f/library-f/webind.htm
  13. Campbell, K.: Understanding and comparing search engines (1996) 0.01
    0.011485667 = product of:
      0.045942668 = sum of:
        0.045942668 = weight(_text_:library in 5666) [ClassicSimilarity], result of:
          0.045942668 = score(doc=5666,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.34860963 = fieldWeight in 5666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.09375 = fieldNorm(doc=5666)
      0.25 = coord(1/4)
    
    Source
    http://www.hamline.edu/library/links/comparisons.html
  14. Birmingham, J.: Internet search engines (1996) 0.01
    0.01018615 = product of:
      0.0407446 = sum of:
        0.0407446 = product of:
          0.0814892 = sum of:
            0.0814892 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.0814892 = score(doc=5664,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10.11.1996 16:36:22
  15. Tomaiuolo, N.G.; Packer, J.G.: Quantitative analysis of five WWW 'search engines' (1996) 0.01
    0.00957139 = product of:
      0.03828556 = sum of:
        0.03828556 = weight(_text_:library in 5675) [ClassicSimilarity], result of:
          0.03828556 = score(doc=5675,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.29050803 = fieldWeight in 5675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=5675)
      0.25 = coord(1/4)
    
    Abstract
    Provides a table of results from over 100 questions actually asked at a library reference desk. The summary gives the average number of relevant 'hits' for each investigated search engine: AltaVista: 9.3; InfoSeek: 8.3; Lycos: 8.1; Magellan: 7.8; Point: 2.1
  16. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.01
    0.008720686 = product of:
      0.034882743 = sum of:
        0.034882743 = product of:
          0.069765486 = sum of:
            0.069765486 = weight(_text_:project in 1210) [ClassicSimilarity], result of:
              0.069765486 = score(doc=1210,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32976416 = fieldWeight in 1210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are being created with well-defined structures. Though these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of this data to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and attach this data to documents with formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations between them. This research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
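A minimal sketch of the 'explicit document relations' idea, using rdflib with Dublin Core titles and a made-up example.org relation vocabulary (the property and document names are illustrative, not the thesis' actual schema), annotating two documents and querying the relation between them:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

DOC = Namespace("http://example.org/doc/")
REL = Namespace("http://example.org/rel/")

g = Graph()
g.add((DOC.spec_a, DCTERMS.title, Literal("Interface specification A")))
g.add((DOC.spec_b, DCTERMS.title, Literal("Interface specification B")))
g.add((DOC.spec_a, REL.references, DOC.spec_b))  # the explicit document relation

# Find every document that spec_a explicitly references
for _, _, target in g.triples((DOC.spec_a, REL.references, None)):
    print("spec_a references", target)
```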
  17. Bryan, K.; Leise, T.: ¬The $25.000.000.000 eigenvector : the linear algebra behind Google 0.01
    0.008633038 = product of:
      0.034532152 = sum of:
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1353) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1353,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1353, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1353)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Google's success derives in large part from its PageRank algorithm, which ranks the importance of webpages according to an eigenvector of a weighted link matrix. Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course. Instructors may assign this article as a project to more advanced students, or spend one or two lectures presenting the material with assigned homework from the exercises. This material also complements the discussion of Markov chains in matrix algebra. Maple and Mathematica files supporting this material can be found at www.rose-hulman.edu/~bryan.
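The eigenvector referred to in the title can be computed by power iteration on the (damped) column-stochastic link matrix. A minimal sketch with numpy on a hypothetical four-page web:

```python
import numpy as np

# Column j gives the out-link probabilities of page j (column-stochastic link matrix).
A = np.array([
    [0.0, 0.0, 1.0, 0.5],
    [1/3, 0.0, 0.0, 0.0],
    [1/3, 0.5, 0.0, 0.5],
    [1/3, 0.5, 0.0, 0.0],
])

def pagerank(A, damping=0.85, tol=1e-10):
    n = A.shape[0]
    M = damping * A + (1 - damping) / n   # the "Google matrix"
    v = np.full(n, 1.0 / n)               # start from the uniform distribution
    while True:
        v_next = M @ v                     # one power-iteration step
        if np.linalg.norm(v_next - v, 1) < tol:
            return v_next
        v = v_next

print(pagerank(A))   # dominant eigenvector: the PageRank scores of the four pages
```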
  18. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    0.006976548 = product of:
      0.027906192 = sum of:
        0.027906192 = product of:
          0.055812385 = sum of:
            0.055812385 = weight(_text_:project in 4709) [ClassicSimilarity], result of:
              0.055812385 = score(doc=4709,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.26381132 = fieldWeight in 4709, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  19. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.01
    0.0067907665 = product of:
      0.027163066 = sum of:
        0.027163066 = product of:
          0.054326132 = sum of:
            0.054326132 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.054326132 = score(doc=1149,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    17.12.2013 11:02:22
  20. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung [From automated manipulation to the manipulation of automation] (2019) 0.01
    0.0067907665 = product of:
      0.027163066 = sum of:
        0.027163066 = product of:
          0.054326132 = sum of:
            0.054326132 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
              0.054326132 = score(doc=4996,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.30952093 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    19. 2.2019 17:22:00