Search (678 results, page 34 of 34)

  • Filter: type_ss:"el"
  1. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.027163066 = score(doc=3284,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2010 14:41:24
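The nested breakdown above (repeated for each hit below) is Lucene's "explain" output for ClassicSimilarity: a TF-IDF term score multiplied by query/field norms and coord factors. As a minimal sketch, assuming only the constants printed in the tree, the arithmetic can be reproduced as follows; the function names are ours, not Lucene's.

```python
# Minimal sketch reproducing the ClassicSimilarity arithmetic printed in the
# explain tree above (TF-IDF with query/field norms and coord factors).
# The numeric constants are copied from the tree; the function names are ours.
import math

def tf(freq: float) -> float:
    return math.sqrt(freq)                        # 1.4142135 for freq = 2.0

def term_score(freq: float, idf: float, query_norm: float, field_norm: float) -> float:
    query_weight = idf * query_norm               # 0.17551683
    field_weight = tf(freq) * idf * field_norm    # 0.15476047
    return query_weight * field_weight            # 0.027163066

idf, query_norm, field_norm = 3.5018296, 0.050121464, 0.03125
score = term_score(2.0, idf, query_norm, field_norm)
score *= 0.5    # coord(1/2): one of two clauses matched
score *= 0.25   # coord(1/4): one of four query parts matched
print(score)    # ~0.0033953832, the entry score shown above (up to float rounding)
```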
  2. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.027163066 = score(doc=1163,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  3. Gillitzer, B.: Yewno (2017) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 3447) [ClassicSimilarity], result of:
              0.027163066 = score(doc=3447,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3447)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2017 10:16:49
  4. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.027163066 = score(doc=4217,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:32:44
  5. Hensinger, P.: Trojanisches Pferd "Digitale Bildung" : Auf dem Weg zur Konditionierungsanstalt in einer Schule ohne Lehrer? (2017) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 5000) [ClassicSimilarity], result of:
              0.027163066 = score(doc=5000,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 5000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5000)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2019 11:45:19
  6. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0033953832 = product of:
      0.013581533 = sum of:
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.027163066 = score(doc=5234,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:33:49
  7. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.00
    0.0033499864 = product of:
      0.013399946 = sum of:
        0.013399946 = weight(_text_:library in 93) [ClassicSimilarity], result of:
          0.013399946 = score(doc=93,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.10167781 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
      0.25 = coord(1/4)
    
    Abstract
     Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb. Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed of a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list. One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias. Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
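Since the abstract describes PageRank only informally, a minimal sketch of the underlying power iteration on a toy four-page "web" may help; the link graph and the damping factor 0.85 are illustrative assumptions, not taken from the article.

```python
# Minimal power-iteration sketch of PageRank on a toy four-page "web".
# The link graph and damping factor 0.85 are illustrative assumptions.
links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
damping, n = 0.85, len(pages)
rank = {p: 1.0 / n for p in pages}

for _ in range(50):                          # iterate until ranks stabilise
    new = {p: (1.0 - damping) / n for p in pages}
    for p, outgoing in links.items():
        share = rank[p] / len(outgoing)      # a page passes its rank on to its links
        for q in outgoing:
            new[q] += damping * share
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "C" ranks highest here
```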
  8. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.00
    0.0033499864 = product of:
      0.013399946 = sum of:
        0.013399946 = weight(_text_:library in 318) [ClassicSimilarity], result of:
          0.013399946 = score(doc=318,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.10167781 = fieldWeight in 318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
      0.25 = coord(1/4)
    
    Abstract
     Since April 2012 the Gemeinsame Normdatei (GND, Integrated Authority File) has been the file containing the authority data used in German-language librarianship. Consequently, a representation for publishing these data as Linked Data in the Semantic Web must be established on their basis. Besides the actual provision of GND data in the Semantic Web, the data are to be linked with datasets already available as Linked Data (DBpedia, VIAF etc.) and, where possible, be compatible with them, making the GND accessible to an international audience across domains. This document primarily describes how the GND Linked Data representation came about and the path towards the specification of an ontology of its own. To this end, after a brief introduction to the GND, the basic principles and most important standards for publishing Linked Data in the Semantic Web are presented, so that existing vocabularies and ontologies of the library world can then be examined. This is followed by an excursus on the general procedure for providing Linked Data, in which the frequently cited Open World Assumption is critically questioned and the associated problems, particularly with regard to interoperability and reusability, are exposed. To avoid interoperability problems, the recommendations of the Library Linked Data Incubator Group [LLD11] are followed.
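As a rough illustration of the cross-linking the abstract describes (a GND authority record published as Linked Data and connected to DBpedia and VIAF), the sketch below uses rdflib and owl:sameAs. The library choice and the URIs are our assumptions; the identifiers are placeholders, not values from the GND ontology work itself.

```python
# Minimal sketch of linking a GND entity to DBpedia and VIAF via owl:sameAs.
# The URIs below are illustrative placeholders, not real identifiers.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
g.bind("owl", OWL)

gnd_entity = URIRef("https://d-nb.info/gnd/1234567-8")            # placeholder GND URI
dbpedia    = URIRef("http://dbpedia.org/resource/Example_Person")  # placeholder
viaf       = URIRef("http://viaf.org/viaf/000000000")              # placeholder

# Declare that the three descriptions refer to the same real-world entity.
g.add((gnd_entity, OWL.sameAs, dbpedia))
g.add((gnd_entity, OWL.sameAs, viaf))

print(g.serialize(format="turtle"))
```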
  9. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0033499864 = product of:
      0.013399946 = sum of:
        0.013399946 = weight(_text_:library in 53) [ClassicSimilarity], result of:
          0.013399946 = score(doc=53,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.10167781 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.25 = coord(1/4)
    
    Content
     Community action on all ontologies (quality, FAIRness, conformity): Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on GitHub. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase the FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
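A minimal sketch of the kind of SHACL test described above, using pyshacl as the validator (our choice of tool; Archivo's own CI setup may differ) on an invented one-class ontology and a shape requiring every class to carry an rdfs:label.

```python
# Minimal SHACL validation sketch with pyshacl; ontology and shape are invented.
from rdflib import Graph
from pyshacl import validate

ontology_ttl = """
@prefix ex:  <http://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

ex:Thing a owl:Class .          # no rdfs:label -> should fail the shape below
"""

shapes_ttl = """
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/shapes#> .

ex:ClassLabelShape a sh:NodeShape ;
    sh:targetClass owl:Class ;
    sh:property [ sh:path rdfs:label ; sh:minCount 1 ] .
"""

data   = Graph().parse(data=ontology_ttl, format="turtle")
shapes = Graph().parse(data=shapes_ttl, format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: the class has no rdfs:label
print(report)
```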
  10. Patalong, F.: Life after Google : I. Besser suchen, wirklich finden (2002) 0.00
    0.0030832277 = product of:
      0.012332911 = sum of:
        0.012332911 = product of:
          0.024665821 = sum of:
            0.024665821 = weight(_text_:project in 1165) [ClassicSimilarity], result of:
              0.024665821 = score(doc=1165,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116589226 = fieldWeight in 1165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1165)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
     This, too, pays off: targeted platform switching. A service like Pandia attempts just that: the metasearcher combines good search engines in its queries with full indexing of high-quality content offerings. Pandia thus deliberately combines the Encyclopedia Britannica, encyclopaedias and search engines with Amazon's data holdings. What this is supposed to be good for, and can be, is shown by the practical example of a very matter-of-fact search: "Retina Implant". This is about techniques for restoring eyesight (at least partially), via surgical procedures and implants, to people blinded by retinal degeneration. Pandia first answers the search by pointing to a number of university and private-sector research institutes. 13 of 15 search results are 100 percent relevant: straight into the research. Of the last two, one points to a company that manufactures such implants, the other to a specialist conference covering, among other things, this topic: that is impressively accurate. And then it really gets going: with one click, Pandia transfers the query to the "news search" pattern, and press and media reports are delivered as the result. Their relevance is slightly lower: implants are always involved, eyes not necessarily, but in most cases. Not bad. Another click, and the search in the "Pandia Plus Directory" reduces the number of hits to two: one hit leads to the description of the university-based "Retinal Implant Project", the other to Intelligent Implants, a company founded by Bonn researchers that specialises in such implants - and, incidentally, ranks among the world's leading ones. Another click, and Pandia tries to find books on the topic: none exist yet, but with Pandia's help one could surely be researched and written. Nevertheless: none of the services discussed qualifies as a universal tool. What one can do, another cannot. The only remedy is to try them out. The search service has to fit the searcher. Conclusion and outlook: as good as Google is, it can be done better. The intelligent combination of the best capabilities of good search tools beats even the top dog among search services. But that is not really the point at all. The point is to make searching the Web effective, and that still has to be learned. It becomes even easier and more effective with numerous, often free, tools that either take care of searching and archiving as stand-alone software (bots) or can be integrated into one's own browser as an add-on. But more on that in the second part of this little Web field guide"
  11. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    0.0030832277 = product of:
      0.012332911 = sum of:
        0.012332911 = product of:
          0.024665821 = sum of:
            0.024665821 = weight(_text_:project in 1554) [ClassicSimilarity], result of:
              0.024665821 = score(doc=1554,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116589226 = fieldWeight in 1554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1554)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    "The Internet is often likened to an organic entity and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher, Young Hyun, at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, alien, and yet most beautiful, living creatures [2]. Having looked at the Walrus images I began to wonder, perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based hyperbolic geometry. This is a non-Euclidean space, which has useful distorting properties of making elements at the center of the display much larger than those on the periphery. You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly know as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two and three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, in the order of a million nodes. Providing useful and interactively useable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping of Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially "... we had to face the question of what it means to visualize a large graph. 
It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in right direction tackling these challenges.
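The "focus+context" idea described above can be illustrated with a much simpler 2D fisheye distortion. The sketch below uses a Sarkar/Brown-style formula, not Walrus's actual 3D hyperbolic projection, and the focus point and distortion factor are invented for illustration.

```python
# Minimal focus+context sketch: points near the focus are magnified, the
# periphery is compressed but stays visible. Uses h(r) = (d+1)r / (dr+1)
# on distances normalised to [0, 1]; this is NOT Walrus's hyperbolic math.
import math

def fisheye(point, focus, distortion=4.0):
    """Move a 2D point toward/away from the focus with a fisheye distortion."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        return point
    r_new = (distortion + 1) * r / (distortion * r + 1)   # magnifies small r
    scale = r_new / r
    return (focus[0] + dx * scale, focus[1] + dy * scale)

focus = (0.0, 0.0)
for p in [(0.05, 0.0), (0.2, 0.0), (0.8, 0.0)]:
    print(p, "->", fisheye(p, focus))
# nearby points spread out a lot (enlarged region); the outer band is squeezed
# toward the boundary, so the rest of the graph stays visible as context
```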
  12. cis: Nationalbibliothek will das deutsche Internet kopieren (2008) 0.00
    0.0029709602 = product of:
      0.011883841 = sum of:
        0.011883841 = product of:
          0.023767682 = sum of:
            0.023767682 = weight(_text_:22 in 4609) [ClassicSimilarity], result of:
              0.023767682 = score(doc=4609,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.1354154 = fieldWeight in 4609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4609)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    24.10.2008 14:19:22
  13. Dodge, M.: ¬A map of Yahoo! (2000) 0.00
    0.0027071978 = product of:
      0.010828791 = sum of:
        0.010828791 = weight(_text_:library in 1555) [ClassicSimilarity], result of:
          0.010828791 = score(doc=1555,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.08216808 = fieldWeight in 1555, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
      0.25 = coord(1/4)
    
    Content
     The View From Above: Browsing for a particular piece of information on the Web can often feel like being stuck in an unfamiliar part of town, walking around at street level looking for a particular store. You know the store is around there somewhere, but your viewpoint at ground level is constrained. What you really want is to get above the streets, hovering half a mile or so up in the air, to see the whole neighbourhood. This kind of bird's-eye-view function has been memorably described by David D. Clark, Senior Research Scientist at MIT's Laboratory for Computer Science and the Chairman of the Invisible Worlds Protocol Advisory Board, as the missing "up button" on the browser [3]. ET-Map is a nice example of a prototype for Clark's "up-button" view of an information space. The goal of information maps, like ET-Map, is to provide the browser with a sense of the lie of the information landscape: what is where, the location of clusters and hotspots, what is related to what. Ideally, this 'big-picture' all-in-one visual summary needs to fit on a single standard computer screen. ET-Map is one of my favourite examples, but there are many other interesting information maps being developed by other researchers and companies (see inset at the bottom of this page). How does ET-Map work? Here is a sequence of screenshots of a typical browsing session with ET-Map, which ends with access to Web pages on jazz musician Miles Davis. You can also try out ET-Map for yourself, using a fully working demo on the AI Lab's website [4]. We begin with the top-level map showing forty-odd broad entertainment 'subject regions' represented by regularly shaped tiles. Each tile is a visual summary of a group of Web pages with similar content. These tiles are shaded different colours to differentiate them, while labels identify the subject of the tile and a number in brackets tells you how many individual Web page links it contains. ET-Map uses two important, but common-sense, spatial concepts in its organisation and representation of the Web. Firstly, the size of a 'subject region' is directly related to the number of Web pages in that category. For example, the 'MUSIC' subject area contains over 11,000 pages and so has a much larger area than the neighbouring area of 'LIVE', which only has 4,300-odd pages. This is intuitively meaningful, as the largest tiles are visually more prominent on the map and are likely to be more significant, as they contain the most links. In addition, a second spatial concept, that of neighbourhood proximity, is applied, so 'subject regions' closely related in terms of content are plotted close to each other on the map. For example, 'FILM' and 'YEAR'S OSCARS', at the bottom left, are neighbours in both semantic and spatial space. This makes sense, as many things in the real world are ordered in this way, with things that are alike being spatially close together (e.g. the layout of goods in a store, or books in a library). Importantly, ET-Map is also a multi-layer map, with sub-maps showing greater informational resolution through a finer degree of categorization. So for any subject region that contains more than two hundred Web pages, a second-level map with more detailed categories is generated. This subdivision of information space is repeated down the hierarchy as far as necessary. In the example, the user selected the 'MUSIC' subject region which, not surprisingly, contained many thousands of pages. A second-level map with numerous different music categories is then presented to the user.
 Delving deeper, the user wants to learn more about jazz music, so clicking on the 'JAZZ' tile leads to a third-level map, a fine-grained map of jazz-related Web pages. Finally, selecting the 'MILES DAVIS' subject region leads to a more conventional-looking ranking of pages, from which the user selects one to download.
     Research Prototypes
     • Visual SiteMap - developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
     • CVG Cyberspace geography visualization - developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
     • WEBSOM - maps the thousands of articles posted on Usenet newsgroups; being developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology in Finland.
     • TreeMaps - developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
     Commercial Information Maps
     • NewsMaps - provides interactive information landscapes summarizing daily news stories; developed by Cartia, Inc.
     • Web Squirrel - creates maps known as information farms; developed by Eastgate Systems, Inc.
     • Umap - produces interactive maps of Web searches.
     • Map of the Market - an interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com."
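Category maps of this kind (WEBSOM, named in the list above, is an explicit example) rest on neighbourhood-preserving mappings such as self-organizing maps: similar documents end up in nearby cells of a 2D grid. The following is only a generic sketch of that idea with invented toy document vectors, not ET-Map's actual pipeline or its multi-level maps.

```python
# Minimal self-organizing-map sketch: documents with similar term vectors are
# mapped to neighbouring cells of a small 2D grid. Toy data, invented here.
import numpy as np

rng = np.random.default_rng(0)

# toy "documents" as 3-dimensional term vectors (music, film, sport)
docs = np.array([
    [0.9, 0.1, 0.0], [0.8, 0.2, 0.1],    # music-heavy
    [0.1, 0.9, 0.1], [0.2, 0.8, 0.0],    # film-heavy
    [0.0, 0.1, 0.9], [0.1, 0.0, 0.8],    # sport-heavy
])

grid_w, grid_h = 4, 4
weights = rng.random((grid_w, grid_h, docs.shape[1]))

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)              # decaying learning rate
    radius = 2.0 * (1 - epoch / 200) + 0.5    # decaying neighbourhood radius
    for doc in docs:
        dist = np.linalg.norm(weights - doc, axis=2)
        bx, by = np.unravel_index(dist.argmin(), dist.shape)   # best-matching cell
        for x in range(grid_w):
            for y in range(grid_h):
                d2 = (x - bx) ** 2 + (y - by) ** 2
                influence = np.exp(-d2 / (2 * radius ** 2))
                weights[x, y] += lr * influence * (doc - weights[x, y])

# after training, similar documents land in neighbouring grid cells
for doc in docs:
    dist = np.linalg.norm(weights - doc, axis=2)
    print(doc, "->", np.unravel_index(dist.argmin(), dist.shape))
```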
  14. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.00
    0.0025465374 = product of:
      0.01018615 = sum of:
        0.01018615 = product of:
          0.0203723 = sum of:
            0.0203723 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.0203723 = score(doc=5988,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10. 9.2006 17:22:54
  15. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.00
    0.0025465374 = product of:
      0.01018615 = sum of:
        0.01018615 = product of:
          0.0203723 = sum of:
            0.0203723 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.0203723 = score(doc=3035,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
     As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
  16. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.0025465374 = product of:
      0.01018615 = sum of:
        0.01018615 = product of:
          0.0203723 = sum of:
            0.0203723 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.0203723 = score(doc=405,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  17. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.00
    0.0023928476 = product of:
      0.00957139 = sum of:
        0.00957139 = weight(_text_:library in 2467) [ClassicSimilarity], result of:
          0.00957139 = score(doc=2467,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.07262701 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.25 = coord(1/4)
    
    Source
    http://www.miskatonic.org/library/facet-biblio.html
  18. Laaff, M.: Googles genialer Urahn (2011) 0.00
    0.0021221146 = product of:
      0.008488459 = sum of:
        0.008488459 = product of:
          0.016976917 = sum of:
            0.016976917 = weight(_text_:22 in 4610) [ClassicSimilarity], result of:
              0.016976917 = score(doc=4610,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.09672529 = fieldWeight in 4610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4610)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    24.10.2008 14:19:22

Languages

  • e 497
  • d 158
  • a 9
  • el 2
  • i 2
  • f 1
  • nl 1

Types

  • a 330
  • s 18
  • i 17
  • r 14
  • m 12
  • n 7
  • b 5
  • p 5
  • x 4
