Search (312 results, page 16 of 16)

  • Active filter: type_ss:"el"
  1. Hensinger, P.: Trojanisches Pferd "Digitale Bildung" : Auf dem Weg zur Konditionierungsanstalt in einer Schule ohne Lehrer? (2017) 0.00
    0.0030833227 = product of:
      0.012333291 = sum of:
        0.012333291 = product of:
          0.024666581 = sum of:
            0.024666581 = weight(_text_:22 in 5000) [ClassicSimilarity], result of:
              0.024666581 = score(doc=5000,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15476047 = fieldWeight in 5000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5000)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2019 11:45:19
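
The relevance figures shown with each entry are standard Lucene ClassicSimilarity "explain" trees. A minimal Python sketch that reproduces the tree of entry 1 from its own constants; the tf and idf formulas are the documented ClassicSimilarity ones, and the norms and coord factors are copied verbatim from the output above:

    import math

    # ClassicSimilarity building blocks, reproducing the explain tree of entry 1.
    def tf(freq):
        return math.sqrt(freq)                             # 1.4142135 for freq=2.0

    def idf(doc_freq, max_docs):
        return 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.5018296 for term "22"

    query_norm = 0.045514934                   # copied from the explain output
    field_norm = 0.03125                       # lengthNorm * index-time boost

    idf_22       = idf(3622, 44218)            # -> 3.5018296
    query_weight = idf_22 * query_norm         # -> 0.15938555
    field_weight = tf(2.0) * idf_22 * field_norm   # -> 0.15476047

    raw_score = query_weight * field_weight    # -> 0.024666581
    final     = raw_score * 0.5 * 0.25         # coord(1/2) * coord(1/4)
    print(f"{final:.10f}")                     # -> 0.0030833227 (last digit may
                                               #    differ by float rounding)
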
  2. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0030833227 = product of:
      0.012333291 = sum of:
        0.012333291 = product of:
          0.024666581 = sum of:
            0.024666581 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.024666581 = score(doc=5234,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:33:49
  3. Choudhury, G.S.; DiLauro, T.; Droettboom, M.; Fujinaga, I.; MacMillan, K.: Strike up the score : deriving searchable and playable digital formats from sheet music (2001) 0.00
    0.0029678904 = product of:
      0.011871561 = sum of:
        0.011871561 = product of:
          0.023743123 = sum of:
            0.023743123 = weight(_text_:software in 1220) [ClassicSimilarity], result of:
              0.023743123 = score(doc=1220,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.13149375 = fieldWeight in 1220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1220)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
In the final report to NEH, the Curator of Special Collections at the MSEL stated, "the most useful thing we learned from this project was that you can never overestimate the amount of time it will take to create a quality digital product" (Requardt 1998). The word "resources" might represent a more comprehensive choice than the word "time" in this previous statement. This "sink" of time and resources manifested itself in an increasing allocation of human labor and time to deal with workflow issues related to large-scale digitization. The Levy Collection experience provides ample evidence that there will be mistakes during and after digitization and that unforeseen challenges or difficulties will arise, especially when dealing with rare or fragile materials. The current strategy of allocating additional human labor neither limits costs nor scales well. Consequently, the Digital Knowledge Center (DKC) of the Milton S. Eisenhower Library sought and secured funding for the development of a workflow management system through the National Science Foundation's (NSF) Digital Libraries Initiative, Phase 2 and the Institute for Museum and Library Services (IMLS) National Leadership Grant Program. The Levy family and a technology entrepreneur in Maryland provided additional funding for other aspects of the project. The mission of this second phase of the Levy project ("Levy II") can be summarized as follows: * Reduce costs for large collection ingestion by creating a suite of open-source processes, tools and interfaces for workflow management * Increase access capabilities by providing a suite of research tools * Demonstrate utility of tools and processes with a subset of the online Levy Collection The cornerstones of the workflow management system include: optical music recognition (OMR) software to generate a logical representation of the score -- for sound generation, musical searching, and musicological research -- and an automated name authority control system to disambiguate names (e.g., the authors Mark Twain and Samuel Clemens are the same individual). The research tools focus upon enhanced searching capabilities through the development and application of a fast, disk-based search engine for lyrics and music, and the incorporation of an XML structure for metadata. Though this paper focuses on the OMR component of our work, a companion paper to be published in a future issue of D-Lib will describe more fully the other tools (e.g., the automated name authority control system and the disk-based search engine), the overall workflow management system, and the project management process.
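
The name authority component described in this abstract boils down to mapping variant name forms onto one canonical heading. A minimal sketch of that idea; the data and the resolve helper are illustrative, not the Levy II implementation:

    # Variant name forms mapped onto one canonical authority heading.
    # Hypothetical data, not the Levy II system itself.
    AUTHORITY = {
        "mark twain":        "Twain, Mark, 1835-1910",
        "samuel clemens":    "Twain, Mark, 1835-1910",
        "samuel l. clemens": "Twain, Mark, 1835-1910",
    }

    def resolve(name):
        """Return the canonical heading for a variant form, if one is known."""
        return AUTHORITY.get(name.strip().lower(), name)

    assert resolve("Samuel Clemens") == resolve("Mark Twain")
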
  4. Weigert, M.: Horizobu: Webrecherche statt Websuche (2011) 0.00
    0.0029678904 = product of:
      0.011871561 = sum of:
        0.011871561 = product of:
          0.023743123 = sum of:
            0.023743123 = weight(_text_:software in 4443) [ClassicSimilarity], result of:
              0.023743123 = score(doc=4443,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.13149375 = fieldWeight in 4443, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4443)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    "Das Problem mit der Suchmaschinen-Optimierung Suchmaschinen sind unser Instrument, um mit der Informationsflut im Internet klar zu kommen. Wie ich in meinem Artikel Die kürzeste Anleitung zur Suchmaschinenoptimierung aller Zeiten ausgeführt habe, gibt es dabei leider das Problem, dass der Platzhirsch Google nicht wirklich die besten Suchresultate liefert: Habt ihr schon mal nach einem Hotel, einem Restaurant oder einer anderen Location gesucht - und die ersten vier Ergebnis-Seiten sind voller Location-Aggregatoren? Wenn ich ganz spezifisch nach einem Hotel soundso in der Soundso-Strasse suche, dann finde ich, das relevanteste Ergebnis ist die Webseite dieses Hotels. Das gehört auf Seite 1 an Platz 1. Dort aber finden sich nur die Webseiten, die ganz besonders dolle suchmaschinenoptimiert sind. Wobei Google Webseiten als am suchmaschinenoptimiertesten einstuft, wenn möglichst viele Links darauf zeigen und der Inhalt relevant sein soll. Die Industrie der Suchmaschinen-Optimierer erreicht dies dadurch, dass sie folgende Dinge machen: - sie lassen Programme und Praktikanten im Web rumschwirren, die sich überall mit hirnlosen Kommentaren verewigen (Hauptsache, die sind verlinkt und zeigen auf ihre zu pushende Webseite) - sie erschaffen geistlose Blogs, in denen hirnlose Texte stehen (Hauptsache, die Keyword-Dichte stimmt) - diese Texte lassen sie durch Schüler und Praktikanten oder gleich durch Software schreiben - Dann kommt es anscheinend noch auf Keywords im Titel, in der URL etc. an.
  5. Patalong, F.: Life after Google : I. Besser suchen, wirklich finden (2002) 0.00
    0.0024732419 = product of:
      0.0098929675 = sum of:
        0.0098929675 = product of:
          0.019785935 = sum of:
            0.019785935 = weight(_text_:software in 1165) [ClassicSimilarity], result of:
              0.019785935 = score(doc=1165,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.10957812 = fieldWeight in 1165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1165)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
This helps too: targeted platform switching. A service like Pandia attempts just that: the metasearcher combines good search engines in its queries with the full indexing of high-quality content offerings. Pandia thus deliberately combines the Encyclopedia Britannica, reference works and search engines with Amazon's data holdings. What this can be good for is shown by the practical example of a very factually oriented search: "Retina Implant". This concerns techniques for restoring eyesight (at least partially), via operative interventions and implants, to people blinded by retinal degeneration. Pandia first answers the search with references to numerous university and commercial research institutes. 13 of 15 results are 100 percent relevant: this leads straight into the research. Of the last two, one points to a company that manufactures such implants, the other to a professional congress covering, among other things, this very topic: that is already impressively accurate. And then it really gets going: with one click, Pandia transfers the query to the "news search" pattern, and press and media reports are delivered as results. Their relevance is slightly lower: implants are always the subject, eyes not necessarily, but in most cases they are. Not bad. One more click, and the search in the "Pandia Plus Directory" reduces the number of hits to two: one hit leads to the description of the university "Retinal Implant Project", the other to Intelligent Implants, a company founded by Bonn researchers that specializes in such implants and, incidentally, ranks among the world's leaders. One more click, and Pandia tries to find books on the topic: none exist yet, but with Pandia's help one could surely be researched and written. Nevertheless: none of the services discussed will serve as a universal tool. What one can do, another cannot. The only remedy is to try them out. The search service has to fit the searcher. Conclusion and outlook: As good as Google is, it can be done better. The intelligent combination of the best capabilities of good search tools beats even the top dog among the search services. But that is not really the point. The point is to make searching the web effective, and that still has to be learned. This becomes even simpler and more effective with numerous, often free tools that either provide search and archiving as standalone software (bots), or can be integrated into your browser as an add-on. But more on that in the second part of this little web field guide."
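
The metasearch pattern described here (fan one query out to several heterogeneous sources, then merge the ranked lists) can be sketched briefly. The backend functions below are stubs; Pandia's actual source selection and ranking are not public:

    from itertools import zip_longest

    # Stub backends standing in for an encyclopedia, a web engine, and a
    # book catalogue; each returns its own ranked hit list.
    def search_encyclopedia(q): return [f"encyclopedia:{q}:1", f"encyclopedia:{q}:2"]
    def search_web(q):          return [f"web:{q}:1", f"web:{q}:2", f"web:{q}:3"]
    def search_books(q):        return [f"books:{q}:1"]

    BACKENDS = [search_encyclopedia, search_web, search_books]

    def metasearch(query):
        ranked_lists = [backend(query) for backend in BACKENDS]
        merged, seen = [], set()
        for tier in zip_longest(*ranked_lists):          # round-robin interleave
            for hit in tier:
                if hit is not None and hit not in seen:  # drop padding, dedupe
                    seen.add(hit)
                    merged.append(hit)
        return merged

    print(metasearch("retina implant"))
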
  6. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    0.0024732419 = product of:
      0.0098929675 = sum of:
        0.0098929675 = product of:
          0.019785935 = sum of:
            0.019785935 = weight(_text_:software in 1166) [ClassicSimilarity], result of:
              0.019785935 = score(doc=1166,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.10957812 = fieldWeight in 1166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1166)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof of concept project of the Library of Congress, OCLC and the German National Library -- the Virtual International Authority File (VIAF) -- will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File" a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist and are excluded from access to these information resources.
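
The maintenance step described above is plain OAI-PMH harvesting. A minimal incremental-harvest sketch; the repository URL is a placeholder, while the verb, argument names, and deleted-record flag are defined by the OAI-PMH 2.0 protocol itself (resumption-token paging is omitted for brevity):

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"   # OAI-PMH XML namespace
    BASE_URL = "https://example.org/oai"             # placeholder endpoint

    def harvest_since(base_url, from_date):
        """Yield (identifier, deleted?) for records changed since from_date."""
        query = urllib.parse.urlencode({
            "verb": "ListRecords",        # OAI-PMH verb for bulk harvesting
            "metadataPrefix": "oai_dc",   # simple Dublin Core metadata
            "from": from_date,            # selective (incremental) harvesting
        })
        with urllib.request.urlopen(base_url + "?" + query) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI + "record"):
            header = record.find(OAI + "header")
            deleted = header.get("status") == "deleted"   # deletions are flagged
            yield header.findtext(OAI + "identifier"), deleted

    for oai_id, deleted in harvest_since(BASE_URL, "2003-01-01"):
        print("DELETE" if deleted else "UPSERT", oai_id)
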
  7. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.00
    0.0024732419 = product of:
      0.0098929675 = sum of:
        0.0098929675 = product of:
          0.019785935 = sum of:
            0.019785935 = weight(_text_:software in 423) [ClassicSimilarity], result of:
              0.019785935 = score(doc=423,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.10957812 = fieldWeight in 423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=423)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
In 2021, sharing content is easier than ever. Our lingua franca is visual: memes, infographics, TikToks. Our references cross borders and platforms, shared and remixed a hundred different ways in minutes. Digital culture is collective by default and connects us all around the world. But as the internet reaches its "dirty 30s," what happens when pieces of digital culture that have been saved, screenshotted, and reposted for years need to retire? Let's dig into the story of one of these artifacts: the Lenna image. The Lenna image may be relatively unknown in pop culture today, but in the engineering world, it remains an icon. I first encountered the image in an undergrad class, then in grad school, and then all over the sites and software I use every day as a tech worker, like GitHub, OpenCV, Stack Overflow, and Quora. To understand where the image is today, you have to understand how it got here. So, I decided to scrape Google Scholar, search, and reverse-image-search results to track down thousands of instances of the image across the internet (see more in the methods section).
  8. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.00
    0.0023124919 = product of:
      0.0092499675 = sum of:
        0.0092499675 = product of:
          0.018499935 = sum of:
            0.018499935 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.018499935 = score(doc=5988,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10. 9.2006 17:22:54
  9. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.0023124919 = product of:
      0.0092499675 = sum of:
        0.0092499675 = product of:
          0.018499935 = sum of:
            0.018499935 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.018499935 = score(doc=1184,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 14:08:22
  10. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.00
    0.0023124919 = product of:
      0.0092499675 = sum of:
        0.0092499675 = product of:
          0.018499935 = sum of:
            0.018499935 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.018499935 = score(doc=3035,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter-but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
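
Restated as a toy linear model, using only the effect sizes quoted above; the additive functional form and the zero baseline for plots are assumptions for illustration, not the study's actual regression:

    AVG_DIAGRAMS_PER_PAGE = 1 / 3     # "about one diagram for every three pages"
    BASELINE_CITATIONS = 1.67         # citations of the average paper

    def expected_citations(diagrams_per_page, plots_per_page=0.0):
        # +2 citations per extra diagram/page, +1 per extra plot/page
        return (BASELINE_CITATIONS
                + 2.0 * (diagrams_per_page - AVG_DIAGRAMS_PER_PAGE)
                + 1.0 * plots_per_page)

    print(expected_citations(1 / 3))  # average paper              -> 1.67
    print(expected_citations(4 / 3))  # one extra diagram per page -> 3.67
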
  11. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.0023124919 = product of:
      0.0092499675 = sum of:
        0.0092499675 = product of:
          0.018499935 = sum of:
            0.018499935 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.018499935 = score(doc=405,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  12. Laaff, M.: Googles genialer Urahn (2011) 0.00
    0.0019270767 = product of:
      0.007708307 = sum of:
        0.007708307 = product of:
          0.015416614 = sum of:
            0.015416614 = weight(_text_:22 in 4610) [ClassicSimilarity], result of:
              0.015416614 = score(doc=4610,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.09672529 = fieldWeight in 4610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4610)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    24.10.2008 14:19:22

Languages

  • d 154
  • e 150
  • el 2
  • a 1
  • i 1
  • nl 1

Types

  • a 131
  • i 17
  • m 8
  • r 6
  • x 5
  • b 4
  • s 4
  • n 3
  • p 1