Search (17 results, page 1 of 1)

  • theme_ss:"Information Gateway"
  • type_ss:"a"
  • type_ss:"el"
  1. Keßler, K.; Krüger, A.T.; Ghammad, Y.; Wulle, S.; Balke, W.-T.; Stump, K.: PubPharm - Der Fachinformationsdienst Pharmazie (2016) 0.04
    Score (ClassicSimilarity): 0.035762 = 2/3 × (im: 0.045046 [freq 8, idf 2.8268, fieldNorm 0.0391] + 1/3 × retrieval: 0.025791 [freq 2, idf 3.0249, fieldNorm 0.0391]), queryNorm 0.051023
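    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sanity check, the contribution of the term "im" can be recomputed from the quoted factors. Below is a minimal sketch of the formula (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), partial score = queryWeight × fieldWeight) - not Lucene's actual code:

```python
import math

def classic_tfidf(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution, following Lucene's ClassicSimilarity."""
    tf = math.sqrt(freq)                              # 2.828427 for freq=8
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 2.8267863 for docFreq=7115
    query_weight = idf * query_norm                   # 0.1442303
    field_weight = tf * idf * field_norm              # 0.3123187
    return query_weight * field_weight

# Term "im" in doc 3133: freq=8, docFreq=7115, maxDocs=44218,
# queryNorm=0.051022716, fieldNorm=0.0390625
w = classic_tfidf(8.0, 7115, 44218, 0.051022716, 0.0390625)
print(w)  # ≈ 0.0450458, matching the 0.045045823 in the explain tree
```

    The remaining factors, coord(2/3) for matching two of three query clauses and coord(1/3) inside the nested clause, scale the summed term weights into the final document score.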
    
    Abstract
    The Specialised Information Service (Fachinformationsdienst, FID) Pharmazie pursues the goal of sustainably improving the information infrastructure and literature supply for academic pharmaceutical research. The project has been funded by the Deutsche Forschungsgemeinschaft since 1 January 2015. A special feature is the cooperation between the Universitätsbibliothek Braunschweig and the Institut für Informationssysteme (IfIS) of the TU Braunschweig, through which current computer-science research flows into the implementation of innovative FID services. At the centre of the project is the user-centred construction of an extensible and personalisable information infrastructure. The discovery system "PubPharm" developed by the FID for pharmacy-specific searching is based on the open source software VuFind, as a further development of the beluga system of the SUB Hamburg. Its data basis includes, among other sources, the Medline data, enriched with authority data that allow, for instance, searching by chemical structures. In parallel, the Institut für Informationssysteme is investigating innovative search options based on narrative intelligence, which are to be integrated into the retrieval of the discovery system over time. Within the framework of so-called FID licences, the FID Pharmazie offers researchers full-text access to pharmaceutical journals. The licences include the right to long-term preservation; for its technical implementation, the FID cooperates with the TIB Hannover. The FID Pharmazie ties its activities closely to the pharmaceutical research community: among other things, an advisory board accompanies the developments, and as part of its public-relations work users are kept fully informed about the services, e.g. in webcasts and in the PubPharm blog.
  2. axk: Fortschritt im Schneckentempo : die Deutsche Digitale Bibliothek (2012) 0.03
    Score: 0.033520 = 2/3 × (im: 0.041713 [freq 14, idf 2.8268, fieldNorm 0.0273] + 1/3 × online: 0.025700 [freq 4, idf 3.0349, fieldNorm 0.0273])
    
    Content
    "The holdings of some 30,000 German cultural institutions are one day to be found as online copies in the Deutsche Digitale Bibliothek (DDB). But the road there still seems long: so far the online portal runs only in internal test operation, and the public purse can hardly shoulder the cost of digitising all of the works on its own, as Minister of State for Culture Bernd Neumann (CDU) has stressed several times. To speed up the transformation of public-domain cultural assets into bits and bytes, a call for tenders published in April 2011 was intended to win a large company as digitisation partner. The concession holder would have had to accept the framework agreement and negotiate contracts with the individual cultural institutions on that basis. In the course of digitisation, the prospective commercial partner would have had to leave a digital copy with the respective institution and would have received a non-exclusive right to exploit its own copy - all at its "own economic risk". According to informed circles, this call was tailored to the search-engine giant Google from the outset. Google has been cooperating with the Bayerische Staatsbibliothek since 2007 and digitises copyright-free books from its holdings at its own expense. The assumption was apparently that Google would, for reasons of image, also take an interest in the unattractively designed licence for Germany-wide digitisation - which was not the case. Instead, the call had to be withdrawn without result for lack of suitable bidders, as was announced in June 2012. According to the press office of the Minister of State for Culture there will be no new calls for exclusive partnerships, but individual cooperations with various companies; negotiations with unnamed interested parties are said to be under way already.
    The planned financing of the DDB comes in for repeated criticism: since 2011, 2.6 million euros per year have been earmarked for operating the platform, but no direct federal funds are available for digitising content. According to Dr. Ellen Euler, managing director of the Deutsche Digitale Bibliothek, increases are at least under discussion. One is, however, likely still far from the dimensions of the 750 million euros that the then French president Nicolas Sarkozy pledged for digitisation in his country. At present, the digitisation of content is driven above all by the federal states and the institutions under their control. The state of Berlin, for example, plans according to a parliamentary inquiry (PDF file) to provide 900,000 euros in each of 2012 and 2013 for its own "Kompetenzzentrum Digitalisierung", which is to coordinate the work of libraries, archives and museums. Overall, a large part of the states' efforts is directed at holdings threatened by decay, as Dr. Euler revealed. An overarching strategy on the part of the federal government, as was also demanded by the opposition, does not currently exist.
    The motion for a "digitisation offensive" (PDF file) passed by the Bundestag at the beginning of the year leaves what gets digitised "above all to supply and demand". The Kompetenznetzwerk Deutsche Digitale Bibliothek, in which 13 large institutions are represented, together with a board of trustees comprising representatives of the federal government, the states and the municipalities, is to balance the interests involved. At least the DDB plans, according to Euler, a central register with which the various institutions could reconcile their projects in order to avoid unnecessary duplicate digitisation. Still open, too, is when the beta version of the DDB will finally be made publicly accessible: originally announced for the end of 2011, the website of the Minister of State for Culture at the time of writing still names the now-expired second quarter of 2012 as the launch date. Dr. Euler of the DDB spoke of autumn 2012; in September a concrete date may be announced."
    Source
    http://www.heise.de/newsticker/meldung/Fortschritt-im-Schneckentempo-die-Deutsche-Digitale-Bibliothek-1640670.html
  3. dpa: Europeana hat Startschwierigkeiten : Europas Online-Bibliothek (2008) 0.03
    Score: 0.029394 = 2/3 × (im: 0.031852 [freq 4, idf 2.8268, fieldNorm 0.0391] + 1/3 × online: 0.036715 [freq 4, idf 3.0349, fieldNorm 0.0391])
    
    Content
    "Brussels. The European Union's first joint online library was switched off again after only a few hours because of technical problems. An unexpected rush of visitors brought the portal www.europeana.eu down, a spokesman for the EU Commission said on Friday in Brussels. The site could not cope with the 20 million clicks per hour. "We were equipped for five million clicks," the spokesman said, explaining the breakdown. The site is to be available again by mid-December at the latest. Before then, additional computing capacity has to be rented at the data centre of the University of Amsterdam. Already on Thursday at noon the number of servers had been doubled from three to six, after the site had temporarily collapsed in the very first hours after going live. "The costs can still be covered from Europeana's budget," the spokesman said. Europeana makes documents, books, paintings, films and photographs from European collections freely accessible on the Internet. So far almost three million objects have been added; by 2010 there are to be ten million. More than 1,000 archives, museums and libraries have already supplied digitised material. The Commission provides two million euros per year for the upkeep of the platform. The member states bear the costs of digitisation."
  4. Hulek, K.; Teschke, O.: ¬Die Transformation von zbMATH zu einer offenen Plattform für die Mathematik (2020) 0.03
    Score: 0.029098 = 2/3 × (im: 0.031532 [freq 2, idf 2.8268, fieldNorm 0.0547] + 1/3 × online: 0.036346 [freq 2, idf 3.0349, fieldNorm 0.0547])
    
    Abstract
    At the beginning of 2021 the hitherto subscription-based database zbMATH will be converted into an open access platform, making the service available free of charge to anyone interested worldwide. This change of business model will in future allow most of zbMATH's data to be used freely, for research purposes and for linking with other non-commercial services, under the terms of a CC-BY-SA licence. In the following we describe the challenges and the vision arising from the transformation of zbMATH into an open platform.
    Issue
    Online: 19.08.2020.
  5. dpa: Europeana gestartet : Europa eröffnet virtuelle Bibliothek (2008) 0.02
    Score: 0.020784 = 2/3 × (im: 0.022523 [freq 2, idf 2.8268, fieldNorm 0.0391] + 1/3 × online: 0.025961 [freq 2, idf 3.0349, fieldNorm 0.0391])
    
    Content
    "For the first time, the European Union has a joint digital library. Since Thursday, almost three million documents, books, paintings, films and photographs from European collections have been available on the Internet portal www.europeana.eu, the EU Commission announced. In the very first hours after the launch, more than ten million Internet users visited the site, which temporarily collapsed; the number of servers was thereupon doubled from three to six. "In our wildest dreams we could not have imagined such a rush on Europeana," said EU Media Commissioner Viviane Reding in Brussels. By 2010, ten million objects in all EU languages are to be retrievable on the portal. More than 1,000 archives, museums and libraries have already supplied digitised material. The Commission provides two million euros per year for the upkeep of the platform; the member states bear the costs of digitisation. So far only one per cent of all European cultural assets is available electronically. To reach the target of ten million works by 2010, the member states will, according to Commission estimates, have to put up a further 350 million euros between them. How much the member states have spent so far remained open. The EU intends to support research and development of technologies in this field with 119 million euros over the coming two years. The EU culture ministers spoke out in favour of further expanding the cultural offering on the Internet while at the same time stepping up their fight against online piracy. "We want to create legal, credible offerings for consumers," said the French minister Christine Albanel, who currently chairs the ministerial round."
  6. Poley, C.: LIVIVO: Neue Herausforderungen an das ZB MED-Suchportal für Lebenswissenschaften (2016) 0.01
    Score: 0.012741 = 1/3 × im: 0.038223 [freq 4, idf 2.8268, fieldNorm 0.0469]
    
    Abstract
    The Deutsche Zentralbibliothek für Medizin (ZB MED) has a long tradition as a provider of search portals in the life sciences. With LIVIVO, a new product has been available since 2015 that for the first time covers ZB MED's entire range of subjects: medicine, health, and the nutritional, environmental and agricultural sciences. The initial phase of LIVIVO focused on building a modern subject portal with a new search engine that unites the functionality of the predecessor portals; a new web interface was developed that stands out for its high usability and responsive web design. The great potential for LIVIVO's further development lies in providing search services on top of its more than 55 million metadata records, and current work at ZB MED is concerned with offering programmatic interfaces for such search services. At the same time, with the construction of the ZB MED Knowledge Environment, an indispensable data basis for research work at ZB MED is being created. Taking LIVIVO as its example, this paper addresses the current challenges facing a scientific portal, sketches possible solutions and, proceeding from these, outlines the lines of development ahead.
  7. Fang, L.: ¬A developing search service : heterogeneous resources integration and retrieval system (2004) 0.01
    Score: 0.005731 = 1/3 × 1/3 × retrieval: 0.051582 [freq 8, idf 3.0249, fieldNorm 0.0391]
    
    Abstract
    This article describes two approaches for searching heterogeneous resources, which are explained as they are used in two corresponding existing systems - RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). On analyzing the existing systems, a possible framework - the MUSP (Multimetadata-Based Union Search Platform) - is presented. Libraries now face a dilemma. On one hand, libraries subscribe to many types of database retrieval systems that are produced by various providers. The libraries build their data and information systems independently. This results in highly heterogeneous and distributed systems at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
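    The mediation described here - one union interface over sources with different structures - can be pictured as a per-source field mapping into a shared schema, with queries run against the unified view. A toy sketch under invented source and field names, not the actual RIRS/HRUSP/MUSP design:

```python
# Hypothetical per-source mappings from native field names to a shared schema.
FIELD_MAPS = {
    "catalog_a": {"ti": "title", "au": "creator"},
    "catalog_b": {"dc:title": "title", "dc:creator": "creator"},
}

def normalize(source, record):
    """Rewrite a native record into the shared metadata schema."""
    return {FIELD_MAPS[source].get(k, k): v for k, v in record.items()}

def union_search(term, sources):
    """Search every source through the unified view of its records."""
    hits = []
    for source, records in sources.items():
        for rec in records:
            unified = normalize(source, rec)
            if term.lower() in unified.get("title", "").lower():
                hits.append(unified)
    return hits

sources = {
    "catalog_a": [{"ti": "A Digital Library", "au": "Smith"}],
    "catalog_b": [{"dc:title": "Library Systems", "dc:creator": "Jones"}],
}
hits = union_search("library", sources)  # matches one record from each source
```

    The user sees one interface and one schema; only the mapping layer knows each provider's native structure.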
  8. Veen, T. van; Oldroyd, B.: Search and retrieval in The European Library : a new approach (2004) 0.00
    Score: 0.004963 = 1/3 × 1/3 × retrieval: 0.044671 [freq 6, idf 3.0249, fieldNorm 0.0391]
    
    Abstract
    The objective of the European Library (TEL) project [TEL] was to set up a co-operative framework and specify a system for integrated access to the major collections of the European national libraries. This has been achieved by successfully applying a new approach for search and retrieval via URLs (SRU) [ZiNG] combined with a new metadata paradigm. One aim of the TEL approach is to have a low barrier of entry into TEL, and this has driven our choice for the technical solution described here. The solution comprises portal and client functionality running completely in the browser, resulting in a low implementation barrier and maximum scalability, as well as giving users control over the search interface and what collections to search. In this article we will describe, step by step, the development of both the search and retrieval architecture and the metadata infrastructure in the European Library project. We will show that SRU is a good alternative to the Z39.50 protocol and can be implemented without losing investments in current Z39.50 implementations. The metadata model being used by TEL is a Dublin Core Application Profile, and we have taken into account that functional requirements will change over time and therefore the metadata model will need to be able to evolve in a controlled way. We make this possible by means of a central metadata registry containing all characteristics of the metadata in TEL. Finally, we provide two scenarios to show how the TEL concept can be developed and extended, with applications capable of increasing their functionality by "learning" new metadata or protocol options.
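    An SRU request of the kind the article builds on is an ordinary HTTP GET whose parameters carry a CQL query, which is why it can be issued straight from a browser. A minimal sketch; the endpoint URL below is a placeholder, not The European Library's actual service:

```python
from urllib.parse import urlencode

def sru_search_url(base_url, cql_query, max_records=10):
    """Build an SRU searchRetrieve request as a plain GET URL."""
    params = {
        "version": "1.1",               # SRU protocol version
        "operation": "searchRetrieve",
        "query": cql_query,             # CQL query string
        "maximumRecords": max_records,
        "recordSchema": "dc",           # ask for Dublin Core records
    }
    return base_url + "?" + urlencode(params)

url = sru_search_url("http://example.org/sru", 'dc.title = "digital library"')
print(url)
```

    Because the whole exchange is URL-in, XML-out, the portal logic can run entirely in the browser, which is the low entry barrier the authors aim for.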
  9. Müller, B.; Poley, C.; Pössel, J.; Hagelstein, A.; Gübitz, T.: LIVIVO - the vertical search engine for life sciences (2017) 0.00
    Score: 0.004963 = 1/3 × 1/3 × retrieval: 0.044671 [freq 6, idf 3.0249, fieldNorm 0.0391]
    
    Abstract
    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life science like the One Health paradigm require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, genomic, epigenomic, transcriptional and proteomic high throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 Million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept based relevancy rankings in LIVIVO.
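    The entity-based filtering mentioned for faceted search can be pictured as post-filtering a hit list by normalized entity annotations. A purely illustrative sketch - field names and values are invented, not ZB MED's implementation:

```python
def faceted_filter(hits, **facets):
    """Keep hits whose annotations contain every requested facet value."""
    out = []
    for hit in hits:
        if all(value in hit.get(field, ()) for field, value in facets.items()):
            out.append(hit)
    return out

# Toy hit list with hypothetical life-science entity annotations.
hits = [
    {"id": 1, "species": {"human"}, "source": {"MEDLINE"}},
    {"id": 2, "species": {"mouse"}, "source": {"MEDLINE"}},
    {"id": 3, "species": {"human"}, "source": {"AGRICOLA"}},
]
result = faceted_filter(hits, species="human", source="MEDLINE")  # keeps hit 1
```

    In a production engine the facet counts are computed inside the index rather than by scanning hits, but the selection semantics are the same.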
  10. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: ¬The MusArt music-retrieval system (2002) 0.00
    Score: 0.004585 = 1/3 × 1/3 × retrieval: 0.041265 [freq 8, idf 3.0249, fieldNorm 0.0313]
    
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
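    The dynamic time-warping search mentioned above aligns a sung query with a stored theme even when the singer lingers on or rushes notes. A textbook O(n·m) DTW sketch over pitch sequences - illustrative only, not MusArt's actual engine:

```python
import math

def dtw_distance(query, theme):
    """Classic dynamic-time-warping distance between two pitch sequences."""
    n, m = len(query), len(theme)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - theme[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch the theme
                                 cost[i][j - 1],      # stretch the query
                                 cost[i - 1][j - 1])  # match note to note
    return cost[n][m]

# A query (MIDI pitches) that lingers on one note still aligns perfectly:
print(dtw_distance([60, 62, 62, 64], [60, 62, 64]))  # → 0.0
```

    Ranking themes by this distance gives the high-recall behaviour described for the time-warping engine; the stochastic engine trades some of that tolerance for speed.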
  11. Peters, C.; Picchi, E.: Across languages, across cultures : issues in multilinguality and digital libraries (1997) 0.00
    Score: 0.004585 = 1/3 × 1/3 × retrieval: 0.041265 [freq 2, idf 3.0249, fieldNorm 0.0625]
    
    Abstract
    With the recent rapid diffusion over the international computer networks of world-wide distributed document bases, the question of multilingual access and multilingual information retrieval is becoming increasingly relevant. We briefly discuss just some of the issues that must be addressed in order to implement a multilingual interface for a Digital Library system and describe our own approach to this problem.
  12. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.00
    Score: 0.004079 = 1/3 × 1/3 × online: 0.036715 [freq 4, idf 3.0349, fieldNorm 0.0391]
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker- and gatherer-network, which harvests information from the local web-servers of professional physics institutions worldwide (mostly in Europe and USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. To continuously involve authors, research groups, and national societies is considered crucial for a future stable service.
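    The broker- and gatherer-network can be pictured as: collect document metadata from each institution's web server, then merge everything into one searchable index. A toy sketch with in-memory stand-ins for the remote servers - the host names are invented, and the real PhysDoc uses the Harvest software rather than anything like this:

```python
def gather(servers):
    """Merge per-institution document lists into one inverted index."""
    index = {}
    for site, docs in servers.items():
        for doc in docs:
            for word in doc["title"].lower().split():
                # Remember which site holds a document containing this word.
                index.setdefault(word, []).append((site, doc["title"]))
    return index

# Stand-ins for the local web servers the gatherers would visit.
servers = {
    "physics.example-uni.edu": [{"title": "Quark Gluon Plasma Report"}],
    "theory.example-inst.org": [{"title": "Plasma Instabilities Preprint"}],
}
index = gather(servers)
print(sorted(site for site, _ in index["plasma"]))  # both sites hold a match
```

    The key property matches the text: content and rights stay on the authors' servers, while only harvested metadata is centralized for search.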
  13. Oard, D.W.: Serving users in many languages : cross-language information retrieval for digital libraries (1997) 0.00
    
    Abstract
    We are rapidly constructing an extensive network infrastructure for moving information across national boundaries, but much remains to be done before linguistic barriers can be surmounted as effectively as geographic ones. Users seeking information from a digital library could benefit from the ability to query large collections once using a single language, even when more than one language is present in the collection. If the information they locate is not available in a language that they can read, some form of translation will be needed. At present, multilingual thesauri such as EUROVOC help to address this challenge by facilitating controlled vocabulary search using terms from several languages, and services such as INSPEC produce English abstracts for documents in other languages. On the other hand, support for free text searching across languages is not yet widely deployed, and fully automatic machine translation is presently neither sufficiently fast nor sufficiently accurate to adequately support interactive cross-language information seeking. An active and rapidly growing research community has coalesced around these and other related issues, applying techniques drawn from several fields - notably information retrieval and natural language processing - to provide access to large multilingual collections.
  14. Lossau, N.: Search engine technology and digital libraries : libraries need to discover the academic internet (2004) 0.00
    
    Abstract
    With the development of the World Wide Web, the "information search" has grown into a significant business sector of a global, competitive and commercial market. Powerful players have entered this market, such as commercial internet search engines, information portals, multinational publishers and online content integrators. Will Google, Yahoo or Microsoft be the only portals to global knowledge in 2010? If libraries do not want to become marginalized in a key area of their traditional services, they need to acknowledge the challenges that come with the globalisation of scholarly information and the existence and further growth of the academic internet.
  15. Summann, F.; Lossau, N.: Search engine technology and digital libraries : moving from theory to practice (2004) 0.00
    
    Abstract
    This article describes the journey from the conception of and vision for a modern search-engine-based search environment to its technological realisation. In doing so, it takes up the thread of an earlier article on this subject, this time from a technical viewpoint. As well as presenting the conceptual considerations of the initial stages, this article will principally elucidate the technological aspects of this journey. The starting point for the deliberations about the development of an academic search engine was the experience we gained through the generally successful project "Digital Library NRW", in which from 1998 to 2000 (with Bielefeld University Library in overall charge) we designed a system model for an Internet-based library portal with an improved academic search environment at its core. At the heart of this system was a metasearch with an availability function, to which we added a user interface integrating all relevant source material for study and research. The deficiencies of this approach were felt soon after the system was launched in June 2001. There were problems with the stability and performance of the database retrieval system, with the integration of full-text documents and Internet pages, and with acceptance by users, who increasingly perform searches themselves using search engines rather than asking the library for help. Since a long list of problems is also encountered when using commercial search engines for academic purposes (in particular the retrieval of academic information and long-term availability), the idea was born of a search engine configured specifically for academic use. We also hoped that with a single access point founded on improved search engine technology, we could access the heterogeneous academic resources of subject-based bibliographic databases, catalogues, electronic newspapers, document servers and academic web pages.
  16. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.00
    
    Abstract
    As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly evolving, information problem (including changes or shifts in a variety of variables) is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time.
    The specific aim of the current project is twofold:
    * To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and
    * To characterize related changes over time in the type and use of information resources and search strategies, particularly in relation to the given capabilities of IR systems and IR search engines, and to examine changes in users' relevance judgments and criteria and characterize their differences.
    The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post-search, and searcher post-search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped, and search transaction logs are recorded. Quantitative analysis includes statistical analysis of Likert-scale data from the questionnaires and log-linear analysis of sequential data. Qualitative methods include content analysis, structuring taxonomies, and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
  17. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.00
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. 
    Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in the shared use of historical and cultural data. The Academia Sinica (Taiwan), the British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by theme, such as trade routes, cities, religion, and sacred sites.