Search (301 results, page 1 of 16)

  • Active filter: type_ss:"a"
  • Active filter: type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.24
    0.24236046 = product of:
      0.6059011 = sum of:
        0.06059011 = product of:
          0.18177032 = sum of:
            0.18177032 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.18177032 = score(doc=230,freq=2.0), product of:
                0.24256827 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028611459 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.18177032 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.18177032 = score(doc=230,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.18177032 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.18177032 = score(doc=230,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.18177032 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.18177032 = score(doc=230,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.4 = coord(4/10)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
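    The explanation tree above is Lucene's ClassicSimilarity breakdown: each term clause contributes queryWeight x fieldWeight, with tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf x queryNorm, and fieldWeight = tf x idf x fieldNorm; clause scores are then summed and scaled by the coord factors. A minimal sketch recomputing hit 1 from the numbers shown above (Python; the helper name is ours, not Lucene's):

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # One term clause of a ClassicSimilarity explanation.
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
          tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
          query_weight = idf * query_norm                  # 0.24256827
          field_weight = tf * idf * field_norm             # 0.7493574
          return query_weight * field_weight               # 0.18177032

      term = classic_term_score(freq=2.0, doc_freq=24, max_docs=44218,
                                query_norm=0.028611459, field_norm=0.0625)
      # One "3a" clause scaled by coord(1/3), three "2f" clauses, then coord(4/10):
      total = (term / 3 + 3 * term) * 0.4
      print(term, total)  # ~0.18177032 and ~0.24236046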
  2. Matylonek, J.C.; Ottow, C.; Reese, T.: Organizing ready reference and administrative information with the reference desk manager (2001) 0.02
    0.022577291 = product of:
      0.11288646 = sum of:
        0.03498863 = weight(_text_:web in 1156) [ClassicSimilarity], result of:
          0.03498863 = score(doc=1156,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.37471575 = fieldWeight in 1156, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1156)
        0.07789783 = weight(_text_:log in 1156) [ClassicSimilarity], result of:
          0.07789783 = score(doc=1156,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 1156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1156)
      0.2 = coord(2/10)
    
    Abstract
    Non-academic questions regarding special services, phone numbers, web-sites, library policies, current procedures, technical notices, and other pertinent local institutional information are often asked at the academic library reference desk. These frequent and urgent information requests require tools and resources to answer efficiently. Although ready reference collections at the desk provide a tool for academic information, specialized local information resources are more difficult to create and maintain. As reference desk responsibilities become increasingly complex and communication becomes more problematic, a web database to collect and manage this non-academic, local information can be very useful. At Oregon State University, librarians in the Reference Services Management group created a custom-designed web-log bulletin board to deal with this non-academic, local information. The resulting database provides reference librarians with a one-stop location for the information and makes it easier for them to update the information, via email, as conditions, procedures, and information needs change in their busy, highly computerized information commons.
  3. Option für Metager als Standardsuchmaschine, Suchmaschine nach dem Peer-to-Peer-Prinzip (2021) 0.02
    0.0198628 = product of:
      0.099314004 = sum of:
        0.033407938 = weight(_text_:kommunikation in 431) [ClassicSimilarity], result of:
          0.033407938 = score(doc=431,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.22716287 = fieldWeight in 431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.03125 = fieldNorm(doc=431)
        0.06590606 = weight(_text_:schutz in 431) [ClassicSimilarity], result of:
          0.06590606 = score(doc=431,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.31906208 = fieldWeight in 431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.03125 = fieldNorm(doc=431)
      0.2 = coord(2/10)
    
    Content
    Soon it will also be possible to choose MetaGer as the default search engine on the Volla Phone. The Volla Phone is a product of "Hallo Welt Systeme UG" in Remscheid. The smartphone's developers take the approach of claiming as little of the user's attention as possible: technology should not distract or push itself into the foreground, but remain a mere tool in the background. Features such as detailed privacy settings, a log-free VPN, and open-source apps from an alternative app store also make it possible to protect one's privacy - entirely without Google services. Through the partnership with MetaGer, Volla Phone users can now protect their privacy in the area of search as well. More at: https://suma-ev.de/mit-metager-auf-dem-volla-phone-suchen/
    YaCy: a search engine based on the peer-to-peer principle. YaCy is a decentralized, free search engine. Its distinctive feature: the search engine does not run on the central servers of a single operator, but works on the peer-to-peer (P2P) principle. This rests on YaCy users locally indexing, on their own computers, the web pages they visit. Each user thereby "crawls" a small index of their own, which they can share with other YaCy peers through communication. The software ensures that the small decentralized crawlers of individual users ultimately produce a global combined index. The more users take part in this decentralized search, the larger the shared index to which each individual user then has access. YaCy recently joined the set of search engines we query; we are thus also part of the search engine's index.
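    The P2P mechanism described above - local crawls whose union forms the global index - can be sketched in a few lines. A toy illustration in Python (peer names, terms and URLs are invented; YaCy's actual index exchange is far more involved):

      from collections import defaultdict

      # Each peer indexes the pages it visited: term -> set of URLs.
      peer_indexes = {
          "peer_a": {"suchmaschine": {"url1"}, "privatsphaere": {"url2"}},
          "peer_b": {"suchmaschine": {"url3"}},
      }

      # Sharing the small local indexes yields the global combined index.
      global_index = defaultdict(set)
      for local_index in peer_indexes.values():
          for term, urls in local_index.items():
              global_index[term] |= urls

      print(sorted(global_index["suchmaschine"]))  # ['url1', 'url3']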
  4. Molor-Erdene, B.: Schutz der Privatsphäre oder der Gesundheit? (2020) 0.02
    0.018641049 = product of:
      0.18641049 = sum of:
        0.18641049 = weight(_text_:schutz in 5821) [ClassicSimilarity], result of:
          0.18641049 = score(doc=5821,freq=4.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.9024438 = fieldWeight in 5821, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0625 = fieldNorm(doc=5821)
      0.1 = coord(1/10)
    
    Source
    https://www.heise.de/tp/features/Schutz-der-Privatsphaere-oder-der-Gesundheit-4695908.html?view=print
  5. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.02
    0.01732253 = product of:
      0.08661264 = sum of:
        0.07618159 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.07618159 = score(doc=54,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 54) [ClassicSimilarity], result of:
              0.031293165 = score(doc=54,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 54, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper presents the Semantic Web, web content authoring, web technology, the goals of the Semantic Web, and the need for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies, and OWL. How the Semantic Web can draw on library functions to provide valuable information services and make the best use of library collections is also discussed.
    Date
    10.12.2020 9:29:12
    Theme
    Semantic Web
  6. Räwel, J.: Automatisierte Kommunikation (2023) 0.01
    0.01181149 = product of:
      0.1181149 = sum of:
        0.1181149 = weight(_text_:kommunikation in 909) [ClassicSimilarity], result of:
          0.1181149 = score(doc=909,freq=16.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.8031421 = fieldWeight in 909, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=909)
      0.1 = coord(1/10)
    
    Content
    In the social sciences there are two fundamentally different views of what communication is. Following everyday understanding, and therefore dominant in the social sciences as well, "action-theoretical" conceptions of communication assume that it is instrumental in character: it is human beings, in their physical-psychological compactness, who exchange information by means of communication, whether in spoken or written form. On this view, those communicating are understood alternately as senders and receivers of information, and communication serves the more or less successful transfer of information from person to person. Paradigmatically distinct from this are "systems-theoretical" conceptions of communication, as proposed above all by the sociologist Niklas Luhmann, who died in 1998. According to this paradigm, communication is characterized by a "life of its own": it exhibits a recursive dynamic that limits the ability of those communicating to steer and influence it. On this conception, individual consciousness - each with its own mental dynamics - is located in the environment of communication systems and can merely irritate them by means of language, but cannot determine or control them. This is so if only because communication systems, say a conversation as an "interaction system", involve at least two conscious systems, each with its own distinct mental dynamics.
    Source
    https://www.telepolis.de/features/Automatisierte-Kommunikation-7520683.html?seite=all
  7. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.01
    0.0114609385 = product of:
      0.05730469 = sum of:
        0.0494814 = weight(_text_:web in 4640) [ClassicSimilarity], result of:
          0.0494814 = score(doc=4640,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5299281 = fieldWeight in 4640, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4640,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical data bases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
    Date
    29. 7.2011 14:44:56
    Source
    Proceedings of the First European Semantic Web Symposium (ESWS2004), Eds.: C. Bussler, J. Davies, D. Fensel and R. Studer. 2004. S.299-311
    Theme
    Semantic Web
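    A minimal sketch of the kind of conversion the paper describes - thesaurus records rendered in a Semantic Web language - using Python's rdflib with SKOS as the target vocabulary (the record, field names and namespace below are invented for illustration; the paper's case studies target RDF(S) and OWL):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace

      # A hypothetical source record as it might come out of a thesaurus dump.
      record = {"id": "c123", "term": "Search engines", "broader": "c100",
                "scope_note": "Software systems for searching the web."}

      g = Graph()
      concept = EX[record["id"]]
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.prefLabel, Literal(record["term"], lang="en")))
      g.add((concept, SKOS.broader, EX[record["broader"]]))
      g.add((concept, SKOS.scopeNote, Literal(record["scope_note"], lang="en")))

      print(g.serialize(format="turtle"))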
  8. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.01
    0.0114448285 = product of:
      0.057224143 = sum of:
        0.011783739 = weight(_text_:web in 1255) [ClassicSimilarity], result of:
          0.011783739 = score(doc=1255,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.12619963 = fieldWeight in 1255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1255)
        0.045440406 = weight(_text_:log in 1255) [ClassicSimilarity], result of:
          0.045440406 = score(doc=1255,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.24782073 = fieldWeight in 1255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1255)
      0.2 = coord(2/10)
    
    Abstract
    As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly evolving, information problem (including changes or shifts in a variety of variables), is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time. The specific aim of the current project is twofold: * To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and * To characterize related changes over time in the type and use of information resources and search strategies particularly related to given capabilities of IR systems, and IR search engines, and examine changes in users' relevance judgments and criteria, and characterize their differences. The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post search and searcher post search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped and search transaction logs are recorded. Quantitative analysis includes statistical analysis using Likert scale data from the questionnaires and log-linear analysis of sequential data. Qualitative methods include: content analysis, structuring taxonomies; and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
  9. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.009973028 = product of:
      0.049865138 = sum of:
        0.04082007 = weight(_text_:web in 759) [ClassicSimilarity], result of:
          0.04082007 = score(doc=759,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43716836 = fieldWeight in 759, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.027135205 = score(doc=759,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
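    The interoperability problem the abstract names is easy to make concrete: two hypothetical DTDs can encode the same fact in well-formed XML while nothing in XML itself says the two elements mean the same thing. A small illustration in Python (element names invented):

      import xml.etree.ElementTree as ET

      doc_a = ET.fromstring("<book><author>K. R. Popper</author></book>")
      doc_b = ET.fromstring("<item><creator>K. R. Popper</creator></item>")

      # Reconciling the two requires an external, semantic-level mapping;
      # this dictionary stands in for what RDF or SHOE would express formally.
      mapping = {"author": "creator"}
      print(doc_a.find("author").text == doc_b.find(mapping["author"]).text)  # True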
  10. Sander-Beuermann, W.: Generationswechsel bei MetaGer : ein Rückblick und Ausblick (2019) 0.01
    0.00988591 = product of:
      0.098859094 = sum of:
        0.098859094 = weight(_text_:schutz in 4993) [ClassicSimilarity], result of:
          0.098859094 = score(doc=4993,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.4785931 = fieldWeight in 4993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.046875 = fieldNorm(doc=4993)
      0.1 = coord(1/10)
    
    Issue
    Teil 1: Von ersten Internet-Pionieren bis zu Meta-Suchmaschinen [https://www.password-online.de/?wysija-page=1&controller=email&action=view&email_id=633&wysijap=subscriptions&user_id=1045]. Teil 2: Was weiter gelten muss: Freier Wissenszugang, Privatsphäre und Schutz vor Datenkraken! [https://www.password-online.de/?wysija-page=1&controller=email&action=view&email_id=635&wysijap=subscriptions&user_id=1045]
  11. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.01
    0.009644936 = product of:
      0.04822468 = sum of:
        0.040401388 = weight(_text_:web in 2116) [ClassicSimilarity], result of:
          0.040401388 = score(doc=2116,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 2116, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2116) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2116,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2116)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
    Source
    Code4Lib journal. Issue 29(2015), [http://journal.code4lib.org/issues/issues/issue29]
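    The structured-data side of the prototype can be illustrated with a short sketch that emits the kind of Schema.org JSON-LD island such a book page would embed (Python; the bibliographic values are invented, not taken from the MSU prototype):

      import json

      book = {
          "@context": "https://schema.org",
          "@type": "Book",
          "name": "An Example Open Web Book",
          "author": {"@type": "Person", "name": "Jane Doe"},
          "datePublished": "2015",
          "publisher": {"@type": "Organization", "name": "Example Library Press"},
      }

      # Embedded in the HTML5 page as:
      #   <script type="application/ld+json"> ... </script>
      print(json.dumps(book, indent=2))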
  12. Pohl, A.; Steeg, F.: Zurück ins Web : die Entwicklung eines neuen Webauftritts für die Nordrhein-Westfälische Bibliographie (NWBib) (2016) 0.01
    0.008562385 = product of:
      0.042811923 = sum of:
        0.03498863 = weight(_text_:web in 3063) [ClassicSimilarity], result of:
          0.03498863 = score(doc=3063,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.37471575 = fieldWeight in 3063, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3063)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 3063) [ClassicSimilarity], result of:
              0.023469873 = score(doc=3063,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 3063, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3063)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    At the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (hbz), a new web presence for the regional bibliography of North Rhine-Westphalia, the Nordrhein-Westfälische Bibliographie (NWBib), has been under development since early 2014, following the specifications, and under the review, of the university and state libraries in Düsseldorf, Münster and Bonn. The development builds on the web interface of the linked open data service lobid and is implemented entirely with open-source software. From the perspective of the development team at hbz, this article describes the context and course of the project. It sketches the historical development of the NWBib with a focus on the bibliography's relationship to the World Wide Web (WWW), explains the preconditions for the redevelopment and the guiding principles of the development process, and gives an overview of the use of the new web presence and the technology used to implement it. The article closes with lessons learned and an outlook on further developments.
    Source
    LIBREAS: Library ideas. no.29, 2016 [urn:nbn:de:kobv:11-100238146]
  13. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    0.008474903 = product of:
      0.042374514 = sum of:
        0.033329446 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.033329446 = score(doc=40,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.027135205 = score(doc=40,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape looks like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  14. Rötzer, F.: Jeder sechste Deutsche findet ein Social-Scoring-System nach chinesischem Vorbild gut (2019) 0.01
    0.008238259 = product of:
      0.08238258 = sum of:
        0.08238258 = weight(_text_:schutz in 4551) [ClassicSimilarity], result of:
          0.08238258 = score(doc=4551,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.3988276 = fieldWeight in 4551, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4551)
      0.1 = coord(1/10)
    
    Content
    If the survey results collected by YouGov and the SINUS-Institut are accurate, more than two thirds of Germans reject such a state-run rating and steering system. 15 percent have no opinion, so they do not reject it outright. What is astonishing, though, is that 17 percent - almost one in six - would actually welcome it. 2,036 people aged 18 and over were surveyed; the poll is said to be representative. 40 percent would welcome the ability to rate the people around them, and 39 percent would also let others rate them. The question suggested, as an example, awarding minus points for unfriendliness and plus points for friendliness. This probably also reflects the fact that most people have already internalized the rating practiced everywhere - the ranking and scoring of this and that, including of themselves against others - and the quantification of their own lives. Added to this, the surveillance of financial and digital behavior has bred indifference: for many, given the benefits, protecting their privacy no longer carries special weight, and the transparency of their personal lives is experienced as fate, at least as long as they are convinced they have nothing to hide.
  15. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amode, D.; Sutskever, I.: Language models are unsupervised multitask learners 0.01
    0.007789783 = product of:
      0.07789783 = sum of:
        0.07789783 = weight(_text_:log in 871) [ClassicSimilarity], result of:
          0.07789783 = score(doc=871,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=871)
      0.1 = coord(1/10)
    
    Abstract
    Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
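    The "log-linear" claim means measured task performance rises roughly linearly in the logarithm of model capacity. A toy sketch of that functional form (Python; the intercept and slope are hypothetical, for illustration only; the four parameter counts are the GPT-2 family sizes):

      import math

      a, b = 10.0, 12.0  # hypothetical fit coefficients
      for params in (117e6, 345e6, 762e6, 1.5e9):  # GPT-2 family sizes
          predicted = a + b * math.log10(params)
          print(f"{params:>13,.0f} params -> predicted score {predicted:.1f}")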
  16. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.007640626 = product of:
      0.038203128 = sum of:
        0.0329876 = weight(_text_:web in 4639) [ClassicSimilarity], result of:
          0.0329876 = score(doc=4639,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35328537 = fieldWeight in 4639, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0052155275 = product of:
          0.015646582 = sum of:
            0.015646582 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.015646582 = score(doc=4639,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that the applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  17. Beuth, P.: ¬Das Netz der Welt : Lobos Webciety (2009) 0.01
    0.0075891363 = product of:
      0.03794568 = sum of:
        0.029528726 = weight(_text_:kommunikation in 2136) [ClassicSimilarity], result of:
          0.029528726 = score(doc=2136,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.20078552 = fieldWeight in 2136, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2136)
        0.008416956 = weight(_text_:web in 2136) [ClassicSimilarity], result of:
          0.008416956 = score(doc=2136,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.09014259 = fieldWeight in 2136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2136)
      0.2 = coord(2/10)
    
    Content
    "Es gibt Menschen, für die ist "offline" keine Option. Sascha Lobo ist so jemand. Zwölf bis 14 Stunden täglich verbringt er im Internet. "Offline sein ist wie Luft anhalten", hat er mal geschrieben. Der Berliner ist eine große Nummer in der Internet-Gemeinde, er ist Blogger, Buchautor, Journalist und Werbetexter. Er ist Mitarbeiter der Firma "Zentrale Intelligenz-Agentur", hat für das Blog Riesenmaschine den Grimme-Online-Award bekommen, seine Bücher ("Dinge geregelt kriegen - ohne einen Funken Selbstdisziplin") haben Kultstatus. Und politisch aktiv ist er auch: Er sitzt im Online-Beirat der SPD. Für die Cebit 2009 hat er den Bereich Webciety konzipiert. Dazu gehört der "Messestand der Zukunft", wie er sagt. Alles, was der Aussteller mitbringen muss, ist ein Laptop. Youtube wird dort vertreten sein, die Macher des Social Bookmarking-Werkzeugs "Mister Wong", aber auch Vertreter von DNAdigital, einer Plattform, auf der sich Unternehmen und Jugendliche über die Entwicklung des Internets austauschen. Webciety ist ein Kunstbegriff, der sich aus Web und Society zusammensetzt, und die vernetzte Gesellschaft bedeutet. Ein Großteil der sozialen Kommunikation - vor allem innerhalb einer Altersstufe - findet inzwischen im Netz statt. Dabei sind es nicht nur die Teenager, die sich bei SchülerVZ anmelden, oder die BWL-Studenten, die bei Xing berufliche Kontakte knüpfen wollen. Laut der aktuellen Studie "Digitales Leben" der Ludwig-Maximilians-Universität München ist jeder zweite deutsche Internetnutzer in mindestens einem Online-Netzwerk registriert. "Da kann man schon sehen, dass ein gewisser Umschwung in der gesamten Gesellschaft zu bemerken ist. Diesen Umschwung kann man durchaus auch auf der Cebit würdigen", sagt Lobo. Er hat angeblich 80 Prozent seiner Freunde online kennen gelernt. "Das hätte ich nicht gemacht, wenn ich nichts von mir ins Netz gestellt hätte." Für ihn sind die Internet-Netzwerke aber keineswegs die Fortsetzung des Poesiealbums mit anderen Mitteln: "Wovor man sich hüten sollte, ist, für alles, was im Netz passiert, Entsprechungen in der Kohlenstoffwelt zu finden. Eine Email ist eben kein Brief, eine SMS ist keine Postkarte."
    Auch ambitionierte soziale Projekte können gelingen: Refunite.org ist eine Art Suchmaschine, mit der Flüchtlinge weltweit nach vermissten Familienangehörigen suchen können. Lobo nennt als Beispiel die englische Seite fixmystreet.co.uk. Dort tragen Menschen ihre Postleitzahl ein und weisen auf Straßenschäden oder fehlende Schilder hin, oft bebildert mit selbst geschossenen Fotos. Die Eingaben werden an die zuständige Behörde weitergeleitet, damit die weiß, wo sie Schlaglöcher ausbessern muss. Online steht dann nachzulesen, was alles in einem Stadtteil verbessert wurde - und was nicht. "Das ist ein relativ simples Tool, das aber die Fähigkeit des Netzes, Informationen zwischen den Menschen neu zu sortieren, dazu nutzt, die Welt tatsächlich zu verbessern", sagt Lobo. 2009 feiert die Cebit also, dass wir alle online sind. In zehn Jahren wird sie feiern, dass wir das gar nicht mehr merken, glaubt Lobo: "Ich bin überzeugt davon, dass wir noch vernetzter sein werden." Halbautomatische Kommunikation nennt er das. "Dass zum Beispiel mein Handy ständig kommuniziert, wo ich gerade bin und diese Information einem ausgewählten Personenkreis zugängig macht. Dass mein Kalender intelligent wird und meldet, dass ein Freund zur gleichen Zeit in der Stadt ist. Vielleicht schlägt er dann vor: ,Wollt ihr euch da nicht treffen?' Solche Funktionen werden so normal sein, dass man im Prinzip ständig online ist, ohne dass es sich so anfühlt." Teilweise gibt es so etwas schon. Google hat mit "Latitude" gerade einen Ortungsdienst fürs Handy vorgestellt. Die Software sorgt dafür, dass ausgewählten Personen per Google Maps angezeigt wird, wo sich der Handybesitzer gerade aufhält. Der technophile Obama würde den Dienst wahrscheinlich mögen. Doch der Geheimdienst NSA wollte ihm sogar schon den Blackberry wegnehmen - damit der mächtigste Mann der Welt eben nicht ständig geortet werden kann."
  18. Räwel, J.: Können Maschinen denken? : der Turing-Test aus systemtheoretischer Perspektive (2018) 0.01
    0.007233031 = product of:
      0.07233031 = sum of:
        0.07233031 = weight(_text_:kommunikation in 4383) [ClassicSimilarity], result of:
          0.07233031 = score(doc=4383,freq=6.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.49182206 = fieldWeight in 4383, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4383)
      0.1 = coord(1/10)
    
    Abstract
    Alan Turing did not answer this question directly, but indirectly, via the thought experiment of an "imitation game", which can now actually be carried out. His answer was that it at least cannot be ruled out that machines (computers) can think if they imitate communication, for instance as an interplay of questions and answers, so well that (human) judges most likely cannot decide whether the communication in question comes from a machine or a human being. This holds all the more given (then future) self-learning machines and algorithms, which can modify their own structures as they run and thus potentially keep improving the simulation (?) of the communication - and of the thinking (?) bound up with it - in the imitation game. Drawing on Luhmann's systems theory, and thus from a radically different angle than usual, we will be able to answer the opening question fairly directly and without much ado. We will have to concede, however, that this plain, straightforward answer comes at the price of the complex theoretical presuppositions of systems theory. From a systems-theoretical perspective it is, for instance, by no means self-evident - although the opening question suggests it - which instance, in contrast to the questionable machine thinking, unquestionably counts as thinking.
  19. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.007123591 = product of:
      0.035617955 = sum of:
        0.029157192 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
          0.029157192 = score(doc=4553,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 4553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.019382289 = score(doc=4553,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  20. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.01
    0.006538931 = product of:
      0.032694653 = sum of:
        0.023567477 = weight(_text_:web in 1155) [ClassicSimilarity], result of:
          0.023567477 = score(doc=1155,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 1155) [ClassicSimilarity], result of:
              0.027381519 = score(doc=1155,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1155)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
    Date
    26.12.2011 16:29:02
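    The three-part model the abstract proposes - crosswalk, source standard, target standard, each with a machine-readable encoding and a human-readable description - can be sketched as a plain data structure (Python; the field values and the sample Dublin Core to MARC mapping are illustrative, and the paper itself packages this information in METS records):

      crosswalk_record = {
          "source": {"standard": "Dublin Core", "encoding": "XML Schema",
                     "description": "..."},
          "target": {"standard": "MARC21", "encoding": "XML Schema",
                     "description": "..."},
          "crosswalk": {"encoding": "XSLT", "description": "...",
                        "mapping": {"dc:title": "245$a", "dc:creator": "100$a"}},
      }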

Languages

  • e 189
  • d 106
  • f 2
  • i 2
  • a 1