Search (126 results, page 1 of 7)

  • year_i:[2010 TO 2020}
  • type_ss:"el"
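The two filters above are Solr filter queries: year_i:[2010 TO 2020} restricts the integer year field to 2010-2019 (a square bracket is an inclusive bound, a curly brace an exclusive one), and type_ss:"el" restricts the record-type facet. As a minimal sketch of the request behind this page - the endpoint, core name, and HTTP client are assumptions, and the actual search terms are not shown here - see below; a second sketch recomputing the relevance scores follows the result list.

    import requests  # hypothetical client; any HTTP library works

    # Hypothetical Solr endpoint; host and core name are placeholders.
    params = {
        "q": "...",                       # the user's search terms (not shown on this page)
        "fq": ['year_i:[2010 TO 2020}',   # inclusive lower bound, exclusive upper bound
               'type_ss:"el"'],           # active facet filter on record type
        "rows": 20,                       # 20 records per page (126 results, 7 pages)
        "start": 0,                       # offset 0 = page 1
        "debugQuery": "true",             # emits the per-result score explanations below
        "wt": "json",
    }
    r = requests.get("http://localhost:8983/solr/core/select", params=params)
    print(r.json()["response"]["numFound"])  # expected: 126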
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.25
    0.24833582 = product of:
      0.49667165 = sum of:
        0.12416791 = product of:
          0.37250373 = sum of:
            0.37250373 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.37250373 = score(doc=1826,freq=2.0), product of:
                0.39767802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046906993 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.37250373 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.37250373 = score(doc=1826,freq=2.0), product of:
            0.39767802 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046906993 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  2. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.12
    0.12416791 = product of:
      0.24833582 = sum of:
        0.062083956 = product of:
          0.18625186 = sum of:
            0.18625186 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.18625186 = score(doc=4388,freq=2.0), product of:
                0.39767802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046906993 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.18625186 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.18625186 = score(doc=4388,freq=2.0), product of:
            0.39767802 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046906993 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.5 = coord(2/4)
    
    Footnote
    Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  3. Kauke, V.; Klotz-Berendes, B.: Wechsel des Bibliothekssystems in die Cloud (2015) 0.05
    0.05242333 = product of:
      0.10484666 = sum of:
        0.069153175 = weight(_text_:services in 2474) [ClassicSimilarity], result of:
          0.069153175 = score(doc=2474,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.40155616 = fieldWeight in 2474, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2474)
        0.03569348 = product of:
          0.07138696 = sum of:
            0.07138696 = weight(_text_:management in 2474) [ClassicSimilarity], result of:
              0.07138696 = score(doc=2474,freq=6.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.45151538 = fieldWeight in 2474, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2474)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Cloud-based library systems represent the new generation of library systems. They enable the joint management of print and electronic media. Since electronic resources contribute decisively to the literature supply of teachers and students at the library of Münster University of Applied Sciences, a project team has been evaluating OCLC's WorldShare Management Services (WMS) system since the end of 2014. This article presents the first results and some further considerations regarding migration to the system.
    Object
    WorldShare Management Services
  4. Meyer-Doerpinghaus, U.; Tröger, B.: Forschungsdatenmanagement als Herausforderung für Hochschulen und Hochschulbibliotheken (2015) 0.05
    0.04914839 = product of:
      0.09829678 = sum of:
        0.069153175 = weight(_text_:services in 2472) [ClassicSimilarity], result of:
          0.069153175 = score(doc=2472,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.40155616 = fieldWeight in 2472, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2472)
        0.029143604 = product of:
          0.058287207 = sum of:
            0.058287207 = weight(_text_:management in 2472) [ClassicSimilarity], result of:
              0.058287207 = score(doc=2472,freq=4.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.36866072 = fieldWeight in 2472, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2472)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    One of the most important new fields of activity to emerge in research in the course of the digitization of information is the management of research data. Universities must prepare to provide their researchers with the necessary structures and services. The heads of the German universities organized in the German Rectors' Conference (HRK) see this as a central task. The University of Münster is leading by example: in close cooperation with the university's leadership, the University and Regional Library has begun to build structures and services to support research data management.
    Theme
    Information Resources Management
  5. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.03
    0.032057434 = product of:
      0.06411487 = sum of:
        0.049395125 = weight(_text_:services in 3373) [ClassicSimilarity], result of:
          0.049395125 = score(doc=3373,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.28682584 = fieldWeight in 3373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.014719742 = product of:
          0.029439485 = sum of:
            0.029439485 = weight(_text_:management in 3373) [ClassicSimilarity], result of:
              0.029439485 = score(doc=3373,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.18620178 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
  6. Hannemann, J.; Kett, J.: Linked data for libraries (2010) 0.03
    0.029788423 = product of:
      0.059576847 = sum of:
        0.041913155 = weight(_text_:services in 3964) [ClassicSimilarity], result of:
          0.041913155 = score(doc=3964,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2433798 = fieldWeight in 3964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=3964)
        0.017663691 = product of:
          0.035327382 = sum of:
            0.035327382 = weight(_text_:management in 3964) [ClassicSimilarity], result of:
              0.035327382 = score(doc=3964,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.22344214 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3964)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Semantic Web in general and the Linking Open Data initiative in particular encourage institutions to publish, share and interlink their data. This has considerable potential for libraries, which can complement their data by linking it to other, external data sources. This paper details the first linked open data service of the German National Library. The focus is on the challenges met during the inception of this service. Extrapolating from our experiences, the paper further discusses the German National Library's perspective on the future of library data exchange and the potential for the creation of globally interlinked library data. We outline how this process can be facilitated and how new services can be offered based on these growing metadata collections.
    Content
    Paper presented in Session 93, Cataloguing, at the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  7. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.03
    0.029788423 = product of:
      0.059576847 = sum of:
        0.041913155 = weight(_text_:services in 3896) [ClassicSimilarity], result of:
          0.041913155 = score(doc=3896,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2433798 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.017663691 = product of:
          0.035327382 = sum of:
            0.035327382 = weight(_text_:management in 3896) [ClassicSimilarity], result of:
              0.035327382 = score(doc=3896,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.22344214 = fieldWeight in 3896, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3896)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and the Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
  8. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.02
    0.020352593 = product of:
      0.040705185 = sum of:
        0.020956578 = weight(_text_:services in 468) [ClassicSimilarity], result of:
          0.020956578 = score(doc=468,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.1216899 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
        0.019748608 = product of:
          0.039497215 = sum of:
            0.039497215 = weight(_text_:management in 468) [ClassicSimilarity], result of:
              0.039497215 = score(doc=468,freq=10.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.24981591 = fieldWeight in 468, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of content created digitally has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation; for example, digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats, and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving community have made valuable contributions to the management of huge amounts of knowledge and data. However, the two approach this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving communities, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives. Semantic representations of contextual knowledge about cultural heritage objects will enhance the organization of and access to data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human-Computer Interaction) will also be included in the research.
    One of the major challenges of digital archiving is how to deal with changing technologies and changing user communities. On the one hand, software, hardware, and (multimedia) data formats that become obsolete and are no longer supported still need to be kept accessible. On the other hand, changing user communities necessitate technical means to formalize, detect, and measure knowledge evolution. Furthermore, digital archival records are usually not deleted from the AIS, and the amount of digitally archived (multimedia) content can therefore be expected to grow rapidly. Efficient storage management solutions are thus required, geared to the fact that cultural heritage is not accessed as frequently as up-to-date content residing in a digital library. Software and hardware need to be tightly connected on the basis of sophisticated knowledge representation and management models in order to face that challenge. In line with the above, contributions to the workshop should focus on, but are not limited to:
    • Semantic search & semantic information retrieval in digital archives and digital libraries
    • Semantic multimedia archives
    • Ontologies & linked data for digital archives and digital libraries
    • Ontologies & linked data for multimedia archives
    • Implementations and evaluations of semantic digital archives
    • Visualization and exploration of digital content
    • User interfaces for semantic digital libraries
    • User interfaces for intelligent multimedia information retrieval
    • User studies focusing on end-user needs and information seeking behavior of end-users
    • Theoretical and practical archiving frameworks using Semantic (Web) technologies
    • Logical theories for digital archives
    • Semantic (Web) services implementing the OAIS standard
    • Semantic or logical provenance models for digital archives or digital libraries
    • Information integration/semantic ingest (e.g. from digital libraries)
    • Trust for ingest and data security/integrity check for long-term storage of archival records
    • Semantic extensions of emulation/virtualization methodologies tailored for digital archives
    • Semantic long-term storage and hardware organization tailored for AIS
    • Migration strategies based on Semantic (Web) technologies
    • Knowledge evolution
    We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience consisting of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web communities, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  9. Svensson, L.G.; Jahns, Y.: PDF, CSV, RSS and other Acronyms : redefining the bibliographic services in the German National Library (2010) 0.02
    0.017463814 = product of:
      0.06985526 = sum of:
        0.06985526 = weight(_text_:services in 3970) [ClassicSimilarity], result of:
          0.06985526 = score(doc=3970,freq=8.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.405633 = fieldWeight in 3970, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3970)
      0.25 = coord(1/4)
    
    Abstract
    In January 2010, the German National Library discontinued the print version of the national bibliography and replaced it with an online journal. This was the first step in a longer process of redefining the National Library's bibliographic services, leaving the field of traditional media - e.g. paper or CD-ROM databases - and focusing on publishing its data over the WWW. A new business model was set up: all web resources are now published in an extra bibliography series, and the bibliographic data are freely available. Step by step, the prices of the other bibliographic data will also be reduced. In the second stage of the project, the focus is on value-added services based on the National Library's catalogue. The main purpose is to introduce alerting services based on the user's search criteria, offering different access methods such as RSS feeds, integration with e.g. Zotero, or export of the bibliographic data as a CSV or PDF file. Current cataloguing standards remain a guideline for high-value end-user retrieval, but they will be supplemented by automated indexing procedures to find and browse the growing number of documents. A transparent cataloguing policy and well-arranged selection menus are the aim.
  10. Fietkiewicz, K.J.; Stock, W.G.: Jedem seine eigene "Truman Show" : YouNow, Periscope, Ustream und ihre Nutzer - "Social Live"-Streaming Services (2017) 0.02
    0.017288294 = product of:
      0.069153175 = sum of:
        0.069153175 = weight(_text_:services in 3770) [ClassicSimilarity], result of:
          0.069153175 = score(doc=3770,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.40155616 = fieldWeight in 3770, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3770)
      0.25 = coord(1/4)
    
    Abstract
    In 2015, the then 19-year-old student Katrin Scheibe took part in a seminar at the University of Düsseldorf on "social live"-streaming services and, together with other students, broadcast one of its sessions live via YouNow. Within the roughly one-hour program, the number of viewers shot up to well over 200. The mostly adolescent viewers found it highly interesting to witness a university course up close. The students, in turn, were enthusiastic about the current, timely topic and published their research results under a pseudonym (Mathilde B. Friedländer) in international journals.
  11. Tetzchner, J. von: As a monopoly in search and advertising Google is not able to resist the misuse of power : is the Internet turning into a battlefield of propaganda? How Google should be regulated (2017) 0.02
    0.016171718 = product of:
      0.06468687 = sum of:
        0.06468687 = weight(_text_:services in 3891) [ClassicSimilarity], result of:
          0.06468687 = score(doc=3891,freq=14.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.3756214 = fieldWeight in 3891, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3891)
      0.25 = coord(1/4)
    
    Content
    "Let us start with your positive experiences with Google. I have known Google longer than most. At Opera, we were the first to add their search into the browser interface, enabling it directly from the search box and the address field. At that time, Google was an up-and-coming geeky company. I remember vividly meeting with Google's co-founder Larry Page, his relaxed dress code and his love for the Danger device, which he played with throughout our meeting. Later, I met with the other co-founder of Google, Sergey Brin, and got positive vibes. My first impression of Google was that it was a likeable company. Our cooperation with Google was a good one. Integrating their search into Opera helped us deliver a better service to our users and generated revenue that paid the bills. We helped Google grow, along with others that followed in our footsteps and integrated Google search into their browsers. Then the picture for you and for opera darkened. Yes, then things changed. Google increased their proximity with the Mozilla foundation. They also introduced new services such as Google Docs. These services were great, gained quick popularity, but also exposed the darker side of Google. Not only were these services made to be incompatible with Opera, but also encouraged users to switch their browsers. I brought this up with Sergey Brin, in vain. For millions of Opera users to be able to access these services, we had to hide our browser's identity. The browser sniffing situation only worsened after Google started building their own browser, Chrome. ...
    How should Google be regulated? We should limit the amount of information that is being collected. In particular, we should look at information that is being collected across sites. It should not be legal to combine data from multiple sites and services. The fact that these sites and services use the same underlying technology does not change the fact that the user's dealings are with one site at a time, and each site should not have the right to share the data with others. I believe this is the cornerstone of laws in many countries today, but these laws need to be enforced. Data about us is ours alone, and it should not be possible to sell it. We should also limit the ability to target users individually. In the past, ads on sites were ads on sites. You might know what kind of users visited a site, and you would place tech ads on tech sites and fashion ads on fashion sites. Now the ads follow you individually. That should be made illegal, as it uses data collected from multiple sources and invades our privacy. I also believe there should be regulation as to how location data is used, as well as any information related to our mobile devices. In addition, regulators need to be vigilant as to how companies that have monopoly power use their power. That kind of goes without saying. Companies with monopoly powers should not be able to use those powers when competing in an open market or use their monopoly services to limit competition."
  12. Lange, C.; Mossakowski, T.; Galinski, C.; Kutz, O.: Making heterogeneous ontologies interoperable through standardisation : a Meta Ontology Language to be standardised: Ontology Integration and Interoperability (OntoIOp) (2011) 0.01
    0.014818538 = product of:
      0.059274152 = sum of:
        0.059274152 = weight(_text_:services in 50) [ClassicSimilarity], result of:
          0.059274152 = score(doc=50,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.344191 = fieldWeight in 50, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=50)
      0.25 = coord(1/4)
    
    Abstract
    Assistive technology, especially for persons with disabilities, increasingly relies on electronic communication among users, between users and their devices, and among these devices. Making such ICT accessible and inclusive often requires remedial programming, which tends to be costly or even impossible. We therefore aim at more interoperable devices, services accessing these devices, and content delivered by these services, at the levels of (1) data and metadata, (2) data models and data modelling methods, and (3) metamodels as well as a meta ontology language. Even though ontologies are widely used to enable content interoperability, there is currently no unified framework for ontology interoperability itself. This paper outlines the design considerations underlying OntoIOp (Ontology Integration and Interoperability), a new standardisation activity in ISO/TC 37/SC 3 intended to become an international standard, which aims at filling this gap.
  13. Vatant, B.; Dunsire, G.: Use case vocabulary merging (2010) 0.01
    0.013971052 = product of:
      0.05588421 = sum of:
        0.05588421 = weight(_text_:services in 4336) [ClassicSimilarity], result of:
          0.05588421 = score(doc=4336,freq=8.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.3245064 = fieldWeight in 4336, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.03125 = fieldNorm(doc=4336)
      0.25 = coord(1/4)
    
    Abstract
    The publication of library legacy data includes the publication of structuring vocabularies such as thesauri, classifications, and subject headings. Different sources use different vocabularies, differing in structure, width, depth, scope, and language. Federated access to distributed data collections is currently possible only if they rely on the same vocabularies. Mapping techniques and the standards supporting them (such as SKOS mapping properties, OWL sameAs and equivalentClass) are still largely experimental, even in the linked data world. Libraries use a variety of controlled subject vocabularies and classification schemes to index items in their collections. Although most collections will employ only a single scheme, different schemes may be chosen to index different collections within a library or in separate libraries; schemes are chosen on the basis of language, subject focus (general or specific), granularity (specificity), user expectation, and availability and support (cost, currency, completeness, tools). For example, a typical academic library will operate separate metadata systems for the library's main collections, special collections (e.g. manuscripts, archives, audiovisual), digital collections, and one or more institutional repositories for teaching and research output; each of these systems may employ a different subject vocabulary, with little or no interoperability between terms and concepts. Users expect a single point of search in resource discovery services focussed on their local institutional collections. Librarians have to use complex and expensive resource discovery platforms to meet user expectations. Library communities continue to develop resource discovery services for consortia with a geographical, subject, sector (public, academic, school, special libraries), and/or domain (libraries, archives, museums) focus. Services are based on distributed searching (e.g. via Z39.50) or metadata aggregations (e.g. OCLC's WorldCat and OAIster). As a result, the number of different subject schemes encountered in such services is increasing. Trans-national consortia (e.g. Europeana) add to the complexity of the environment by including subject vocabularies in multiple languages. Users expect a single point of search in consortial resource discovery services involving multiple organisations and large-scale metadata aggregations. Users also expect to be able to search for subjects using their own language and terms in an unambiguous, contextualised manner.
  14. Kashyap, M.M.: Application of integrative approach in the teaching of library science techniques and application of information technology (2011) 0.01
    0.013971052 = product of:
      0.05588421 = sum of:
        0.05588421 = weight(_text_:services in 4395) [ClassicSimilarity], result of:
          0.05588421 = score(doc=4395,freq=8.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.3245064 = fieldWeight in 4395, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
      0.25 = coord(1/4)
    
    Abstract
    Today many libraries are using computers and allied information technologies to improve their work methods and services. Consequently, libraries need professional staff, or need to train their existing staff, to face the challenges posed by the introduction of these technologies. To meet the demand for such professional staff, the departments of Library and Information Science in India introduced new courses of study to expose their students to the use and application of computers and other allied technologies. Some of the courses introduced are: Computer Application in Libraries; Systems Analysis and Design Technique; Design and Development of Computer-based Library Information Systems; Database Organisation and Design; Library Networking; Use and Application of Communication Technology, and so forth. It is felt that the computer- and information-technology-oriented courses need to be restructured, revised, and more harmoniously blended with the traditional mainstream courses of the library and information science discipline. We must alter the strategy of teaching library techniques, such as classification, cataloguing, and library procedures, together with the techniques of designing computer-based library information systems and services. The use and application of these techniques become interwoven when we shift from a manually operated library environment to a computer-based one. As such, it becomes necessary to follow an integrative approach when we teach these techniques to students of library and information science, or train library staff in their use and application, to design, develop, and implement computer-based library information systems and services. In the following sections of this paper, we outline the correspondence between certain concepts and techniques developed by computer specialists and those developed by librarians, in their respective domains. We make use of the techniques of both domains in the design and implementation of computer-based library information systems and services. As such, it is essential that lessons of study concerning these supplementary and complementary techniques be integrated.
  15. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.01
    0.013971052 = product of:
      0.05588421 = sum of:
        0.05588421 = weight(_text_:services in 54) [ClassicSimilarity], result of:
          0.05588421 = score(doc=54,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.3245064 = fieldWeight in 54, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents the Semantic Web: its content, underlying web technologies, its goals, and the requirements for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies, and OWL, and discusses how the Semantic Web can build on library functions and make the best use of library collections in order to provide valuable information services.
  16. Bahls, D.; Scherp, G.; Tochtermann, K.; Hasselbring, W.: Towards a recommender system for statistical research data (2012) 0.01
    0.012348781 = product of:
      0.049395125 = sum of:
        0.049395125 = weight(_text_:services in 474) [ClassicSimilarity], result of:
          0.049395125 = score(doc=474,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.28682584 = fieldWeight in 474, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=474)
      0.25 = coord(1/4)
    
    Abstract
    To effectively promote the exchange of scientific data, retrieval services are required that suit the needs of the research community. A large amount of research in the field of economics is based on statistical data, which is often drawn from external sources like data agencies, statistical offices or affiliated institutes. Since producing such data for a particular research question is expensive in time and money - if possible at all - research activities are often influenced by the availability of suitable data. Researchers choose or adjust their questions so that the empirical foundation to support their results is given. As a consequence, researchers look out and poll for newly available data in all sorts of directions, due to the lack of an information infrastructure for this domain. This circumstance and a recent report from the High Level Expert Group on Scientific Data motivate recommendation and notification services for research data sets. In this paper, we elaborate on a case-based recommender system for statistical data, which allows for precise query specification. We discuss required similarity measures on the basis of cross-domain code lists and propose a system architecture. To address the problem of continuous polling, we elaborate on a notification service to inform researchers about newly available data sets based on their personal requests.
  17. Fallaw, C.; Dunham, E.; Wickes, E.; Strong, D.; Stein, A.; Zhang, Q.; Rimkus, K.; Ingram, B.; Imker, H.J.: Overly honest data repository development (2016) 0.01
    0.012224671 = product of:
      0.048898686 = sum of:
        0.048898686 = weight(_text_:services in 3371) [ClassicSimilarity], result of:
          0.048898686 = score(doc=3371,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.28394312 = fieldWeight in 3371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3371)
      0.25 = coord(1/4)
    
    Abstract
    After a year of development, the library at the University of Illinois at Urbana-Champaign has launched a repository, called the Illinois Data Bank (https://databank.illinois.edu/), to provide Illinois researchers with a free, self-serve publishing platform that centralizes, preserves, and provides persistent and reliable access to Illinois research data. This article presents a holistic view of development by discussing our overarching technical, policy, and interface strategies. By openly presenting our design decisions, the rationales behind those decisions, and the associated challenges, this paper aims to contribute to the library community's work to develop repository services that meet growing data preservation and sharing needs.
  18. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.01
    0.012099287 = product of:
      0.048397146 = sum of:
        0.048397146 = weight(_text_:services in 134) [ClassicSimilarity], result of:
          0.048397146 = score(doc=134,freq=6.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2810308 = fieldWeight in 134, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.03125 = fieldNorm(doc=134)
      0.25 = coord(1/4)
    
    Abstract
    Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on semantic web technologies. A vast amount of such information has been created by memory institutions in the last decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On one hand, this is a syntactic problem that can be solved by transforming to formats that are compatible with the tools and services used for semantics-aware applications. On the other hand, this is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability, and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records, initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model is explicit.
  19. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.01
    0.011234601 = product of:
      0.044938404 = sum of:
        0.044938404 = product of:
          0.08987681 = sum of:
            0.08987681 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.08987681 = score(doc=3582,freq=4.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  20. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    0.0111216875 = product of:
      0.04448675 = sum of:
        0.04448675 = product of:
          0.0889735 = sum of:
            0.0889735 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.0889735 = score(doc=8365,freq=2.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2015 16:08:38
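The indented blocks under each result are Lucene ClassicSimilarity (TF-IDF) explain trees, as emitted by Solr's debugQuery. As a sanity check, the sketch below recomputes the score of result 1 (doc 1826) purely from the factors printed in its tree: fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, one partial score per matching term, scaled by the coord factors for partially matched boolean clauses. This is an illustration of the published ClassicSimilarity formulas, not code from the search service itself.

    import math

    # Factors copied from the explain tree of doc 1826 (result 1).
    idf = 8.478011            # ln(44218 / (24 + 1)) + 1, for docFreq=24, maxDocs=44218
    query_norm = 0.046906993  # makes scores comparable across queries
    field_norm = 0.078125     # index-time length normalization of the field
    tf = math.sqrt(2.0)       # ClassicSimilarity: tf = sqrt(termFreq), here freq=2.0

    query_weight = idf * query_norm           # 0.39767802
    field_weight = tf * idf * field_norm      # 0.93669677
    term_score = query_weight * field_weight  # 0.37250373 per matching term

    # "_text_:3a" sits in a nested clause where 1 of 3 sub-queries matched.
    clause_3a = term_score * (1 / 3)          # 0.12416791
    clause_2f = term_score                    # "_text_:2f" contributes at full weight

    # Top level: sum both clauses, then coord(2/4) since 2 of 4 query clauses matched.
    score = (clause_3a + clause_2f) * (2 / 4)
    print(f"{score:.8f}")                     # ~0.24833582; shown rounded as 0.25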

Languages

  • e 62
  • d 59
  • i 2
  • a 1
  • es 1

Types

  • a 79
  • s 4
  • x 3
  • m 2
  • r 2