Search (10 results, page 1 of 1)

  • author_ss:"Weber, S."
  1. Weber, S.: ¬Das Google-Copy-Paste-Syndrom : Wie Netzplagiate Ausbildung und Wissen gefährden (2007) 0.04
    0.041092478 = product of:
      0.082184955 = sum of:
        0.0061828885 = weight(_text_:in in 140) [ClassicSimilarity], result of:
          0.0061828885 = score(doc=140,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1041228 = fieldWeight in 140, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=140)
        0.03670451 = weight(_text_:und in 140) [ClassicSimilarity], result of:
          0.03670451 = score(doc=140,freq=30.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.3793607 = fieldWeight in 140, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=140)
        0.03929756 = product of:
          0.07859512 = sum of:
            0.07859512 = weight(_text_:ausbildung in 140) [ClassicSimilarity], result of:
              0.07859512 = score(doc=140,freq=4.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.3354479 = fieldWeight in 140, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.03125 = fieldNorm(doc=140)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
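The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. A minimal Python sketch, using only the numbers reported in the tree, reproduces the first clause and the final document score:

```python
import math

# Building blocks of Lucene's ClassicSimilarity (TF-IDF), with the
# constants taken directly from the explain tree above.
def tf(freq):
    # term-frequency factor: square root of the raw term count
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.043654136   # queryNorm reported by the engine
field_norm = 0.03125       # fieldNorm(doc=140), encodes field length

# First clause: weight(_text_:in in 140) with freq=6, docFreq=30841, maxDocs=44218
idf_in = idf(30841, 44218)                    # ~1.3602545
query_weight = idf_in * query_norm            # ~0.059380736
field_weight = tf(6.0) * idf_in * field_norm  # ~0.1041228
clause_score = query_weight * field_weight    # ~0.0061828885

# Final score: the sum of the matching clauses (0.082184955) is scaled by
# coord(3/6), the fraction of query clauses that matched this document.
doc_score = 0.082184955 * (3 / 6)             # ~0.041092478
```

Each clause in the tree follows the same pattern, so the whole breakdown can be checked this way.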
    
    Abstract
    "Das Google-Copy-Paste-Syndrom" is the first German-language non-fiction book devoted to the copy-and-paste syndrome and to the consequences of the Google-Wikipedia knowledge monopoly. The author critically observes the "Ergoogelung" (googling-up) of reality and the progressive expulsion of the mind from text production, and asks how media studies is responding to this problem, if at all. It is not only online plagiarism that fosters a (text) culture without a brain: cyber-newspeak or "Weblish", minds contaminated by chat and SMS, affirmative trivia research, technophilia and bullshit PR for new media create a milieu in which criticism of the Internet and its use is systematically blotted out. When around thirty percent of students admit in surveys that they lift text from the Internet, something is getting out of hand. The copy-paste mentality currently running rampant threatens the entire culture of scholarly writing. A fundamental change in cultural technique is emerging: away from one's own ideas and one's own wording, towards "bypassing the brain" and reworking text segments already available on the Web. Online plagiarism endangers education and knowledge.
    This book is aimed not only at teachers in schools and universities who find themselves confronted with this new problem. It is written so that it also offers a critical read for the broad audience that uses the new media.
  2. Höhfeld, S.; Weber, S.: Stand und Perspektiven von Informationswissenschaft und -praxis (2006) 0.02
    0.019095764 = product of:
      0.05728729 = sum of:
        0.0071393843 = weight(_text_:in in 4929) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=4929,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 4929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4929)
        0.050147906 = weight(_text_:und in 4929) [ClassicSimilarity], result of:
          0.050147906 = score(doc=4929,freq=14.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.51830536 = fieldWeight in 4929, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4929)
      0.33333334 = coord(2/6)
    
    Abstract
    As part of the lecture series "Informationswissenschaft - Stand und Perspektiven" at Heinrich-Heine-Universität Düsseldorf, methods and approaches of information science and practice are presented with both scholarly and practical grounding. Nearly all of the field's chairs in Germany, as well as experts from the German information industry, provide an overview of the current state of research and application.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.1, S.27-29
  3. Weber, S.: Eine Million Bücher mit automatisch erzeugten Texten (2018) 0.02
    0.016486328 = product of:
      0.049458984 = sum of:
        0.008834538 = weight(_text_:in in 4504) [ClassicSimilarity], result of:
          0.008834538 = score(doc=4504,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 4504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4504)
        0.040624447 = weight(_text_:und in 4504) [ClassicSimilarity], result of:
          0.040624447 = score(doc=4504,freq=12.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.41987535 = fieldWeight in 4504, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4504)
      0.33333334 = coord(2/6)
    
    Abstract
    If artificial intelligence can write a doctoral thesis, what does education mean? An interview with the publisher Philip M. Parker. In the German-speaking world he is almost unknown: the economist and entrepreneur Philip M. Parker, who teaches in Singapore. The book publisher he founded, ICON Group International, has published more than a million different books whose contents were generated entirely automatically. With apps, games and textbooks on agricultural techniques, reading and arithmetic - likewise all generated completely automatically - he wants to advance literacy and education in the Third World. His program "Totopoetry" automatically produces gems of poetry, as he can demonstrate impressively. And now Parker wants to revolutionize Wikipedia as well: with bots, of course, which create and translate the content automatically.
  4. Weber, S.: Kommen nach den "science wars" die "reference wars"? : Wandel der Wissenskultur durch Netzplagiate und das Google-Wikipedia-Monopol (2005) 0.02
    0.015306472 = product of:
      0.045919415 = sum of:
        0.008834538 = weight(_text_:in in 4023) [ClassicSimilarity], result of:
          0.008834538 = score(doc=4023,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 4023, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4023)
        0.037084877 = weight(_text_:und in 4023) [ClassicSimilarity], result of:
          0.037084877 = score(doc=4023,freq=10.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.38329202 = fieldWeight in 4023, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4023)
      0.33333334 = coord(2/6)
    
    Abstract
    Anyone who just wants to know quickly when Habermas completed his habilitation, or what exactly Gotthard Günther meant by "Polykontexturallogik", asks Google or goes straight to Wikipedia. The advantages are obvious: no thick volumes need to be combed through, and the trip to the library and the yellowed card catalog is no longer necessary. Google results and Wikipedia articles have meanwhile become knowledge authorities, knowledge monopolies of a new kind: often, only what Google finds and/or what has been accepted into Wikipedia still appears published and publicly accessible.
  5. Weber, S.: ¬Die Automatisierung der Inhalte-Erstellung (2018) 0.01
    0.010731532 = product of:
      0.032194596 = sum of:
        0.0075724614 = weight(_text_:in in 4565) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=4565,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 4565, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4565)
        0.024622133 = weight(_text_:und in 4565) [ClassicSimilarity], result of:
          0.024622133 = score(doc=4565,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.2544829 = fieldWeight in 4565, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4565)
      0.33333334 = coord(2/6)
    
    Abstract
    AI as the author of content: "When I first saw it, I thought: what the hell is that?" (a user in the documentary "Inside Google", describing his reaction to his first automatically generated photo album from Google Photos) - "Copying, programming, automating are the new [.] tools." (Kenneth Goldsmith, blurb for "Uncreative Writing") Why was the experienced app user astonished? The Google Photos app had automatically produced a picture gallery from photos of his latest holiday. It was there as if out of nowhere, unasked: holiday pictures zooming in and out, the transitions partly with effects familiar from PowerPoint. The Google app had geolocated all the pictures, reconstructed the travel route and added dates. Finally, it laid the usual generic music under the whole thing, the kind we know from thousands of other videos on the net.
    Content
    This article is a lightly revised version of chapter 2 of the book "Roboterjournalismus, Chatbots & Co. Wie Algorithmen Inhalte produzieren und unser Denken beeinflussen", published on 19 November 2018 in the Heise series "Telepolis". With a list of providers. See also: http://www.heise.de/_]4228345.
  6. Weber, S.: Wohin steuert das Netz? : Einige unorthodoxe Überlegungen zu Netzwerk-Theorie und cyberpoietischer Empirie (2001) 0.01
    0.0068394816 = product of:
      0.04103689 = sum of:
        0.04103689 = weight(_text_:und in 6581) [ClassicSimilarity], result of:
          0.04103689 = score(doc=6581,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.42413816 = fieldWeight in 6581, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=6581)
      0.16666667 = coord(1/6)
    
    Series
    Schriftenreihe der Deutschen Gesellschaft für Publizistik- und Kommunikationswissenschaft; Bd.28
    Source
    Kommunikationskulturen zwischen Kontinuität und Wandel: Universelle Netzwerke für die Zivilgesellschaft. Hrsg.: U. Maier-Rabler u. M. Latzer
  7. Stock, W.G.; Weber, S.: Facets of informetrics : Preface (2006) 0.01
    0.0060736625 = product of:
      0.018220987 = sum of:
        0.008743925 = weight(_text_:in in 76) [ClassicSimilarity], result of:
          0.008743925 = score(doc=76,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 76, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=76)
        0.009477063 = weight(_text_:und in 76) [ClassicSimilarity], result of:
          0.009477063 = score(doc=76,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.09795051 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=76)
      0.33333334 = coord(2/6)
    
    Abstract
    According to Jean M. Tague-Sutcliffe "informetrics" is "the study of the quantitative aspects of information in any form, not just records or bibliographies, and in any social group, not just scientists" (Tague-Sutcliffe, 1992, 1). Leo Egghe also defines "informetrics" in a very broad sense. "(W)e will use the term 'informetrics' as the broad term comprising all -metrics studies related to information science, including bibliometrics (bibliographies, libraries,...), scientometrics (science policy, citation analysis, research evaluation,...), webometrics (metrics of the web, the Internet or other social networks such as citation or collaboration networks), ..." (Egghe, 2005b, 1311). According to Concepcion S. Wilson "informetrics" is "the quantitative study of collections of moderate-sized units of potentially informative text, directed to the scientific understanding of information processes at the social level" (Wilson, 1999, 211). We should add to Wilson's units of text also digital collections of images, videos, spoken documents and music. Dietmar Wolfram divides "informetrics" into two aspects, "system-based characteristics that arise from the documentary content of IR systems and how they are indexed, and usage-based characteristics that arise from how users interact with system content and the system interfaces that provide access to the content" (Wolfram, 2003, 6). We would like to follow Tague-Sutcliffe, Egghe, Wilson and Wolfram (and others, for example Björneborn & Ingwersen, 2004) and call this broad research of empirical information science "informetrics". Informetrics therefore includes all quantitative studies in information science. If a scientist performs scientific investigations empirically, e.g. 
on information users' behavior, on scientific impact of academic journals, on the development of the patent application activity of a company, on links of Web pages, on the temporal distribution of blog postings discussing a given topic, on availability, recall and precision of retrieval systems, on usability of Web sites, and so on, he or she contributes to informetrics. We see three subject areas in information science in which such quantitative research takes place: information users and information usage, evaluation of information systems, and information itself. Following Wolfram's article, we divide his system-based characteristics into the "information itself" category and the "information system" category. Figure 1 is a simplistic graph of subjects and research areas of informetrics as an empirical information science.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.8, S.385-389
  8. Weber, S.: ¬Der Angriff der Digitalgeräte auf die übrigen Lernmedien (2015) 0.01
    0.005528287 = product of:
      0.03316972 = sum of:
        0.03316972 = weight(_text_:und in 2505) [ClassicSimilarity], result of:
          0.03316972 = score(doc=2505,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.34282678 = fieldWeight in 2505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=2505)
      0.16666667 = coord(1/6)
    
    Abstract
    Von "Flipped Classrooms", Mikrolernen und dem möglichen Ende der Schreibschrift.
  9. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    0.0024665273 = product of:
      0.014799163 = sum of:
        0.014799163 = weight(_text_:in in 6232) [ClassicSimilarity], result of:
          0.014799163 = score(doc=6232,freq=22.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24922498 = fieldWeight in 6232, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6232)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorization of information needs in connection with their activities in public libraries, in school libraries or in schools. These structures are visualized in the interface by using metaphors and multimedia technology. Through the use of text, images and animation, the Book House supports children - even at a very early age - in learning by doing in an enjoyable way which plays on their previous experiences with computer games. Both words and pictures can be used for searching; this makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus, at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system provides an extended service which will increase the number of children's own searches and also improve the relevance, quality and utilization of the collections in the libraries. Market research on the need for an annual indexing service for books in the Book House format is in preparation at the Danish Library Center.
  10. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.00
    0.0016088387 = product of:
      0.009653032 = sum of:
        0.009653032 = weight(_text_:in in 754) [ClassicSimilarity], result of:
          0.009653032 = score(doc=754,freq=26.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16256167 = fieldWeight in 754, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
      0.16666667 = coord(1/6)
    
    Abstract
    The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of investigation it also became clear that the focus has to be extended, not just to cover Google and search engines in an isolated fashion, but to also cover other Web 2.0 related phenomena, particularly Wikipedia, blogs, and other related community efforts. It was the purpose of our investigation to demonstrate: - Plagiarism and IPR violation are serious concerns in academia and in the commercial world - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative - One reason why the fight is difficult is the dominance of Google as THE major search engine and that Google is unwilling to cooperate - The monopolistic behaviour of Google is also threatening how we see the world, how we as individuals are seen (complete loss of privacy) and is even threatening the world economy (!) In our proposal we did present a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
    The preliminary intended and approved list was: Section 1: To concentrate on Google as virtual monopoly, and Google's reported support of Wikipedia. To find experimental evidence of this support or show that the reports are not more than rumours. Section 2: To address the copy-paste syndrome and the socio-cultural consequences associated with it. Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.). To establish that not enough is done concerning these issues, partially due to just plain ignorance. We will propose some ways to alleviate the problem. Section 4: To discuss the usual tools to fight plagiarism and their shortcomings. Section 5: To propose ways to overcome most of the above problems according to proposals by Maurer/Zaka. To give examples, but to make it clear that to do this more seriously a pilot project is necessary beyond this particular study. Section 6: To briefly analyze various views of plagiarism, as it is quite different in different fields (journalism, engineering, architecture, painting, .), and to present a concept that avoids plagiarism from the very beginning. Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, analysis of data as a window into the commercial future. Section 8: To outline the need for new international laws. Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power. Section 10. To argue that there is no way to catch up with Google in a frontal attack.
    Section 11: To argue that fighting large search engines and plagiarism slice-by-slice, by using dedicated servers combined by one hub, could eventually decrease the importance of other global search engines. Section 12: To argue that global search engines are an area that cannot be left to the free market, but require some government control or at least non-profit institutions. We will mention other areas where similar, if not as glaring, phenomena are visible. Section 13: We will mention in passing the potential role of virtual worlds, such as the currently overhyped system "Second Life". Section 14: To elaborate and try out a model for knowledge workers that does not require special search engines, with a description of a simple demonstrator. Section 15 (Not originally part of the proposal): To propose concrete actions and to describe an Austrian effort that could, with moderate support, minimize the role of Google for Austria. Section 16: References (Not originally part of the proposal) In what follows, we will stick to Sections 1-14 plus the new Sections 15 and 16 as listed, plus a few Appendices.
    We believe that the importance has shifted considerably since the approval of the project. We thus will emphasize some aspects much more than originally planned, and treat others in a shorter fashion. We believe and hope that this is also seen as an unexpected benefit by BMVIT. This report is structured as follows: After an Executive Summary that highlights why the topic is of such paramount importance, we explain in an introduction how best to study the report and its appendices. We can report with some pride that many of the ideas have been accepted by the international scene at conferences and by journals as being of such crucial importance that a number of papers (constituting the appendices and elaborating the various sections) have been considered high-quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study can be distributed widely to European decision makers, as some of the issues involved do indeed involve all of Europe, if not the world.