Search (7 results, page 1 of 1)

  • author_ss:"Maurer, H."
  1. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.01
    0.009536377 = product of:
      0.023840941 = sum of:
        0.004989027 = product of:
          0.024945134 = sum of:
            0.024945134 = weight(_text_:problem in 754) [ClassicSimilarity], result of:
              0.024945134 = score(doc=754,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.14068612 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
          0.2 = coord(1/5)
        0.018851914 = weight(_text_:of in 754) [ClassicSimilarity], result of:
          0.018851914 = score(doc=754,freq=62.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2885868 = fieldWeight in 754, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
      0.4 = coord(2/5)
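    The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a reading aid, the short Python sketch below reproduces the reported score for this result using only the figures shown in the tree; the function and variable names are illustrative and are not Lucene API calls.

        import math

        def tf(freq):
            # ClassicSimilarity term-frequency factor: sqrt(freq)
            return math.sqrt(freq)

        def term_score(freq, idf, query_norm, field_norm):
            # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
            query_weight = idf * query_norm
            field_weight = tf(freq) * idf * field_norm
            return query_weight * field_weight

        # Figures taken from the explanation of doc 754 above.
        query_norm = 0.04177434
        s_problem = term_score(freq=2.0, idf=4.244485, query_norm=query_norm, field_norm=0.0234375)
        s_of = term_score(freq=62.0, idf=1.5637573, query_norm=query_norm, field_norm=0.0234375)

        # The "problem" clause is scaled by coord(1/5); the overall sum by coord(2/5).
        total = 0.4 * (0.2 * s_problem + s_of)
        print(total)  # ~0.009536377, matching the reported score up to floating-point rounding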
    
    Abstract
    The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of our investigation it also became clear that the focus has to be extended, not just to cover Google and search engines in an isolated fashion, but also to cover other Web 2.0 related phenomena, particularly Wikipedia, blogs, and other related community efforts. It was the purpose of our investigation to demonstrate:
    - Plagiarism and IPR violation are serious concerns in academia and in the commercial world.
    - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative.
    - One reason why the fight is difficult is the dominance of Google as THE major search engine, and the fact that Google is unwilling to cooperate.
    - The monopolistic behaviour of Google also threatens how we see the world and how we as individuals are seen (complete loss of privacy), and even threatens the world economy (!).
    In our proposal we presented a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
    The preliminary intended and approved list was:
    Section 1: To concentrate on Google as a virtual monopoly and on Google's reported support of Wikipedia, and to find experimental evidence of this support or show that the reports are no more than rumours.
    Section 2: To address the copy-paste syndrome and the socio-cultural consequences associated with it.
    Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.), and to establish that not enough is done concerning these issues, partially due to just plain ignorance. We will propose some ways to alleviate the problem.
    Section 4: To discuss the usual tools to fight plagiarism and their shortcomings.
    Section 5: To propose ways to overcome most of the above problems according to proposals by Maurer/Zaka, and to give examples, while making it clear that to do this more seriously a pilot project is necessary beyond this particular study.
    Section 6: To briefly analyze various views of plagiarism, as it is quite different in different fields (journalism, engineering, architecture, painting, etc.), and to present a concept that avoids plagiarism from the very beginning.
    Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, and the analysis of data as a window into the commercial future.
    Section 8: To outline the need for new international laws.
    Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power.
    Section 10: To argue that there is no way to catch up with Google in a frontal attack.
    Section 11: To argue that fighting large search engines and plagiarism slice by slice, using dedicated servers combined through one hub, could eventually decrease the importance of other global search engines.
    Section 12: To argue that global search engines are an area that cannot be left to the free market, but require some government control or at least non-profit institutions. We will mention other areas where similar, if not as glaring, phenomena are visible.
    Section 13: To mention in passing the potential role of virtual worlds, such as the currently overhyped system Second Life.
    Section 14: To elaborate and try out a model for knowledge workers that does not require special search engines, with a description of a simple demonstrator.
    Section 15 (not originally part of the proposal): To propose concrete actions and to describe an Austrian effort that could, with moderate support, minimize the role of Google for Austria.
    Section 16: References (not originally part of the proposal).
    In what follows, we will stick to Sections 1-14 plus the new Sections 15 and 16 as listed, plus a few Appendices.
    We believe that the relative importance of these topics has shifted considerably since the approval of the project. We will thus emphasize some aspects much more than originally planned and treat others more briefly. We believe and hope that BMVIT will also see this as an unexpected benefit. This report is structured as follows: after an Executive Summary that highlights why the topic is of such paramount importance, an introduction explains how best to study the report and its appendices. We can report with some pride that many of the ideas have been judged of such crucial importance by international conferences and journals that a number of papers (constituting the appendices and elaborating the various sections) have been accepted as high-quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study could be distributed widely to European decision makers, as some of the issues involved do indeed concern all of Europe, if not the world.
  2. Klemme, M.; Maurer, H.; Schneider, A.: Glimpses at the future of networked hypermedia systems (1996) 0.00
    0.0047777384 = product of:
      0.023888692 = sum of:
        0.023888692 = weight(_text_:of in 6156) [ClassicSimilarity], result of:
          0.023888692 = score(doc=6156,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.36569026 = fieldWeight in 6156, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6156)
      0.2 = coord(1/5)
    
    Abstract
    Discusses the current state of the art in the field of large-scale networked hypermedia systems. Identifies ways in which the future generation of networked hypermedia systems will differ from the present generation. Surveys: type; preparation, storage, and interchange of hypermedia documents; security, costs and copyright; navigation, search and retrieval; usability; and hypermedia as a technology of integration
    Source
    Journal of educational multimedia and hypermedia. 5(1996) nos.3/4, S.225-238
  3. Maurer, H.; Tomek, I.: Broadening the scope of hypermedia principles (1990) 0.00
    0.004423326 = product of:
      0.02211663 = sum of:
        0.02211663 = weight(_text_:of in 4874) [ClassicSimilarity], result of:
          0.02211663 = score(doc=4874,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.33856338 = fieldWeight in 4874, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4874)
      0.2 = coord(1/5)
    
    Abstract
    Argues for the inclusion of hypermedia systems among the basic components of computer environments. Reviews hypermedia principles and the terminology used, and gives examples of several applications in which hypermedia are already, or could advantageously be, used. Most computer applications would greatly benefit if hypermedia were extended from isolated applications to a system-wide facility, and this could substantially simplify the implementation of new applications. Extending hypermedia concepts to the organisation of the computer environment itself - the file system - and to the user interface would make computer environments more flexible and easier to use.
  4. Maurer, H.: Object-oriented modelling of hyperstructure : overcoming the static link deficiency (1994) 0.00
    0.0038307128 = product of:
      0.019153563 = sum of:
        0.019153563 = weight(_text_:of in 764) [ClassicSimilarity], result of:
          0.019153563 = score(doc=764,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2932045 = fieldWeight in 764, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=764)
      0.2 = coord(1/5)
    
    Abstract
    Although the object-oriented paradigm is well suited for modelling self-contained independent objects, it is not suited for modelling persistent relations (static links) between abstract data objects. At the same time, the concept of computer-navigable links is an integral part of the hypermedia paradigm. In contrast to multimedia, where the object-oriented paradigm plays a leading role, this 'static link' deficiency considerably reduces the application of object-oriented methods in hypermedia. Presents a new logical data model (the HM Data Model) which incorporates the well-known principles of object-oriented data modelling into the management of large-scale, multi-user hypermedia databases. The model is based on the notion of abstract hypermedia data objects called S-collections. This computer-navigable links approach not only overcomes the static link deficiency of the object-oriented paradigm, but also supports modularity, incremental development, and flexible versioning, and provides a solid logical basis for semantic modelling.
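    For readers unfamiliar with the idea, the minimal Python sketch below illustrates the kind of encapsulation the abstract describes: an S-collection-like object that owns both its members and the navigable links between them, so relations are managed inside an object rather than as free-floating static links. The class and method names are assumptions for illustration only, not the published HM Data Model.

        from dataclasses import dataclass, field

        @dataclass
        class SCollection:
            # Illustrative only: names and structure are assumptions, not the HM Data Model API.
            name: str
            members: dict = field(default_factory=dict)   # member name -> content or nested SCollection
            links: dict = field(default_factory=dict)     # member name -> set of member names

            def add_member(self, member_name, obj):
                self.members[member_name] = obj

            def link(self, source, target):
                # Links live inside the collection, so they can be versioned and
                # updated together with the members they connect.
                self.links.setdefault(source, set()).add(target)

            def navigate(self, source):
                # Computer-navigable: follow all links leaving a member.
                return [self.members[t] for t in self.links.get(source, set())]

        course = SCollection("course")
        course.add_member("intro", "Introduction page")
        course.add_member("lesson1", "Lesson 1 page")
        course.link("intro", "lesson1")
        print(course.navigate("intro"))   # ['Lesson 1 page']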
  5. Kulathuramaiyer, N.; Maurer, H.: Implications of emerging data mining (2009) 0.00
    0.0038307128 = product of:
      0.019153563 = sum of:
        0.019153563 = weight(_text_:of in 3144) [ClassicSimilarity], result of:
          0.019153563 = score(doc=3144,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2932045 = fieldWeight in 3144, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3144)
      0.2 = coord(1/5)
    
    Abstract
    Data mining describes a technology that discovers non-trivial hidden patterns in a large collection of data. Although this technology has a tremendous impact on our lives, the invaluable contributions of this invisible technology often go unnoticed. This paper discusses advances in data mining while focusing on emerging data mining capabilities. Such data mining applications perform multidimensional mining on a wide variety of heterogeneous data sources, providing solutions to many unresolved problems. This paper also highlights the advantages and disadvantages arising from the ever-expanding scope of data mining. Data mining augments human intelligence by equipping us with a wealth of knowledge and by empowering us to perform our daily tasks better. As the mining scope and capacity increase, users and organizations become more willing to compromise privacy. The huge data stores of the 'master miners' allow them to gain deep insights into individual lifestyles and their social and behavioural patterns. The capability to integrate and analyse data, combining business and financial trends with the ability to deterministically track market changes, will drastically affect our lives.
  6. Maurer, H.; Scherbakov, N.: Multimedia authoring for presentation and education : the official guide to HM-card (1996) 0.00
    0.0022345826 = product of:
      0.011172912 = sum of:
        0.011172912 = weight(_text_:of in 3945) [ClassicSimilarity], result of:
          0.011172912 = score(doc=3945,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.17103596 = fieldWeight in 3945, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3945)
      0.2 = coord(1/5)
    
    Abstract
    There are many multimedia authoring packages available under MS Windows, but none with as many outstanding features as HM-Card. HM-Card does all you expect from a modern multimedia authoring system: it allows you to combine all kinds of media - text, graphics, pictures, audio and video clips, and arbitrary executable files created by other programs - giving you all the freedom in the world.
  7. Andrews, K.; Kappe, F.; Maurer, H.: Serving information to the Web with Hyper-G (1995) 0.00
    0.0018058153 = product of:
      0.009029076 = sum of:
        0.009029076 = weight(_text_:of in 2233) [ClassicSimilarity], result of:
          0.009029076 = score(doc=2233,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.13821793 = fieldWeight in 2233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2233)
      0.2 = coord(1/5)
    
    Abstract
    The provision and maintenance of truly large-scale information resources on the WWW necessitates server architectures offering substantially more functionality than simply serving HTML files from the local file system and processing CGI requests. Describes Hyper-G, a large-scale, multi-protocol, distributed hypermedia information system which uses an object-oriented database layer to provide information structuring and link maintenance facilities in addition to fully integrated attribute and content search, a hierarchical access control scheme, support for multiple languages, interactive link editing, and point-and-click document insertion.
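    To make the phrase "link maintenance facilities" concrete, here is a minimal Python sketch of one simplified design: links are stored in a server-side database rather than embedded in documents, so the server can find and update every reference consistently. The class and its methods are assumptions for illustration and are not the actual Hyper-G implementation.

        class LinkDatabase:
            def __init__(self):
                self.links = set()                      # (source_id, target_id) pairs

            def add_link(self, source_id, target_id):
                self.links.add((source_id, target_id))

            def incoming(self, doc_id):
                # Bidirectional lookup: which documents reference doc_id?
                return [s for (s, t) in self.links if t == doc_id]

            def remove_document(self, doc_id):
                # No dangling references: drop every link touching the deleted document.
                self.links = {(s, t) for (s, t) in self.links
                              if s != doc_id and t != doc_id}

        db = LinkDatabase()
        db.add_link("intro.html", "chapter1.html")
        db.add_link("toc.html", "chapter1.html")
        print(db.incoming("chapter1.html"))   # ['intro.html', 'toc.html']
        db.remove_document("chapter1.html")   # both links are cleaned up together
        print(db.incoming("chapter1.html"))   # []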