Search (54 results, page 1 of 3)

  • × theme_ss:"Internet"
  • × theme_ss:"Suchmaschinen"
  1. Bradley, P.: Advanced Internet searcher's handbook (1998) 0.01
    0.0066512655 = product of:
      0.026605062 = sum of:
        0.026605062 = weight(_text_:information in 5454) [ClassicSimilarity], result of:
          0.026605062 = score(doc=5454,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.43369597 = fieldWeight in 5454, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=5454)
      0.25 = coord(1/4)
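    The indented tree under each result is Lucene's score explanation, and the arithmetic is ClassicSimilarity TF-IDF: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), and the hit's score is queryWeight × fieldWeight × coord. A minimal sketch (function names are illustrative, not part of any Lucene API) that reproduces the numbers displayed above:

    ```python
    import math

    def tf(freq):
        # ClassicSimilarity term frequency: square root of the raw count
        return math.sqrt(freq)                            # tf(10.0) = 3.1622777

    def idf(doc_freq, max_docs):
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779 for docFreq=20772

    def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=0.25):
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm                 # 0.06134496 in the tree above
        field_weight = tf(freq) * i * field_norm      # 0.43369597 = fieldWeight
        return query_weight * field_weight * coord

    # Result 1 (doc 5454): freq=10 for _text_:information, fieldNorm=0.078125
    print(explain_score(10.0, 20772, 44218, 0.034944877, 0.078125))
    # ≈ 0.0066512655, the score shown next to the first hit
    ```

    The same helper reproduces result 3: explain_score(2.0, 690, 44218, 0.034944877, 0.01953125) ≈ 0.0064218035, the higher idf of the rarer term "digitale" being offset by the much smaller fieldNorm of that long document.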
    
    Footnote
    Rez. in: Information world review. 1999, no.146, S.26 (D. Parr)
    LCSH
    World Wide Web (Information retrieval system)
    Information retrieval
    Subject
    World Wide Web (Information retrieval system)
    Information retrieval
  2. Cooke, A.: ¬A guide to finding quality information on the Internet : selection and evaluation strategies (1999) 0.01
    0.0066512655 = product of:
      0.026605062 = sum of:
        0.026605062 = weight(_text_:information in 662) [ClassicSimilarity], result of:
          0.026605062 = score(doc=662,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.43369597 = fieldWeight in 662, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=662)
      0.25 = coord(1/4)
    
    LCSH
    Information retrieval
    Library information networks
    Subject
    Information retrieval
    Library information networks
  3. Charisius, H.: Gängige Suchmaschinen übersehen weite Bereiche des Internet, neue Dienste helfen beim Heben der Info-Schätze : Mehr drin, als man denkt (2003) 0.01
    0.0064218035 = product of:
      0.025687214 = sum of:
        0.025687214 = weight(_text_:digitale in 1721) [ClassicSimilarity], result of:
          0.025687214 = score(doc=1721,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.1424916 = fieldWeight in 1721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1721)
      0.25 = coord(1/4)
    
    Content
    "When Chris Sherman talks about the Internet, he sometimes drifts off to the stars. "Like the universe, the Web is full of dark matter," says the search engine expert from Los Angeles. "Astronomers tell us that even the strongest telescope can detect at most ten percent of the celestial bodies in space." The rest is dark and therefore invisible. The same, he argues, holds for information on the Web. "Most knowledge," says Sherman, "remains hidden even from the best search engines." The digital sleuths can comb through at most a tenth of the Internet. The rest lies beneath the surface in the so-called Deep Web, spread across databases, archives and forums, or drifts unlinked in the sea of information, out of reach of search engines. A study by Brightplanet, a US search technology firm, calculates that the Deep Web holds 400 to 550 times more data than the surface Web in which Google & Co. can fish. "If information is the most important commodity of the 21st century, then the deep Web is invaluable," says Michael Bergman of Brightplanet's board. To maintain and expand their indexes, search engines send their scouts, so-called spiders or crawlers, through the Net. These software robots move from link to link and store every new page they reach. "Millions of unlinked websites, and documents generated dynamically from databases, slip through their net," estimates Wolfgang Sander-Beuermann, head of the search engine laboratory at the University of Hannover. Other sites deliberately lock the agents out: a hidden directive or a password prompt blocks the spiders, for example at the entrance to corporate intranets. Some content the spiders cannot reach because they can make nothing of its data format; music files, images and text documents are hard to digest for agents specialized in the Internet code HTML.
    The largest part of the Deep Web is filled with "databases of reliable knowledge that is accessible to everyone," says Net explorer Sherman, who, together with the librarian and information specialist Gary Price, makes the deep regions of the Internet visible to the broad mass of users for the first time in the book "The Invisible Web". Among the most valuable information sources are free archives, the catalogs of public libraries, databases of universities, public authorities, patent offices or the Federal Statistical Office, as well as newsgroups (topic-specific bulletin boards on the Net) and digital product catalogs. "The search engines cannot browse these treasures because they never get inside in the first place," Sherman explains. Before accessing, for example, the free FOCUS archive, the user must search by keyword via an input form. Sherman sums up the crux for Google & Co.: "They cannot type," and so they have to stay outside. The same game plays out at the largest German book catalog: the digital sleuths do find it and lead the searcher to Die Deutsche Bibliothek at www.ddb.de. In that catalog, which lists more than eight million printed works, the visitor must then continue searching on his own; for search engines, the index is invisible. Automatic finding aids also fail at a targeted search for Albert Einstein's curriculum vitae. Google does report 680,000 hits for Albert Einstein, but the one vita that sits, alongside 25,000 others, in the archive of www.biography.com, the popular generalist does not find.
  4. Lütgert, S.: ¬Der Googlehupf als Quantensprung : Content heißt jetzt Context - Warum man mit Websites noch nie Geld verdienen konnte. Linksverkehr (2001) 0.01
    0.0064218035 = product of:
      0.025687214 = sum of:
        0.025687214 = weight(_text_:digitale in 1671) [ClassicSimilarity], result of:
          0.025687214 = score(doc=1671,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.1424916 = fieldWeight in 1671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1671)
      0.25 = coord(1/4)
    
    Content
    Something else, then, had to resonate in this invocation of content: a hidden yet all the more decisive demarcation from digital non-content, from mere "form", from the contentless shells and bodiless surfaces of the Internet, whose failure and end were to be proclaimed here. Cyberspace, until then considered hollow, flat and inauthentic, was to be filled with a new substance, and not with any particular substance but with substance as such. This metaphysics of content found its perfect representation in the mountain peaks of AltaVista, the logo of the search engine that nearly every one of us is likely to have used between 1997 and 1999. What one saw were mountains of content: two snow-covered peaks rising at the upper left edge of the picture, towering over a high plateau in the foreground that sloped gently to the right, faded into a blue-and-white pixelated mist and finally dissolved into the only slightly brighter horizon. Read from right to left, the image showed: digital noise, its transformation into meaning, meaning's elevation into content, and finally the triumph of the principle that always keeps this content in view: AltaVista (Spanish for "the view from the summit"). This image not only differed radically from the mostly unrecognizably abstracted emblems of the competition; it also showed the Internet as a whole, at the very moment when content was king and ruled over a kingdom that knew no borders. Of course, the operators of AltaVista had also bet that money could be made with websites. Their idea was to make more content findable than any search engine before them, and to finance this by selling banner ads that would point to still more content.
    That no money can be made with websites, and AltaVista shows this precisely, is due neither to a lack of users nor to a lack of bandwidth (the search engine had more than enough of both), but precisely to content, or more exactly to its proverbial volatility. Content is not only labor-intensive (and therefore expensive) to produce and hard to bring into a saleable form; it also has a tendency both to dissolve back into noise (as at the right edge of the AltaVista logo) and to clump together into such masses of meaning (left edge of the picture) that it tips over into pure tautology. These last two phenomena eventually became a content problem of the search engine itself:
  5. Web work : Information seeking and knowledge work on the World Wide Web (2000) 0.01
    0.0050479556 = product of:
      0.020191822 = sum of:
        0.020191822 = weight(_text_:information in 1190) [ClassicSimilarity], result of:
          0.020191822 = score(doc=1190,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3291521 = fieldWeight in 1190, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1190)
      0.25 = coord(1/4)
    
    Series
    Information science and knowledge management; vol.1
  6. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.00
    0.004759258 = product of:
      0.019037032 = sum of:
        0.019037032 = weight(_text_:information in 4497) [ClassicSimilarity], result of:
          0.019037032 = score(doc=4497,freq=32.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3103276 = fieldWeight in 4497, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
      0.25 = coord(1/4)
    
    Abstract
    This book provides practical strategies which enable the advanced web user to locate information effectively and to form a precise evaluation of the accuracy of that information. Although the book provides a brief but thorough review of the technologies which are currently available for these purposes, most of the book concerns practical `future-proof' techniques which are independent of changes in the tools available. For example, the book covers: how to retrieve salient information quickly; how to remove or compensate for bias; and tuition of novice Internet users.
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly evolving state of the Internet and Internet technologies - it is not about technological `tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer term strategies - how to interact with your search engine; foraging for information; long term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  7. Choo, C.W.; Detlor, B.; Turnbull, D.: Information seeking on the Web : an integrated model of browsing and searching (1999) 0.00
    0.004608132 = product of:
      0.018432528 = sum of:
        0.018432528 = weight(_text_:information in 6692) [ClassicSimilarity], result of:
          0.018432528 = score(doc=6692,freq=30.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3004734 = fieldWeight in 6692, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6692)
      0.25 = coord(1/4)
    
    Abstract
    The paper presents findings from a study of how knowledge workers use the Web to seek external information as part of their daily work. Thirty-four users from seven companies took part in the study. Participants were mainly IT specialists, managers, and research/marketing/consulting staff working in organizations that included a large utility company, a major bank, and a consulting firm. Participants answered a detailed questionnaire and were interviewed individually in order to understand their information needs and information-seeking preferences. A custom-developed WebTracker software application was installed on each of their workplace PCs, and participants' Web-use activities were then recorded continuously during two-week periods. The WebTracker recorded how participants used the browser to seek information on the Web: it logged menu choices, button bar selections, and keystroke actions, allowing browsing and searching sequences to be reconstructed. In a second round of personal interviews, participants recalled critical incidents of using information from the Web. Data from the two interviews and the WebTracker logs constituted the database for analysis. Sixty-one significant episodes of information seeking were identified. A model was developed to describe the common repertoires of information seeking that were observed. On one axis of the model, episodes were plotted according to the four scanning modes identified by Aguilar (1967) and Weick and Daft (1983): undirected viewing, conditioned viewing, informal search, and formal search. Each mode is characterized by its own information needs and information-seeking strategies. On the other axis of the model, episodes were plotted according to the occurrence of one or more of the six categories of information-seeking behaviors identified by Ellis (1989, 1990): starting, chaining, browsing, differentiating, monitoring, and extracting. The study suggests that a behavioral framework that relates motivations (Aguilar) and moves (Ellis) may be helpful in analysing patterns of Web-based information seeking
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  8. Large, A.; Beheshti, J.; Moukdad, H.: Information seeking on the Web : navigational skills of grade-six primary school students (1999) 0.00
    0.0044618044 = product of:
      0.017847218 = sum of:
        0.017847218 = weight(_text_:information in 6545) [ClassicSimilarity], result of:
          0.017847218 = score(doc=6545,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2909321 = fieldWeight in 6545, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6545)
      0.25 = coord(1/4)
    
    Abstract
    Reports on research into the information-seeking habits of primary schoolchildren conducted under operational conditions. Three workstations with Internet access were installed in a grade-six classroom in suburban Montreal. After a short introductory training session for the entire group followed by short individual sessions for each student, 53 students, working in small groups, used these workstations over a six-week period to seek information on the Web of relevance to a class project assigned by their teacher. The project dealt with the Winter Olympic Games (recently completed at that time). The student objective was to locate relevant information for a poster and an oral presentation on one of the sports represented at the Games. All screen activity was directly captured on videotape and group conversations at the workstation were audiotaped. Demographic and computer literacy information was gathered in a questionnaire. This paper presents a map of the information-seeking landscape based upon an analysis of the descriptive statistics gathered from the Web searches. It reveals that the novice users favored browsing over analytic search strategies, although they did show some sophistication in the construction of the latter. Online help was ignored. The children demonstrated a very high level of interactivity with the interface at the expense of thinking, planning and evaluating. This is a preliminary analysis of data which will subsequently be expanded by the inclusion of qualitative data
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  9. Hupfer, M.E.; Detlor, B.: Gender and Web information seeking : a self-concept orientation model (2006) 0.00
    0.0044618044 = product of:
      0.017847218 = sum of:
        0.017847218 = weight(_text_:information in 5119) [ClassicSimilarity], result of:
          0.017847218 = score(doc=5119,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2909321 = fieldWeight in 5119, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5119)
      0.25 = coord(1/4)
    
    Abstract
    Adapting the consumer behavior selectivity model to the Web environment, this paper's key contribution is the introduction of a self-concept orientation model of Web information seeking. This model, which addresses gender, effort, and information content factors, questions the commonly assumed equivalence of sex and gender by specifying the measurement of gender-related self-concept traits known as self- and other-orientation. Regression analyses identified associations between self-orientation, other-orientation, and self-reported search frequencies for content with identical subject domain (e.g., medical information, government information) and differing relevance (i.e., important to the individual personally versus important to someone close to him or her). Self- and other-orientation interacted such that when individuals were highly self-oriented, their frequency of search for both self- and other-relevant information depended on their level of other-orientation. Specifically, high-self/high-other individuals, with a comprehensive processing strategy, searched most often, whereas high-self/low-other respondents, with an effort minimization strategy, reported the lowest search frequencies. This interaction pattern was even more pronounced for other-relevant information seeking. We found no sex differences in search frequency for either self-relevant or other-relevant information.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.8, S.1105-1115
  10. Sherman, C.; Price, G.: ¬The invisible Web : uncovering information sources search engines can't see (2001) 0.00
    0.0044618044 = product of:
      0.017847218 = sum of:
        0.017847218 = weight(_text_:information in 62) [ClassicSimilarity], result of:
          0.017847218 = score(doc=62,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2909321 = fieldWeight in 62, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=62)
      0.25 = coord(1/4)
    
    Abstract
    Enormous expanses of the Internet are unreachable with standard Web search engines. This book provides the key to these hidden resources by showing how to uncover and use invisible Web sources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web: information stored in databases. Unlike pages on the visible Web, information in databases is generally inaccessible to the software spiders and crawlers that compile search engine indexes. As Web technology improves, more and more information is being stored in databases that feed into dynamically generated Web pages. The tips provided in this resource will ensure that those databases are exposed and that Net-based research is conducted in the most thorough and effective manner. Discusses the use of online information resources and problems caused by dynamically generated Web pages, paying special attention to information mapping, assessing the validity of information, and the future of Web searching.
  11. Pharo, N.: Web information search strategies : a model for classifying Web interaction (1999) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 3831) [ClassicSimilarity], result of:
          0.016826518 = score(doc=3831,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 3831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3831)
      0.25 = coord(1/4)
    
    Source
    Vocabulary as a central concept in digital libraries: interdisciplinary concepts, challenges, and opportunities : proceedings of the Third International Conference on Conceptions of Library and Information Science (COLIS3), Dubrovnik, Croatia, 23-26 May 1999. Ed. by T. Arpanac et al
  12. Hewett, S.: MathGate - a gateway to Internet resources for mathematicians (2000) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 4877) [ClassicSimilarity], result of:
          0.016657405 = score(doc=4877,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 4877, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4877)
      0.25 = coord(1/4)
    
    Source
    Online information review. 24(2000) no.1, S.83-84
  13. Lu, G.; Williams, B.; You, C.: ¬An effective World Wide Web image search engine (2001) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 5655) [ClassicSimilarity], result of:
          0.016657405 = score(doc=5655,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 5655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=5655)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 27(2001) no.1, S.27-37
  14. Hiom, D.: SOSIG : an Internet hub for the social sciences, business and law (2000) 0.00
    0.004121639 = product of:
      0.016486555 = sum of:
        0.016486555 = weight(_text_:information in 4871) [ClassicSimilarity], result of:
          0.016486555 = score(doc=4871,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2687516 = fieldWeight in 4871, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4871)
      0.25 = coord(1/4)
    
    Abstract
    SOSIG (Social Science Information Gateway) aims to provide a trusted source of selected, high-quality Internet information for researchers and practitioners in the social sciences, business and law. This article tracks the development of the gateway since its inception in 1994, describes its current features, and looks at some of the associated research and development taking place around the service, including the automatic classification of Web resources and experiments with multilingual thesauri
    Source
    Online information review. 24(2000) no.1, S.54-58
  15. Internet searching and indexing : the subject approach (2000) 0.00
    0.004121639 = product of:
      0.016486555 = sum of:
        0.016486555 = weight(_text_:information in 1468) [ClassicSimilarity], result of:
          0.016486555 = score(doc=1468,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2687516 = fieldWeight in 1468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1468)
      0.25 = coord(1/4)
    
    Abstract
    This comprehensive volume offers usable information for people at all levels of Internet savvy. It can teach librarians, students, and patrons how to search the Internet more systematically. It also helps information professionals design more efficient, effective search engines and Web pages.
    Theme
    Information Gateway
  16. Garnsey, M.R.: What distance learners should know about information retrieval on the World Wide Web (2002) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 1626) [ClassicSimilarity], result of:
          0.015963038 = score(doc=1626,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 1626, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1626)
      0.25 = coord(1/4)
    
    Abstract
    The Internet can be a valuable tool allowing distance learners to access information not available locally. Search engines are the most common means of locating relevant information on the Internet, but to use them efficiently students should be taught the basics of searching and how to evaluate the results. This article briefly reviews how search engines work, studies comparing search engines, and criteria useful in evaluating the quality of returned Web pages. Research indicates there are statistical differences in the precision of search engines, with AltaVista ranking high in several studies. When evaluating the quality of Web pages, the standard criteria used in evaluating print resources are appropriate, along with additional criteria which relate to the Web site itself. Giving distance learners training in how to use search engines and how to evaluate the results will allow them to access relevant information efficiently while ensuring that it is of adequate quality.
    Footnote
    Part of an issue devoted to "Distance learning: information access and services for virtual users", publ. by Haworth Press
  17. Suchen und Finden im Internet (2007) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 484) [ClassicSimilarity], result of:
          0.015963038 = score(doc=484,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 484, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.25 = coord(1/4)
    
    Abstract
    The Internet has lastingly changed the world of information, communication and media, and search engines play a central role in this. They form the gateway to the sea of electronically available information, give users valuable help in finding content, have meanwhile become the crystallization point for a wide range of complementary information, communication and media services, and are about to upend the structures and strategies of the industries involved. At the same time, the dynamic development of search-and-find technologies for the Internet is still in full swing. Against this background, the MÜNCHNER KREIS analyzed these developments and discussed the future perspectives with outstanding experts from industry and academia. This book contains the results.
    LCSH
    Business Information Systems
    Information Systems Applications (incl.Internet)
    Subject
    Business Information Systems
    Information Systems Applications (incl.Internet)
  18. Koenemann, J.; Lindner, H.-G.; Thomas, C.: Unternehmensportale : Von Suchmaschinen zum Wissensmanagement (2000) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 5233) [ClassicSimilarity], result of:
          0.014425736 = score(doc=5233,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 5233, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5233)
      0.25 = coord(1/4)
    
    Abstract
    The task of knowledge management is to provide a company's employees with information relevant to their decisions and actions, and to support them in processing this information intelligently. Corporate portals are a tool of growing importance for this purpose. We briefly describe the development of portals on the World Wide Web (WWW) and then distinguish Web portals from various kinds of corporate portals. We outline the expected functionality and present a five-layer model of an overall portal architecture that comprises the essential components. We then discuss the particulars of the organizational implementation and, in the outlook, the transition from portals to "ubiquitous personalized information supply", i.e. individualized information provision that is available everywhere.
    Source
    nfd Information - Wissenschaft und Praxis. 51(2000) H.6, S.325-334
    Theme
    Information Resources Management
  19. Hilberer, T.: Über die Zugänglichkeit der Informationen im Internet : Die Rolle der Bibliotheken (1999) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 4101) [ClassicSimilarity], result of:
          0.014277775 = score(doc=4101,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 4101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4101)
      0.25 = coord(1/4)
    
    Footnote
    Refers to the article: Lawrence, S. and C.L. Giles: Accessibility of information on the web. In: Nature. No.400, 8.7.1999, p.107-109.
  20. Warnick, W.L.; Leberman, A.; Scott, R.L.; Spence, K.J.; Johnsom, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 1215) [ClassicSimilarity], result of:
          0.014277775 = score(doc=1215,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 1215, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1215)
      0.25 = coord(1/4)
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.

Languages

  • e 38
  • d 15
  • nl 1

Types

  • a 43
  • m 9
  • s 3
  • el 2