Search (70 results, page 2 of 4)

  • × theme_ss:"Internet"
  • × type_ss:"el"
  1. Galitsky, B.; Levene, M.: On the economy of Web links : Simulating the exchange process (2004) 0.01
    0.01330401 = product of:
      0.07982406 = sum of:
        0.07982406 = weight(_text_:web in 5640) [ClassicSimilarity], result of:
          0.07982406 = score(doc=5640,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5769126 = fieldWeight in 5640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=5640)
      0.16666667 = coord(1/6)
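The explain tree above can be reproduced numerically. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm, fieldNorm and coord as given from the tree:

```python
import math

# Figures from the explain tree for doc 5640, term "web"
freq, doc_freq, max_docs = 2.0, 4597, 44218
query_norm, field_norm = 0.042397358, 0.125   # given in the tree
coord = 1.0 / 6.0                             # 1 of 6 query clauses matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~ 3.2635105
tf = math.sqrt(freq)                              # ~ 1.4142135
query_weight = idf * query_norm                   # ~ 0.13836423
field_weight = tf * idf * field_norm              # ~ 0.5769126
score = query_weight * field_weight * coord       # ~ 0.01330401
```

The product matches the displayed 0.01330401, confirming how the tf, idf and norm factors in the tree combine.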
    
  2. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.01
    0.013147181 = product of:
      0.07888308 = sum of:
        0.07888308 = weight(_text_:web in 3752) [ClassicSimilarity], result of:
          0.07888308 = score(doc=3752,freq=20.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5701118 = fieldWeight in 3752, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
      0.16666667 = coord(1/6)
    
    Abstract
     Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
    Content
     Relates to: Bergman, M.K.: The Deep Web: surfacing hidden value. In: Journal of Electronic Publishing. 7(2001) no.1, S.xxx-xxx. [See: http://www.press.umich.edu/jep/07-01/bergman.html].
  3. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.01
    0.012366909 = product of:
      0.037100725 = sum of:
        0.019956015 = weight(_text_:web in 2) [ClassicSimilarity], result of:
          0.019956015 = score(doc=2,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.14422815 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.017144712 = weight(_text_:retrieval in 2) [ClassicSimilarity], result of:
          0.017144712 = score(doc=2,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.13368362 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
      0.33333334 = coord(2/6)
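Entry 3 matches two of the six query terms ("web" and "retrieval"), so the two term weights are summed and scaled by coord(2/6). A sketch under the same assumed ClassicSimilarity formulas:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """tf-idf weight of one term, as in Lucene's ClassicSimilarity."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

web = term_score(2.0, 4597, 44218, 0.042397358, 0.03125)        # ~ 0.019956015
retrieval = term_score(2.0, 5836, 44218, 0.042397358, 0.03125)  # ~ 0.017144712
score = (web + retrieval) * (2.0 / 6.0)                         # ~ 0.012366909
```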
    
    Abstract
     In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview of discipline-specific classifications in Mathematics, Computing and Physics, on the one hand, and of general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions of a more widespread application of such a method.
A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.01
    0.011820856 = product of:
      0.035462566 = sum of:
        0.022990054 = weight(_text_:wide in 423) [ClassicSimilarity], result of:
          0.022990054 = score(doc=423,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.122383565 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.01247251 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.01247251 = score(doc=423,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
      0.33333334 = coord(2/6)
    
    Abstract
     Lena Forsén is the real human behind the Lenna image, which was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the world wide web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Image hits on the internet peaked in 1995.
  5. Cross, P.: DESIRE: making the most of the Web (2000) 0.01
    0.01164101 = product of:
      0.06984606 = sum of:
        0.06984606 = weight(_text_:web in 2146) [ClassicSimilarity], result of:
          0.06984606 = score(doc=2146,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.50479853 = fieldWeight in 2146, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=2146)
      0.16666667 = coord(1/6)
    
  6. Sowards, S.W.: ¬A typology for ready reference Web sites in libraries (1996) 0.01
    0.011521611 = product of:
      0.06912967 = sum of:
        0.06912967 = weight(_text_:web in 944) [ClassicSimilarity], result of:
          0.06912967 = score(doc=944,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.49962097 = fieldWeight in 944, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=944)
      0.16666667 = coord(1/6)
    
    Abstract
     Many libraries manage Web sites intended to provide their users with online resources suitable for answering reference questions. Most of these sites can be analyzed in terms of their depth, and their organizing and searching features. Composing a typology based on these factors sheds light on the critical design decisions that influence whether users of these sites succeed or fail to find information easily, rapidly and accurately. The same analysis highlights some larger design issues, both for Web sites and for information management at large
  7. Danowski, P.: Step one: blow up the silo! : Open bibliographic data, the first step towards Linked Open Data (2010) 0.01
    0.011155753 = product of:
      0.06693452 = sum of:
        0.06693452 = weight(_text_:web in 3962) [ClassicSimilarity], result of:
          0.06693452 = score(doc=3962,freq=10.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.48375595 = fieldWeight in 3962, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3962)
      0.16666667 = coord(1/6)
    
    Abstract
     More and more libraries are starting semantic web projects, yet the question of the data's license is either not discussed or the discussion is deferred to the end of the project. This paper discusses why the question of the license is so important in the context of the semantic web that it should be one of the first aspects addressed in a semantic web project. It also shows why a public domain waiver is the only solution that fulfills the special requirements of the semantic web and guarantees the reusability of semantic library data for the sustainability of such projects.
    Object
    Web 2.0
  8. Landwehr, A.: China schafft digitales Punktesystem für den "besseren" Menschen (2018) 0.01
    0.010258361 = product of:
      0.061550163 = sum of:
        0.061550163 = product of:
          0.09232524 = sum of:
            0.04637119 = weight(_text_:29 in 4314) [ClassicSimilarity], result of:
              0.04637119 = score(doc=4314,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.31092256 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
            0.045954052 = weight(_text_:22 in 4314) [ClassicSimilarity], result of:
              0.045954052 = score(doc=4314,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.30952093 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
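Entry 8 additionally shows a nested sub-query: two of its three sub-clauses (the date terms "29" and "22") matched, giving an inner coord of 2/3, and the sub-query itself is one of six top-level clauses (coord 1/6). Taking the two term weights as given from the tree:

```python
# Term weights read directly from the explain tree for doc 4314
w29, w22 = 0.04637119, 0.045954052

inner = (w29 + w22) * (2.0 / 3.0)   # ~ 0.061550163
score = inner * (1.0 / 6.0)         # ~ 0.010258361
```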
    
    Date
    22. 6.2018 14:29:46
  9. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.01
    0.0101026185 = product of:
      0.060615707 = sum of:
        0.060615707 = weight(_text_:retrieval in 3065) [ClassicSimilarity], result of:
          0.060615707 = score(doc=3065,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.47264296 = fieldWeight in 3065, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
      0.16666667 = coord(1/6)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  10. Choo, C.W.; Detlor, B.; Turnbull, D.: Information seeking on the Web : an integrated model of browsing and searching (2000) 0.01
    0.01008141 = product of:
      0.060488462 = sum of:
        0.060488462 = weight(_text_:web in 4438) [ClassicSimilarity], result of:
          0.060488462 = score(doc=4438,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43716836 = fieldWeight in 4438, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4438)
      0.16666667 = coord(1/6)
    
    Abstract
     This paper presents findings from a study of how knowledge workers use the Web to seek external information as part of their daily work. 34 users from 7 companies took part in the study. Participants were mainly IT specialists, managers, and research/marketing/consulting staff working in organizations that included a large utility company, a major bank, and a consulting firm. Participants answered a detailed questionnaire and were interviewed individually in order to understand their information needs and information-seeking preferences. A custom-developed WebTracker software application was installed on each of their workplace PCs, and participants' Web-use activities were then recorded continuously during two-week periods
  11. CARMEN : Content Analysis, Retrieval und Metadata: Effective Networking (1999) 0.01
    0.010001082 = product of:
      0.06000649 = sum of:
        0.06000649 = weight(_text_:retrieval in 5748) [ClassicSimilarity], result of:
          0.06000649 = score(doc=5748,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.46789268 = fieldWeight in 5748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=5748)
      0.16666667 = coord(1/6)
    
  12. Bergman, M.K.: ¬The Deep Web : surfacing hidden value (2001) 0.01
    0.0099780075 = product of:
      0.059868045 = sum of:
        0.059868045 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.059868045 = score(doc=39,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43268442 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=39)
      0.16666667 = coord(1/6)
    
  13. Brooks, T.A.: Where is meaning when form is gone? : Knowledge representation an the Web (2001) 0.01
    0.0099780075 = product of:
      0.059868045 = sum of:
        0.059868045 = weight(_text_:web in 3889) [ClassicSimilarity], result of:
          0.059868045 = score(doc=3889,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43268442 = fieldWeight in 3889, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3889)
      0.16666667 = coord(1/6)
    
  14. Keen, A.; Weinberger, D.: Keen vs. Weinberger : July 18, 2007. (2007) 0.01
    0.0099780075 = product of:
      0.059868045 = sum of:
        0.059868045 = weight(_text_:web in 1304) [ClassicSimilarity], result of:
          0.059868045 = score(doc=1304,freq=18.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43268442 = fieldWeight in 1304, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1304)
      0.16666667 = coord(1/6)
    
    Abstract
    This is the full text of a "Reply All" debate on Web 2.0 between authors Andrew Keen and David Weinberger
    Content
    "Mr. Keen begins: So what, exactly, is Web 2.0? It is the radical democratization of media which is enabling anyone to publish anything on the Internet. Mainstream media's traditional audience has become Web 2.0's empowered author. Web 2.0 transforms all of us -- from 90-year-old grandmothers to eight-year-old third graders -- into digital writers, music artists, movie makers and journalists. Web 2.0 is YouTube, the blogosphere, Wikipedia, MySpace or Facebook. Web 2.0 is YOU! (Time Magazine's Person of the Year for 2006). Is Web 2.0 a dream or a nightmare? Is it a remix of Disney's "Cinderella" or of Kafka's "Metamorphosis"? Have we -- as empowered conversationalists in the global citizen media community -- woken up with the golden slipper of our ugly sister (aka: mainstream media) on our dainty little foot? Or have we -- as authors-formerly-know-as-the-audience -- woken up as giant cockroaches doomed to eternally stare at our hideous selves in the mirror of Web 2.0? Silicon Valley, of course, interprets Web 2.0 as Disney rather than Kafka. After all, as the sales and marketing architects of this great democratization argue, what could be wrong with a radically flattened media? Isn't it dreamy that we can all now publish ourselves, that we each possess digital versions of Johannes Gutenberg's printing press, that we are now able to easily create, distribute and sell our content on the Internet? This is personal liberation with an early 21st Century twist -- a mash-up of the countercultural Sixties, the free market idealism of the Eighties, and the technological determinism and consumer-centricity of the Nineties. The people have finally spoken. The media has become their message and the people are self-broadcasting this message of emancipation on their 70 million blogs, their hundreds of millions of YouTube videos, their MySpace pages and their Wikipedia entries. ..."
  15. Wilson, R.: ¬The role of ontologies in teaching and learning (2004) 0.01
    0.008799776 = product of:
      0.05279866 = sum of:
        0.05279866 = weight(_text_:web in 3387) [ClassicSimilarity], result of:
          0.05279866 = score(doc=3387,freq=14.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.38159183 = fieldWeight in 3387, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3387)
      0.16666667 = coord(1/6)
    
    Abstract
     Ontologies are currently a buzzword in many communities, hailed as a mechanism for making better use of the Web. They offer a shared definition of a domain that can be understood by computers, enabling them to complete more meaningful tasks. Although ontologies of different descriptions have been in development and use for some time, it is their potential as a key technology in the Semantic Web which is responsible for the current wave of interest. Communities have different expectations of the Semantic Web and how it will be realised, but it is generally believed that ontologies will play a major role. In light of their potential in this new context, much current effort is focusing on developing languages and tools. OWL (Web Ontology Language) has recently become a standard, and builds on top of existing Web languages such as XML and RDF to offer a high degree of expressiveness. A variety of tools are emerging for creating, editing and managing ontologies in OWL. Ontologies have a range of potential benefits and applications in further and higher education, including the sharing of information across educational systems, providing frameworks for learning object reuse, and enabling intelligent and personalised student support. The difficulties inherent in creating a model of a domain are being tackled, and the communities involved in ontology development are working together to achieve their vision of the Semantic Web. This Technology and Standards Watch report discusses ontologies and their role in the Semantic Web, with a special focus on their implications for teaching and learning. This report will introduce ontologies to the further and higher education community, explaining why they are being developed, what they hope to achieve, and their potential benefits to the community. Current ontology tools and standards will be described, and the emphasis will be on introducing the technology to a new audience and exploring its risks and potential applications in teaching and learning. At a time when educational programmes based on ontologies are starting to be developed, the author hopes to increase understanding of the key issues in the wider community.
  16. Lim, E.: Subject Gateways in Südostasien : Anwendung von Klassifikationen (1999) 0.01
    0.008572357 = product of:
      0.051434137 = sum of:
        0.051434137 = weight(_text_:retrieval in 4188) [ClassicSimilarity], result of:
          0.051434137 = score(doc=4188,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.40105087 = fieldWeight in 4188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4188)
      0.16666667 = coord(1/6)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  17. GERHARD : eine Spezialsuchmaschine für die Wissenschaft (1998) 0.01
    0.008572357 = product of:
      0.051434137 = sum of:
        0.051434137 = weight(_text_:retrieval in 381) [ClassicSimilarity], result of:
          0.051434137 = score(doc=381,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.40105087 = fieldWeight in 381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=381)
      0.16666667 = coord(1/6)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  18. cis: Nationalbibliothek will das deutsche Internet kopieren (2008) 0.01
    0.008054382 = product of:
      0.024163146 = sum of:
        0.017461514 = weight(_text_:web in 4609) [ClassicSimilarity], result of:
          0.017461514 = score(doc=4609,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.12619963 = fieldWeight in 4609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4609)
        0.0067016324 = product of:
          0.020104896 = sum of:
            0.020104896 = weight(_text_:22 in 4609) [ClassicSimilarity], result of:
              0.020104896 = score(doc=4609,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.1354154 = fieldWeight in 4609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4609)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    24.10.2008 14:19:22
    Footnote
    Vgl. unter: http://www.spiegel.de/netzwelt/web/0,1518,586036,00.html.
  19. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.01
    0.0075601074 = product of:
      0.045360643 = sum of:
        0.045360643 = weight(_text_:retrieval in 1669) [ClassicSimilarity], result of:
          0.045360643 = score(doc=1669,freq=14.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.3536936 = fieldWeight in 1669, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
      0.16666667 = coord(1/6)
    
    Abstract
     After a short outline of problems, possibilities and difficulties of systematic information retrieval on the Internet and a description of efforts for development in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the Internet as a whole. In order to be able to evaluate the differences, possibilities and restrictions of the different services it is necessary to begin with organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories robot-based WWW-catalogues of different types, list- or form-based catalogues and simultaneous or collected search services respectively. For various reasons, however, it will not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common for all attempts of indexing the Internet. The problems of the quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services will be mentioned in relation to descriptions of the contents of documents and standardization efforts. Internet harvesting and indexing technology and retrieval software is thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.
  20. Van de Sompel, H.; Beit-Arie, O.: Generalizing the OpenURL framework beyond references to scholarly works : the Bison-Futé model (2001) 0.01
    0.0072010076 = product of:
      0.043206044 = sum of:
        0.043206044 = weight(_text_:web in 1223) [ClassicSimilarity], result of:
          0.043206044 = score(doc=1223,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3122631 = fieldWeight in 1223, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1223)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper introduces the Bison-Futé model, a conceptual generalization of the OpenURL framework for open and context-sensitive reference linking in the web-based scholarly information environment. The Bison-Futé model is an abstract framework that identifies and defines components that are required to enable open and context-sensitive linking on the web in general. It is derived from experience gathered from the deployment of the OpenURL framework over the course of the past year. It is a generalization of the current OpenURL framework in several aspects. It aims to extend the scope of open and context-sensitive linking beyond web-based scholarly information. In addition, it offers a generalization of the manner in which referenced items -- as well as the context in which these items are referenced -- can be described for the specific purpose of open and context-sensitive linking. The Bison-Futé model is not suggested as a replacement of the OpenURL framework. On the contrary: it confirms the conceptual foundations of the OpenURL framework and, at the same time, it suggests directions and guidelines as to how the current OpenURL specifications could be extended to become applicable beyond the scholarly information environment.

Languages

  • e 41
  • d 27
  • el 1

Types

  • a 20
  • s 3
  • r 2
  • m 1
  • x 1