Search (169 results, page 1 of 9)

  • × year_i:[2000 TO 2010}
  • × theme_ss:"Internet"
  1. Ku, L.-W.; Ho, H.-W.; Chen, H.-H.: Opinion mining and relationship discovery using CopeOpi opinion analysis system (2009) 0.08
    
    Abstract
    We present CopeOpi, an opinion-analysis system which extracts opinions about specific targets from the Web, summarizes the polarity and strength of these opinions, and tracks opinion variations over time. Objects that yield similar opinion tendencies over a certain time period may be correlated due to latent causal events. CopeOpi discovers relationships among objects based on their opinion-tracking plots and collocations. Event bursts are detected from the tracking plots, and the strength of opinion relationships is determined by the coverage of these plots. To evaluate opinion mining, we use the NTCIR corpus, annotated with opinion information at the sentence and document levels; CopeOpi achieves sentence- and document-level f-measures of 62% and 74%. For relationship discovery, we collected 1.3M economics-related documents from 93 Web sources over 22 months and analyzed collocation-based, opinion-based, and hybrid models. We consider company pairs that demonstrate similar stock-price variations to be correlated and select these as the gold standard for evaluation. Results show that opinion-based and collocation-based models complement each other and that integrated models perform best. The top 25, 50, and 100 pairs discovered achieve precision rates of 1, 0.92, and 0.79, respectively.
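    The precision figures reported above are precision-at-rank-k values. A minimal sketch of that metric in Python, with hypothetical pair names (the paper's actual gold standard is derived from stock-price correlations):

      def precision_at_k(ranked_pairs, gold_pairs, k):
          """Fraction of the top-k discovered pairs found in the gold standard."""
          hits = sum(1 for pair in ranked_pairs[:k] if frozenset(pair) in gold_pairs)
          return hits / k

      # Hypothetical example data, for illustration only.
      ranked = [("AcmeCorp", "BetaInc"), ("AcmeCorp", "GammaLtd"), ("BetaInc", "DeltaCo")]
      gold = {frozenset(("AcmeCorp", "BetaInc")), frozenset(("BetaInc", "DeltaCo"))}
      for k in (1, 2, 3):
          print(f"P@{k} = {precision_at_k(ranked, gold, k):.2f}")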
  2. Lauw, H.W.; Lim, E.-P.: Web social mining (2009) 0.06
    
    Abstract
    With increasing user presence in the Web and Web 2.0, Web social mining becomes an important and challenging task that finds a wide range of new applications relevant to e-commerce and social software. In this entry, we describe three Web social mining topics, namely, social network discovery, social network analysis, and social network applications. The essential concepts, models, and techniques of these Web social mining topics will be surveyed so as to establish the basic foundation for developing novel applications and for conducting research.
  3. Visual based retrieval systems and Web mining (2001) 0.05
    
  4. Fenstermacher, K.D.; Ginsburg, M.: Client-side monitoring for Web mining (2003) 0.05
    
    Abstract
    "Garbage in, garbage out" is a well-known phrase in computer analysis, and one that comes to mind when mining Web data to draw conclusions about Web users. The challenge is that data analysts wish to infer patterns of client-side behavior from server-side data. However, because only a fraction of the user's actions ever reaches the Web server, analysts must rely an incomplete data. In this paper, we propose a client-side monitoring system that is unobtrusive and supports flexible data collection. Moreover, the proposed framework encompasses client-side applications beyond the Web browser. Expanding monitoring beyond the browser to incorporate standard office productivity tools enables analysts to derive a much richer and more accurate picture of user behavior an the Web.
    Footnote
    Teil eines Themenheftes: "Web retrieval and mining: A machine learning perspective"
    Theme
    Data Mining
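    The client-side monitoring proposed in the abstract above can be pictured as a small local event logger that applications report into. A minimal sketch, assuming a JSON-lines log file and invented event names (the paper does not specify a storage format):

      import json, time

      LOG_PATH = "client_events.jsonl"          # hypothetical local store

      def log_event(application, action, detail=""):
          """Append one user action, with a timestamp, as a JSON line."""
          event = {"ts": time.time(), "app": application,
                   "action": action, "detail": detail}
          with open(LOG_PATH, "a") as f:
              f.write(json.dumps(event) + "\n")

      # log_event("browser", "page_view", "http://example.org/")
      # log_event("word_processor", "save", "report.doc")   # beyond the browser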
  5. Klein, H.: Web Content Mining (2004) 0.05
    
    Abstract
    Web mining - a buzzword that is read and heard ever more often as the Internet spreads. Current research, however, deals mainly with the usage behavior of Internet users, and a glance at the programs of relevant conferences (e.g. GOR - German Online Research) shows that the analysis of content is hardly a topic: at GOR, two talks were given on this subject in 1999, and at the follow-up conference in 2001 not a single one. Web mining is the umbrella term for two types of mining: Web usage mining and Web content mining. Web usage mining means analyzing the data that accrue when the WWW is used and that are logged by the servers. One can determine which pages were called up how often, how long users stayed on each page, and much more. Web content mining examines the content of the Web pages, which may contain not only text but also images, video, and audio. Software for analyzing Web pages exists in its essentials, but most Web pages must first be prepared for the respective analysis software. First, the relevant Web sites that contain the sought content must be identified. This is usually done with search engines, of which there are now hundreds. One cannot assume, however, that search engines cover all existing Web pages; that is impossible, because the rapid growth of the Internet adds thousands of pages every day, while existing ones change or are deleted. Often it is also unknown how the search engines work, since this belongs to the operators' trade secrets. One must therefore assume that the search engines cannot find all relevant sites. The next step is downloading the sites; software for this can be found under names such as offline reader or Web spider. The goal of these programs is to download a site in a form that allows it to be viewed offline; the structure of the site is usually preserved. Whoever wants to analyze the content of a site must therefore be able to process all its files with the chosen analysis software. Content-analysis software assumes that only textual information in a single file is processed; QDA software (qualitative data analysis), by contrast, also handles audio and video content as well as Internet-specific communication such as chats.
    Theme
    Data Mining
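    The Web usage mining described in the abstract above starts from server log files. A minimal sketch of the counting step, assuming logs in the standard Apache Common Log Format (the article names no specific tool; the file name is illustrative):

      from collections import Counter

      def page_counts(log_path):
          """Count requests per URL path from a Common Log Format file."""
          counts = Counter()
          with open(log_path) as f:
              for line in f:
                  parts = line.split('"')
                  if len(parts) > 1:
                      request = parts[1].split()   # e.g. ['GET', '/index.html', 'HTTP/1.0']
                      if len(request) == 3:
                          counts[request[1]] += 1
          return counts

      # counts = page_counts("access.log")
      # print(counts.most_common(10))              # ten most requested pages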
  6. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.05
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter this area. The book begins with an introduction chapter, which explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: (i) infrastructure, (ii) learning, and (iii) applications.
    Part I, Infrastructure, has two chapters: Chapter 2 on crawling the Web and Chapter 3 on Web search and information retrieval. The second part of the book, containing chapters 4, 5, and 6, is the centerpiece. This part specifically focuses on machine learning in the context of hypertext. Part III is a collection of applications that utilize the techniques described in earlier chapters. Chapter 7 is on social network analysis, Chapter 8 on resource discovery, and Chapter 9 on the future of Web mining. Overall, this is a valuable reference book for researchers and developers in the field of Web mining. It should be particularly useful for those who would like to design and probably code their own computer programs out of the equations and pseudocode on most of the pages. For a student, the most valuable feature of the book is perhaps the formal and consistent treatment of concepts across the board. For what is behind and beyond the technical details, one has to either dig deeper into the bibliographic notes at the end of each chapter or resort to more in-depth analysis of relevant subjects in the literature. If you are looking for success stories about Web mining or lessons learned the hard way from failures, this is not the book."
    Theme
    Data Mining
  7. Chen, Z.; Wenyin, L.; Zhang, F.; Li, M.; Zhang, H.: Web mining for Web image retrieval (2001) 0.04
    
    Abstract
    The popularity of digital images is rapidly increasing due to improving digital imaging technologies and convenient availability facilitated by the Internet. However, how to find user-intended images on the Internet is nontrivial. The main reason is that Web images are usually not annotated with semantic descriptors. In this article, we present an effective approach to, and a prototype system for, image retrieval from the Internet using Web mining. The system can also serve as a Web image search engine. One of the key ideas in the approach is to extract the text information on the Web pages to semantically describe the images. The text description is then combined with other low-level image features in the image similarity assessment. Another main contribution of this work is that we apply data mining to the log of users' feedback to improve image retrieval performance in three aspects. First, the accuracy of the document space model of image representation obtained from the Web pages is improved by removing clutter and irrelevant text information. Second, the user space model of users' representation of images is constructed and combined with the document space model to eliminate the mismatch between the page author's expression and the user's understanding and expectation. Third, the relationship between low-level and high-level features is discovered, which is extremely useful for assigning the low-level features' weights in similarity assessment.
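    The similarity assessment described above combines textual and low-level visual evidence. A minimal sketch of such a weighted combination, with an assumed mixing weight alpha (in the paper, feedback mining informs the actual weighting):

      import math

      def cosine(u, v):
          """Cosine similarity of two equal-length feature vectors."""
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm if norm else 0.0

      def image_similarity(text_a, text_b, feat_a, feat_b, alpha=0.7):
          """Weighted sum of text-based and visual-feature similarity."""
          return alpha * cosine(text_a, text_b) + (1 - alpha) * cosine(feat_a, feat_b)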
  8. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.04
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture states that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
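    The link-content conjecture above can be tested by comparing a page's text with the text of its in-linking pages. A minimal sketch using token-count cosine similarity as a stand-in for the paper's content-similarity measure:

      import math
      from collections import Counter

      def cosine(a, b):
          """Cosine similarity of two term-frequency Counters."""
          dot = sum(a[t] * b[t] for t in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      def link_content_similarity(page_text, inlink_texts):
          """Average lexical similarity between a page and the pages linking to it."""
          page_vec = Counter(page_text.lower().split())
          sims = [cosine(page_vec, Counter(t.lower().split())) for t in inlink_texts]
          return sum(sims) / len(sims) if sims else 0.0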
  9. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.04
    
    Theme
    Data Mining
  10. Pernik, V.; Schlögl, C.: Möglichkeiten und Grenzen von Web Structure Mining am Beispiel von informationswissenschaftlichen Hochschulinstituten im deutschsprachigen Raum (2006) 0.04
    
  11. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.03
    
    Abstract
    This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of MD fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate for visualizing the position of MD activities in relation to interdisciplinary MD developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors, and it even provides the possibility of foresight. We describe a first approach to applying bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data mining applied to large collections of scientific publications. As an example we describe the mapping of the field of neuroscience, one of the largest and fastest-growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year; the period covered is 1995-1998, and current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
    Theme
    Data Mining
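    Maps of science like those described above are typically built from co-occurrence counts over publications. A minimal sketch of that co-word counting step, with invented keyword lists (the paper's maps are built from far larger publication sets):

      from collections import Counter
      from itertools import combinations

      publications = [
          ["neuroscience", "synapse", "plasticity"],
          ["neuroscience", "imaging", "plasticity"],
          ["imaging", "synapse"],
      ]

      cooc = Counter()
      for keywords in publications:
          for a, b in combinations(sorted(set(keywords)), 2):
              cooc[(a, b)] += 1            # pairs with high counts map close together

      print(cooc.most_common(3))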
  12. (Über-)Leben in der Informationsgesellschaft : Zwischen Informationsüberfluss und Wissensarmut. Festschrift für Prof. Dr. Gernot Wersig zum 60. Geburtstag (2003) 0.03
    
    Content
    Contains the contributions: RAUCH, W.: Neue Informations-Horizonte?; VÖLZ, H.: Gedanken zur Verdaulichkeit von Informationen; RATZEK, W.: Suum cuique - Jedem das Seine! Oder: Was wollen wir wissen?; VOWE, G.: Das Internet als elektronische Agora? Zum politischen Potential internetbasierter Kommunikation; GRUDOWSKI, S.: Ideen zur Förderung der Fachinformations-Institutionen durch Fachinformationspolitik: Hyperinformationszentren und Informationswissenschaft; ZIMMERMANN, H.H.: Zur Gestaltung eines Internet-Portals als offenes Autor-zentriertes Kommunikationssystem; HENNINGS, R.-D.: Machine Learning, Data Mining and Knowledge Discovery: Von der Generierung zur Entdeckung von Wissen
  13. Ulrich, P.S.: Collaborative Digital Reference Service : Weltweites Projekt (2001) 0.03
    
    Date
    20. 4.2002 17:30:22
  14. Stock, M.; Stock, W.G.: Recherchieren im Internet (2004) 0.03
    
    Date
    27.11.2005 18:04:22
  15. Mothe, J.; Chrisment, C.; Dousset, B.; Alaux, J.: DocCube : Multi-dimensional visualisation and exploration of large document sets (2003) 0.03
    
    Footnote
    Teil eines Themenheftes: "Web retrieval and mining: A machine learning perspective"
  16. Vaughan, L.: Visualizing linguistic and cultural differences using Web co-link data (2006) 0.03
    
    Abstract
    The study examined Web co-links to Canadian university Web sites. Multidimensional scaling (MDS) was used to analyze and visualize co-link data as was done in co-citation analysis. Co-link data were collected in ways that would reflect three different views, the global view, the French Canada view, and the English Canada view. Mapping results of the three data sets accurately reflected the ways Canadians see the universities and clearly showed the linguistic and cultural differences within Canadian society. This shows that Web co-linking is not a random phenomenon and that co-link data contain useful information for Web data mining. It is proposed that the method developed in the study can be applied to other contexts such as analyzing relationships of different organizations or countries. This kind of research is promising because of the dynamics and the diversity of the Web.
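    The MDS step described above embeds the universities in two dimensions from a co-link-based dissimilarity matrix. A minimal sketch using scikit-learn (an assumption; the study does not name its software) and invented dissimilarity values:

      import numpy as np
      from sklearn.manifold import MDS

      # Dissimilarity = 1 - normalized co-link count (hypothetical values).
      dissim = np.array([
          [0.0, 0.2, 0.8],
          [0.2, 0.0, 0.7],
          [0.8, 0.7, 0.0],
      ])

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(dissim)
      print(coords)                         # one 2-D point per university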
  17. Degez, D.; Masse, C.: ¬L'indexation à l'ère d'Internet (2000) 0.02
    
    Date
    1. 8.1996 22:01:00
  18. Herrmann, C.: Partikulare Konkretion universal zugänglicher Information : Beobachtungen zur Konzeptionierung fachlicher Internet-Seiten am Beispiel der Theologie (2000) 0.02
    
    Date
    22. 1.2000 19:29:08
  19. Levy, D.M.: Digital libraries and the problem of purpose (2000) 0.02
    
    Source
    Bulletin of the American Society for Information Science. 26(2000), no.6, Aug/Sept, S.22-25
  20. Gersmann, G.; Dörr, M.: ¬Der Server Frühe Neuzeit als Baustein für eine Virtuelle Fachbibliothek Geschichte (2001) 0.02
    
    Date
    22. 3.2001 11:57:52

Languages

  • d 117
  • e 49
  • el 1
  • f 1
  • m 1

Types

  • a 141
  • m 21
  • s 9
  • el 5
  • x 1
