Search (6 results, page 1 of 1)

  • classification_ss:"025.04"
  • language_ss:"e"
  • year_i:[2000 TO 2010}
  1. Huberman, B.: The laws of the Web : patterns in the ecology of information (2001) 0.03
    0.031932972 = product of:
      0.063865945 = sum of:
        0.063865945 = sum of:
          0.03567564 = weight(_text_:t in 6123) [ClassicSimilarity], result of:
            0.03567564 = score(doc=6123,freq=2.0), product of:
              0.20491594 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.05201693 = queryNorm
              0.17409891 = fieldWeight in 6123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.03125 = fieldNorm(doc=6123)
          0.028190302 = weight(_text_:22 in 6123) [ClassicSimilarity], result of:
            0.028190302 = score(doc=6123,freq=2.0), product of:
              0.18215442 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05201693 = queryNorm
              0.15476047 = fieldWeight in 6123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=6123)
      0.5 = coord(1/2)
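    The indented block above is Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight (idf * queryNorm) multiplied by fieldWeight (tf * idf * fieldNorm), the term contributions are summed, and coord() scales the sum by the fraction of query clauses that matched. A minimal sketch that reproduces this record's 0.03 score from the values shown above (the constants are copied from the explanation; the helper name is ours, not part of Lucene):

      import math

      # Values copied from the explain tree for doc 6123 above.
      QUERY_NORM = 0.05201693
      FIELD_NORM = 0.03125                      # fieldNorm(doc=6123)

      def classic_term_score(freq, idf):
          """ClassicSimilarity contribution of one term: queryWeight * fieldWeight."""
          tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
          query_weight = idf * QUERY_NORM       # e.g. 3.9394085 * 0.05201693 = 0.20491594
          field_weight = tf * idf * FIELD_NORM  # e.g. 0.17409891
          return query_weight * field_weight

      # idf in ClassicSimilarity is 1 + ln(maxDocs / (docFreq + 1)); the values below
      # are taken directly from the tree rather than recomputed.
      w_t  = classic_term_score(2.0, 3.9394085)   # ~0.03567564  (_text_:t)
      w_22 = classic_term_score(2.0, 3.5018296)   # ~0.028190302 (_text_:22)

      score = (w_t + w_22) * 0.5                  # coord(1/2)
      print(score)                                # ~0.031932972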
    
    Date
    22.10.2006 10:22:33
    Footnote
    Rev. in: nfd 54(2003) H.8, S.497 (T. Mandl): "Laws of digital anarchy - hyperlinks on the Internet arise as the result of social processes and can also be interpreted as a formal graph in the mathematical sense. Hyperlinks are a highly topical subject in information retrieval, since search engines take the link structure into account when computing their results. Algorithms for determining the 'reputation' of a page, such as Google's PageRank, weight a page more highly when many links point to it. Two very readable books present the latest findings on the network structure of the Internet. The author of the first book, the economist Huberman, heads a research department at Hewlett Packard. In his book Huberman first describes the history of the Internet as a technological revolution and then moves quickly on to its evolution and the probability distributions that prevail within it. Surprisingly, power-law probability distributions, which resemble the Zipf distribution, occur frequently on the Internet. The book's title refers to these very unequal distributions, for instance of incoming hypertext links or of visitors per page. These recurring probability distributions seem almost to constitute a law of the Internet. For example, there are many sites with very few pages and a few with millions of pages; some pages are rarely visited while others attract a large share of Internet traffic; most pages are pointed to by very few links, while a few popular pages are the target of millions of links. Incidentally, both authors devote their penultimate chapter to markets on the Internet; here, at the latest, the economic aspects of networks become apparent. Both titles introduce the reader to the new research on the structure of the Internet as a network and are easy to read. Both are scholarly books, but they also address the interested layperson. Barabási's book is somewhat more up to date, more conversational, longer, more comprehensive, and somewhat more popular-scientific."
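    The "reputation" algorithms mentioned in the review, PageRank among them, can be illustrated in a few lines: a page's rank is the damped sum of the ranks of the pages that link to it, computed by power iteration. A minimal sketch over a hypothetical four-page link graph (the graph, page names, and parameters are illustrative only, not taken from the book under review):

      import numpy as np

      # Hypothetical toy link graph: page -> pages it links to.
      links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
      pages = sorted(links)
      n = len(pages)
      idx = {p: i for i, p in enumerate(pages)}

      # Column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to page j.
      M = np.zeros((n, n))
      for src, targets in links.items():
          for dst in targets:
              M[idx[dst], idx[src]] = 1.0 / len(targets)

      d = 0.85                                  # damping factor
      rank = np.full(n, 1.0 / n)
      for _ in range(50):                       # power iteration
          rank = (1 - d) / n + d * (M @ rank)

      print(dict(zip(pages, rank.round(3))))    # "c" collects the most inlinks and ranks highest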
  2. Information science in transition (2009) 0.02
    0.016274534 = sum of:
      0.0074650636 = product of:
        0.029860254 = sum of:
          0.029860254 = weight(_text_:authors in 634) [ClassicSimilarity], result of:
            0.029860254 = score(doc=634,freq=2.0), product of:
              0.2371355 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05201693 = queryNorm
              0.12592064 = fieldWeight in 634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
        0.25 = coord(1/4)
      0.00880947 = product of:
        0.01761894 = sum of:
          0.01761894 = weight(_text_:22 in 634) [ClassicSimilarity], result of:
            0.01761894 = score(doc=634,freq=2.0), product of:
              0.18215442 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05201693 = queryNorm
              0.09672529 = fieldWeight in 634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
        0.5 = coord(1/2)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer-based information systems for more effective retrieval? Will information science become part of computer science, and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer-reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Date
    22. 2.2013 11:35:35
  3. Calishain, T.; Dornfest, R.; Adam, D.J.: Google Pocket Guide (2003) 0.01
    0.011148638 = product of:
      0.022297276 = sum of:
        0.022297276 = product of:
          0.044594552 = sum of:
            0.044594552 = weight(_text_:t in 6) [ClassicSimilarity], result of:
              0.044594552 = score(doc=6,freq=2.0), product of:
                0.20491594 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.05201693 = queryNorm
                0.21762364 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Hare, C.E.; McLeod, J.: How to manage records in the e-environment : 2nd ed. (2006) 0.01
    0.010451089 = product of:
      0.020902177 = sum of:
        0.020902177 = product of:
          0.08360871 = sum of:
            0.08360871 = weight(_text_:authors in 1749) [ClassicSimilarity], result of:
              0.08360871 = score(doc=1749,freq=2.0), product of:
                0.2371355 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.05201693 = queryNorm
                0.35257778 = fieldWeight in 1749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1749)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    A practical approach to developing and operating an effective programme to manage hybrid records within an organisation. This title positions records management as an integral business function linked to the organisation's business aims and objectives. The authors also address the records requirements of new and significant pieces of legislation, such as data protection and freedom of information, as well as exploring strategies for managing electronic records. Bullet points, checklists and examples assist the reader throughout, making this a one-stop resource for information in this area.
  5. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.01
    0.008445755 = product of:
      0.01689151 = sum of:
        0.01689151 = product of:
          0.06756604 = sum of:
            0.06756604 = weight(_text_:authors in 7) [ClassicSimilarity], result of:
              0.06756604 = score(doc=7,freq=4.0), product of:
                0.2371355 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.05201693 = queryNorm
                0.28492588 = fieldWeight in 7, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software, as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on the user interface has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading, expanded the bibliography, and updated and streamlined the index to make it more reader-friendly.
  6. Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008) 0.01
    0.0059720506 = product of:
      0.011944101 = sum of:
        0.011944101 = product of:
          0.047776405 = sum of:
            0.047776405 = weight(_text_:authors in 4041) [ClassicSimilarity], result of:
              0.047776405 = score(doc=4041,freq=2.0), product of:
                0.2371355 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.05201693 = queryNorm
                0.20147301 = fieldWeight in 4041, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4041)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Class-tested and coherent, this textbook teaches information retrieval, including web search, text classification, and text clustering, from basic concepts. Ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students. Slides and additional exercises are available for lecturers. - This book provides what Salton and Van Rijsbergen both failed to achieve. Even more important, unlike some other books in IR, the authors appear to care about making the theory as accessible as possible to the reader, on occasion including short primers to certain topics or choosing to explain difficult concepts using simplified approaches. Its coverage [is] excellent, the quality of writing high, and I was surprised how much I learned from reading it. I think the online resources are impressive.