Search (10 results, page 1 of 1)

  • classification_ss:"025.04"
  1. Huberman, B.: The laws of the Web : patterns in the ecology of information (2001) 0.02
    0.016593097 = product of:
      0.04148274 = sum of:
        0.029734775 = weight(_text_:t in 6123) [ClassicSimilarity], result of:
          0.029734775 = score(doc=6123,freq=2.0), product of:
            0.17079243 = queryWeight, product of:
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.04335484 = queryNorm
            0.17409891 = fieldWeight in 6123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.03125 = fieldNorm(doc=6123)
        0.011747964 = product of:
          0.023495927 = sum of:
            0.023495927 = weight(_text_:22 in 6123) [ClassicSimilarity], result of:
              0.023495927 = score(doc=6123,freq=2.0), product of:
                0.15182126 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04335484 = queryNorm
                0.15476047 = fieldWeight in 6123, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6123)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
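    The nested breakdown above is Lucene ClassicSimilarity (TF-IDF) "explain" output for the two matching query terms "t" and "22". Written out as a single worked equation, using only the numbers already shown in the tree (tf = sqrt(termFreq), per-term weight = queryWeight x fieldWeight, scaled by the coord factors):
      \[
      \begin{aligned}
      w_{\text{"t"}}  &= \underbrace{3.9394085 \cdot 0.04335484}_{\text{queryWeight}} \cdot \underbrace{\sqrt{2} \cdot 3.9394085 \cdot 0.03125}_{\text{fieldWeight}} \approx 0.0297348 \\
      w_{\text{"22"}} &= (3.5018296 \cdot 0.04335484) \cdot (\sqrt{2} \cdot 3.5018296 \cdot 0.03125) \cdot \tfrac{1}{2} \approx 0.0117480 \\
      \text{score}    &= (w_{\text{"t"}} + w_{\text{"22"}}) \cdot \operatorname{coord}(2/5) \approx 0.0414828 \cdot 0.4 \approx 0.0165931
      \end{aligned}
      \]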
    
    Date
    22.10.2006 10:22:33
    Footnote
    Review in: nfd 54(2003) H.8, S.497 (T. Mandl): "Laws of digital anarchy - Hyperlinks on the Internet arise as the result of social processes and can also be interpreted as a formal graph in the mathematical sense. Hyperlinks are a highly topical subject in information retrieval, since search engines take the link structure into account when computing their results. Algorithms for determining the 'reputation' of a page, such as Google's PageRank, weight a page more highly when many links point to it. Two very readable books present the latest findings on the network structure of the Internet. The author of the first book, the economist Huberman, heads a research department at Hewlett Packard. In his book, Huberman first describes the history of the Internet as a technological revolution and then moves quickly to its evolution and the probability distributions that prevail within it. Surprisingly, power-law probability distributions, which resemble Zipf's distribution, occur frequently on the Internet. The book's title refers to these very unequal distributions, for example of incoming hypertext links or of visitors per page. These recurring probability distributions almost seem to constitute a law of the Internet. For example, there are many sites with very few pages and a few with millions of pages; some pages are rarely visited while others attract a large share of Internet traffic; very few links point to most pages, while millions of links target a few popular pages. Incidentally, both authors devote their penultimate chapters to markets on the Internet. Here at the latest, the economic aspects of networks become clear. Both titles introduce the reader to the new research on the structure of the Internet as a network and are easy to read. Both are scholarly books, but they also address the interested layperson. Barabási's book is somewhat more recent, more conversational, longer, more comprehensive, and somewhat more popular-scientific."
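    The review notes that reputation algorithms such as Google's PageRank weight a page more highly when many pages link to it. As a rough, self-contained illustration of that idea only (a minimal power-iteration sketch over a made-up toy link graph, not Google's actual implementation), in Python:
      # Minimal PageRank power iteration over a toy link graph (illustrative only).
      def pagerank(links, damping=0.85, iterations=50):
          """links: dict mapping page -> list of pages it links to."""
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / n for p in pages}
              for page, outgoing in links.items():
                  if not outgoing:
                      # Dangling page: spread its rank evenly over all pages.
                      for p in pages:
                          new_rank[p] += damping * rank[page] / n
                  else:
                      share = damping * rank[page] / len(outgoing)
                      for target in outgoing:
                          new_rank[target] += share
              rank = new_rank
          return rank

      # Hypothetical example: C is linked to by both A and B, so it ends up ranked highest.
      print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))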
  2. Anders, V.: Automated information retrieval in libraries : a management handbook (1992) 0.01
    0.014867388 = product of:
      0.07433694 = sum of:
        0.07433694 = weight(_text_:t in 6510) [ClassicSimilarity], result of:
          0.07433694 = score(doc=6510,freq=2.0), product of:
            0.17079243 = queryWeight, product of:
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.04335484 = queryNorm
            0.43524727 = fieldWeight in 6510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.078125 = fieldNorm(doc=6510)
      0.2 = coord(1/5)
    
    Footnote
    Review in: IfB 2(1994) H.1, S.9-10 (A. Weber); Library software review. 1993, Fall, S.69-70 (T. Koppel)
  3. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.01
    0.008920433 = product of:
      0.044602163 = sum of:
        0.044602163 = weight(_text_:t in 2605) [ClassicSimilarity], result of:
          0.044602163 = score(doc=2605,freq=2.0), product of:
            0.17079243 = queryWeight, product of:
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.04335484 = queryNorm
            0.26114836 = fieldWeight in 2605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.046875 = fieldNorm(doc=2605)
      0.2 = coord(1/5)
    
  4. Shiri, A.: Powering search : the role of thesauri in new information environments (2012) 0.01
    0.008920433 = product of:
      0.044602163 = sum of:
        0.044602163 = weight(_text_:t in 1322) [ClassicSimilarity], result of:
          0.044602163 = score(doc=1322,freq=2.0), product of:
            0.17079243 = queryWeight, product of:
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.04335484 = queryNorm
            0.26114836 = fieldWeight in 1322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
      0.2 = coord(1/5)
    
    Series
    ASIS&T monograph series
  5. Information science in transition (2009) 0.01
    0.0075102793 = product of:
      0.037551396 = sum of:
        0.037551396 = sum of:
          0.02286644 = weight(_text_:index in 634) [ClassicSimilarity], result of:
            0.02286644 = score(doc=634,freq=2.0), product of:
              0.18945041 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.04335484 = queryNorm
              0.12069881 = fieldWeight in 634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
          0.014684956 = weight(_text_:22 in 634) [ClassicSimilarity], result of:
            0.014684956 = score(doc=634,freq=2.0), product of:
              0.15182126 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04335484 = queryNorm
              0.09672529 = fieldWeight in 634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
      0.2 = coord(1/5)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer-based information systems for more effective retrieval? Will information science become part of computer science, and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer-reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Date
    22. 2.2013 11:35:35
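    The abstract of this entry describes the early search-engine strategy of scanning as much of the web as possible and throwing it all into an inverted index. As a minimal sketch of what such an index looks like (the toy documents and the naive whitespace tokenization are made up for illustration), in Python:
      from collections import defaultdict

      def build_inverted_index(docs):
          """docs: dict mapping doc id -> text; returns term -> sorted postings list of doc ids."""
          index = defaultdict(set)
          for doc_id, text in docs.items():
              for term in text.lower().split():
                  index[term].add(doc_id)
          return {term: sorted(ids) for term, ids in index.items()}

      # Hypothetical example: the postings list for "retrieval" points at both documents.
      docs = {1: "information retrieval in libraries", 2: "search engines and retrieval"}
      print(build_inverted_index(docs)["retrieval"])   # -> [1, 2]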
  6. Calishain, T.; Dornfest, R.; Adams, D.J.: Google Pocket Guide (2003) 0.01
    0.007433694 = product of:
      0.03716847 = sum of:
        0.03716847 = weight(_text_:t in 6) [ClassicSimilarity], result of:
          0.03716847 = score(doc=6,freq=2.0), product of:
            0.17079243 = queryWeight, product of:
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.04335484 = queryNorm
            0.21762364 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9394085 = idf(docFreq=2338, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6)
      0.2 = coord(1/5)
    
  7. Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008) 0.01
    0.005174085 = product of:
      0.025870424 = sum of:
        0.025870424 = product of:
          0.051740848 = sum of:
            0.051740848 = weight(_text_:index in 4041) [ClassicSimilarity], result of:
              0.051740848 = score(doc=4041,freq=4.0), product of:
                0.18945041 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.04335484 = queryNorm
                0.27311024 = fieldWeight in 4041, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4041)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Contents: Boolean retrieval - The term vocabulary & postings lists - Dictionaries and tolerant retrieval - Index construction - Index compression - Scoring, term weighting & the vector space model - Computing scores in a complete search system - Evaluation in information retrieval - Relevance feedback & query expansion - XML retrieval - Probabilistic information retrieval - Language models for information retrieval - Text classification & Naive Bayes - Vector space classification - Support vector machines & machine learning on documents - Flat clustering - Hierarchical clustering - Matrix decompositions & latent semantic indexing - Web search basics - Web crawling and indexes - Link analysis. Cf. the digital version at: http://nlp.stanford.edu/IR-book/pdf/irbookprint.pdf.
  8. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.00
    0.0036586304 = product of:
      0.018293152 = sum of:
        0.018293152 = product of:
          0.036586303 = sum of:
            0.036586303 = weight(_text_:index in 7) [ClassicSimilarity], result of:
              0.036586303 = score(doc=7,freq=2.0), product of:
                0.18945041 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.04335484 = queryNorm
                0.1931181 = fieldWeight in 7, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interfaces has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading, expanded the bibliography, and updated and streamlined the index to make it more reader-friendly.
  9. Day, R.E.: Indexing it all : the subject in the age of documentation, information, and data (2014) 0.00
    0.0036586304 = product of:
      0.018293152 = sum of:
        0.018293152 = product of:
          0.036586303 = sum of:
            0.036586303 = weight(_text_:index in 3024) [ClassicSimilarity], result of:
              0.036586303 = score(doc=3024,freq=2.0), product of:
                0.18945041 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.04335484 = queryNorm
                0.1931181 = fieldWeight in 3024, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3024)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In this book, Ronald Day offers a critical history of the modern tradition of documentation. Focusing on the documentary index (understood as a mode of social positioning), and drawing on the work of the French documentalist Suzanne Briet, Day explores the understanding and uses of indexicality. He examines the transition as indexes went from being explicit professional structures that mediated users and documents to being implicit infrastructural devices used in everyday information and communication acts. Doing so, he also traces three epistemic eras in the representation of individuals and groups, first in the forms of documents, then information, then data. Day investigates five cases from the modern tradition of documentation. He considers the socio-technical instrumentalism of Paul Otlet, "the father of European documentation" (contrasting it to the hermeneutic perspective of Martin Heidegger); the shift from documentation to information science and the accompanying transformation of persons and texts into users and information; social media's use of algorithms, further subsuming persons and texts; attempts to build android robots -- to embody human agency within an information system that resembles a human being; and social "big data" as a technique of neoliberal governance that employs indexing and analytics for purposes of surveillance. Finally, Day considers the status of critique and judgment at a time when people and their rights of judgment are increasingly mediated, displaced, and replaced by modern documentary techniques.
  10. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.00
    0.0014684956 = product of:
      0.007342478 = sum of:
        0.007342478 = product of:
          0.014684956 = sum of:
            0.014684956 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.014684956 = score(doc=1833,freq=2.0), product of:
                0.15182126 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04335484 = queryNorm
                0.09672529 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Contents: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102); Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176); Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196); Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364); Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390); Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446); Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536); Chapter 22: Implementation and Evaluation (pp. 537-541)

Types

  • m 10
  • s 1