Search (5574 results, page 279 of 279)

  • language_ss:"e"
  1. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 1554) [ClassicSimilarity], result of:
              0.014933716 = score(doc=1554,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 1554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1554)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
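    The relevance figures shown with each result follow Lucene's ClassicSimilarity, as the explain trees state. A minimal sketch reproducing this entry's arithmetic, assuming Lucene's documented tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the variable names below are illustrative and not part of the catalog software:

```python
import math

# Values taken from the explain tree above (doc 1554, term "web").
# query_norm and field_norm are taken as given; Lucene derives them from the
# full query and from document length/boosts.
freq, doc_freq, max_docs = 2.0, 4597, 44218
query_norm, field_norm = 0.050763648, 0.01953125

tf = math.sqrt(freq)                            # 1.4142135
idf = 1 + math.log(max_docs / (doc_freq + 1))   # 3.2635105
query_weight = idf * query_norm                 # 0.1656677
field_weight = tf * idf * field_norm            # 0.09014259

score = query_weight * field_weight             # 0.014933716
score *= 0.5          # coord(1/2): one of two query clauses matched
score *= 0.33333334   # coord(1/3): one of three top-level clauses matched
print(round(score, 10))                         # ~0.0024889526, as displayed above
```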
    
    Content
    What Is CAIDA? CAIDA, the Cooperative Association for Internet Data Analysis, was started in 1997 and is based at the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the world's leading teams of academic researchers studying how the Internet works [6]. Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scalable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system, which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7]. MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that it is based on a static database of ISP backbone links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.
  2. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 5134) [ClassicSimilarity], result of:
              0.014933716 = score(doc=5134,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 5134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=5134)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, S.253 (D. Lewandowski): "With "Google Hacks" we have the most comprehensive work to date aimed exclusively at the advanced Google user. Accordingly, this book does not contain the usual beginners' tips that tend to make search engine books and other guides to Internet research uninteresting for the professional user. In Tara Calishain it has found an author who has been publishing her own search engine newsletter (www.researchbuzz.com) for nearly five years and has written or co-written several books on online research. Rael Dornfest is responsible for the programming examples in the book. The first chapter ("Searching Google") gives an insight into advanced search options and the specifics of this particular search engine. The author's approach to searching becomes clear here: the best method, she argues, is to narrow down the number of hits yourself until a manageable set remains that can then actually be inspected. To this end, the field-specific search options in Google are explained, tips for special searches (for journal archives, technical definitions, and so on) are given, and special functions of the Google Toolbar are described. A pleasant surprise while reading is that even the experienced Google user still learns something new. The only shortcoming of this chapter is the missing look beyond Google itself: it is possible, for example, to restrict a date search in Google more precisely than through the selection field offered in the advanced search, but the solution shown is extremely cumbersome and of only limited use in everyday research. What is missing here is a hint that other search engines offer far more convenient ways of narrowing a search. Of course the work under review is a book exclusively about Google, yet a note on its weaknesses would also have been helpful. In later chapters, alternative search engines are in fact mentioned for solving particular problems. The second chapter is devoted to the data collections Google offers beside classic web search: the directory entries, newsgroups, images, the news search, and the (in this country) less well-known areas Catalogs (search in printed mail-order catalogues), Froogle (a shopping search engine launched this year), and Google Labs (where new functions developed by Google are released for public testing). After the first two chapters have dealt extensively with Google's own offerings, from chapter three onwards the book turns to the possibilities of using Google's data for one's own purposes by means of programming. On the one hand, programs already available on the Web are presented; on the other hand, the book contains many listings with explanations for programming one's own applications. The interface between the user and the Google database is the Google API ("Application Programming Interface"), which allows registered users to send up to 1,000 queries a day to Google through their own search interface. The results are returned in a form that can be processed by machine. Moreover, the database can be queried in more extensive ways than through the Google search form.
    Since Google, unlike other search engines, prohibits automated querying of its database in its terms of use, the API is the only way to build applications of one's own on top of Google. A separate chapter describes how the API can be used from different programming languages such as PHP, Java, Python, and so on. The examples in the book, however, are all written in Perl, so for one's own experiments it seems sensible to start out working in that language as well.
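    The review above describes programmatic access through the Google API of the time: registered users could send up to 1,000 machine-processable queries per day through their own search interface. As a hedged, purely illustrative sketch (the endpoint, parameter names, and key below are hypothetical placeholders, not the actual interface of 2003 and not the book's Perl listings), the general shape of such an API workflow might look like this:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR-LICENSE-KEY"                  # hypothetical; stands in for the per-user key
ENDPOINT = "https://api.example.org/search"   # hypothetical endpoint, not a Google URL

def search(query: str, start: int = 0, num: int = 10) -> dict:
    """Send one quota-counted query and return machine-readable results."""
    params = urllib.parse.urlencode({"key": API_KEY, "q": query,
                                     "start": start, "num": num})
    with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
        return json.load(resp)

# Narrow the result set programmatically, in the spirit of the book's approach.
hits = search('intitle:"information retrieval" filetype:pdf')
for hit in hits.get("items", []):
    print(hit.get("title"), hit.get("url"))
```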
  3. Metadata : a cataloger's primer (2005) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 133) [ClassicSimilarity], result of:
              0.014933716 = score(doc=133,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Part II consists of five papers on specific metadata standards and applications. Anita Coleman presents an element-by-element description of how to create Dublin Core metadata for Web resources to be included in a library catalog, using principles inspired by cataloging practice, in her paper "From Cataloging to Metadata: Dublin Core Records for the Library Catalog." The next three papers provide especially excellent introductory overviews of three diverse types of metadata-related standards: "Metadata Standards for Archival Control: An Introduction to EAD and EAC" by Alexander C. Thurman, "Introduction to XML" by Patrick Yott, and "METS: the Metadata Encoding and Transmission Standard" by Linda Cantara. Finally, Michael Chopey offers a superb and most useful overview of "Planning and Implementing a Metadata-Driven Digital Repository." Although all of the articles in this book contain interesting, often illuminating, and potentially useful information, not all serve equally well as introductory material for working catalogers not already familiar with metadata. It would be difficult to consider this volume, taken as a whole, as truly a "primer" for catalog librarians, as the subtitle implies. The content of the articles is too much a mix of introductory essays and original research, some of it at a relatively more advanced level. The collection does not approach the topic in the kind of coherent, systematic, or comprehensive way that would be necessary for a true "primer" or introductory textbook. While several of the papers would be quite appropriate for a primer, such a text would need to include, among other things, coverage of other metadata schemes and protocols such as TEI, VRA, and OAI, which are missing here. That having been said, however, Dr. Smiraglia's excellent introduction to the volume itself serves as a kind of concise, well-written "mini-primer" for catalogers new to metadata. It succinctly covers definitions of metadata, basic concepts, content designation and markup languages, metadata for resource description, including short overviews of TEI, DC, EAD, and AACR2/MARC21, and introduces the papers included in the book. In the conclusion to this essay, Dr. Smiraglia says about the book: "In the end the contents go beyond the definition of primer as `introductory textbook.' But the authors have collectively compiled a thought-provoking volume about the uses of metadata" (p. 15). This is a fair assessment of the work taken as a whole. In this reviewer's opinion, there is to date no single introductory textbook on metadata that is fully satisfactory for both working catalogers and for library and information science (LIS) students who may or may not have had exposure to cataloging. But there are a handful of excellent books that serve different aspects of that function. These include the following recent publications:
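    Coleman's chapter, as summarized above, walks through building Dublin Core records for Web resources element by element. As a minimal illustrative sketch only (the resource and element values below are invented; the element set and namespace are standard unqualified Dublin Core, not Coleman's worked example), such a record could be generated like this:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# A hypothetical Web resource; the values are invented for illustration.
record = {
    "title": "Internet Mapping Project Home Page",
    "creator": "Example, A.",
    "subject": "Internet -- Maps",
    "date": "2005",
    "type": "Text",
    "identifier": "http://www.example.org/mapping/",
    "language": "eng",
}

root = ET.Element("metadata")
for element, value in record.items():
    # Each entry becomes one unqualified Dublin Core element, e.g. <dc:title>.
    ET.SubElement(root, f"{{{DC}}}{element}").text = value

print(ET.tostring(root, encoding="unicode"))
```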
  4. ¬La interdisciplinariedad y la transdisciplinariedad en la organización del conocimiento científico : actas del VIII Congreso ISKO-España, León, 18, 19 y 20 de Abril de 2007 : Interdisciplinarity and transdisciplinarity in the organization of scientific knowledge (2007) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 1150) [ClassicSimilarity], result of:
              0.014933716 = score(doc=1150,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 1150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1150)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Benavides, C., Aláiz, H., Alfonso, J., Alonso, Á.: An ontology for control engineering; Ferreira de Oliveira, M.F., Schiessl, M.: Estimate of the distance between areas of an organization using the concept of interlinguistic distance; Lara, M.G.L. de.: Ciencias del lenguaje, terminología y ciencia de la información: relaciones interdisciplinarias y transdisciplinariedad; Moreiro González, J.A.: Evolución paralela de los lenguajes documentales y la terminología; Triska, R.: Artificial intelligence, classification theory and the uncertainty reduction process; Casari Boccato, V.R., Spotti Lopes Fujita, M.: Aproximación cualitativa-cognitiva como método de evaluación de lenguajes documentales: una técnica de protocolo verbal; De Brito Neves, D.A., De Albuquerque, M.E.B.C.: Biblioteca digital un convergencia multidisciplinar; Miranda, A., Simeão, E.: Aspectos interdisciplinarios y tecnológicos de la autoría colectiva e individual; San Segundo, R.: Incidencia de aspectos culturales y siciales en la organización del conocimento transdisciplinar; Barber, E., et al.: Los catálogos en línea de acceso público disponibles en entorno web de las bibliotecas universitarias y especializadas en Argentina y Brasil: diagnóstico de situación; Forsman, M.: Diffusion of a new concept: the case of social capital; Pajor, E.: Una aplicación de topic map que puede ser un modelo posible; Moreiro González, J.A., Franco Álvarez, G., Garcia Martul, D.: Un vocabulario controlado para una hemerotecá: posibilidades y características de los topicsets; Cavalcanti de Miranda, M.L.: Organización y representación del conocimiento: fundamentos teóricos y metológicos para la recuperación de la información en entornos virtuales; Moacir Francelin, M.: Espacios de significación y representación del conocimiento: un análisis sobre teorias y métodos de organización de conceptos en ciencia de la información; Spiteri, L.: The structure and form of folksonomy tags: the road to the public library catalogue;
  5. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 1182) [ClassicSimilarity], result of:
              0.014933716 = score(doc=1182,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, however, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
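    The abstract describes programs waiting for facts to reappear in standard linguistic patterns such as "NAME1 born at NAME2 in DATE" and converting each match into a structured proposition. A minimal sketch of that idea, using a single illustrative regular expression and invented sentences (real systems such as H-Bot combine gazetteers and named entity recognition rather than one pattern):

```python
import re

# Illustrative pattern only; real systems combine gazetteers and NER models.
BORN_AT = re.compile(
    r"(?P<person>[A-Z][\w.]+(?: [A-Z][\w.]+)*) was born (?:at|in) "
    r"(?P<place>[A-Z][\w.]+(?: [A-Z][\w.]+)*) in (?P<year>\d{4})"
)

text = ("Charles Dickens was born at Portsmouth in 1812. "
        "Jane Austen was born in Steventon in 1775.")

for m in BORN_AT.finditer(text):
    # Convert the surface pattern into a proposition: (relation, person, place, date).
    print(("born", m["person"], m["place"], int(m["year"])))
```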
  6. Ansorge, K.: Das war 2007 (2007) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 2405) [ClassicSimilarity], result of:
              0.014933716 = score(doc=2405,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 2405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2405)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    "Standardization - In 2007 the Arbeitsstelle für Standardisierung (AfS) again made decisive progress on the way toward internationalizing the German cataloging rules, formats, and authority files. The preparations for the format changeover centered on a concordance from MAB2 to MARC 21 and on defining the new fields that are needed for the changeover at the national level. Among a large number of other activities, the DNB held two events on the format changeover. In cooperation with the expert groups of the Standardization Committee, three statements on drafts of the "Resource Description and Access (RDA)" rules were prepared, and the DNB took part in the international discussion of important foundations. Over the past year the DNB also came considerably closer to fulfilling the wish for online communication with the authority files: changes to authority records are to be carried out simultaneously in the files held centrally at the DNB and in the union catalog database. Since the beginning of September the first stage of online communication has been in production use: the PND editorial teams in the Aleph networks now work together online. The new procedure will extend to all authority files maintained by the DNB and will be introduced in stages. The DNB took part in numerous standardization bodies for the further development of metadata standards such as Dublin Core and ONIX (Online Information eXchange), as well as in the development work for The European Library. It gave substantial support to the work of the KIM project (Kompetenzzentrum Interoperable Metadaten, the competence center for interoperable metadata). In connection with the work on the law on the Deutsche Nationalbibliothek, a metadata core set for the delivery of metadata to the DNB was developed and, in a first stage, provided with an ONIX mapping. Within the project "Virtual International Authority File - VIAF", the Library of Congress (LoC), the DNB, and OCLC jointly developed, initially for personal names, a virtual international authority file in which the authority records of the national authority files are to be linked with one another and made freely accessible on the Web. The project results so far have impressively demonstrated the feasibility of an international authority file. In October 2007 the project partners therefore reaffirmed their commitment to VIAF in a new agreement, which also includes the Bibliothèque Nationale de France, thereby initiating a consolidation and expansion phase."
  7. Culture and identity in knowledge organization : Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada (2008) 0.00
    0.0024889526 = product of:
      0.007466858 = sum of:
        0.007466858 = product of:
          0.014933716 = sum of:
            0.014933716 = weight(_text_:web in 2494) [ClassicSimilarity], result of:
              0.014933716 = score(doc=2494,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.09014259 = fieldWeight in 2494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    EPISTEMOLOGICAL FOUNDATIONS OF KNOWLEDGE ORGANIZATION: H. Peter Ohly. Knowledge Organization Pro and Retrospective. - Judith Simon. Knowledge and Trust in Epistemology and Social Software/Knowledge Technologies. - D. Grant Campbell. Derrida, Logocentrism, and the Concept of Warrant on the Semantic Web. - Jian Qin. Controlled Semantics Versus Social Semantics: An Epistemological Analysis. - Hope A. Olson. Wind and Rain and Dark of Night: Classification in Scientific Discourse Communities. - Thomas M. Dousa. Empirical Observation, Rational Structures, and Pragmatist Aims: Epistemology and Method in Julius Otto Kaiser's Theory of Systematic Indexing. - Richard P. Smiraglia. Noesis: Perception and Every Day Classification. - Birger Hjørland. Deliberate Bias in Knowledge Organization? - Joseph T. Tennis and Elin K. Jacob. Toward a Theory of Structure in Information Organization Frameworks. - Jack Andersen. Knowledge Organization as a Cultural Form: From Knowledge Organization to Knowledge Design. - Hur-Li Lee. Origins of the Main Classes in the First Chinese Bibliographic Classification. NON-TEXTUAL MATERIALS: Abby Goodrum, Ellen Hibbard, Deborah Fels and Kathryn Woodcock. The Creation of Keysigns American Sign Language Metadata. - Ulrika Kjellman. Visual Knowledge Organization: Towards an International Standard or a Local Institutional Practice?
  8. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.0022925914 = product of:
      0.006877774 = sum of:
        0.006877774 = product of:
          0.013755548 = sum of:
            0.013755548 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.013755548 = score(doc=1858,freq=2.0), product of:
                0.17776565 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.1997 19:16:05
  9. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.00
    0.0022925914 = product of:
      0.006877774 = sum of:
        0.006877774 = product of:
          0.013755548 = sum of:
            0.013755548 = weight(_text_:22 in 5580) [ClassicSimilarity], result of:
              0.013755548 = score(doc=5580,freq=2.0), product of:
                0.17776565 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07738023 = fieldWeight in 5580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5580)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.90-97
  10. Gonzalez, L.: What is FRBR? (2005) 0.00
    0.001991162 = product of:
      0.005973486 = sum of:
        0.005973486 = product of:
          0.011946972 = sum of:
            0.011946972 = weight(_text_:web in 3401) [ClassicSimilarity], result of:
              0.011946972 = score(doc=3401,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07211407 = fieldWeight in 3401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3401)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    National FRBR experiments: The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or a much-published title in RedLightGreen, you'll probably find that the display of search results is quite different from what you would expect. OCLC Research has developed a prototype "frbrized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work; Romeo and Juliet clearly gets a five. Indicators like this are something resource sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean for resource sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world.
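    The "FRBR-like" redesigns described above collapse many bibliographic records for the same work into a single work-level display. As a rough illustrative sketch only (real FRBRization, as in OCLC's FictionFinder, relies on authority-controlled work identifiers rather than the naive author/title normalization used here), grouping manifestation records under works might look like this:

```python
from collections import defaultdict

# Invented manifestation-level records, standing in for catalog data.
records = [
    {"author": "Shakespeare, William", "title": "Romeo and Juliet",  "year": 1997, "format": "book"},
    {"author": "Shakespeare, William", "title": "Romeo and Juliet.", "year": 2003, "format": "audiobook"},
    {"author": "Austen, Jane", "title": "Pride and Prejudice", "year": 1995, "format": "book"},
]

def work_key(rec: dict) -> tuple:
    """Naive work key: normalized author + title (a stand-in for a uniform title)."""
    norm = lambda s: s.lower().strip(" .")
    return (norm(rec["author"]), norm(rec["title"]))

works = defaultdict(list)
for rec in records:
    works[work_key(rec)].append(rec)

# A work-level line plus a count of attached manifestations, roughly what a
# "frbrized" results display summarizes with its one-to-five indicator.
for key, manifestations in works.items():
    print(key, f"({len(manifestations)} manifestation(s))")
```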
  11. Ratzan, L.: Understanding information systems : what they do and why we need them (2004) 0.00
    0.001991162 = product of:
      0.005973486 = sum of:
        0.005973486 = product of:
          0.011946972 = sum of:
            0.011946972 = weight(_text_:web in 4581) [ClassicSimilarity], result of:
              0.011946972 = score(doc=4581,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07211407 = fieldWeight in 4581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4581)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    In "Organizing Information" various fundamental organizational schemes are compared. These include hierarchical, relational, hypertext, and random access models. Each is described initially and then expanded on by listing advantages and disadvantages. This comparative format, not found elsewhere in the book, improves access to the subject and overall understanding. The author then affords considerable space to Boolean searching in the chapter "Retrieving Information." Throughout this chapter, the intricacies and problems of pattern matching and relevance are highlighted. The author elucidates the fact that document retrieval by simple pattern matching is not the same as problem solving. Therefore, "always know the nature of the problem you are trying to solve" (p. 56). This chapter is one of the more important ones in the book, covering a large topic swiftly and concisely. Chapters 5 through 11 then delve deeper into various specific issues of information systems. The chapters on securing and concealing information are exceptionally good. Without mentioning specific technologies, Mr. Ratzan is able to clearly present fundamental aspects of information security. Principles of backup security, password management, and encryption are also discussed in some detail. The latter is illustrated with some fascinating examples, from the Navajo Code Talkers to invisible ink and others. The chapters on measuring, counting, and numbering information complement each other well. Some of the more math-centric discussions and examples are found here. "Measuring Information" begins with a brief overview of bibliometrics and then moves quickly through Lotka's law, Zipf's law, and Bradford's law. For an LIS student, exposure to these topics is invaluable. Baseball statistics and web metrics are used for illustration purposes towards the end. In "Counting Information," counting devices and methods are first presented, followed by discussion of the Fibonacci sequence and golden ratio. This relatively long chapter ends with examples of the Tower of Hanoi, the chances of winning the lottery, and poker odds. The bulk of "Numbering Information" centers on prime numbers and pi. This chapter reads more like something out of an arithmetic book and seems somewhat extraneous here. Three specific types of information systems are presented in the second half of the book, each afforded its own chapter. These examples are universal enough not to become dated or irrelevant over time. "The Computer as an Information System" is relatively short and focuses on bits, bytes, and data compression. Considering the Internet as an information system (chapter 13) is an interesting illustration. It brings up IP addressing and the "privilege-vs.-right" access issue. We are reminded that the distinction between information rights and privileges is often unclear. A highlight of this chapter is the discussion of metaphors people use to describe the Internet, derived from the author's own research. He has found that people have varying mental models of the Internet, potentially affecting its perception and subsequent use.
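    The bibliometric regularities the reviewer mentions (Lotka's, Zipf's, and Bradford's laws) are easy to probe empirically. A minimal sketch of a Zipf-style check, under the usual statement of the law that a term's frequency is roughly inversely proportional to its frequency rank (the text below is a short placeholder; in practice a large corpus would be used):

```python
from collections import Counter

# In practice this would be a large corpus; a short placeholder string keeps
# the sketch self-contained.
text = ("the law states that the frequency of a word is inversely proportional "
        "to the rank of the word in the frequency table of the corpus")

counts = Counter(text.lower().split())
ranked = counts.most_common()
top_freq = ranked[0][1]

# Zipf's law predicts freq(rank) ~ top_freq / rank; compare observed vs. predicted.
for rank, (word, freq) in enumerate(ranked[:10], start=1):
    print(f"{rank:>2} {word:<12} observed={freq:<3} zipf_predicted={top_freq / rank:.1f}")
```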
  12. Mossberger, K.; Tolbert, C.J.; Stansbury, M.: Virtual inequality : beyond the digital divide (2003) 0.00
    0.001991162 = product of:
      0.005973486 = sum of:
        0.005973486 = product of:
          0.011946972 = sum of:
            0.011946972 = weight(_text_:web in 1795) [ClassicSimilarity], result of:
              0.011946972 = score(doc=1795,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07211407 = fieldWeight in 1795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1795)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    The economic opportunity divide is predicated on the hypothesis that there has, indeed, been a major shift in opportunities driven by changes in the information environment. The authors document this paradigm shift well with arguments from the political and economic right and left. This chapter might be described as an "attitudinal" chapter. The authors are concerned here with how their respondents' perceptions of their information skills and skill levels relate to their economic outlook and opportunities. Technological skills and economic opportunities are correlated, one finds, in the minds of all, across all ages, genders, races, ethnicities, and income levels. African Americans in particular are "... attuned to the use of technology for economic opportunity" (p. 80). The fourth divide is the democratic divide. The Internet may increase political participation, the authors posit, but only among groups predisposed to participate and perhaps among those with the skills necessary to take advantage of the electronic environment (p. 86). Certainly the Web has played an important role in disseminating and distributing political messages and in some cases in political fund raising. But by the analysis here, we must conclude that the message does not reach everyone equally. Thus, the Internet may widen the political participation gap rather than narrow it. The book has one major, perhaps fatal, flaw: its methodology and statistical application. The book draws upon a survey performed for the authors in June and July 2001 by Kent State University's Computer Assisted Telephone Interviewing (CATI) lab (pp. 7-9). CATI employed a survey protocol provided to the reader as Appendix 2. An examination of the questionnaire reveals that all questions yield either nominal or ordinal responses, including the income variable (pp. 9-10). Nevertheless, Mossberger, Tolbert, and Stansbury performed a series of multiple regression analyses (reported in a series of tables in Appendix 1) utilizing these data. Regression analysis requires interval/ratio data in order to be valid, although nominal and ordinal data can be incorporated by building dichotomous dummy variables. Perhaps Mossberger, Tolbert, and Stansbury utilized dummy variables, but I do not find that discussed. Moreover, I would question a multiple regression made up completely of dichotomous dummy variables. I come away from Virtual Inequality with mixed feelings. It is useful to think of the digital divide as more than one phenomenon. The four divides that Mossberger, Tolbert, and Stansbury offer (access, skills, economic opportunity, and democratic) are useful as a point of departure and debate. No doubt, other divides will be identified and documented. This book will lead the way. Second, without question, Mossberger, Tolbert, and Stansbury provide us with an extremely well-documented, -written, and -argued work. Third, the authors are to be commended for the multidisciplinarity of their work. Would that we could see more like it. My reservations about their methodological approach, however, hang over this review like a shroud.
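    The methodological point above, that nominal and ordinal survey responses can enter a regression only as dichotomous dummy variables, can be made concrete with a small sketch; the variables and values below are invented for illustration and have nothing to do with the Kent State survey data:

```python
import pandas as pd
import statsmodels.api as sm

# Invented survey-style data: one nominal predictor, one dichotomous predictor,
# and an ordinal outcome treated here as the dependent variable.
df = pd.DataFrame({
    "income_band": ["low", "middle", "high", "middle", "low", "high"],
    "uses_internet": [0, 1, 1, 0, 1, 1],
    "economic_outlook": [2, 4, 5, 4, 1, 5],   # e.g. a 1-5 attitude scale
})

# Expand the nominal predictor into dichotomous dummy variables, dropping one
# category to avoid perfect collinearity with the constant.
X = pd.get_dummies(df[["income_band", "uses_internet"]],
                   columns=["income_band"], drop_first=True, dtype=float)
X = sm.add_constant(X)

model = sm.OLS(df["economic_outlook"], X).fit()
print(model.params)   # coefficients on the dummies, relative to the dropped category
```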
  13. Broughton, V.: Essential thesaurus construction (2006) 0.00
    0.001991162 = product of:
      0.005973486 = sum of:
        0.005973486 = product of:
          0.011946972 = sum of:
            0.011946972 = weight(_text_:web in 2924) [ClassicSimilarity], result of:
              0.011946972 = score(doc=2924,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.07211407 = fieldWeight in 2924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: Mitt. VÖB 60(2007) H.1, S.98-101 (O. Oberhauser): "The author of Essential thesaurus construction (and essential taxonomy construction, as the implicit subtitle goes, cf. p. 1) is amply qualified in this field through her teaching at the well-known School of Library, Archive and Information Studies of University College London and through her previous publications on (faceted) classification and thesauri. Following Essential classification, her thesaurus textbook is now available: with around 200 pages of text and just under 100 pages of appendices it is a handy volume that, as the short introductory chapter also makes clear, owes its genesis largely to her teaching. The book stands in the tradition of Jean Aitchison et al. and addresses "the indexer" in the broadest sense, i.e. everyone who wants or needs to build a structured, controlled subject vocabulary for the purposes of subject indexing and retrieval. It aims to give this audience the necessary methodological tools for such a task, which it does in twenty chapters, including the introduction and the concluding remarks - an appealing structure that allows the material to be worked through in well-measured doses. The exercises the author sets throughout (with solutions at the end of each chapter) also contribute to this. At the outset the "information retrieval thesaurus" is distinguished from the "reference thesaurus" that (at least in the English-speaking world) is far more often associated with the term: a dictionary of synonyms arranged by conceptual similarity, readily used as an aid to style when writing (scholarly) papers. Without yet going into detail, the outward appearance and fields of application of thesauri are introduced, the thesaurus is explained as a post-coordinate indexing language, and its closeness to faceted classification systems is noted. Broughton then contrasts the systematically organized systems (classification/taxonomy, concept and topic diagrams, ontologies) with the alphabetically arranged, word-based ones (subject heading lists, thesaurus-like subject heading systems, and thesauri in the proper sense), which gives the reader further help with orientation. The possible uses of thesauri as a means of indexing (including as a source of metadata for electronic and Web documents) and of retrieval (query formulation, query expansion, browsing, and navigation) are discussed, as are the problems that arise when natural-language indexing systems are used. Examples are used to point out explicitly the more or less pronounced subject specialization of most of these vocabularies, and information sources about thesauri (e.g. www.taxonomywarehouse.com) as well as thesauri for non-textual resources are briefly touched on as well.
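    The review describes the standard apparatus of an information retrieval thesaurus: controlled terms used as a post-coordinate indexing language and linked by equivalence, hierarchical, and associative relationships. As a minimal illustrative sketch (the BT/NT/RT/UF labels follow common thesaurus conventions; the example vocabulary is invented, not taken from Broughton's book):

```python
# A tiny in-memory thesaurus: each preferred term carries the conventional
# BT (broader term), NT (narrower term), RT (related term), and UF (used for) references.
thesaurus = {
    "Information retrieval": {
        "BT": ["Information science"],
        "NT": ["Query expansion", "Relevance feedback"],
        "RT": ["Indexing"],
        "UF": ["Document retrieval"],
    },
    "Indexing": {
        "BT": ["Information organization"],
        "NT": ["Subject indexing"],
        "RT": ["Information retrieval"],
        "UF": [],
    },
}

# Entries for non-preferred terms point the searcher to the preferred form (USE).
use_references = {"Document retrieval": "Information retrieval"}

def lookup(term: str) -> dict:
    """Resolve a USE reference, then return the term's relationship record."""
    preferred = use_references.get(term, term)
    return {"preferred": preferred, **thesaurus.get(preferred, {})}

print(lookup("Document retrieval"))
```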
  14. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    0.0017422669 = product of:
      0.0052268007 = sum of:
        0.0052268007 = product of:
          0.010453601 = sum of:
            0.010453601 = weight(_text_:web in 6119) [ClassicSimilarity], result of:
              0.010453601 = score(doc=6119,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.06309982 = fieldWeight in 6119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Chapter 13 on DL evaluation merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he or she should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
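    Chapter 13, as described above, draws its evaluation criteria partly from information retrieval, where quantitative measures such as precision and recall are standard. The sketch below is a generic illustration of that kind of measure, not anything taken from the book; the document identifiers are invented:

```python
# Generic illustration of precision and recall, the kind of quantitative
# retrieval measure digital-library evaluations commonly report.
def precision_recall(retrieved: set, relevant: set) -> tuple:
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"d1", "d2", "d3", "d4"}   # documents a system returned (invented IDs)
relevant = {"d2", "d4", "d7"}          # documents judged relevant (invented IDs)
print(precision_recall(retrieved, relevant))   # (0.5, 0.666...)
```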

Types

  • a 4740
  • m 466
  • el 400
  • s 244
  • b 35
  • x 24
  • r 23
  • i 15
  • n 14
  • p 8
  • h 2
  • ? 1
  • A 1
  • EL 1
