Search (3 results, page 1 of 1)

  • author_ss:"Austin, D."
  1. Austin, D.: Automatisierung in der Sacherschließung der British Library (1984) 0.02
    0.024298662 = product of:
      0.12149331 = sum of:
        0.0620765 = weight(_text_:forschung in 999) [ClassicSimilarity], result of:
          0.0620765 = score(doc=999,freq=2.0), product of:
            0.16498606 = queryWeight, product of:
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.03391332 = queryNorm
            0.376253 = fieldWeight in 999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.0546875 = fieldNorm(doc=999)
        0.059416812 = weight(_text_:daten in 999) [ClassicSimilarity], result of:
          0.059416812 = score(doc=999,freq=2.0), product of:
            0.16141292 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.03391332 = queryNorm
            0.36810443 = fieldWeight in 999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.0546875 = fieldNorm(doc=999)
      0.2 = coord(2/10)
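    The explain tree above follows Lucene's ClassicSimilarity: tf(freq) = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each clause score is queryWeight * fieldWeight, with the sum scaled by the coord factor. A minimal sketch re-deriving the total from the factors printed above (all constants copied from the explain output):

    ```python
    import math

    # Lucene ClassicSimilarity, re-derived from the explain output above.
    # tf(freq) = sqrt(freq); queryWeight = idf * queryNorm;
    # fieldWeight = tf * idf * fieldNorm; clause score = queryWeight * fieldWeight.
    def clause_score(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    query_norm = 0.03391332
    forschung = clause_score(2.0, 4.8649335, query_norm, 0.0546875)
    daten = clause_score(2.0, 4.759573, query_norm, 0.0546875)

    # coord(2/10): only 2 of the 10 query clauses matched doc 999
    total = (forschung + daten) * (2 / 10)
    print(total)  # ≈ 0.0243, matching the 0.024298662 shown above
    ```

    The same arithmetic applies to the explain trees of the other two results, with a single matching clause (coord 1/10).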
    
    Abstract
    This article deals with management aspects of subject indexing at the British Library, Bibliographic Services Division, where computer-assisted, though not fully "automatic", procedures are applied. A detailed account of the workflow in the Subject Systems Office traces a document's path through the various sections, and the operational consequences of the particular role of PRECIS in this workflow are discussed. The multi-file structure of the British Library database is described, and it is shown how this structure enables the effective reuse of data. The improvement of online retrieval through the incorporation of precoordinated subject statements into the search process is also examined; finally, the role of the computer in the subject indexing of an information and documentation institution such as the British Library is discussed
    Source
    Bibliothek: Forschung und Praxis. 8(1984), S.45-57
  2. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.00
    0.003950558 = product of:
      0.03950558 = sum of:
        0.03950558 = weight(_text_:web in 93) [ClassicSimilarity], result of:
          0.03950558 = score(doc=93,freq=16.0), product of:
            0.11067648 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03391332 = queryNorm
            0.35694647 = fieldWeight in 93, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
      0.1 = coord(1/10)
    
    Abstract
    Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb.
    Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information, such as the distance between the words "search" and "engine," will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list.
    One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias.
    Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
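    The "ask the web itself" idea described above is power iteration: each page repeatedly shares its rank equally among the pages it links to, plus a small damping ("teleport") term. A minimal sketch on a hypothetical four-page web (the link structure and damping factor 0.85 are illustrative assumptions, not data from the article):

    ```python
    # page -> list of outlinks (hypothetical 4-page toy web)
    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
    n = len(links)
    d = 0.85                                  # conventional damping factor

    rank = [1.0 / n] * n                      # start with uniform rank
    for _ in range(100):                      # power iteration until convergence
        new = [(1 - d) / n] * n               # teleport term for every page
        for page, outs in links.items():
            share = d * rank[page] / len(outs)
            for target in outs:
                new[target] += share          # page shares its rank along its links
        rank = new

    # Heavily linked-to pages accumulate rank; page 3, with no inbound
    # links, ends up with only the teleport minimum.
    print([round(r, 3) for r in rank])
    ```

    No human judges content here: the rank vector is determined entirely by the link structure, which is the point of the algorithm.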
  3. Austin, D.: ¬A proposal for an International Standard Object Number (1999) 0.00
    0.0019953332 = product of:
      0.019953331 = sum of:
        0.019953331 = weight(_text_:web in 6540) [ClassicSimilarity], result of:
          0.019953331 = score(doc=6540,freq=2.0), product of:
            0.11067648 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03391332 = queryNorm
            0.18028519 = fieldWeight in 6540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6540)
      0.1 = coord(1/10)
    
    Abstract
    It is a fact that those involved with the humanities use visual resources for references in their work. Yet access to visual resources is nowhere near as certain or assured as it is for print material. This is equally true for resources that may be discovered at a museum, an archive, in a slide collection, or on the Web. Inception of an International Standard Object Number, similar to International Standard Book Numbers for books and International Standard Serial Numbers for periodicals, will advance accurate and timely access to visual resources. Unique numbers or codes which refer not only to the object but to any digital or non-digital surrogate are desired by those whose interests lie in visual resources, digital objects, or metadata. This paper discusses extant paradigms (ISBN, ISSN, ISMN, and the emerging ISAN) and models a procedure for assigning ISONs to objects and their surrogates. Resources requisite to the construction of the ISON are described, and a clear outline is given of the necessarily cooperative work ahead if the ISON is to become a standard that will help in the discovery of visual resources in an open, shared environment