Search (460 results, page 1 of 23)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.31
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
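    (Note on the numbers: the figure after each title is the hit's Lucene ClassicSimilarity relevance score. Assuming stock Lucene TF-IDF scoring rather than anything site-specific, it combines per-term weights as
    \[
    \mathrm{score}(q,d) = \mathrm{coord}(q,d)\cdot\mathrm{queryNorm}(q)\cdot\sum_{t\in q}\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)^{2}\cdot\mathrm{norm}(t,d),
    \qquad
    \mathrm{idf}(t) = 1+\ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}+1},
    \]
    where coord(q,d) is the fraction of query terms matched and norm(t,d) folds in field length. Worked example from this index: a term occurring in 24 of 44,218 documents has idf = 1 + ln(44218/25) ≈ 8.478, and a within-field frequency of 2 contributes tf = √2 ≈ 1.414.)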
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.25
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  4. Hawking, S.: This is the most dangerous time for our planet (2016) 0.04
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
  5. Zia, L.L.: Growing a national learning environments and resources network for science, mathematics, engineering, and technology education : current issues and opportunities for the NSDL program (2001) 0.04
    Abstract
    The National Science Foundation's (NSF) National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) program seeks to create, develop, and sustain a national digital library supporting science, mathematics, engineering, and technology (SMET) education at all levels -- preK-12, undergraduate, graduate, and life-long learning. The resulting virtual institution is expected to catalyze and support continual improvements in the quality of science, mathematics, engineering, and technology (SMET) education in both formal and informal settings. The vision for this program has been explored through a series of workshops over the past several years and documented in accompanying reports and monographs. (See [1-7, 10, 12, and 13].) These efforts have led to a characterization of the digital library as a learning environments and resources network for science, mathematics, engineering, and technology education, that is: * designed to meet the needs of learners, in both individual and collaborative settings; * constructed to enable dynamic use of a broad array of materials for learning primarily in digital format; and * managed actively to promote reliable anytime, anywhere access to quality collections and services, available both within and without the network. Underlying the NSDL program are several working assumptions. First, while there is currently no lack of "great piles of content" on the Web, there is an urgent need for "piles of great content". The difficulties in discovering and verifying the authority of appropriate Web-based material are certainly well known, yet there are many examples of learning resources of great promise available (particularly those exploiting the power of multiple media), with more added every day. The breadth and interconnectedness of the Web are simultaneously a great strength and shortcoming. Second, the "unit" or granularity of educational content can and will shrink, affording the opportunity for users to become creators and vice versa, as learning objects are reused, repackaged, and repurposed. To be sure, this scenario cannot take place without serious attention to intellectual property and digital rights management concerns. But new models and technologies are being explored (see a number of recent articles in the January issue of D-Lib Magazine). Third, there is a need for an "organizational infrastructure" that facilitates connections between distributed users and distributed content, as alluded to in the third bullet above. Finally, while much of the ongoing use of the library is envisioned to be "free" in the sense of the public good, there is an opportunity and a need to consider multiple alternative models of sustainability, particularly in the area of services offered by the digital library. More details about the NSDL program including information about proposal deadlines and current awards may be found at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl>.
  6. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.04
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
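    As a rough illustration of what the guide describes, here is a minimal sketch (Python with rdflib; the namespace, notation, and labels are invented for the example) that publishes a two-concept classification scheme in SKOS and serializes it as Turtle:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/scheme/")   # hypothetical namespace

    g = Graph()
    g.bind("skos", SKOS)

    # The scheme itself, with minimal metadata.
    scheme = EX["exampleScheme"]
    g.add((scheme, RDF.type, SKOS.ConceptScheme))
    g.add((scheme, SKOS.prefLabel, Literal("Example Classification", lang="en")))

    # Two classes, "510 Mathematics" subordinate to "500 Science".
    science, math_ = EX["500"], EX["510"]
    for concept, label in ((science, "Science"), (math_, "Mathematics")):
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
        g.add((concept, SKOS.inScheme, scheme))
    g.add((science, SKOS.topConceptOf, scheme))
    g.add((math_, SKOS.broader, science))      # the hierarchy of the scheme
    g.add((science, SKOS.narrower, math_))

    print(g.serialize(format="turtle"))

    The skos:broader/skos:narrower pairs carry the scheme's hierarchy, which is what lets other RDF data link into or merge with the published scheme.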
  7. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.03
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online network of information brokers and gatherers, which harvests information from the local web servers of professional physics institutions worldwide (mostly in Europe and the USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors, who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. Continuously involving authors, research groups, and national societies is considered crucial for a future stable service.
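    The metadata harvesting mentioned here can be sketched with today's OAI-PMH protocol. PhysDoc itself used the Harvest software, so this is an analogous modern sketch rather than PhysDoc code; the arXiv endpoint is just one public example, and network access is required:

    import urllib.request
    import xml.etree.ElementTree as ET

    # A public OAI-PMH endpoint serving Dublin Core metadata.
    URL = "http://export.arxiv.org/oai2?verb=ListRecords&metadataPrefix=oai_dc"
    NS = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    with urllib.request.urlopen(URL, timeout=30) as resp:
        tree = ET.parse(resp)

    # Each harvested record carries Dublin Core fields (title, creator, ...)
    # that a broker/gatherer service would then index and serve.
    for record in tree.iterfind(".//oai:record", NS):
        title = record.find(".//dc:title", NS)
        if title is not None and title.text:
            print(" ".join(title.text.split()))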
  8. Kelley, D.: Relevance feedback : getting to know your user (2008) 0.03
    Abstract
    Relevance feedback was one of the first interactive information retrieval techniques to help systems learn more about users' interests. Relevance feedback has been used in a variety of IR applications including query expansion, term disambiguation, user profiling, filtering and personalization. Initial relevance feedback techniques were explicit, in that they required the user's active participation. Many of today's relevance feedback techniques are implicit and based on users' information seeking behaviors, such as the pages they choose to visit, the frequency with which they visit pages, and the length of time pages are displayed. Although this type of information is available in great abundance, it is difficult to interpret without understanding more about the user's search goals and context. In this talk, I will address the following questions: what techniques are available to help us learn about users' interests and preferences? What types of evidence are available through a user's interactions with the system and with the information provided by the system? What do we need to know to accurately interpret and use this evidence? I will address the first two questions by presenting an overview of relevance feedback research in information retrieval. I will address the third question by presenting results of some of my own research that examined the online information seeking behaviors of users during a 14-week period and the context in which these behaviors took place.
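    The classic formalization of the explicit feedback techniques surveyed here is Rocchio's algorithm, which moves the query vector toward judged-relevant documents and away from non-relevant ones. A minimal sketch; the talk does not prescribe this method, and the alpha/beta/gamma weights are conventional defaults rather than values from the talk:

    def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Return an updated query vector; all vectors are dicts term -> weight."""
        updated = {t: alpha * w for t, w in query.items()}
        for docs, sign, coef in ((relevant, 1, beta), (nonrelevant, -1, gamma)):
            for doc in docs:
                for t, w in doc.items():
                    updated[t] = updated.get(t, 0.0) + sign * coef * w / len(docs)
        # Terms pushed below zero are conventionally dropped.
        return {t: w for t, w in updated.items() if w > 0}

    # One relevant document expands a one-term query with a related term.
    q = {"feedback": 1.0}
    rel = [{"feedback": 0.5, "relevance": 0.8}]
    print(rocchio(q, rel, []))   # 'relevance' enters the expanded query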
  9. Crane, G.: What do you do with a million books? (2006) 0.02
    Abstract
    The Greek historian Herodotus has the Athenian sage Solon estimate the lifetime of a human being at c. 26,250 days (Herodotus, The Histories, 1.32). If we could read a book on each of those days, it would take almost forty lifetimes to work through every volume in a single million book library. The continuous tradition of written European literature that began with the Iliad and Odyssey in the eighth century BCE is itself little more than a million days old. While libraries that contain more than one million items are not unusual, print libraries never possessed a million books of use to any one reader. The great libraries that took shape in the nineteenth and twentieth centuries were meta-structures, whose catalogues and finding aids allowed readers to create their own customized collections, building on the fixed classification schemes and disciplinary structures that took shape in the nineteenth century. The digital libraries of the early twenty-first century can be searched and their contents transmitted around the world. They can contain time-based media, images, quantitative data, and a far richer array of content than print, with visualization technologies blurring the boundaries between library and museum. But our digital libraries remain filled with digital incunabula - digital objects whose form remains firmly rooted in traditions of print, with HTML and PDF largely mimicking the limitations of their print predecessors. Vast collections based on image books - raw digital pictures of books with searchable but uncorrected text from OCR - could arguably retard our long-term progress, reinforcing the hegemony of structures that evolved to minimize the challenges of a world where paper was the only medium of distribution and where humans alone could read. Already the books in a digital library are beginning to read one another and to confer among themselves before creating a new synthetic document for review by their human readers.
  10. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.02
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? According to my view there is a great quantity of concepts and links but a very poor description logic structure which enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: What decisions were made and are they exemplary, can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but myself being a learner especially in the field of medical knowledge representation I do not speak "ex cathedra".
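    To make the kind of conversion under discussion concrete, here is a minimal, hypothetical sketch (Python with rdflib; invented class and property names, not the NCI pipeline) of turning one thesaurus entry with a broader term and one semantic "role" into OWL, where the role becomes an existential restriction, i.e. the kind of description logic structure the note finds missing:

    from rdflib import BNode, Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/onto#")   # hypothetical namespace
    g = Graph()
    g.bind("owl", OWL)

    # Thesaurus entry: Gene_Product, broader term Biological_Entity,
    # role Gene_Product --has_location--> Cell (invented example data).
    g.add((EX.Gene_Product, RDF.type, OWL.Class))
    g.add((EX.Gene_Product, RDFS.label, Literal("Gene Product")))
    g.add((EX.Gene_Product, RDFS.subClassOf, EX.Biological_Entity))

    # The role becomes an object property used in an existential
    # restriction: every gene product has *some* location in a cell.
    g.add((EX.has_location, RDF.type, OWL.ObjectProperty))
    restriction = BNode()
    g.add((restriction, RDF.type, OWL.Restriction))
    g.add((restriction, OWL.onProperty, EX.has_location))
    g.add((restriction, OWL.someValuesFrom, EX.Cell))
    g.add((EX.Gene_Product, RDFS.subClassOf, restriction))

    print(g.serialize(format="turtle"))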
  11. Datentracking in der Wissenschaft : Aggregation und Verwendung bzw. Verkauf von Nutzungsdaten durch Wissenschaftsverlage. Ein Informationspapier des Ausschusses für Wissenschaftliche Bibliotheken und Informationssysteme der Deutschen Forschungsgemeinschaft (2021) 0.02
    Abstract
    This information paper describes the digital tracking of scholarly activity. Researchers use a multitude of digital information resources every day, for example literature and full-text databases. In doing so they frequently leave usage traces that reveal which content was searched for and used, how long they stayed, and other kinds of scholarly activity. These usage traces can be recorded, aggregated, and reused or sold by the providers of the information resources. The paper lays out the transformation of academic publishers into data analytics businesses, points to the resulting consequences for science and its institutions, and names the types of data collection being employed. It thus serves above all to document current practices and is meant to prompt discussion of their consequences for scholarship. It is addressed to all researchers as well as to all actors in the scholarly landscape.
  12. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.02
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable, as alive in the digital world, as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  13. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
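    The subformula indexing the abstract argues for can be illustrated as follows; this is a hypothetical sketch using xml.etree rather than MIaS code, and it simply enumerates every Presentation MathML subtree as a candidate index term:

    import xml.etree.ElementTree as ET

    def canon(node):
        """Canonical string for a MathML subtree, e.g. msup(mi(x)mn(2))."""
        tag = node.tag.split("}")[-1]                 # drop the namespace
        inner = (node.text or "").strip() + "".join(canon(c) for c in node)
        return tag + "(" + inner + ")"

    def subformulae(node):
        """Yield every subtree's canonical form; as index terms these let
        x^2 inside a larger formula match a query for x^2 alone."""
        yield canon(node)
        for child in node:
            yield from subformulae(child)

    # a + x^2 in Presentation MathML
    doc = ET.fromstring(
        '<math xmlns="http://www.w3.org/1998/Math/MathML">'
        '<mrow><mi>a</mi><mo>+</mo><msup><mi>x</mi><mn>2</mn></msup></mrow>'
        '</math>'
    )
    for term in subformulae(doc):
        print(term)   # each line would become one index term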
  14. Paskin, N.: DOI: a 2003 progress report (2003) 0.02
    Abstract
    The International DOI Foundation (IDF) recently published the third edition of its DOI Handbook, which sets the scene for DOI's expansion into much wider applications. Edition 3 is not simply an updated user guide. A great deal has happened in the underlying technologies and in the practical deployment and development of DOIs (Digital Object Identifiers) since the last edition was published a year ago. Much of the program of technical work foreseen at the inception of DOIs has now been completed. The initial simple implementation of DOI as a persistent name linked to redirection continues to grow, with approaching ten million DOIs assigned from several hundred organisations through a number of Registration Agencies in USA, Europe, and Australasia, supporting large scale business uses. Implementations of more sophisticated applications (offering associated services) have been developing well but on a smaller scale: a framework for building these has been completed as part of the latest release and promises to stimulate a new wave of growth. From its original starting point in text publishing, there has been gradual embrace by a number of communities: these include national libraries (a consortium of national libraries recently joined the IDF); government documentation (with the appointment of TSO The Stationery Office in the UK as a DOI agency and the announced intention of the EC Office of Publications to use DOIs); non-English language markets (France, Germany, Spain, Italy, Korea). However, implementations in non-text sectors have been far slower to develop, though several are now under discussion. The DOI community can point to several significant achievements over the past few years: * A practical successful open implementation of naming objects, treating content as information objects, not simply packets of bits; * The IDF's role in co-sponsoring, championing, and now implementing the <indecs> framework as a semantic tool for structured metadata - an essential step for treating content as information in Semantic-Web-like applications; * A template for building advanced applications, connecting resolution and metadata technologies, and offering hooks to web services and similar applications; * The development of a policy framework that allows multiple communities autonomy; * The practical implementation of DOIs with emerging related standards such as the OpenURL framework in contextual linking.
    A number of issues remain to be solved. In the main these are no longer technical in nature, but more concerned with perception and outreach to other communities. They include: correctly positioning the DOI in the standards community as a practical implementation (based on standards, but more than standards); offering the benefits of DOI to other communities working in related identifier development whilst allowing them to remain largely autonomous; demonstrating how DOIs can complement, rather than compete with, other activities; and ensuring that a sustainable long-term infrastructure for any application (commercial and non-commercial alike) is in place. Persistent, actionable identifiers with a fully managed sustainable infrastructure are not appropriate for every activity; but they are suitable for many, and where they are used, the key to providing a successful and widely adopted system is encouraging economy of scale (and so, where possible, convergence with other related efforts), flexibility of use, and a low barrier to use. DOI is well on the way to providing this, but not yet guaranteed of success without the further effort that is now being applied.
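    The "persistent name linked to redirection" at the core of DOI is easy to demonstrate: the public doi.org proxy resolves any DOI to its current location. A minimal sketch (requires network access; 10.1000/182 is the DOI Handbook's own DOI):

    import urllib.request

    def resolve_doi(doi):
        """Follow the doi.org redirect chain and return the final URL."""
        req = urllib.request.Request(
            "https://doi.org/" + doi,
            method="HEAD",
            headers={"User-Agent": "doi-demo/0.1"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.url   # urllib follows the redirects automatically

    print(resolve_doi("10.1000/182"))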
  15. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.02
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Talk given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  16. Bates, M.E.: Quick answers to odd questions (2004) 0.02
    Content
    "One of the things I enjoyed the most when I was a reference librarian was the wide range of questions my clients sent my way. What was the original title of the first Godzilla movie? (Gojira, released in 1954) Who said 'I'm as pure as the driven slush'? (Tallulah Bankhead) What percentage of adults have gone to a jazz performance in the last year? (11%) I have found that librarians, speech writers and journalists have one thing in common - we all need to find information on all kinds of topics, and we usually need the answers right now. The following are a few of my favorite sites for finding answers to those there-must-be-an-answer-out-there questions. - For the electronic equivalent to the "ready reference" shelf of resources that most librarians keep hidden behind their desks, check out RefDesk . It is particularly good for answering factual questions - Where do I get the new Windows XP Service Pack? Where is the 386 area code? How do I contact my member of Congress? - Another resource for lots of those quick-fact questions is InfoPlease, the publishers of the Information Please almanac .- Right now, it's full of Olympics data, but it also has links to facts and factoids that you would look up in an almanac, atlas, or encyclopedia. - If you want numbers, start with the Statistical Abstract of the US. This source, produced by the U.S. Census Bureau, gives you everything from the divorce rate by state to airline cost indexes going back to 1980. It is many librarians' secret weapon for pulling numbers together quickly. - My favorite question is "how does that work?" Haven't you ever wondered how they get that Olympic torch to continue to burn while it is being carried by runners from one city to the next? Or how solar sails manage to propel a spacecraft? For answers, check out the appropriately-named How Stuff Works. - For questions about movies, my first resource is the Internet Movie Database. It is easy to search, is such a popular site that mistakes are corrected quickly, and is a fun place to catch trailers of both upcoming movies and those dating back to the 30s. - When I need to figure out who said what, I still tend to rely on the print sources such as Bartlett's Familiar Quotations . No, the current edition is not available on the web, but - and this is the librarian in me - I really appreciate the fact that I not only get the attribution but I also see the source of the quote. There are far too many quotes being attributed to a celebrity, but with no indication of the publication in which the quote appeared. Take, for example, the much-cited quote of Margaret Meade, "Never doubt that a small group of thoughtful committed people can change the world; indeed, it's the only thing that ever has!" Then see the page on the Institute for Intercultural Studies site, founded by Meade, and read its statement that it has never been able to verify this alleged quote from Meade. While there are lots of web-based sources of quotes (see QuotationsPage.com and Bartleby, for example), unless the site provides the original source for the quotation, I wouldn't rely on the citation. Of course, if you have a hunch as to the source of a quote, and it was published prior to 1923, head over to Project Gutenberg , which includes the full text of over 12,000 books that are in the public domain. When I needed to confirm a quotation of the Red Queen in "Through the Looking Glass", this is where I started. 
- And if you are stumped as to where to go to find information, instead of Googling it, try the Librarians' Index to the Internet. While it is somewhat US-centric, it is a great directory of web resources."
  17. Report on the future of bibliographic control : draft for public comment (2007) 0.02
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
  18. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
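    PageRank itself is a short power iteration over the link (or citation) graph, which is what makes the comparison with citation indexing possible. A minimal sketch with an invented four-node graph and the conventional damping factor d = 0.85:

    def pagerank(links, d=0.85, iters=50):
        """links maps each node to the list of nodes it links to (cites)."""
        nodes = list(links)
        n = len(nodes)
        rank = {v: 1.0 / n for v in nodes}
        for _ in range(iters):
            new = {v: (1.0 - d) / n for v in nodes}
            for v, outs in links.items():
                targets = outs or nodes        # dangling nodes spread evenly
                for u in targets:
                    new[u] += d * rank[v] / len(targets)
            rank = new
        return rank

    # The node "cited" by every other one ends up ranked highest.
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"], "d": ["a", "c"]}
    for node, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))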
  19. Stapleton, M.; Adams, M.: Faceted categorisation for the corporate desktop : visualisation and interaction using metadata to enhance user experience (2007) 0.01
    Abstract
    Mark Stapleton and Matt Adamson began their presentation by describing how Dow Jones' Factiva range of information services processed an average of 170,000 documents every day, drawn from over 10,000 sources in 22 languages. These documents are categorized within five facets: Company, Subject, Industry, Region and Language. The digital feeds received from information providers undergo a series of processing stages, initially to prepare them for automatic categorization and then to format them ready for distribution. The categorization stage is able to handle 98% of documents automatically, the remaining 2% requiring some form of human intervention. Depending on the source, categorization can involve any combination of 'Autocoding', 'Dictionary-based Categorizing', 'Rules-based Coding' or 'Manual Coding'
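    Of the four coding modes listed, dictionary-based categorizing is the simplest to sketch; the facet codes and trigger terms below are invented for illustration and are not Factiva's taxonomy:

    def categorize(text, dictionary):
        """Assign every facet code whose trigger term occurs in the text."""
        tokens = set(text.lower().split())
        return sorted({code for term, code in dictionary.items()
                       if term in tokens})

    # Hypothetical trigger terms mapped to Region / Industry facet codes.
    TRIGGERS = {
        "berlin": "REGION/GERMANY",
        "frankfurt": "REGION/GERMANY",
        "airline": "INDUSTRY/AIRLINES",
        "lufthansa": "INDUSTRY/AIRLINES",
    }
    print(categorize("Lufthansa adds a Frankfurt airline hub", TRIGGERS))
    # -> ['INDUSTRY/AIRLINES', 'REGION/GERMANY']

    Documents that no dictionary entry or rule covers fall through to manual coding, which corresponds to the 2% requiring human intervention that the presenters mention.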
  20. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.01
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
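    Expression-level TF-IDF treats each formula, rather than each document, as the retrieval unit. A minimal sketch with toy LaTeX-token data and one standard tf-idf variant (the paper's exact weighting may differ); query tokens are assumed to occur somewhere in the collection:

    import math
    from collections import Counter

    # Each "document" is a single expression, tokenized from its LaTeX string.
    exprs = [
        ["x", "^", "2", "+", "y", "^", "2"],
        ["\\frac", "x", "y"],
        ["\\sum", "x", "_", "i"],
    ]
    N = len(exprs)
    df = Counter(tok for e in exprs for tok in set(e))   # document frequency

    def tfidf(expr):
        return {t: (1 + math.log(c)) * math.log(N / df[t])
                for t, c in Counter(expr).items()}

    def score(query, expr):
        q, e = tfidf(query), tfidf(expr)
        return sum(q[t] * e.get(t, 0.0) for t in q)

    query = ["x", "^", "2"]
    print(max(exprs, key=lambda e: score(query, e)))   # the quadratic wins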

Languages

  • e 351
  • d 95
  • a 3
  • el 2
  • i 2
  • f 1
  • nl 1

Types

  • a 228
  • s 15
  • i 10
  • r 10
  • x 8
  • m 7
  • p 6
  • n 3
  • b 2
