Search (198 results, page 10 of 10)

  • language_ss:"e"
  • type_ss:"el"
  1. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.00
    0.004434768 = product of:
      0.017739072 = sum of:
        0.017739072 = product of:
          0.035478145 = sum of:
            0.035478145 = weight(_text_:22 in 3449) [ClassicSimilarity], result of:
              0.035478145 = score(doc=3449,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.23214069 = fieldWeight in 3449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3449)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2017 12:53:49
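Each "0.00" entry above expands into a Lucene ClassicSimilarity explain tree. The leaf score can be reproduced from the quoted factors with plain TF-IDF arithmetic; the following is a minimal sketch (no Lucene dependency, function name is ours), checked against the numbers in the first explain tree:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute a ClassicSimilarity leaf score from its explain factors:
    score = queryWeight * fieldWeight, with
    tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.5018296 here
    query_weight = idf * query_norm                   # idf * queryNorm
    field_weight = tf * idf * field_norm              # tf * idf * fieldNorm
    return query_weight * field_weight

# Leaf score for result 1 (doc 3449), before the coord factors:
raw = classic_similarity_score(freq=2.0, doc_freq=3622, max_docs=44218,
                               query_norm=0.043643, field_norm=0.046875)
# coord(1/2) and coord(1/4) from the outer nodes of the explain tree:
final = raw * 0.5 * 0.25
```

Run against the figures above, `raw` reproduces 0.035478145 and `final` the displayed 0.004434768.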
  2. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.00
    0.004434768 = product of:
      0.017739072 = sum of:
        0.017739072 = product of:
          0.035478145 = sum of:
            0.035478145 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.035478145 = score(doc=479,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    23. 1.2022 10:22:18
  3. Beall, J.: Approaches to expansions : case studies from the German and Vietnamese translations (2003) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 1748) [ClassicSimilarity], result of:
              0.029565124 = score(doc=1748,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 1748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1748)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  4. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.029565124 = score(doc=1291,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 1291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  5. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.029565124 = score(doc=3628,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  6. Open MIND (2015) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.029565124 = score(doc=1648,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 1.2015 11:48:22
  7. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.029565124 = score(doc=2564,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16. 1.2016 10:22:28
  8. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 3200) [ClassicSimilarity], result of:
              0.029565124 = score(doc=3200,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 3200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3200)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    28.10.2016 18:22:33
  9. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.00
    0.0036956405 = product of:
      0.014782562 = sum of:
        0.014782562 = product of:
          0.029565124 = sum of:
            0.029565124 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.029565124 = score(doc=4553,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16.11.2018 14:22:01
  10. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0035858164 = product of:
      0.014343265 = sum of:
        0.014343265 = weight(_text_:c in 1182) [ClassicSimilarity], result of:
          0.014343265 = score(doc=1182,freq=2.0), product of:
            0.1505424 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043643 = queryNorm
            0.09527725 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.25 = coord(1/4)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
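The pattern-based extraction the abstract describes ("NAME1 born at NAME2 in DATE" becoming a proposition) can be sketched in a few lines. This is an illustrative toy only; the pattern, function name, and capture groups are ours, and real systems such as GATE or UIMA are far richer:

```python
import re

# Illustrative pattern for statements like "NAME1 born at NAME2 in DATE".
# Capitalized token runs stand in for the person and place names.
BORN_AT = re.compile(r"(?P<person>[A-Z][\w.]+(?: [A-Z][\w.]+)*) born at "
                     r"(?P<place>[A-Z][\w.]+(?: [A-Z][\w.]+)*) in "
                     r"(?P<date>\d{3,4})")

def extract_birth_facts(text):
    """Turn matching sentences into (person, place, year) propositions."""
    return [(m["person"], m["place"], int(m["date"]))
            for m in BORN_AT.finditer(text)]
```

For example, `extract_birth_facts("Johann Sebastian Bach born at Eisenach in 1685.")` yields the single proposition `("Johann Sebastian Bach", "Eisenach", 1685)`; the entity-disambiguation problem the abstract goes on to raise (which Cambridge? which Boston?) begins only after such a tuple has been extracted.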
  11. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    0.0035858164 = product of:
      0.014343265 = sum of:
        0.014343265 = weight(_text_:c in 1216) [ClassicSimilarity], result of:
          0.014343265 = score(doc=1216,freq=2.0), product of:
            0.1505424 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043643 = queryNorm
            0.09527725 = fieldWeight in 1216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
      0.25 = coord(1/4)
    
  12. Knutsen, U.: Working in a distributed electronic environment : Experiences with the Norwegian edition (2003) 0.00
    0.0029565122 = product of:
      0.011826049 = sum of:
        0.011826049 = product of:
          0.023652097 = sum of:
            0.023652097 = weight(_text_:22 in 1937) [ClassicSimilarity], result of:
              0.023652097 = score(doc=1937,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.15476047 = fieldWeight in 1937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1937)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  13. Encyclopædia Britannica 2003 : Ultimate Reference Suite (2002) 0.00
    0.0029565122 = product of:
      0.011826049 = sum of:
        0.011826049 = product of:
          0.023652097 = sum of:
            0.023652097 = weight(_text_:22 in 2182) [ClassicSimilarity], result of:
              0.023652097 = score(doc=2182,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.15476047 = fieldWeight in 2182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2182)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Review in: c't 2002, no. 23, p. 229 (T.J. Schult): "Mac users have had little choice among multimedia encyclopedias so far: either a dismal Kosmos Kompaktwissen, which is to appear for the last time this year and disguises itself as the Systhema Universallexikon, or a Brockhaus in Text und Bild with excellent articles but a thin media component. The Britannica encyclopedias distributed in Germany by Acclaim are an excellent alternative for anyone proficient in English. While formerly only the entry-level Britannicas ran on the Mac, this now holds for all three versions: Student, Deluxe and Ultimate Reference Suite. The Suite contains not only all 75,000 articles of the 32 Britannica volumes but also the 15,000 of the Student Encyclopaedia, a separate school encyclopedia whose plain English makes it a good entry point, especially for non-native speakers. Those who want it even more elementary can click through to the Britannica Elementary Encyclopaedia, which is accessible under the same interface as the other works. Finally, the Suite includes a world atlas as well as monolingual Merriam-Webster dictionaries and thesauri at the Collegiate and Student levels, with 555,000 definitions, synonyms and antonyms in all. Anyone who does much research, or even writes, in English will find this offer (EUR 99.95) mouth-watering, especially since the print edition costs a good 1,600 euros. The texts are simply colossal: the table of contents of the article on Germany alone fills seven screen pages. The content from the Britannica volumes alone offers more than twice as much text as the Brockhaus Enzyklopädie digital, which costs around a thousand euros (c't 22/02, p. 38). The 220,000 thematically sorted web links alone are worth the money. Those who choose the full installation, which occupies 2.4 gigabytes, never have to insert the DVD (alternatively, four CD-ROMs) again.
    This year nobody has to wrestle with the Britannica-typical muddle of encyclopedia articles and many, many yearbooks: apart from the base text of the three encyclopedias, 'only' the two yearbooks 2001 and 2002 are listed separately. Anyone proficient in English should seize this good opportunity to buy."
  14. Baker, T.: ¬A grammar of Dublin Core (2000) 0.00
    0.0029565122 = product of:
      0.011826049 = sum of:
        0.011826049 = product of:
          0.023652097 = sum of:
            0.023652097 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.023652097 = score(doc=1236,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 14:01:22
  15. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.00
    0.0029565122 = product of:
      0.011826049 = sum of:
        0.011826049 = product of:
          0.023652097 = sum of:
            0.023652097 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.023652097 = score(doc=1163,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  16. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.00
    0.0029565122 = product of:
      0.011826049 = sum of:
        0.011826049 = product of:
          0.023652097 = sum of:
            0.023652097 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.023652097 = score(doc=3608,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe."
When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  17. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.002217384 = product of:
      0.008869536 = sum of:
        0.008869536 = product of:
          0.017739072 = sum of:
            0.017739072 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.017739072 = score(doc=1184,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 14:08:22
  18. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.002217384 = product of:
      0.008869536 = sum of:
        0.008869536 = product of:
          0.017739072 = sum of:
            0.017739072 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.017739072 = score(doc=405,freq=2.0), product of:
                0.15283036 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043643 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]

Types

  • a 103
  • s 12
  • m 3
  • n 3
  • r 3
  • x 3
  • b 1
  • i 1
  • p 1