Search (73 results, page 1 of 4)

  • type_ss:"el"
  • year_i:[2010 TO 2020}
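The mixed brackets in the year filter are Lucene/Solr range syntax rather than a typo: "[" includes its endpoint and "}" excludes it, so the filter covers 2010 through 2019. A minimal sketch of the equivalent predicate (the function name is illustrative):

```python
def in_year_range(year_i: int) -> bool:
    # year_i:[2010 TO 2020} - '[' is inclusive, '}' is exclusive,
    # so the filter matches publication years 2010 through 2019.
    return 2010 <= year_i < 2020
```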
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.08135515 = product of:
      0.4067757 = sum of:
        0.4067757 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.4067757 = score(doc=1826,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
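The indented score breakdown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output: per matching term, score = queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, all scaled by the coord factor for the fraction of query terms that matched. A minimal sketch that reproduces the score of result 1 from the numbers shown above (illustrative arithmetic, not a Lucene API):

```python
import math

def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 here
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    query_weight = idf * query_norm                  # 0.43426615
    field_weight = tf * idf * field_norm             # 0.93669677
    return coord * query_weight * field_weight

print(explain_score(freq=2.0, doc_freq=24, max_docs=44218,
                    query_norm=0.051222645, field_norm=0.078125,
                    coord=1 / 5))                    # ~0.08135515
```

The coord(1/5) factor records that only one of the five query terms matched this document.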
  2. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.07
    0.067064844 = product of:
      0.1676621 = sum of:
        0.13990225 = weight(_text_:books in 3608) [ClassicSimilarity], result of:
          0.13990225 = score(doc=3608,freq=14.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.565117 = fieldWeight in 3608, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.027759846 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
          0.027759846 = score(doc=3608,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.15476047 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
      0.4 = coord(2/5)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable, as alive in the digital world, as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Object
    Google books
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.040677574 = product of:
      0.20338786 = sum of:
        0.20338786 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
          0.20338786 = score(doc=4388,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.2 = coord(1/5)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. "Google Books" darf weitermachen wie bisher : Entscheidung des Supreme Court in den USA (2016) 0.03
    0.032381076 = product of:
      0.16190538 = sum of:
        0.16190538 = weight(_text_:books in 2923) [ClassicSimilarity], result of:
          0.16190538 = score(doc=2923,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6539958 = fieldWeight in 2923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2923)
      0.2 = coord(1/5)
    
    Abstract
    The internet giant may continue its "Google Books" project as before. The US Supreme Court rejected the review sought by an authors' association. The judges said that while Google's project tests the limits of fairness, it is nevertheless acting lawfully.
    Content
    " Im Streit mit Google um Urheberrechte ist eine Gruppe von Buchautoren am Obersten US-Gericht gescheitert. Der Supreme Court lehnte es ab, die google-freundliche Entscheidung eines niederen Gerichtes zur Revision zuzulassen. In dem Fall geht es um die Online-Bibliothek "Google Books", für die der kalifornische Konzern Gerichtsunterlagen zufolge mehr als 20 Millionen Bücher digitalisiert hat. Durch das Projekt können Internet-Nutzer innerhalb der Bücher nach Stichworten suchen und die entsprechenden Textstellen lesen. Die drei zuständigen Richter entschieden einstimmig, dass in dem Fall zwar die Grenzen der Fairness ausgetestet würden, aber das Vorgehen von Google letztlich rechtens sei. Entschädigungen in Milliardenhöhe gefürchtet Die von dem Interessensverband Authors Guild angeführten Kläger sahen ihre Urheberrechte durch "Google Books" verletzt. Dazu gehörten auch prominente Künstler wie die Schriftstellerin und Dichterin Margaret Atwood. Google führte dagegen an, die Internet-Bibliothek kurbele den Bücherverkauf an, weil Leser dadurch zusätzlich auf interessante Werke aufmerksam gemacht würden. Google reagierte "dankbar" auf die Entscheidung des Supreme Court. Der Konzern hatte befürchtet, bei einer juristischen Niederlage Entschädigungen in Milliardenhöhe zahlen zu müssen."
    Object
    Google books
    Source
    https://www.tagesschau.de/wirtschaft/google-books-entscheidung-101.html
  5. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.03
    0.032381076 = product of:
      0.16190538 = sum of:
        0.16190538 = weight(_text_:books in 3870) [ClassicSimilarity], result of:
          0.16190538 = score(doc=3870,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6539958 = fieldWeight in 3870, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.2 = coord(1/5)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. Although the data were harvested and manipulated by hand, the work reveals issues and potential solutions for using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan, upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of their subject content, some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with a meaningful thematic overlay, library holdings count data were also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
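A minimal sketch of the kind of combination the abstract describes: pairing each book with its subject headings for the base book-subject map, and aggregating harvested holdings counts into a thematic overlay. The records, field names, and numbers are illustrative assumptions, not the authors' data:

```python
from collections import defaultdict

records = [
    {"title": "Book A", "subjects": ["History", "Islam"], "holdings": 120},
    {"title": "Book B", "subjects": ["History"], "holdings": 15},
]

edges = []                      # (book, subject) pairs: the base map
overlay = defaultdict(int)      # holdings-weighted thematic overlay
for rec in records:
    for subject in rec["subjects"]:
        edges.append((rec["title"], subject))
        overlay[subject] += rec["holdings"]

print(edges)                    # book-subject edge list
print(dict(overlay))            # {'History': 135, 'Islam': 120}
```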
  6. Koch, C.: Can a photodiode be conscious? (2013) 0.03
    0.029912358 = product of:
      0.1495618 = sum of:
        0.1495618 = weight(_text_:books in 4560) [ClassicSimilarity], result of:
          0.1495618 = score(doc=4560,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.60413545 = fieldWeight in 4560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=4560)
      0.2 = coord(1/5)
    
    Content
    Reply to John Searle's review of: Koch, C.: Consciousness: confessions of a romantic reductionist. Cambridge, Massachusetts: MIT Press 2012, in: The New York Review of Books, 10.01.2013 [https://www.nybooks.com/articles/2013/03/07/can-photodiode-be-conscious/?pagination=false&printpage=true]
    Source
    New York Review of Books, [https://www.nybooks.com/articles/2013/01/10/can-information-theory-explain-consciousness/]. 2013
  7. Tozer, J.: How long is the perfect book? : Bigger really is better. What the numbers say (2019) 0.03
    0.029912358 = product of:
      0.1495618 = sum of:
        0.1495618 = weight(_text_:books in 4686) [ClassicSimilarity], result of:
          0.1495618 = score(doc=4686,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.60413545 = fieldWeight in 4686, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=4686)
      0.2 = coord(1/5)
    
    Abstract
    British novelist E.M. Forster once complained that long books "are usually overpraised" because "the reader wishes to convince others and himself that he has not wasted his time." To test his theory we collected reader ratings for 737 books tagged as "classic literature" on Goodreads.com, a review aggregator with 80m members. The bias towards chunky tomes was substantial. Slim volumes of 100 to 200 pages scored only 3.87 out of 5, whereas those over 1,000 pages scored 4.19. Longer is better, say the readers.
  8. Standage, T.: Information overload is nothing new (2018) 0.02
    0.02289688 = product of:
      0.1144844 = sum of:
        0.1144844 = weight(_text_:books in 4473) [ClassicSimilarity], result of:
          0.1144844 = score(doc=4473,freq=6.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.4624449 = fieldWeight in 4473, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4473)
      0.2 = coord(1/5)
    
    Content
    "Overflowing inboxes, endlessly topped up by incoming emails. Constant alerts, notifications and text messages on your smartphone and computer. Infinitely scrolling streams of social-media posts. Access to all the music ever recorded, whenever you want it. And a deluge of high-quality television, with new series released every day on Netflix, Amazon Prime and elsewhere. The bounty of the internet is a marvellous thing, but the ever-expanding array of material can leave you feeling overwhelmed, constantly interrupted, unable to concentrate or worried that you are missing out or falling behind. No wonder some people are quitting social media, observing "digital sabbaths" when they unplug from the internet for a day, or buying old-fashioned mobile phones in an effort to avoid being swamped. This phenomenon may seem quintessentially modern, but it dates back centuries, as Ann Blair of Harvard University observes in "Too Much to Know", a history of information overload. Half a millennium ago, the printing press was to blame. "Is there anywhere on Earth exempt from these swarms of new books?" moaned Erasmus in 1525. New titles were appearing in such abundance, thousands every year. How could anyone figure out which ones were worth reading? Overwhelmed scholars across Europe worried that good ideas were being lost amid the deluge. Francisco Sanchez, a Spanish philosopher, complained in 1581 that 10m years was not long enough to read all the books in existence. The German polymath Gottfried Wilhelm Leibniz grumbled in 1680 of "that horrible mass of books which keeps on growing"."
  9. Denton, W.: On dentographs, a new method of visualizing library collections (2012) 0.02
    0.021151232 = product of:
      0.105756156 = sum of:
        0.105756156 = weight(_text_:books in 580) [ClassicSimilarity], result of:
          0.105756156 = score(doc=580,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.42718828 = fieldWeight in 580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=580)
      0.2 = coord(1/5)
    
    Abstract
    A dentograph is a visualization of a library's collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
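In that spirit, a minimal sketch of the underlying idea, assuming an LCC-style call-number layout; this is a hedged illustration in Python, not Denton's Ruby/R code, and the parsing rule and sample call numbers are invented for the example:

```python
import re
from collections import Counter

def cell(call_number: str):
    # Treat the classification scheme as a function mapping a call
    # number onto a grid cell: (class letters, hundreds band).
    m = re.match(r"([A-Z]+)(\d+)", call_number)
    letters, number = m.group(1), int(m.group(2))
    return letters, (number // 100) * 100

collection = ["PR6019", "QA76", "QA241", "Z665", "Z699"]
grid = Counter(cell(c) for c in collection)
for (letters, band), count in sorted(grid.items()):
    print(f"{letters} {band}-{band + 99}: {'#' * count}")
```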
  10. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.019629175 = product of:
      0.09814587 = sum of:
        0.09814587 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
          0.09814587 = score(doc=3582,freq=4.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.54716086 = fieldWeight in 3582, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3582)
      0.2 = coord(1/5)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  11. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.019431893 = product of:
      0.09715946 = sum of:
        0.09715946 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
          0.09715946 = score(doc=8365,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.5416616 = fieldWeight in 8365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=8365)
      0.2 = coord(1/5)
    
    Date
    22. 6.2015 16:08:38
  12. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.02
    0.018507326 = product of:
      0.09253663 = sum of:
        0.09253663 = weight(_text_:books in 1717) [ClassicSimilarity], result of:
          0.09253663 = score(doc=1717,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.37378973 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
      0.2 = coord(1/5)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei/SWD) provides a broad controlled vocabulary for indexing documents of all subjects. While it has traditionally been used for the intellectual subject cataloguing of books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for the automated assignment of subject headings to online publications. The paper sketches this project, its results, and its problems.
  13. Gutknecht, C.: Zahlungen der ETH Zürich an Elsevier, Springer und Wiley nun öffentlich (2015) 0.02
    0.018507326 = product of:
      0.09253663 = sum of:
        0.09253663 = weight(_text_:books in 4324) [ClassicSimilarity], result of:
          0.09253663 = score(doc=4324,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.37378973 = fieldWeight in 4324, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4324)
      0.2 = coord(1/5)
    
    Abstract
    What does the ETH Library pay Elsevier, Springer, and Wiley? The answer to this simple question is now available, after a good 14 months and a decision by the first appeals instance (EDÖB). So let us take a look at these data, which are now publicly accessible for the first time (also as XLSX). As I had requested, the ETH Library broke the expenditures down into databases, e-books, and journals.
  14. Röthler, D.: "Lehrautomaten" oder die MOOC-Vision der späten 60er Jahre (2014) 0.02
    0.016655907 = product of:
      0.083279535 = sum of:
        0.083279535 = weight(_text_:22 in 1552) [ClassicSimilarity], result of:
          0.083279535 = score(doc=1552,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.46428138 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=1552)
      0.2 = coord(1/5)
    
    Date
    22. 6.2018 11:04:35
  15. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.02
    0.015863424 = product of:
      0.079317115 = sum of:
        0.079317115 = weight(_text_:books in 2116) [ClassicSimilarity], result of:
          0.079317115 = score(doc=2116,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.3203912 = fieldWeight in 2116, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
      0.2 = coord(1/5)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
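As a hedged illustration of the article's linked-data angle, the snippet below builds the kind of JSON-LD island that HTML5 pages embed in a script tag of type "application/ld+json". The property choices follow Schema.org's Book type; this is not the MSU prototype's actual data model:

```python
import json

book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Building a better book in the browser",
    "author": [
        {"@type": "Person", "name": "Jason A. Clark"},
        {"@type": "Person", "name": "Scott W. H. Young"},
    ],
    "datePublished": "2015",
    "inLanguage": "en",
}

# Embed the serialized object in HTML5 as:
#   <script type="application/ld+json"> ... </script>
print(json.dumps(book, indent=2))
```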
  16. Jörs, B.: Die überzogenen Ansprüche der Informationswissenschaft : Förderung auf der Basis von Fachkompetenz und im Bewusstsein des eigenen Irrtums: Informationskompetenz (2019) 0.02
    0.015863424 = product of:
      0.079317115 = sum of:
        0.079317115 = weight(_text_:books in 5316) [ClassicSimilarity], result of:
          0.079317115 = score(doc=5316,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.3203912 = fieldWeight in 5316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=5316)
      0.2 = coord(1/5)
    
    Abstract
    What is information literacy? What constitutes its core? How far does it reach, and where does it end? How does information literacy interact with other competencies? The now rather worn-out term "Informationskompetenz" (information literacy) arose in the 1970s out of fear of "information overload" and of the "knowledge and development lead" brought about by the space industry. It was adopted in particular by the American and British library and documentation communities. To this day, the archive, documentation, and library sector in particular claims this "core competency" of meeting information needs and of "findability" with regard to analogue and digital library information offerings (catalogues, databases, e-journals, e-books, etc.), among other things to legitimize its own raison d'être.
  17. Schultz, S.: Die eine App für alles : Mobile Zukunft in China (2016) 0.02
    0.015703341 = product of:
      0.0785167 = sum of:
        0.0785167 = weight(_text_:22 in 4313) [ClassicSimilarity], result of:
          0.0785167 = score(doc=4313,freq=4.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.4377287 = fieldWeight in 4313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4313)
      0.2 = coord(1/5)
    
    Date
    22. 6.2018 14:22:02
  18. Dousa, T.M.: E. Wyndham Hulme's classification of the attributes of books : On an early model of a core bibliographical entity (2017) 0.01
    0.014956179 = product of:
      0.0747809 = sum of:
        0.0747809 = weight(_text_:books in 3859) [ClassicSimilarity], result of:
          0.0747809 = score(doc=3859,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.30206773 = fieldWeight in 3859, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=3859)
      0.2 = coord(1/5)
    
    Abstract
    Modelling bibliographical entities is a prominent activity within knowledge organization today. Current models of bibliographic entities, such as the Functional Requirements for Bibliographic Records (FRBR) and the Bibliographic Framework (BIBFRAME), take inspiration from data-modelling methods developed by computer scientists from the mid-1970s on. Thus, it would seem that the modelling of bibliographic entities is an activity of very recent vintage. However, it is possible to find examples of bibliographical models from earlier periods of knowledge organization. The purpose of this paper is to draw attention to one such model, outlined by the early 20th-century British classification theorist E. Wyndham Hulme in his essay on "Principles of Book Classification" (1911-1912). There, Hulme set forth a classification of the various attributes by which books can conceivably be classified. These he first divided into accidental and inseparable attributes. Accidental attributes were subdivided into edition-level and copy-level attributes, and inseparable attributes into physical and non-physical attributes. Comparison of Hulme's classification of attributes with those of FRBR and BIBFRAME 2.0 reveals that the different classes of attributes in Hulme's classification correspond to groups of attributes associated with different bibliographical entities in those models. These later models assume the existence of different bibliographic entities in an abstraction hierarchy among which attributes are distributed, whereas Hulme posited only a single entity, the book, whose various aspects he clustered into different classes of attributes. Thus, Hulme's model offers an interesting alternative to current assumptions about how to conceptualize the relationship between attributes and entities in the bibliographical universe.
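A minimal sketch of Hulme's two-level division as a data structure, to make the contrast with FRBR/BIBFRAME concrete. The four classes come from the abstract; the sample attributes assigned to them are illustrative assumptions, not Hulme's own lists:

```python
from enum import Enum

class AttributeClass(Enum):
    ACCIDENTAL_EDITION_LEVEL = "accidental / edition-level"
    ACCIDENTAL_COPY_LEVEL = "accidental / copy-level"
    INSEPARABLE_PHYSICAL = "inseparable / physical"
    INSEPARABLE_NON_PHYSICAL = "inseparable / non-physical"

# Every attribute class hangs off the single entity "book", unlike FRBR
# or BIBFRAME, which distribute attributes over several entities
# (work, expression, manifestation, item).
book_attributes = {
    "publisher": AttributeClass.ACCIDENTAL_EDITION_LEVEL,     # assumption
    "ownership marks": AttributeClass.ACCIDENTAL_COPY_LEVEL,  # assumption
    "number of pages": AttributeClass.INSEPARABLE_PHYSICAL,   # assumption
    "subject": AttributeClass.INSEPARABLE_NON_PHYSICAL,       # assumption
}

for attribute, cls in book_attributes.items():
    print(f"{attribute}: {cls.value}")
```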
  19. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.01
    0.013879924 = product of:
      0.06939962 = sum of:
        0.06939962 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
          0.06939962 = score(doc=5865,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.38690117 = fieldWeight in 5865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
      0.2 = coord(1/5)
    
    Date
    22. 2.2017 12:51:57
  20. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.013879924 = product of:
      0.06939962 = sum of:
        0.06939962 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
          0.06939962 = score(doc=5576,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.38690117 = fieldWeight in 5576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5576)
      0.2 = coord(1/5)
    
    Date
    13.12.2017 14:17:22

Languages

  • d 46
  • e 25
  • a 1

Types

  • a 49
  • r 2
  • m 1
  • s 1
  • x 1