Search (198 results, page 1 of 10)

  • Active filter: type_ss:"el"
  1. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.33
    0.32786316 = product of:
      0.4917947 = sum of:
        0.4406232 = weight(_text_:me in 5837) [ClassicSimilarity], result of:
          0.4406232 = score(doc=5837,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            1.2844719 = fieldWeight in 5837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.125 = fieldNorm(doc=5837)
        0.051171508 = product of:
          0.102343015 = sum of:
            0.102343015 = weight(_text_:22 in 5837) [ClassicSimilarity], result of:
              0.102343015 = score(doc=5837,freq=2.0), product of:
                0.16532487 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047210995 = queryNorm
                0.61904186 = fieldWeight in 5837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5837)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    30.11.1996 13:22:37
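The nested breakdown above is Lucene/Solr ClassicSimilarity (TF-IDF) "explain" output. As a minimal sketch, the top-ranked score can be re-derived from the printed factors alone, assuming the stock ClassicSimilarity formula (tf = sqrt(freq); fieldWeight = tf · idf · fieldNorm; weight = queryWeight · fieldWeight with queryWeight = idf · queryNorm; all scaled by the printed coord fractions):

```python
import math

# Values printed in the explain tree for result 1 (doc 5837)
QUERY_NORM = 0.047210995

def term_weight(freq, idf, field_norm):
    # ClassicSimilarity: tf(freq) = sqrt(freq); fieldWeight = tf * idf * fieldNorm;
    # weight = queryWeight * fieldWeight, where queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm
    query_weight = idf * QUERY_NORM
    return query_weight * field_weight

w_me = term_weight(freq=2.0, idf=7.2660704, field_norm=0.125)
w_22 = term_weight(freq=2.0, idf=3.5018296, field_norm=0.125) * 0.5  # coord(1/2)

# coord(2/3): two of the three query clauses matched this document
score = (w_me + w_22) * (2.0 / 3.0)
print(score)  # ≈ 0.32786316, the value reported for result 1
```

The same arithmetic reproduces every tree on this page; only freq, idf, fieldNorm, and the coord fractions change from entry to entry.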
  2. Englisch I Plus : eine Windows CD-ROM (1995) 0.22
    0.2217955 = product of:
      0.33269325 = sum of:
        0.2203116 = weight(_text_:me in 5479) [ClassicSimilarity], result of:
          0.2203116 = score(doc=5479,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.64223593 = fieldWeight in 5479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0625 = fieldNorm(doc=5479)
        0.11238166 = product of:
          0.22476332 = sum of:
            0.22476332 = weight(_text_:plus in 5479) [ClassicSimilarity], result of:
              0.22476332 = score(doc=5479,freq=4.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.7714293 = fieldWeight in 5479, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5479)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Englisch I Plus is aimed at anyone who has at some point come into contact with the English language. The program is therefore just as suitable for pupils as a companion to their lessons (from the first year of English onwards) as it is for adults who want to refresh their knowledge.
    Series
    Teach me PC!
  3. Woldering, B.: Connecting with users : Europe and multilinguality (2006) 0.10
    0.10385589 = product of:
      0.31156766 = sum of:
        0.31156766 = weight(_text_:me in 5032) [ClassicSimilarity], result of:
          0.31156766 = score(doc=5032,freq=4.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.9082588 = fieldWeight in 5032, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0625 = fieldNorm(doc=5032)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper introduces to the new Internet service The European Library, provided by the Conference of European National Librarians (CENL), and gives an overview of activities in Europe for multilingual library services, developed and tested in various projects: TEL-ME-MOR, MACS (Multilingual Access to Subjects), MSAC (Multilingual Subject Access to Catalogues of National Libraries), Crisscross, and VIAF (Virtual International Authority File).
    Object
    TEL-ME-MOR
  4. McGrath, K.: Thoughts on FRBR and moving images (2014) 0.09
    0.0917965 = product of:
      0.2753895 = sum of:
        0.2753895 = weight(_text_:me in 2431) [ClassicSimilarity], result of:
          0.2753895 = score(doc=2431,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.80279493 = fieldWeight in 2431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.078125 = fieldNorm(doc=2431)
      0.33333334 = coord(1/3)
    
    Abstract
    I'd like to talk about some things that have come up for me as I've thought about how FRBR might apply to moving images.
  5. Bates, M.J.: ¬The nature of browsing (2019) 0.06
    0.064257555 = product of:
      0.19277266 = sum of:
        0.19277266 = weight(_text_:me in 2265) [ClassicSimilarity], result of:
          0.19277266 = score(doc=2265,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.56195647 = fieldWeight in 2265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2265)
      0.33333334 = coord(1/3)
    
    Abstract
    The recent article by McKay et al. on browsing (2019) provides a valuable addition to the empirical literature of information science on this topic, and I read the descriptions of the various browsing cases with interest. However, the authors refer to my article on browsing (Bates, 2007) in ways that do not make sense to me and which do not at all conform to what I actually said.
  6. Landry, P.; Zumer, M.; Clavel-Merrin, G.: Report on cross-language subject access options (2006) 0.06
    0.055077896 = product of:
      0.16523369 = sum of:
        0.16523369 = weight(_text_:me in 2433) [ClassicSimilarity], result of:
          0.16523369 = score(doc=2433,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.48167694 = fieldWeight in 2433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
      0.33333334 = coord(1/3)
    
    Object
    TEL-ME-MOR
  7. Books in print plus with Book reviews plus : BIP + REV (1993) 0.05
    0.046825692 = product of:
      0.14047708 = sum of:
        0.14047708 = product of:
          0.28095415 = sum of:
            0.28095415 = weight(_text_:plus in 1302) [ClassicSimilarity], result of:
              0.28095415 = score(doc=1302,freq=4.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.9642866 = fieldWeight in 1302, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1302)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  8. EndNote Plus 2.3 : Enhanced reference database and bibliography maker. With EndLink 2.1, link to on-line and CD-ROM databases (1997) 0.05
    0.046825692 = product of:
      0.14047708 = sum of:
        0.14047708 = product of:
          0.28095415 = sum of:
            0.28095415 = weight(_text_:plus in 1717) [ClassicSimilarity], result of:
              0.28095415 = score(doc=1717,freq=4.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.9642866 = fieldWeight in 1717, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Object
    EndNote Plus
  9. LISA Plus for Windows (1997) 0.05
    0.04635507 = product of:
      0.1390652 = sum of:
        0.1390652 = product of:
          0.2781304 = sum of:
            0.2781304 = weight(_text_:plus in 1312) [ClassicSimilarity], result of:
              0.2781304 = score(doc=1312,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.954595 = fieldWeight in 1312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Publishers' international ISBN directory plus : CD-ROM ed (1999) 0.05
    0.04635507 = product of:
      0.1390652 = sum of:
        0.1390652 = product of:
          0.2781304 = sum of:
            0.2781304 = weight(_text_:plus in 4246) [ClassicSimilarity], result of:
              0.2781304 = score(doc=4246,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.954595 = fieldWeight in 4246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4246)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  11. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 1234) [ClassicSimilarity], result of:
          0.13769475 = score(doc=1234,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 1234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1234)
      0.33333334 = coord(1/3)
    
    Abstract
    I have become "brutally honest" of late, at least according to one listener who heard my remarks during a recent whistle stop speaking tour of publishing conventions. This comment caught me a little off guard. Not that I haven't always been frank, but I do try never to be brutal. The truth, I guess, can be painful, even if the intention of the teller is simply objectivity. This paper is based on a "brutally honest" talk I have been giving to publishers, first, in February, to the Association of American Publishers' Professional and Scholarly Publishing Division, at which point I was calling the piece, "The Illusion of Free Information." It was this initial rendition that led to the invitation to publish something here. Since then I've been working on the talk. I gave a second version of it in March to the assembly of the American Society of Information Dissemination Centers, where I called it, "When Sectors Clash: Public Access vs. Private Interest." And, most recently, I gave yet a third version of it to the governing board of the American Institute of Physics. This time I called it: "The Future of Society Publishing." The notion of free information, our government's proper role in distributing free information, and the future of scholarly publishing in a world of free information . . . these are the issues that are floating around in my head. My goal here is to tell you where my thinking is only at this moment, for I reserve the right to continue thinking and developing new permutations on this mentally challenging theme.
  12. Spero, S.: LCSH is to thesaurus as doorbell is to mammal (2008) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 2628) [ClassicSimilarity], result of:
          0.13769475 = score(doc=2628,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 2628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2628)
      0.33333334 = coord(1/3)
    
    Content
     "When you look at the Library of Congress Subject Headings as individual entries, it's almost impossible to understand just how confused much of the hierarchical reference structure has become. I've written some code to generate graphical representations for the Broader Terms of entries in the LCSH. The starting term appears at the bottom of the graph; according to the rules, this term is a specialization of every other term on the graph. Top-level terms are highlighted using double circles. Layout and rendering are courtesy of the wonderful graphviz (AT&T Research). I have generated dot files for all entries in the LCSH; I need to set up a dynamic renderer so they can be viewed online, but a p7zip archive of the raw dot files is available here. (5M compressed, 672M uncompressed) Let's see what the LCSH has to tell us about Doorbells. Doorbells are a Social science. Doorbells are Souls. Doorbells are even Ontologies - which would explain why Protege keeps beeping at me. But most of all, Doorbells are mammals. Obviously this conclusion is absurd. Everyone knows that doorbells aren't hairy. But where are the errors that lead us to this mistaken conclusion, and how can we start to correct them? That's the subject of tomorrow's post."
  13. Karpathy, A.: ¬The unreasonable effectiveness of recurrent neural networks (2015) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 1865) [ClassicSimilarity], result of:
          0.13769475 = score(doc=1865,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 1865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1865)
      0.33333334 = coord(1/3)
    
    Abstract
    There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I've in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you. By the way, together with this post I am also releasing code on Github (https://github.com/karpathy/char-rnn) that allows you to train character-level language models based on multi-layer LSTMs. You give it a large chunk of text and it will learn to generate text like it one character at a time. You can also use it to reproduce my experiments below. But we're getting ahead of ourselves; What are RNNs anyway?
  14. Hawking, S.: This is the most dangerous time for our planet (2016) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 3273) [ClassicSimilarity], result of:
          0.13769475 = score(doc=3273,freq=8.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 3273, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
      0.33333334 = coord(1/3)
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
    This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms which it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive. We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent. It is also the case that another unintended consequence of the global spread of the internet and social media is that the stark nature of these inequalities are far more apparent than they have been in the past. For me, the ability to use technology to communicate has been a liberating and positive experience. Without it, I would not have been able to continue working these many years past. But it also means that the lives of the richest people in the most prosperous parts of the world are agonisingly visible to anyone, however poor and who has access to a phone. And since there are now more people with a telephone than access to clean water in Sub-Saharan Africa, this will shortly mean nearly everyone on our increasingly crowded planet will not be able to escape the inequality.
    The consequences of this are plain to see; the rural poor flock to cities, to shanty towns, driven by hope. And then often, finding that the Instagram nirvana is not available there, they seek it overseas, joining the ever greater numbers of economic migrants in search of a better life. These migrants in turn place new demands on the infrastructures and economies of the countries in which they arrive, undermining tolerance and further fuelling political populism. For me, the really concerning aspect of this, is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges. Climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans. Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it. Perhaps in a few hundred years, we will have established human colonies amidst the stars, but right now we only have one planet, and we need to work together to protect it. To do that, we need to break down not build up barriers within and between nations. If we are to stand a chance of doing that, the world's leaders need to acknowledge that they have failed and are failing the many. With resources increasingly concentrated in the hands of a few, we are going to have to learn to share far more than at present. With not only jobs but entire industries disappearing, we must help people to re-train for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home. 
     We can do this, I am an enormous optimist for my species, but it will require the elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past month. To learn above all a measure of humility."
  15. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.04
    0.041657545 = product of:
      0.124972634 = sum of:
        0.124972634 = product of:
          0.3749179 = sum of:
            0.3749179 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3749179 = score(doc=1826,freq=2.0), product of:
                0.40025535 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047210995 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  16. International books in print plus : [Computerdatei] (1996) 0.04
    0.039732914 = product of:
      0.11919874 = sum of:
        0.11919874 = product of:
          0.23839748 = sum of:
            0.23839748 = weight(_text_:plus in 6469) [ClassicSimilarity], result of:
              0.23839748 = score(doc=6469,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.8182243 = fieldWeight in 6469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6469)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Publishers' international ISBN directory plus : CD-ROM ed (1996) 0.04
    0.039732914 = product of:
      0.11919874 = sum of:
        0.11919874 = product of:
          0.23839748 = sum of:
            0.23839748 = weight(_text_:plus in 6484) [ClassicSimilarity], result of:
              0.23839748 = score(doc=6484,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.8182243 = fieldWeight in 6484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6484)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  18. Yearbook plus of international organizations and biographies (1996) 0.04
    0.039732914 = product of:
      0.11919874 = sum of:
        0.11919874 = product of:
          0.23839748 = sum of:
            0.23839748 = weight(_text_:plus in 7221) [ClassicSimilarity], result of:
              0.23839748 = score(doc=7221,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.8182243 = fieldWeight in 7221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7221)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  19. Publishers' international ISBN directory plus : CD-ROM ed (1997) 0.04
    0.039732914 = product of:
      0.11919874 = sum of:
        0.11919874 = product of:
          0.23839748 = sum of:
            0.23839748 = weight(_text_:plus in 2397) [ClassicSimilarity], result of:
              0.23839748 = score(doc=2397,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.8182243 = fieldWeight in 2397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  20. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.04
    0.035017196 = product of:
      0.105051585 = sum of:
        0.105051585 = sum of:
          0.07946583 = weight(_text_:plus in 3284) [ClassicSimilarity], result of:
            0.07946583 = score(doc=3284,freq=2.0), product of:
              0.29135957 = queryWeight, product of:
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.047210995 = queryNorm
              0.27274144 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.025585754 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.025585754 = score(doc=3284,freq=2.0), product of:
              0.16532487 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047210995 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.33333334 = coord(1/3)
    
    Abstract
     At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than it can be indexed intellectually. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental rather than production systems. The VZG project Colibri/DDC has, among other things, also been concerned with automatic DDC classification since 2006. The related studies and developments serve to answer the research question: "Is it possible to achieve a substantively coherent automatic DDC title classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
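As a consistency check on the idf values that recur throughout these explain trees: they all match Lucene ClassicSimilarity's documented default, idf = 1 + ln(maxDocs / (docFreq + 1)). A small sketch verifying this against the printed figures (the formula is Lucene's stock definition, not something stated in this output):

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf(t) = 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# (term, docFreq, printed idf) from the explain trees above; maxDocs=44218 throughout
for term, doc_freq, printed in [
    ("me",   83,   7.2660704),
    ("plus", 250,  6.1714344),
    ("22",   3622, 3.5018296),
    ("3a",   24,   8.478011),
]:
    computed = classic_idf(doc_freq, 44218)
    # each computed value should agree with the printed idf to within float32 precision
    print(f"{term}: computed={computed:.7f} printed={printed}")
```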

Languages

  • e 97
  • d 91
  • m 4
  • el 2
  • a 1
  • nl 1

Types

  • a 80
  • i 16
  • b 6
  • m 6
  • r 3
  • s 2
  • n 1
  • x 1