Search (177 results, page 1 of 9)

  • type_ss:"el"
  1. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.10
    0.09587076 = product of:
      0.19174153 = sum of:
        0.19174153 = sum of:
          0.13673347 = weight(_text_:assessment in 7411) [ClassicSimilarity], result of:
            0.13673347 = score(doc=7411,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4879938 = fieldWeight in 7411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0625 = fieldNorm(doc=7411)
          0.05500805 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
            0.05500805 = score(doc=7411,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.30952093 = fieldWeight in 7411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7411)
      0.5 = coord(1/2)
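The explain trees in these records follow Lucene's ClassicSimilarity (TF-IDF) scoring: each term's score is queryWeight × fieldWeight, with idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(termFreq), and a coord factor for the fraction of query terms matched. A minimal sketch reproducing record 1's score from the factor values shown above (function names are illustrative, not Lucene API):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                        # tf = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm        # idf * queryNorm
    field_weight = tf * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Factor values copied from the explain tree for doc 7411 (record 1)
w_assessment = term_weight(2.0, 480, 44218, 0.050750602, 0.0625)
w_22 = term_weight(2.0, 3622, 44218, 0.050750602, 0.0625)
score = 0.5 * (w_assessment + w_22)             # coord(1/2) * sum of weights
print(score)                                    # ≈ 0.09587076
```

The same per-term arithmetic reproduces every explain tree on this page; only freq, docFreq, fieldNorm, and the coord fraction vary between records.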
    
    Abstract
     - Formal characterization given to the thesaurus mapping problem
     - Interoperability workflow
       - Thesauri SKOS Core transformation
       - Thesaurus mapping algorithms implementation
     - The "gold standard" data set and the THALEN application
     - Thesaurus interoperability assessment measures
     - Experimental results
    Date
    7.11.2008 10:40:22
  2. Jörs, B.: ¬Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021, Regensburg) (2021) 0.08
    0.07761824 = product of:
      0.15523648 = sum of:
        0.15523648 = sum of:
          0.12085646 = weight(_text_:assessment in 330) [ClassicSimilarity], result of:
            0.12085646 = score(doc=330,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.43132967 = fieldWeight in 330, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
          0.03438003 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
            0.03438003 = score(doc=330,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
      0.5 = coord(1/2)
    
    Abstract
     Only information ethics, information literacy, and information assessment left? Yet it is precisely this sealing-off from other disciplines that reinforces the isolation of the "small field" of information science within the scientific community. What remain to it as the last "independent" peripheral research areas are only those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic lecture on the state of information science at ISI 2021: "If university-based information science (at least in Europe) hardly stands a chance of pushing back to the forefront of development in the area of systems and applications, there nevertheless remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06717118 = product of:
      0.13434236 = sum of:
        0.13434236 = product of:
          0.40302706 = sum of:
            0.40302706 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40302706 = score(doc=1826,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
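Several Source and Footnote fields in these records store Google redirect URLs rather than direct links; the actual target is percent-encoded in the `url` query parameter. A minimal Python sketch to recover the target (the `resolve_redirect` helper is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def resolve_redirect(url: str) -> str:
    """Return the percent-decoded `url` parameter of a redirect link,
    or the input unchanged if there is no such parameter."""
    params = parse_qs(urlparse(url).query)  # parse_qs also percent-decodes
    return params["url"][0] if "url" in params else url

# Abbreviated form of the redirect URL in the Source field above
redirect = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5"
            "&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte"
            "%2Fdocuments%2F3131107&usg=AFQjCNE")
print(resolve_redirect(redirect))
# → http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
```

The tracking parameters (`usg`, `ved`, `sig2`, ...) carry no bibliographic information and can be discarded once the target is extracted.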
  4. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    0.05373694 = product of:
      0.10747388 = sum of:
        0.10747388 = product of:
          0.32242164 = sum of:
            0.32242164 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32242164 = score(doc=230,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  5. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 2906) [ClassicSimilarity], result of:
              0.14502776 = score(doc=2906,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 2906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The use of research impact metrics and analytics has become an integral component of many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
  6. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.03358559 = product of:
      0.06717118 = sum of:
        0.06717118 = product of:
          0.20151353 = sum of:
            0.20151353 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20151353 = score(doc=4388,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
     Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  7. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03358559 = product of:
      0.06717118 = sum of:
        0.06717118 = product of:
          0.20151353 = sum of:
            0.20151353 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20151353 = score(doc=5669,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  8. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.03
    0.03139943 = product of:
      0.06279886 = sum of:
        0.06279886 = product of:
          0.12559772 = sum of:
            0.12559772 = weight(_text_:assessment in 2417) [ClassicSimilarity], result of:
              0.12559772 = score(doc=2417,freq=12.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.44825095 = fieldWeight in 2417, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.
     - Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
     - While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
     - The sole reliance on citation data provides at best an incomplete and often shallow understanding of research - an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
     Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused.
     - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health.
     - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.
     - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
     The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research; citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
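The h-index criticized in this abstract is easy to state precisely: it is the largest h such that h of a scientist's papers have at least h citations each. A minimal sketch with made-up citation counts (illustrative data, not from the report) shows how two very different records collapse to the same single number:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank      # the rank-th best paper still has >= rank citations
        else:
            break
    return h

# Two very different citation records, same h-index: the single number
# loses exactly the distributional information the report warns about.
assert h_index([100, 50, 40, 3, 2]) == 3
assert h_index([4, 4, 3, 3, 3]) == 3
```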
    Imprint
    Joint IMU/ICIAM/IMS-Committee on Quantitative Assessment of Research : o.O.
  9. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on Million Short (2016) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 3144) [ClassicSimilarity], result of:
              0.12085646 = score(doc=3144,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 3144, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users miss by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and weak significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but it needs more evaluation to clearly describe the differences.
  10. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 849) [ClassicSimilarity], result of:
              0.12085646 = score(doc=849,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 849, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     ChatGPT, a general-purpose conversational chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact can be enormous as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusted learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish the learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessment are needed to focus on the creativity and critical thinking that AI cannot substitute.
  11. McQueen, T.F.; Fleck, R.A. Jr.: Changing patterns of Internet usage and challenges at colleges and universities (2005) 0.03
    0.029910447 = product of:
      0.059820894 = sum of:
        0.059820894 = product of:
          0.11964179 = sum of:
            0.11964179 = weight(_text_:assessment in 769) [ClassicSimilarity], result of:
              0.11964179 = score(doc=769,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4269946 = fieldWeight in 769, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=769)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Increased enrollments, changing student expectations, and shifting patterns of Internet access and usage continue to generate resource and administrative challenges for colleges and universities. Computer center staff and college administrators must balance increased access demands, changing system loads, and system security within constrained resources. To assess the changing academic computing environment, computer center directors from several geographic regions were asked to respond to an online questionnaire that assessed patterns of usage, resource allocation, policy formulation, and threats. Survey results were compared with data from a study conducted by the authors in 1999. The analysis includes changing patterns in Internet usage, access, and supervision. The paper also presents details of usage by institutional type and application as well as recommendations for more precise resource assessment by college administrators.
  12. Goodchild, M.F.: ¬The Alexandria Digital Library Project : review, assessment, and prospects (2004) 0.03
    0.029910447 = product of:
      0.059820894 = sum of:
        0.059820894 = product of:
          0.11964179 = sum of:
            0.11964179 = weight(_text_:assessment in 1153) [ClassicSimilarity], result of:
              0.11964179 = score(doc=1153,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4269946 = fieldWeight in 1153, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1153)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: ¬A faceted conceptualization of digital object reuse in digital repositories (2020) 0.03
    0.029910447 = product of:
      0.059820894 = sum of:
        0.059820894 = product of:
          0.11964179 = sum of:
            0.11964179 = weight(_text_:assessment in 48) [ClassicSimilarity], result of:
              0.11964179 = score(doc=48,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4269946 = fieldWeight in 48, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=48)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We will then propose a faceted categorization of the various types, contexts, and cases for digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations.
  14. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    0.027504025 = product of:
      0.05500805 = sum of:
        0.05500805 = product of:
          0.1100161 = sum of:
            0.1100161 = weight(_text_:22 in 5449) [ClassicSimilarity], result of:
              0.1100161 = score(doc=5449,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.61904186 = fieldWeight in 5449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.1997 19:26:34
  15. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.03
    0.027504025 = product of:
      0.05500805 = sum of:
        0.05500805 = product of:
          0.1100161 = sum of:
            0.1100161 = weight(_text_:22 in 5837) [ClassicSimilarity], result of:
              0.1100161 = score(doc=5837,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.61904186 = fieldWeight in 5837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5837)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30.11.1996 13:22:37
  16. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.03
    0.027504025 = product of:
      0.05500805 = sum of:
        0.05500805 = product of:
          0.1100161 = sum of:
            0.1100161 = weight(_text_:22 in 4085) [ClassicSimilarity], result of:
              0.1100161 = score(doc=4085,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.61904186 = fieldWeight in 4085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4085)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7.11.1999 18:22:39
  17. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.03
    0.02690663 = product of:
      0.05381326 = sum of:
        0.05381326 = product of:
          0.10762652 = sum of:
            0.10762652 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.10762652 = score(doc=1936,freq=10.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  18. Functional Requirements for Subject Authority Data (FRSAD) : a conceptual model (2010) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 268) [ClassicSimilarity], result of:
              0.1025501 = score(doc=268,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=268)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The primary purpose of this study is to produce a framework that will provide a clearly and commonly shared understanding of what the subject authority data/record/file aims to provide information about, and the expectation of what such data should achieve in terms of answering user needs. The role of the FRSAR Working Group was defined in the following terms of reference:
     - To build a conceptual model of Group 3 entities within the FRBR framework as they relate to the aboutness of works;
     - To provide a clearly defined, structured frame of reference for relating the data that are recorded in subject authority records to the needs of the users of that data;
     - To assist in an assessment of the potential for international sharing and use of subject authority data both within the library sector and beyond.
  19. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 870) [ClassicSimilarity], result of:
              0.1025501 = score(doc=870,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 870, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
  20. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024310352 = product of:
      0.048620705 = sum of:
        0.048620705 = product of:
          0.09724141 = sum of:
            0.09724141 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.09724141 = score(doc=3925,freq=4.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28

Languages

  • d 86
  • e 85
  • el 2
  • a 1
  • nl 1

Types

  • a 80
  • i 10
  • m 5
  • r 4
  • b 2
  • s 2
  • n 1
  • p 1
  • x 1