Search (223 results, page 3 of 12)

  • Filter: type_ss:"el"
  1. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    0.010928897 = product of:
      0.04371559 = sum of:
        0.04371559 = product of:
          0.08743118 = sum of:
            0.08743118 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.08743118 = score(doc=4643,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2007 15:41:14
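    The relevance breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a reading aid, the short sketch below plugs the constants from that tree back into the classic formulas; it illustrates how the listed factors combine and is not part of the retrieval system itself.

    import math

    # Constants taken from the explain tree for doc 4643
    freq = 2.0                                   # termFreq of the query term in the field
    idf = 1 + math.log(44218 / (3622 + 1))       # idf(docFreq=3622, maxDocs=44218) = 3.5018296
    query_norm = 0.046093877                     # queryNorm
    field_norm = 0.109375                        # fieldNorm(doc=4643)

    tf = math.sqrt(freq)                         # 1.4142135
    query_weight = idf * query_norm              # 0.16141291
    field_weight = tf * idf * field_norm         # 0.5416616
    raw_score = query_weight * field_weight      # 0.08743118

    # coord factors: 1 of 2 inner clauses and 1 of 4 outer clauses matched
    score = raw_score * 0.5 * 0.25               # 0.010928897
    print(round(score, 9))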
  2. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: The philosophy of information : an introduction (2013) 0.01
    0.010070836 = product of:
      0.040283345 = sum of:
        0.040283345 = weight(_text_:communication in 3380) [ClassicSimilarity], result of:
          0.040283345 = score(doc=3380,freq=4.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.2024006 = fieldWeight in 3380, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3380)
      0.25 = coord(1/4)
    
    Abstract
    In April 2010, Bill Gates gave a talk at MIT in which he asked: 'are the brightest minds working on the most important problems?' Gates meant improving the lives of the poorest; improving education, health, and nutrition. We could easily add improving peaceful interactions, human rights, environmental conditions, living standards and so on. Philosophy of Information (PI) proponents think that Gates has a point - but this doesn't mean we should all give up philosophy. Philosophy can be part of this project, because philosophy understood as conceptual design forges and refines the new ideas, theories, and perspectives that we need to understand and address these important problems that press us so urgently. Of course, this naturally invites us to wonder which ideas, theories, and perspectives philosophers should be designing now. In our global information society, many crucial challenges are linked to information and communication technologies: the constant search for novel solutions and improvements demands, in turn, changing conceptual resources to understand and cope with them. Rapid technological development now pervades communication, education, work, entertainment, industrial production and business, healthcare, social relations and armed conflicts. There is a rich mine of philosophical work to do on the new concepts created right here, right now.
  3. Karpathy, A.: The unreasonable effectiveness of recurrent neural networks (2015) 0.01
    0.010070496 = product of:
      0.040281985 = sum of:
        0.040281985 = product of:
          0.08056397 = sum of:
            0.08056397 = weight(_text_:networks in 1865) [ClassicSimilarity], result of:
              0.08056397 = score(doc=1865,freq=4.0), product of:
                0.21802035 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046093877 = queryNorm
                0.369525 = fieldWeight in 1865, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1865)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I've in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you. By the way, together with this post I am also releasing code on Github (https://github.com/karpathy/char-rnn) that allows you to train character-level language models based on multi-layer LSTMs. You give it a large chunk of text and it will learn to generate text like it, one character at a time. You can also use it to reproduce my experiments below. But we're getting ahead of ourselves; what are RNNs anyway?
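    The char-rnn code referenced above is written in Torch/Lua; purely as an illustration of what a "character-level language model based on multi-layer LSTMs" is, here is a minimal, hypothetical PyTorch sketch (layer sizes and names are illustrative, not taken from the released code). Training reduces to predicting the next character at every position.

    import torch
    import torch.nn as nn

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=256, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, x, state=None):
            # x: (batch, seq_len) character indices -> logits over the next character
            out, state = self.lstm(self.embed(x), state)
            return self.head(out), state

    model = CharLSTM(vocab_size=128)
    chunk = torch.randint(0, 128, (8, 100))          # a batch of 100-character windows
    logits, _ = model(chunk[:, :-1])                 # predict characters 2..100 from 1..99
    loss = nn.functional.cross_entropy(logits.reshape(-1, 128), chunk[:, 1:].reshape(-1))
    loss.backward()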
  4. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    0.010070496 = product of:
      0.040281985 = sum of:
        0.040281985 = product of:
          0.08056397 = sum of:
            0.08056397 = weight(_text_:networks in 1873) [ClassicSimilarity], result of:
              0.08056397 = score(doc=1873,freq=4.0), product of:
                0.21802035 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046093877 = queryNorm
                0.369525 = fieldWeight in 1873, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1873)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
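    To make the "doubly deep" idea concrete, the hypothetical sketch below pairs a per-frame convolutional encoder with an LSTM over the frame sequence; it illustrates the general recurrent-convolutional pattern described in the abstract, not the authors' LRCN architecture or its hyperparameters.

    import torch
    import torch.nn as nn

    class RecurrentConvNet(nn.Module):
        def __init__(self, vocab_size, feat_dim=256, hidden_dim=512):
            super().__init__()
            self.cnn = nn.Sequential(                                # spatially deep: per-frame encoder
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)   # temporally deep
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, frames):
            # frames: (batch, time, 3, H, W) -> per-time-step word logits
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out)

    model = RecurrentConvNet(vocab_size=10000)
    video = torch.randn(2, 16, 3, 64, 64)            # two clips of 16 frames each
    logits = model(video)                            # shape (2, 16, 10000)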
  5. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.01
    0.009969283 = product of:
      0.03987713 = sum of:
        0.03987713 = product of:
          0.07975426 = sum of:
            0.07975426 = weight(_text_:networks in 5719) [ClassicSimilarity], result of:
              0.07975426 = score(doc=5719,freq=2.0), product of:
                0.21802035 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046093877 = queryNorm
                0.36581108 = fieldWeight in 5719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5719)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which they can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  6. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.01
    0.009969283 = product of:
      0.03987713 = sum of:
        0.03987713 = product of:
          0.07975426 = sum of:
            0.07975426 = weight(_text_:networks in 5798) [ClassicSimilarity], result of:
              0.07975426 = score(doc=5798,freq=2.0), product of:
                0.21802035 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046093877 = queryNorm
                0.36581108 = fieldWeight in 5798, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5798)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  7. Trant, J.; Bearman, D.: Social terminology enhancement through vernacular engagement : exploring collaborative annotation to encourage interaction with museum collections (2005) 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 1185) [ClassicSimilarity], result of:
          0.0379795 = score(doc=1185,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 1185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=1185)
      0.25 = coord(1/4)
    
    Abstract
    From their earliest encounters with the Web, museums have seen an opportunity to move beyond uni-directional communication into an environment that engages their users and reflects a multiplicity of perspectives. Shedding the "Unassailable Voice" (Walsh 1997) in favor of many "Points of View" (Sledge 1995) has challenged traditional museum approaches to the creation and delivery of content. Novel approaches are required in order to develop and sustain user engagement (Durbin 2004). New models of exhibit creation that democratize the curatorial functions of object selection and interpretation offer one way of opening up the museum (Coldicutt and Streten 2005). Another is to use the museum as a forum and focus for community story-telling (Howard, Pratty et al. 2005). Unfortunately, museum collections remain relatively inaccessible even when 'made available' through searchable on-line databases. Museum documentation seldom satisfies the on-line access needs of the broad public, both because it is written using professional terminology and because it may not address what is important to - or remembered by - the museum visitor. For example, an exhibition now on-line at The Metropolitan Museum of Art acknowledges "Coco" Chanel only in the brief, textual introduction (The Metropolitan Museum of Art 2005a). All of the images of her delightful fashion designs are attributed to "Gabrielle Chanel" (The Metropolitan Museum of Art 2005a). Interfaces that organize collections along axes of time or place - such of that of the Timeline of Art History (The Metropolitan Museum of Art 2005e) - often fail to match users' world-views, despite the care that went into their structuring or their significant pedagogical utility. Critically, as professionals working with art museums we realize that when cataloguers and curators describe works of art, they usually do not include the "subject" of the image itself. Simply put, we rarely answer the question "What is it a picture of?" Unfortunately, visitors will often remember a work based on its visual characteristics, only to find that Web-based searches for any of the things they recall do not produce results.
  8. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 1194) [ClassicSimilarity], result of:
          0.0379795 = score(doc=1194,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 1194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=1194)
      0.25 = coord(1/4)
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in shared use of historical and cultural data. The Academia Sinica (Taiwan), The British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by themes such as trade-routes, cities, religion, and sacred sites.
  9. Gladney, H.M.; Bennett, J.L.: What do we mean by authentic? : what's the real McCoy? (2003) 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 1201) [ClassicSimilarity], result of:
          0.0379795 = score(doc=1201,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 1201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=1201)
      0.25 = coord(1/4)
    
    Abstract
    Authenticity is among digital document security properties needing attention. Literature focused on preservation reveals uncertainty - even confusion - about what we might mean by authentic. The current article provides a definition that spans vernacular usage of "authentic", ranging from digital documents through material artifacts to natural objects. We accomplish this by modeling entity transmission through time and space by signal sequences and object representations at way stations, and by carefully distinguishing objective facts from subjective values and opinions. Our model can be used to clarify other words that denote information quality, such as "evidence", "essential", and "useful". Digital documents are becoming important in most kinds of human activity. Whenever we buy something valuable, agree to a contract, design and build a machine, or provide a service, we should understand exactly what we intend and be ready to describe this as accurately as the occasion demands. This makes worthwhile whatever care is needed to devise definitions that are sufficiently precise and distinct from each other to explain what we are doing and to minimize community confusion. When we set out, some months ago, to describe answers to the open technical challenges of digital preservation, we took for granted the existence of a broad, unambiguous definition for authentic. Document authenticity is of fundamental importance not only for scholarly work, but also for practical affairs, including legal matters, regulatory requirements, military and other governmental information, and financial transactions. Trust, and evidence for deciding what can be trusted as authentic are considered in many works about digital preservation. These topics are broad, deep, and subtle, raising many questions. Among these, the current work addresses a single question, "What is a useful meaning of authentic or of authenticity for digital documents - a meaning that is not itself a source of confusion?" Progress in managing digital information would be hampered without a clear answer that is sufficiently objective to guide the evaluation of communication and computing technology. Our approach to constructing an answer to this question is to break each object transmission into pieces whose treatment we can describe explicitly and with attention to potential imperfections.
  10. Veltman, K.H.: Towards a Semantic Web for culture 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 4040) [ClassicSimilarity], result of:
          0.0379795 = score(doc=4040,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=4040)
      0.25 = coord(1/4)
    
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific, technical purposes and for business transactions requiring machine-to-machine communication, but does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role and older does not mean less important or less relevant. Hence, a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically have richer commentaries and are often more valuable than their contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to classification of biological diversity. New ways are proposed of visualizing knowledge using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving, ways of knowing; to enter into other mindsets and world-views and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
  11. Kashyap, M.M.: Application of integrative approach in the teaching of library science techniques and application of information technology (2011) 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 4395) [ClassicSimilarity], result of:
          0.0379795 = score(doc=4395,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 4395, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
      0.25 = coord(1/4)
    
    Abstract
    Today many libraries use computers and allied information technologies to improve their work methods and services. Consequently, libraries need professional staff, or need to train their present staff, to face the challenges posed by the introduction of these technologies. To meet this demand, the departments of Library and Information Science in India introduced new courses of study to expose their students to the use and application of computers and other allied technologies. Some of the courses introduced are: Computer Application in Libraries; Systems Analysis and Design Technique; Design and Development of Computer-based Library Information Systems; Database Organisation and Design; Library Networking; Use and Application of Communication Technology, and so forth. It is felt that these computer- and information-technology-oriented courses need to be restructured, revised, and more harmoniously blended with the traditional mainstream courses of the library and information science discipline. We must alter the strategy of teaching library techniques, such as classification, cataloguing, and library procedures, and the techniques of designing computer-based library information systems and services. The use and application of these techniques become interwoven when we shift from a manually operated library environment to a computer-based one. It therefore becomes necessary to follow an integrative approach when we teach these techniques to students of library and information science, or train library staff in their use, to design, develop, and implement computer-based library information systems and services. In the following sections of this paper, we outline the correspondence between certain concepts and techniques developed by computer specialists and those developed by librarians in their respective domains. We make use of the techniques of both domains in the design and implementation of computer-based library information systems and services. It is therefore essential that lessons of study concerning these supplementary and complementary techniques be integrated.
  12. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.01
    0.009494875 = product of:
      0.0379795 = sum of:
        0.0379795 = weight(_text_:communication in 250) [ClassicSimilarity], result of:
          0.0379795 = score(doc=250,freq=2.0), product of:
            0.19902779 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.046093877 = queryNorm
            0.1908251 = fieldWeight in 250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
      0.25 = coord(1/4)
    
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
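    The category-based view of OA described above can be pictured as a simple decision rule over the combined metadata. The sketch below is hypothetical (the field names and labels are illustrative and do not reproduce the study's actual classification pipeline); it only shows how such a rule might distinguish fully OA journals from subject and institutional repositories.

    def classify_oa(record: dict) -> str:
        """Assign one OA category to an article record assembled from several sources."""
        if not record.get("is_oa"):
            return "closed"
        if record.get("journal_is_fully_oa"):        # e.g. matched against a list of fully OA journals
            return "fully OA journal"
        host = record.get("oa_host_type")
        if host == "subject_repository":
            return "subject repository"
        if host == "institutional_repository":
            return "institutional repository"
        return "other OA"

    sample = {"is_oa": True, "journal_is_fully_oa": False, "oa_host_type": "subject_repository"}
    print(classify_oa(sample))                       # subject repository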
  13. Strobel, S.: The complete Linux kit : fully configured LINUX system kernel (1997) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 8959) [ClassicSimilarity], result of:
              0.07494101 = score(doc=8959,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16. 7.2002 20:22:55
  14. Birmingham, J.: Internet search engines (1996) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.07494101 = score(doc=5664,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10.11.1996 16:36:22
  15. Zumer, M.; Clavel, G.: EDLproject : one more step towards the European digital library (2007) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 3184) [ClassicSimilarity], result of:
              0.07494101 = score(doc=3184,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 3184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3184)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  16. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07494101 = score(doc=4888,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 3.2013 14:56:22
  17. Bourdon, F.: Funktionale Anforderungen an bibliographische Datensätze und ein internationales Nummernsystem für Normdaten : wie weit kann Normierung durch Technik unterstützt werden? [Functional requirements for bibliographic records and an international numbering system for authority data : to what extent can standardization be supported by technology?] (2001) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 6888) [ClassicSimilarity], result of:
              0.07494101 = score(doc=6888,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 6888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6888)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 12:30:22
  18. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.07494101 = score(doc=3895,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    24. 8.2005 19:20:22
  19. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07494101 = score(doc=6048,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2007 15:41:14
  20. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.009367626 = product of:
      0.037470505 = sum of:
        0.037470505 = product of:
          0.07494101 = sum of:
            0.07494101 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.07494101 = score(doc=100,freq=2.0), product of:
                0.16141291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046093877 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2007 15:41:14

Languages

  • e 130
  • d 86
  • el 2
  • a 1
  • nl 1

Types

  • a 108
  • i 10
  • m 6
  • s 3
  • b 2
  • n 2
  • r 2
  • x 2
  • p 1