Search (86 results, page 1 of 5)

  • language_ss:"e"
  • type_ss:"el"
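Note on scores: the relevance figure after each entry is a Lucene ClassicSimilarity (tf-idf) score. As a worked reading of how such a score composes (using the factors the engine reported for the first entry's 0.07, and assuming the standard ClassicSimilarity combination of tf, idf, norm and coord factors):

    queryWeight = idf × queryNorm = 8.478011 × 0.050563898 ≈ 0.42868128
    fieldWeight = √tf × idf × fieldNorm = √2 × 8.478011 × 0.078125 ≈ 0.93669677
    score       = queryWeight × fieldWeight × coord(1/3) × coord(1/2)
                ≈ 0.40154436 × 0.33333334 × 0.5 ≈ 0.066924 (displayed as 0.07)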
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.06
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant.
    The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]?
    Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include:
    • Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries?
    • Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant?
    • Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright?
    • Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap?
    • Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type?
    These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
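    The coverage and overlap questions above reduce to set arithmetic over holdings data. As a loose illustration only (the holdings below are invented stand-ins, not WorldCat-derived data or OCLC's method), a minimal sketch in Python:

      # Loose illustration: system-wide coverage and pairwise holdings overlap
      # across five hypothetical libraries. The sets are invented stand-ins,
      # not WorldCat-derived data.
      from itertools import combinations

      holdings = {
          "Harvard": {"b1", "b2", "b3"},
          "Michigan": {"b2", "b3", "b4"},
          "Stanford": {"b3", "b5"},
          "Oxford": {"b1", "b5", "b6"},
          "NYPL": {"b2", "b6"},
      }

      system_wide = set().union(*holdings.values())
      print("distinct titles system-wide:", len(system_wide))
      for a, b in combinations(holdings, 2):
          print(f"{a}/{b}: {len(holdings[a] & holdings[b])} shared titles")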
    Date
    26.12.2011 14:08:22
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Sowards, S.W.: ¬A typology for ready reference Web sites in libraries (1996) 0.04
    Abstract
    Many libraries manage Web sites intended to provide their users with online resources suitable for answering reference questions. Most of these sites can be analyzed in terms of their depth and their organizing and searching features. Composing a typology based on these factors sheds light on the critical design decisions that influence whether users of these sites succeed or fail to find information easily, rapidly and accurately. The same analysis highlights some larger design issues, both for Web sites and for information management at large.
  5. Bianchini, C.; Guerrini, M.: ¬The international diffusion of RDA : a wide overview on the new guidelines (2016) 0.04
    Abstract
    This issue of Jlis.it is focused on RDA, Resource Description and Access. In light of increasing international acceptance of this new cataloging content standard, the editors of Jlis.it wish to capture the background of how RDA came to be and the implications of its implementation at this time. This special issue offers a wide overview of the new guidelines, from their making to their spread around the world.
  6. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.04
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should be identified first in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
    In this thesis, we focused on the automatic detection of multiword expressions in natural language texts. On the basis of the main contributions, we can argue that:
    - Supervised machine learning methods can be successfully applied for the automatic detection of different types of multiword expressions in natural language texts.
    - Machine learning-based multiword expression detection can be successfully carried out for English as well as for Hungarian.
    - Our supervised machine learning-based model was successfully applied to the automatic detection of nominal compounds from English raw texts.
    - We developed a Wikipedia-based dictionary labeling method to automatically detect English nominal compounds (see the sketch after this list).
    - A prior knowledge of nominal compounds can enhance Named Entity Recognition, while previously identified named entities can assist the nominal compound identification process.
    - The machine learning-based method can also provide acceptable results when trained on an automatically generated silver standard corpus.
    - As named entities form one semantic unit, may consist of more than one word, and function as a noun, we can treat them in a similar way to nominal compounds.
    - Our sequence labelling-based tool can be successfully applied for identifying verbal light verb constructions in two typologically different languages, namely English and Hungarian.
    - Domain adaptation techniques may help diminish the distance between domains in the automatic detection of light verb constructions.
    - Our syntax-based method can be successfully applied for the full-coverage identification of light verb constructions. As a first step, a data-driven candidate extraction method can be utilized; afterwards, a machine learning approach that makes use of an extended and rich feature set selects LVCs among the extracted candidates.
    - When a precise syntactic parser is available for the actual domain, full-coverage identification performs better. In other cases, use of the sequence labeling method is recommended.
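    As a loose illustration of the Wikipedia-based dictionary-labeling idea flagged in the list above (a sketch over invented data, not the thesis's actual resources or implementation), in Python:

      # Minimal sketch of dictionary labeling for multiword expressions:
      # mark token spans that exactly match entries in a title dictionary.
      # wiki_titles is an invented stand-in for a Wikipedia-derived list.
      wiki_titles = {"rock 'n' roll", "machine translation", "light verb"}

      def label_mwes(tokens, max_len=4):
          """Return (start, end) spans whose tokens join to a known multiword title."""
          spans = []
          for i in range(len(tokens)):
              for j in range(i + 2, min(i + max_len, len(tokens)) + 1):
                  if " ".join(tokens[i:j]).lower() in wiki_titles:
                      spans.append((i, j))
          return spans

      print(label_mwes("they study machine translation systems".split()))
      # -> [(2, 4)], i.e. "machine translation"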
  7. Sirapyan, N.: In Search of... (2001) 0.03
    Abstract
    In a series of capsule reviews of 20 search engines, Sirapyan gives a good overview of the state of Internet search tools. She starts out with a clear discussion of the types of search tools available, the availability of advanced features such as Boolean queries, and the differences between directories, regular search engines and metasearch engines. It is unclear from the article whether the author and other testers used the same searches across all of the 20 tools, but each review clearly outlines perceived strengths and weaknesses, gives tips on the advanced features, if any, of the search tool in question and suggests the types of searches that are most successful. The tools which receive top honors are Google, Northern Light, HotBot and Oingo. Finally, there is an extra sidebar that discusses meta and specialized search tools such as Infozoid and FirstGov. I can't help thinking that the usefulness of this article is related to the fact that Sirapyan is PC Magazine's librarian and goes into greater depth on those features that are of interest to information professionals.
  8. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.03
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  9. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.03
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regard to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
  10. Genetasio, G.: ¬The International Cataloguing Principles and their future, in: JLIS.it 3/1 (2012) 0.03
    Abstract
    The article aims to provide an update on the 2009 Statement of International Cataloguing Principles (ICP) and on the status of work on the Statement by the IFLA Cataloguing Section. The article begins with a summary of the drafting process of the ICP by the IME ICC, International Meeting of Experts on an International Cataloguing Code, focusing in particular on the first meeting (IME ICC1) and on the earlier drafts of the 2009 Statement. It then analyzes both the major innovations and the unsatisfactory aspects of the ICP. Finally, it explains and comments on the recent documents by the IFLA Cataloguing Section relating to the ICP, which express their intention to revise the Statement and to verify the convenience of drawing up an international cataloguing code. The latter intention is considered in detail and criticized by the author in the light of the recent publication of the RDA, Resource Description and Access. The article is complemented by an updated bibliography on the ICP.
  11. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.03
  12. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.03
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  13. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    Date
    22. 7.2006 15:22:28
  14. Dunning, A.: Do we still need search engines? (1999) 0.02
    Source
    Ariadne. 1999, no.22
  15. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    Date
    22. 9.2007 15:41:14
  16. Tudhope, D.; Alani, H.; Jones, C.: Augmenting thesaurus relationships : possibilities for retrieval (2001) 0.02
    Abstract
    This paper discusses issues concerning the augmentation of thesaurus relationships, in light of new application possibilities for retrieval. We first discuss a case study that explored the retrieval potential of an augmented set of thesaurus relationships by specialising standard relationships into richer subtypes, in particular hierarchical geographical containment and the associative relationship. We then locate this work in a broader context by reviewing various attempts to build taxonomies of thesaurus relationships, and conclude by discussing the feasibility of hierarchically augmenting the core set of thesaurus relationships, particularly the associative relationship. We discuss the possibility of enriching the specification and semantics of Related Term (RT) relationships, while maintaining compatibility with traditional thesauri via a limited hierarchical extension of the associative (and hierarchical) relationships. This would be facilitated by distinguishing the type of term from the (sub)type of relationship and explicitly specifying semantic categories for terms following a faceted approach. We first illustrate how hierarchical spatial relationships can be used to provide more flexible retrieval for queries incorporating place names in applications employing online gazetteers and geographical thesauri. We then employ a set of experimental scenarios to investigate key issues affecting use of the associative (RT) thesaurus relationships in semantic distance measures. Previous work has noted the potential of RTs in thesaurus search aids but also the problem of uncontrolled expansion of query term sets. Results presented in this paper suggest the potential for taking account of the hierarchical context of an RT link and specialisations of the RT relationship.
  17. Frederick, D.E.: ChatGPT: a viral data-driven disruption in the information environment (2023) 0.02
    Abstract
    This study aims to introduce librarians to ChatGPT and challenge them to think about how it fits into their work and what learning they will need to do in order to stay relevant in the realm of artificial intelligence.
    Design/methodology/approach: Popular and scientific media sources were monitored over the course of two months to gather current discussions about the uses of and opinions about ChatGPT. This was analyzed in light of historical developments in education and libraries. Additional sources of information on the topic were described and discussed so that the issue is made relevant to librarians and libraries.
    Findings: The potential risks and benefits of ChatGPT are highly relevant for librarians but also currently not fully understood. We are in a very early stage of understanding and using this technology, but it does appear to have the possibility of becoming disruptive to libraries as well as many other aspects of life.
    Originality/value: ChatGPT-3 has only been publicly available since the end of November 2022. We are just now starting to take a deeper dive into what this technology means for libraries. This paper is one of the early ones that provide librarians with some direction in terms of where to focus their interest and attention in learning about it.
  18. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.02
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  19. Strobel, S.: ¬The complete Linux kit : fully configured LINUX system kernel (1997) 0.02
    Date
    16. 7.2002 20:22:55
  20. Birmingham, J.: Internet search engines (1996) 0.02
    Date
    10.11.1996 16:36:22

Types

  • a 38
  • i 1
  • m 1
  • n 1
  • s 1
  • x 1