Search (757 results, page 1 of 38)

  • year_i:[2010 TO 2020}
  1. Maungwa, T.; Fourie, I.: Competitive intelligence failures : an information behaviour lens to key intelligence and information needs (2018) 0.15
    0.1546493 = product of:
      0.3092986 = sum of:
        0.3092986 = sum of:
          0.27481773 = weight(_text_:intelligence in 4636) [ClassicSimilarity], result of:
            0.27481773 = score(doc=4636,freq=24.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              1.0164795 = fieldWeight in 4636, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4636)
          0.03448087 = weight(_text_:22 in 4636) [ClassicSimilarity], result of:
            0.03448087 = score(doc=4636,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.19345059 = fieldWeight in 4636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4636)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Competitive intelligence failures have devastating effects in marketplaces. They are attributed to various factors, but seldom explicitly to information behaviour. This paper addresses causes of competitive intelligence failures through an information behaviour lens, focussing on problems with key intelligence and information needs. The exploratory study was conducted in 2016/2017. Managers (end-users) identify key intelligence needs on which information is needed, and often other staff members seek the information (proxy information seeking). The purpose of this paper is to analyse problems related to key intelligence and information needs, and to make recommendations for addressing them.
    Design/methodology/approach: The study is placed in a post-positivist research paradigm, using qualitative and limited quantitative research approaches. In total, 15 participants (competitive intelligence professionals and educators/trainers originating from South Africa and the USA) contributed rich data through in-depth individual interviews.
    Findings: Problems associated with the articulation of information needs (key intelligence needs is the competitive intelligence term, with a broader scope) include inadequate communication between the person in need of information and the proxy information searcher; lack of awareness and recognition of information needs; difficulty in articulation; and incomplete and partial sharing of details of needs.
    Research limitations/implications: Participant recruitment was difficult, and participants represented mostly South Africa. The findings from this exploratory study can, however, direct further studies with a very understudied group.
    Practical implications: Frameworks guiding the study (a combination of Leckie et al.'s 1996 and Wilson's 1981 models and a competitive intelligence life cycle) revealed valuable findings that can guide research.
    Originality/value: Little has been published on competitive intelligence from an information behaviour perspective.
    Date
    20. 1.2015 18:30:22
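    The indented score breakdowns throughout this result list are Lucene ClassicSimilarity "explain" trees. As a hedged sketch (the tf/idf formulas follow Lucene's documented ClassicSimilarity; the helper function names are my own), the term weights and final score of the first record can be reproduced from the numbers shown:

```python
import math

def tf(freq: float) -> float:
    # ClassicSimilarity term-frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    # Explain-tree structure: score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.050899457  # taken from the explain trees above

# "intelligence" (freq=24, docFreq=592) and "22" (freq=2, docFreq=3622) in record 1
w_intelligence = term_weight(24.0, 592, 44218, QUERY_NORM, 0.0390625)
w_22 = term_weight(2.0, 3622, 44218, QUERY_NORM, 0.0390625)

# Final record score: sum of the term weights, scaled by coord(1/2) = 0.5
score = (w_intelligence + w_22) * 0.5  # close to the 0.15 shown for record 1
```

    Up to Lucene's float32 rounding, the computed values match the explain tree: idf(592, 44218) is about 5.3117, tf(24) about 4.8990, and the combined score about 0.1546.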
  2. Yi, K.: Harnessing collective intelligence in social tagging using Delicious (2012) 0.09
    0.08594487 = product of:
      0.17188974 = sum of:
        0.17188974 = sum of:
          0.13740887 = weight(_text_:intelligence in 515) [ClassicSimilarity], result of:
            0.13740887 = score(doc=515,freq=6.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.50823975 = fieldWeight in 515, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0390625 = fieldNorm(doc=515)
          0.03448087 = weight(_text_:22 in 515) [ClassicSimilarity], result of:
            0.03448087 = score(doc=515,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.19345059 = fieldWeight in 515, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=515)
      0.5 = coord(1/2)
    
    Abstract
    A new collaborative approach to information organization and sharing has recently arisen, known as collaborative tagging or social indexing. A key element of collaborative tagging is the concept of collective intelligence (CI), which is a shared intelligence among all participants. This research investigates the phenomenon of social tagging in the context of CI with the aim of serving as a stepping-stone towards the mining of truly valuable social tags for web resources. This study focuses on assessing and evaluating the degree of CI embedded in social tagging over time in terms of two parameter values: the number of participants and the top frequency ranking window. Five different metrics were adopted and utilized for assessing the similarity between ranking lists: overlapList, overlapRank, Footrule, Fagin's measure, and the Inverse Rank measure. The result of this study demonstrates that a substantial degree of CI is most likely to be achieved when somewhere between the first 200 and 400 people have participated in tagging, and that a target degree of CI can be projected by controlling the two factors along with the selection of a similarity metric. The study also tests some experimental conditions for detecting social tags with a high degree of CI. The results of this study can be applicable to the study of filtering social tags based on CI; filtered social tags may be utilized for the metadata creation of tagged resources and possibly for the retrieval of tagged resources.
    Date
    25.12.2012 15:22:37
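    Among the ranking-list similarity measures named in the abstract above, Spearman's footrule is the simplest to state: the sum of absolute rank differences over the items common to two ranked lists. A minimal sketch of just that one measure follows; the function name and the 1-based rank convention are illustrative assumptions, not details taken from the paper:

```python
def footrule(list_a, list_b):
    # Spearman's footrule over items common to both ranked lists:
    # sum of absolute rank differences, with ranks as 1-based positions.
    rank_a = {item: i + 1 for i, item in enumerate(list_a)}
    rank_b = {item: i + 1 for i, item in enumerate(list_b)}
    common = set(rank_a) & set(rank_b)
    return sum(abs(rank_a[x] - rank_b[x]) for x in common)
```

    Identical lists score 0; a fully reversed three-item list scores 4, and larger values indicate greater disagreement between the two rankings.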
  3. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.08084183 = product of:
      0.16168366 = sum of:
        0.16168366 = product of:
          0.48505098 = sum of:
            0.48505098 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.48505098 = score(doc=973,freq=2.0), product of:
                0.43152615 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050899457 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf.
  4. Mazzucchelli, A.; Sartori, F.: String similarity in CBR platforms : a preliminary study (2014) 0.08
    0.07966973 = product of:
      0.15933946 = sum of:
        0.15933946 = sum of:
          0.11106625 = weight(_text_:intelligence in 1568) [ClassicSimilarity], result of:
            0.11106625 = score(doc=1568,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.41080526 = fieldWeight in 1568, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1568)
          0.048273213 = weight(_text_:22 in 1568) [ClassicSimilarity], result of:
            0.048273213 = score(doc=1568,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.2708308 = fieldWeight in 1568, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1568)
      0.5 = coord(1/2)
    
    Abstract
    Case-Based Reasoning (CBR) is an important research trend in Artificial Intelligence and can be a powerful approach to solving complex problems characterized by heterogeneous knowledge. In this paper we present an ongoing research project in which CBR is exploited to support the identification of enterprises potentially heading towards bankruptcy, through a comparison of their balance indexes with those of similar, already closed firms. In particular, the paper focuses on how the development of similarity measures for strings can be profitably supported by metadata models of case structures and semantic methods such as Query Expansion.
    Pages
    S.22-29
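    The abstract above centres on string-similarity measures for case features. As a generic illustration only (the paper's own metadata- and semantics-driven measures are not reproduced here), the classic Levenshtein edit distance is a common baseline for comparing string-valued attributes:

```python
def levenshtein(s: str, t: str) -> int:
    # Classic two-row dynamic-programming edit distance:
    # minimum number of insertions, deletions, and substitutions.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost  # substitution
                            ))
        prev = curr
    return prev[-1]
```

    For example, "kitten" and "sitting" are three edits apart; in a CBR setting such a distance would typically be normalized before being combined with other attribute similarities.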
  5. Walsh, T.: Machines that think : the future of artificial intelligence (2018) 0.08
    0.075261936 = product of:
      0.15052387 = sum of:
        0.15052387 = product of:
          0.30104774 = sum of:
            0.30104774 = weight(_text_:intelligence in 4479) [ClassicSimilarity], result of:
              0.30104774 = score(doc=4479,freq=20.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                1.1134975 = fieldWeight in 4479, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A scientist who has spent a career developing Artificial Intelligence takes a realistic look at the technological challenges and assesses the likely effect of AI on the future. How will Artificial Intelligence (AI) impact our lives? Toby Walsh, one of the leading AI researchers in the world, takes a critical look at the many ways in which "thinking machines" will change our world. Based on a deep understanding of the technology, Walsh describes where Artificial Intelligence is today, and where it will take us. Will automation take away most of our jobs? Is a "technological singularity" near? What is the chance that robots will take over? How do we best prepare for this future? The author concludes that, if we plan well, AI could be our greatest legacy, the last invention human beings will ever need to make.
    LCSH
    Artificial intelligence / Popular works
    Artificial intelligence / Forecasting / Popular works
    Computational intelligence / Popular works
    Subject
    Artificial intelligence / Popular works
    Artificial intelligence / Forecasting / Popular works
    Computational intelligence / Popular works
  6. Erkal, E.: Allegations linking Sci-Hub with Russian intelligence (2019) 0.07
    0.068704434 = product of:
      0.13740887 = sum of:
        0.13740887 = product of:
          0.27481773 = sum of:
            0.27481773 = weight(_text_:intelligence in 4625) [ClassicSimilarity], result of:
              0.27481773 = score(doc=4625,freq=6.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                1.0164795 = fieldWeight in 4625, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4625)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Washington Post reports that the US Justice Department has launched a criminal and intelligence investigation into Alexandra Elbakyan, the founder of Sci-Hub.
    Source
    https://www.elsevier.com/connect/allegations-linking-sci-hub-with-russian-intelligence
  7. Walsh, T.: Android dreams : the past, present and future of artificial intelligence (2017) 0.07
    0.068013914 = product of:
      0.13602783 = sum of:
        0.13602783 = product of:
          0.27205566 = sum of:
            0.27205566 = weight(_text_:intelligence in 4477) [ClassicSimilarity], result of:
              0.27205566 = score(doc=4477,freq=12.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                1.0062634 = fieldWeight in 4477, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4477)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The development of thinking machines is an adventure as bold and ambitious as any that humans have attempted. And the truth is that Artificial Intelligence is already an indispensable part of our daily lives. Without it, Google wouldn't have answers and your smartphone would just be a phone. But how will AI change society by 2050? Will it destroy jobs? Or even pose an existential threat? Android Dreams is a lively exploration of how AI will transform our societies, economies and selves. From robot criminals to cyber healthcare, and a sky full of empty planes, Toby Walsh's predictions about AI are guaranteed to surprise you.
    LCSH
    Artificial intelligence
    Artificial intelligence / Forecasting
    Subject
    Artificial intelligence
    Artificial intelligence / Forecasting
  8. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.067368194 = product of:
      0.13473639 = sum of:
        0.13473639 = product of:
          0.40420917 = sum of:
            0.40420917 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40420917 = score(doc=1826,freq=2.0), product of:
                0.43152615 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050899457 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  9. Raghavan, K.S.; Rao, I.K.R.: Facets of facet analysis : a domain analysis (2014) 0.06
    0.056906953 = product of:
      0.11381391 = sum of:
        0.11381391 = sum of:
          0.07933304 = weight(_text_:intelligence in 1411) [ClassicSimilarity], result of:
            0.07933304 = score(doc=1411,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.29343233 = fieldWeight in 1411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1411)
          0.03448087 = weight(_text_:22 in 1411) [ClassicSimilarity], result of:
            0.03448087 = score(doc=1411,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.19345059 = fieldWeight in 1411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1411)
      0.5 = coord(1/2)
    
    Abstract
    Facet analysis is considered the most distinct approach to knowledge organization within LIS. This paper views 'Facet Analysis' as a domain and visualizes its parameters using domain analytic techniques. Over 150 papers on 'Facet Analysis' published since 2008 were identified via a search in Web of Science. The subject terms and other associated metadata of each of the records were taken to represent the different facets of the domain. The basic research question is: what are the facets of facet analysis? The study seeks to graphically represent the contours of the domain. The analysis of the data suggests that while the traditional aspects continue to dominate research in the area, aspects such as Artificial Intelligence and Information Architecture are emerging as new areas of application. The distribution of authors suggests that interest in the area is by no means confined to India, Europe and North America. There is evidence to suggest that facet analysis as a tool is growing in importance.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  10. Vom Buch zur Datenbank : Paul Otlets Utopie der Wissensvisualisierung (2012) 0.06
    0.056906953 = product of:
      0.11381391 = sum of:
        0.11381391 = sum of:
          0.07933304 = weight(_text_:intelligence in 3074) [ClassicSimilarity], result of:
            0.07933304 = score(doc=3074,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.29343233 = fieldWeight in 3074, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3074)
          0.03448087 = weight(_text_:22 in 3074) [ClassicSimilarity], result of:
            0.03448087 = score(doc=3074,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.19345059 = fieldWeight in 3074, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3074)
      0.5 = coord(1/2)
    
    Abstract
    Towards the end of the 19th century, documentation fell into a crisis: how can cultural knowledge be organized more sustainably? Paul Otlet (1868-1944), a Belgian industrialist's heir and trained lawyer, together with Henri La Fontaine developed from 1895 onwards an ordering and classification system intended to document the "world knowledge" published in millions of items. Otlet's ambition was the creation of an "instrument d'ubiquité" that was to lead to "hyper-intelligence". Decades before the web and wikis, these ideas point towards a global networking of knowledge. The present title commemorates the pioneer Paul Otlet, with an extensive introduction by Frank Hartmann (Bauhaus-Universität Weimar) and contributions by W. Boyd Rayward (University of Illinois), Charles van den Heuvel (Royal Netherlands Academy of Arts and Sciences) and Wouter Van Acker (Ghent University).
    Date
    22. 8.2016 16:06:54
  11. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.06
    0.056906953 = product of:
      0.11381391 = sum of:
        0.11381391 = sum of:
          0.07933304 = weight(_text_:intelligence in 3403) [ClassicSimilarity], result of:
            0.07933304 = score(doc=3403,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.29343233 = fieldWeight in 3403, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3403)
          0.03448087 = weight(_text_:22 in 3403) [ClassicSimilarity], result of:
            0.03448087 = score(doc=3403,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.19345059 = fieldWeight in 3403, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3403)
      0.5 = coord(1/2)
    
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
  12. Cronin, B.: ¬The intelligence disconnect (2011) 0.06
    0.055533126 = product of:
      0.11106625 = sum of:
        0.11106625 = product of:
          0.2221325 = sum of:
            0.2221325 = weight(_text_:intelligence in 4930) [ClassicSimilarity], result of:
              0.2221325 = score(doc=4930,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.8216105 = fieldWeight in 4930, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4930)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Ekbia, H.: Fifty years of research in artificial intelligence (2010) 0.06
    0.055533126 = product of:
      0.11106625 = sum of:
        0.11106625 = product of:
          0.2221325 = sum of:
            0.2221325 = weight(_text_:intelligence in 1598) [ClassicSimilarity], result of:
              0.2221325 = score(doc=1598,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.8216105 = fieldWeight in 1598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1598)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.05
    0.047157735 = product of:
      0.09431547 = sum of:
        0.09431547 = product of:
          0.2829464 = sum of:
            0.2829464 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.2829464 = score(doc=5955,freq=2.0), product of:
                0.43152615 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050899457 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: Philosophisch-ethische Rezensionen vom 09.11.2019 (Jürgen Czogalla), Unter: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [Unter: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805 vom 14.08.2020 (H.-C. Hobohm) [Unter: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  15. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.05
    0.045525562 = product of:
      0.091051124 = sum of:
        0.091051124 = sum of:
          0.06346643 = weight(_text_:intelligence in 168) [ClassicSimilarity], result of:
            0.06346643 = score(doc=168,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.23474586 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.027584694 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.027584694 = score(doc=168,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
  16. Hackett, P.M.W.: Facet theory and the mapping sentence : evolving philosophy, use and application (2014) 0.05
    0.045525562 = product of:
      0.091051124 = sum of:
        0.091051124 = sum of:
          0.06346643 = weight(_text_:intelligence in 2258) [ClassicSimilarity], result of:
            0.06346643 = score(doc=2258,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.23474586 = fieldWeight in 2258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.03125 = fieldNorm(doc=2258)
          0.027584694 = weight(_text_:22 in 2258) [ClassicSimilarity], result of:
            0.027584694 = score(doc=2258,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.15476047 = fieldWeight in 2258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2258)
      0.5 = coord(1/2)
    
    Content
    1 Introduction; 2 Ontological Categorisation and Mereology; Human assessment; Categories and the properties of experiential events; Mathematical, computing, artificial intelligence and library classification approaches; Sociological approaches; Psychological approaches; Personal Construct Theory; Philosophical approaches to categories; Mereology: facet theory and relationships between categories; Neuroscience and categories; Conclusions; 3 Facet Theory and Thinking about Human Behaviour; Generating knowledge in facet theory: a brief overview; What is facet theory?; Facets and facet elements; The mapping sentence; Designing a mapping sentence; Narrative; Roles that facets play; Single-facet structures: axial role and modular role; Polar role; Circumplex; Two-facet structures; Radex; Three-facet structures; Cylindrex; Analysing facet theory research; Conclusions; 4 Evolving Facet Theory Applications; The evolution of facet theory; Mapping a domain: the mapping sentence as a stand-alone approach and integrative tool; Making and understanding fine art; Defining the grid: a mapping sentence for grid images; Facet sort-technique; Facet mapping therapy: using the mapping sentence and the facet structures to explore client issues; Research program coordination; Conclusions and Future Directions; Glossary of Terms; Bibliography; Index
    Date
    17.10.2015 17:22:01
  17. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications (2012) 0.04
    0.041222658 = product of:
      0.082445316 = sum of:
        0.082445316 = product of:
          0.16489063 = sum of:
            0.16489063 = weight(_text_:intelligence in 110) [ClassicSimilarity], result of:
              0.16489063 = score(doc=110,freq=6.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.6098877 = fieldWeight in 110, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=110)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
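The explain tree above is Lucene's ClassicSimilarity (tf-idf) breakdown for the term "intelligence" in doc 110. As a rough sketch, the displayed numbers can be reproduced as follows; the function name and the hard-coded coord factors are illustrative assumptions, not part of the page:

```python
import math

# Rough reconstruction of the Lucene ClassicSimilarity arithmetic shown in the
# explain tree above, using the values reported for doc 110 (term "intelligence").
def classic_similarity_score(freq, doc_freq, num_docs, field_norm, query_norm):
    tf = math.sqrt(freq)                            # 2.4494898 for freq=6
    idf = 1 + math.log(num_docs / (doc_freq + 1))   # 5.3116927 for docFreq=592
    query_weight = idf * query_norm                 # 0.2703623
    field_weight = tf * idf * field_norm            # 0.6098877
    return query_weight * field_weight              # queryWeight * fieldWeight

score = classic_similarity_score(6, 592, 44218, 0.046875, 0.050899457)
final = score * 0.5 * 0.5  # the two coord(1/2) factors in the tree above
```

With these inputs the sketch reproduces the weight (0.16489063) and the item's final score (0.041222658) shown above, up to floating-point rounding.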
    
    Abstract
    Search engines are the major means of information retrieval over the Internet, and people's dependence on them increases over time as search engines introduce new and more sophisticated technologies. Developments in Artificial Intelligence (AI) will transform current search engines into Artificial Intelligence Enabled Search Engines (AIESE). Search engines already play a critical role in classifying, sorting and delivering information over the Internet; as the Internet's mainstream role becomes more apparent and AI technology increases the sophistication of search engines' tools, that role will become much more critical. Since the paper examines the future of search engines, the technological singularity concept is analyzed in detail, second- and third-order indirect side effects are analyzed, and a four-stage evolution model is suggested.
  18. Jin, T.; Ju, B.: ¬The corporate information agency : do competitive intelligence practitioners utilize it? (2014) 0.04
    
    Abstract
    This article reports on a research study investigating the use and perceptions of corporate information agencies by competitive intelligence (CI) practitioners. The corporate information agency is a corporate library or an information/knowledge center. CI practitioner refers to those business professionals, in various organizations, who are particularly committed to strategic and competitive intelligence analysis and production activities. In this study, we administered a survey to a sample of 214 CI practitioners to ascertain the extent to which they utilize, are aware of, and perceive the usefulness of corporate information agencies provided by their organizations. With 63 valid responses, we observed high degrees of use, awareness, and perceived usefulness. Multiple regression results also show significant correlations between perceived usefulness and use of the corporate information agency among the responding CI practitioners. Supported by empirical evidence, these findings provide a benchmark of knowledge regarding the value of corporate information agencies in CI practices.
  19. Liu, X.; Guo, C.; Zhang, L.: Scholar metadata and knowledge generation with human and artificial intelligence (2014) 0.04
    
    Abstract
    Scholar metadata have traditionally centered on descriptive representations, which have been used as a foundation for scholarly publication repositories and academic information retrieval systems. In this article, we propose innovative and economical methods of generating knowledge-based structural metadata (structural keywords) using a combination of natural language processing-based machine-learning techniques and human intelligence. By allowing low-barrier participation through a social media system, scholars (both as authors and users) can participate in the metadata editing and enhancing process and benefit from more accurate and effective information retrieval. Our experimental web system ScholarWiki uses machine-learning techniques that automatically produce increasingly refined metadata by learning from the structural metadata contributed by scholars. The accumulated structural metadata add intelligence and recursively enhance and update the quality of the metadata, the wiki pages, and the machine-learning model.
  20. Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.: Strategic intelligence on emerging technologies : scientometric overlay mapping (2017) 0.04
    
    Abstract
    This paper examines the use of scientometric overlay mapping as a tool of "strategic intelligence" to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical, social, and cognitive spaces. To do so, we longitudinally analyze (with publication and patent data) three case studies of emerging technologies in the medical domain. These are RNA interference (RNAi), human papillomavirus (HPV) testing technologies for cervical cancer, and thiopurine methyltransferase (TPMT) genetic testing. Given the flexibility (i.e., adaptability to different sources of data) and granularity (i.e., applicability across multiple levels of data aggregation) of overlay mapping techniques, we argue that these techniques can favor the integration and comparison of results from different contexts and cases, thus potentially functioning as a platform for "distributed" strategic intelligence for analysts and decision makers.

Languages

  • e 561
  • d 187
  • a 1
  • hu 1
  • pt 1

Types

  • a 632
  • m 75
  • el 73
  • s 31
  • x 13
  • r 8
  • b 5
  • i 1
  • z 1
