Search (94 results, page 1 of 5)

  • Filter: type_ss:"el"
  • Filter: year_i:[2010 TO 2020}
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.18
    0.17911148 = sum of:
      0.1371676 = product of:
        0.4115028 = sum of:
          0.4115028 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
            0.4115028 = score(doc=1826,freq=2.0), product of:
              0.43931273 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051817898 = queryNorm
              0.93669677 = fieldWeight in 1826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.078125 = fieldNorm(doc=1826)
        0.33333334 = coord(1/3)
      0.041943885 = product of:
        0.08388777 = sum of:
          0.08388777 = weight(_text_:indexing in 1826) [ClassicSimilarity], result of:
            0.08388777 = score(doc=1826,freq=2.0), product of:
              0.19835205 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.051817898 = queryNorm
              0.42292362 = fieldWeight in 1826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=1826)
        0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
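    The indented breakdown above (and the similar trees under the results that follow) is Lucene "explain" output; the [ClassicSimilarity] labels point to the classic TF-IDF ranking model. Assuming standard Lucene ClassicSimilarity semantics, each matching clause contributes the product of a query-side factor (queryWeight) and a document-side factor (fieldWeight), and the coord factors down-weight queries whose clauses only partly match:
      score(q,d) = \mathrm{coord}(q,d)\cdot\sum_{t\in q}\underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}}\cdot\underbrace{\mathrm{tf}(t,d)\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}
      \mathrm{tf}(t,d)=\sqrt{\mathrm{freq}(t,d)},\qquad \mathrm{idf}(t)=1+\ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}
    For the "3a" clause above, for example, tf = sqrt(2) = 1.4142135, idf = 8.478011 and fieldNorm = 0.078125 give fieldWeight = 0.93669677; multiplied by queryWeight = 0.43931273 this yields 0.4115028, which coord(1/3) then scales to 0.1371676.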
  2. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.09
    0.08620159 = product of:
      0.17240319 = sum of:
        0.17240319 = sum of:
          0.1162383 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.1162383 = score(doc=1149,freq=6.0), product of:
              0.19835205 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.051817898 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.056164876 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.056164876 = score(doc=1149,freq=2.0), product of:
              0.18145745 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051817898 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
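    As a sanity check, the tree above can be recomputed directly from the values it reports. The snippet below is a hypothetical verification sketch in Python (not part of the retrieval software itself); queryNorm, fieldNorm, the document frequencies and the coord factor are taken verbatim from the explain output for doc 1149.

      import math

      QUERY_NORM = 0.051817898   # queryNorm from the explain output
      FIELD_NORM = 0.0625        # fieldNorm(doc=1149)
      MAX_DOCS = 44218

      def term_score(freq, doc_freq):
          """ClassicSimilarity clause score: queryWeight * fieldWeight."""
          tf = math.sqrt(freq)                               # sqrt(6) = 2.4494898
          idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1.0))  # 3.8278677 for docFreq=2614
          query_weight = idf * QUERY_NORM
          field_weight = tf * idf * FIELD_NORM
          return query_weight * field_weight

      indexing = term_score(freq=6, doc_freq=2614)   # ~0.1162383
      date_22  = term_score(freq=2, doc_freq=3622)   # ~0.0561649
      print(round((indexing + date_22) * 0.5, 8))    # coord(1/2) -> ~0.08620159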
  3. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.08
    0.07704693 = product of:
      0.15409386 = sum of:
        0.15409386 = sum of:
          0.08388777 = weight(_text_:indexing in 5865) [ClassicSimilarity], result of:
            0.08388777 = score(doc=5865,freq=2.0), product of:
              0.19835205 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.051817898 = queryNorm
              0.42292362 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.0702061 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.0702061 = score(doc=5865,freq=2.0), product of:
              0.18145745 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051817898 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.5 = coord(1/2)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  4. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.0342919 = product of:
      0.0685838 = sum of:
        0.0685838 = product of:
          0.2057514 = sum of:
            0.2057514 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.2057514 = score(doc=4388,freq=2.0), product of:
                0.43931273 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051817898 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  5. Gödert, W.: ¬An ontology-based model for indexing and retrieval (2013) 0.03
    0.03355511 = product of:
      0.06711022 = sum of:
        0.06711022 = product of:
          0.13422044 = sum of:
            0.13422044 = weight(_text_:indexing in 1510) [ClassicSimilarity], result of:
              0.13422044 = score(doc=1510,freq=8.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.6766778 = fieldWeight in 1510, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1510)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Starting from an unsolved problem of information retrieval this paper presents an ontology-based model for indexing and retrieval. The model combines the methods and experiences of cognitive-to-interpret indexing languages with the strengths and possibilities of formal knowledge representation. The core component of the model uses inferences along the paths of typed relations between the entities of a knowledge representation for enabling the determination of hit quantities in the context of retrieval processes. The entities are arranged in aspect-oriented facets to ensure a consistent hierarchical structure. The possible consequences for indexing and retrieval are discussed.
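    The abstract does not spell out the inference mechanism, so the following is only a loosely related toy sketch (hypothetical names and data, not Gödert's model): expand a query entity along selected typed relations of a small knowledge representation and collect the documents indexed with any entity reached, which yields the hit quantity for that query.

      from collections import deque

      RELATIONS = {   # (entity, relation type) -> related entities
          ("retrieval", "narrower"): ["semantic retrieval", "image retrieval"],
          ("semantic retrieval", "uses"): ["ontology"],
      }
      POSTINGS = {    # entity -> documents indexed with it
          "retrieval": {"d1"},
          "semantic retrieval": {"d2", "d3"},
          "ontology": {"d4"},
      }

      def hits(start, follow=("narrower", "uses")):
          """Breadth-first expansion along the typed relations in `follow`."""
          seen, queue, docs = {start}, deque([start]), set()
          while queue:
              entity = queue.popleft()
              docs |= POSTINGS.get(entity, set())
              for rel in follow:
                  for nxt in RELATIONS.get((entity, rel), []):
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append(nxt)
          return docs

      print(sorted(hits("retrieval")))   # ['d1', 'd2', 'd3', 'd4']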
  6. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others? (2018) 0.03
    0.032826282 = product of:
      0.065652564 = sum of:
        0.065652564 = product of:
          0.13130513 = sum of:
            0.13130513 = weight(_text_:indexing in 4548) [ClassicSimilarity], result of:
              0.13130513 = score(doc=4548,freq=10.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.6619802 = fieldWeight in 4548, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I believe that Google Scholar is the most popular academic indexing service for researchers and citations. However, some other indexing institutions may be more professional than Google Scholar, though not as popular. Other indexing websites such as Scopus and Clarivate provide more statistical figures for scholars, institutions or even journals. As for publication citations, Google Scholar always shows higher citation counts for a paper than other indexing websites, since it covers most publication platforms and can therefore easily count citations, while other databases only count citations coming from journals that are already indexed in their own database.
  7. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.024821604 = product of:
      0.049643207 = sum of:
        0.049643207 = product of:
          0.099286415 = sum of:
            0.099286415 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.099286415 = score(doc=3582,freq=4.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  8. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.024572134 = product of:
      0.049144268 = sum of:
        0.049144268 = product of:
          0.098288536 = sum of:
            0.098288536 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.098288536 = score(doc=8365,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2015 16:08:38
  9. Gödert, W.: Detecting multiword phrases in mathematical text corpora (2012) 0.02
    0.023727044 = product of:
      0.04745409 = sum of:
        0.04745409 = product of:
          0.09490818 = sum of:
            0.09490818 = weight(_text_:indexing in 466) [ClassicSimilarity], result of:
              0.09490818 = score(doc=466,freq=4.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.47848347 = fieldWeight in 466, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=466)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present an approach for detecting multiword phrases in mathematical text corpora. The method used is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo, which identifies words by means of previously defined dictionaries for specific word classes such as adjectives, personal names or nouns. The detection of multiword groups is done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
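    The Lingo pipeline itself is not reproduced in the abstract; purely as an illustration of the dictionary-based idea (hypothetical dictionaries and function names, not the authors' implementation), a detector of this kind might tag tokens against word-class dictionaries and emit adjacent runs ending in a noun as candidate multiword phrases:

      # Toy sketch, not the Lingo tool: dictionary-based detection of
      # candidate multiword phrases in a tokenized sentence.
      ADJECTIVES = {"linear", "partial", "differential"}
      NOUNS = {"operator", "equation", "space"}

      def candidate_phrases(tokens):
          """Emit maximal runs of dictionary-known words that end in a noun
          and span at least two tokens."""
          phrases, run = [], []
          for tok in tokens + [""]:          # empty sentinel flushes the last run
              w = tok.lower()
              if w in ADJECTIVES or w in NOUNS:
                  run.append(w)
              else:
                  if len(run) >= 2 and run[-1] in NOUNS:
                      phrases.append(" ".join(run))
                  run = []
          return phrases

      print(candidate_phrases("We study the linear partial differential equation".split()))
      # ['linear partial differential equation']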
  10. Cumyn, M.; Reiner, G.; Mas, S.; Lesieur, D.: Legal knowledge representation using a faceted scheme (2019) 0.02
    0.023727044 = product of:
      0.04745409 = sum of:
        0.04745409 = product of:
          0.09490818 = sum of:
            0.09490818 = weight(_text_:indexing in 5788) [ClassicSimilarity], result of:
              0.09490818 = score(doc=5788,freq=4.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.47848347 = fieldWeight in 5788, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5788)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A database supports legal research by matching a user's request for information with documents of the database that contain it. Indexes are among the oldest tools to achieve that aim. Many legal publishers continue to provide manual subject indexing of legal documents, in addition to automatic full-text indexing, which improves the performance of a full-text search.
  11. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.02
    0.022194618 = product of:
      0.044389237 = sum of:
        0.044389237 = product of:
          0.08877847 = sum of:
            0.08877847 = weight(_text_:indexing in 3109) [ClassicSimilarity], result of:
              0.08877847 = score(doc=3109,freq=14.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.4475803 = fieldWeight in 3109, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs brings multiple advantages derived from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, which positively affects social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to legitimizing multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would provide desirable competencies for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
  12. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.02
    0.02179468 = product of:
      0.04358936 = sum of:
        0.04358936 = product of:
          0.08717872 = sum of:
            0.08717872 = weight(_text_:indexing in 2810) [ClassicSimilarity], result of:
              0.08717872 = score(doc=2810,freq=6.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.4395151 = fieldWeight in 2810, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    The Library is consulting with stakeholders concerning the potential impact of these proposals. No firm decisions have yet been taken regarding either of these standards.
    FAST
    1. The British Library proposes to adopt FAST selectively to extend the scope of subject indexing of current and legacy content.
    2. The British Library proposes to implement FAST as a replacement for LCSH in all current cataloguing, subject to mitigation of the risks identified above, in particular the question of sustainability.
    DDC
    3. The British Library proposes to implement Abridged DDC selectively to extend the scope of subject indexing of current and legacy content.
  13. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.02
    0.02179468 = product of:
      0.04358936 = sum of:
        0.04358936 = product of:
          0.08717872 = sum of:
            0.08717872 = weight(_text_:indexing in 3829) [ClassicSimilarity], result of:
              0.08717872 = score(doc=3829,freq=6.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.4395151 = fieldWeight in 3829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  14. Röthler, D.: "Lehrautomaten" oder die MOOC-Vision der späten 60er Jahre (2014) 0.02
    0.021061828 = product of:
      0.042123657 = sum of:
        0.042123657 = product of:
          0.08424731 = sum of:
            0.08424731 = weight(_text_:22 in 1552) [ClassicSimilarity], result of:
              0.08424731 = score(doc=1552,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.46428138 = fieldWeight in 1552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1552)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 11:04:35
  15. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.02
    0.020761164 = product of:
      0.041522328 = sum of:
        0.041522328 = product of:
          0.083044656 = sum of:
            0.083044656 = weight(_text_:indexing in 1717) [ClassicSimilarity], result of:
              0.083044656 = score(doc=1717,freq=4.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.41867304 = fieldWeight in 1717, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei/SWD) provides a broad controlled vocabulary for indexing documents of all subjects. While it has traditionally been used for intellectual subject cataloguing, primarily of books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for the automated assignment of subject headings to online publications. This project, its results and its problems are sketched in the paper.
  16. Schultz, S.: ¬Die eine App für alles : Mobile Zukunft in China (2016) 0.02
    0.019857284 = product of:
      0.039714567 = sum of:
        0.039714567 = product of:
          0.079429135 = sum of:
            0.079429135 = weight(_text_:22 in 4313) [ClassicSimilarity], result of:
              0.079429135 = score(doc=4313,freq=4.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.4377287 = fieldWeight in 4313, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 14:22:02
  17. Hider, P.: ¬The search value added by professional indexing to a bibliographic database (2017) 0.02
    0.018162236 = product of:
      0.03632447 = sum of:
        0.03632447 = product of:
          0.07264894 = sum of:
            0.07264894 = weight(_text_:indexing in 3868) [ClassicSimilarity], result of:
              0.07264894 = score(doc=3868,freq=6.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.3662626 = fieldWeight in 3868, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This paper reports on an analysis of the loss levels that would result if a bibliographic database, namely the Australian Education Index (AEI), were missing the subject descriptors and identifiers assigned by its professional indexers, employing the methodology developed by Gross and Taylor (2005), and later by Gross et al. (2015). The results indicate that AEI users would lose a similar proportion of hits per query to that experienced by library catalog users: on average, 27% of the resources found by a sample of keyword queries on the AEI database would not have been found without the subject indexing, based on the Australian Thesaurus of Education Descriptors (ATED). The paper also discusses the methodological limitations of these studies, pointing out that real-life users might still find some of the resources missed by a particular query through follow-up searches, while additional resources might also be found through iterative searching on the subject vocabulary. The paper goes on to describe a new research design, based on a before-and-after experiment, which addresses some of these limitations. It is argued that this alternative design will provide a more realistic picture of the value that professionally assigned subject indexing and controlled subject vocabularies can add to literature searching of a more scholarly and thorough kind.
  18. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.02
    0.017795283 = product of:
      0.035590567 = sum of:
        0.035590567 = product of:
          0.07118113 = sum of:
            0.07118113 = weight(_text_:indexing in 1057) [ClassicSimilarity], result of:
              0.07118113 = score(doc=1057,freq=4.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.3588626 = fieldWeight in 1057, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1057)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this document we describe the final release of the toolset for entity and semantic associations, integrating two versions (language-dependent and language-independent) of Unsupervised Document Similarity implemented by MU (using the gensim tool) and Citation Indexing, Resolution and Matching (UJF/CMD). We give a brief description of the tools and the rationale behind the decisions made, and provide an elementary evaluation. The tools are integrated in the main project result, the EuDML website, and deliver the functionality needed for exploratory searching and browsing of the collected documents. EuDML users and content providers thus benefit from millions of algorithmically generated similarity and citation links, developed using state-of-the-art machine learning and matching methods.
    Object
    Latent Semantic Indexing
  19. Líska, M.; Sojka, P.: MIaS 1.5 (2014) 0.02
    0.017795283 = product of:
      0.035590567 = sum of:
        0.035590567 = product of:
          0.07118113 = sum of:
            0.07118113 = weight(_text_:indexing in 1652) [ClassicSimilarity], result of:
              0.07118113 = score(doc=1652,freq=4.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.3588626 = fieldWeight in 1652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1652)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A math-aware, full-text-indexing-based search engine that enables users to search for mathematical formulae inside documents. The search engine is unique because it is able to index and search structural information such as the representation of mathematical formulae. There is no other software or IR system able to store three billion formulae in its index and search them with a response time below a second. MIaS processes documents containing mathematical notation in MathML format. The system is built as an extension to any full-text indexing engine and has been verified on the state-of-the-art Lucene core. It is scalable: it was verified to index almost the whole of arxiv.org (440,000 papers), containing more than 160,000,000 formulae. The software is being used in EuDML (eudml.org) and other digital libraries. For more details see papers in peer-reviewed conferences: [1] Sojka, Petr; Líska, Martin. In Matthew R. B. Hardy, Frank Wm. Tompa. Proceedings of the 2011 ACM Symposium on Document Engineering. Mountain View, CA, USA : ACM, 2011. pp.57--60. [2] Sojka, Petr; Líska, Martin. In J.H. Davenport, W.M. Farmer, J. Urban, F. Rabe. Intelligent Computer Mathematics, LNCS 6824. Springer, 2011, pp.228--243.
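    MIaS's own canonicalization and similarity search are far more elaborate than anything shown here; as a rough, hypothetical illustration of the basic idea only (not the MIaS implementation), a MathML fragment can be linearized into plain tokens so that an ordinary full-text index can match structural formula queries:

      # Toy sketch: depth-first linearization of MathML into index tokens.
      import xml.etree.ElementTree as ET

      def mathml_tokens(mathml: str):
          """Walk the element tree and emit element-name and text tokens."""
          tokens = []
          for elem in ET.fromstring(mathml).iter():
              tokens.append(elem.tag.split("}")[-1])   # drop any namespace prefix
              if elem.text and elem.text.strip():
                  tokens.append(elem.text.strip())
          return tokens

      print(mathml_tokens("<math><msup><mi>x</mi><mn>2</mn></msup></math>"))
      # ['math', 'msup', 'mi', 'x', 'mn', '2']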
  20. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.0702061 = score(doc=5576,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22

Languages

  • d 46
  • e 46
  • a 1

Types

  • a 55
  • r 3
  • x 2
  • m 1
  • s 1