Search (97 results, page 1 of 5)

  • year_i:[2010 TO 2020}
  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.11
    0.11449196 = product of:
      0.17173794 = sum of:
        0.13152078 = product of:
          0.39456233 = sum of:
            0.39456233 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39456233 = score(doc=1826,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.04021717 = product of:
          0.08043434 = sum of:
            0.08043434 = weight(_text_:indexing in 1826) [ClassicSimilarity], result of:
              0.08043434 = score(doc=1826,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.42292362 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
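    The indented tree above is the search engine's Lucene "explain" output for the top result: each term weight is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, combined by the coord factors shown. As a minimal sketch, the following Python snippet recomputes the 0.11449196 score of result 1 from exactly the factors listed in the tree; the explain output for the other results follows the same pattern.

    ```python
    # Recompute the score of result 1 (doc 1826) from the factors in the
    # explain tree above (Lucene ClassicSimilarity):
    #   term weight = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
    import math

    query_norm = 0.049684696

    def term_weight(freq: float, idf: float, field_norm: float) -> float:
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        return query_weight * field_weight

    w_3a = term_weight(2.0, 8.478011, 0.078125)         # ~0.39456233
    w_indexing = term_weight(2.0, 3.8278677, 0.078125)  # ~0.08043434

    # coord factors exactly as reported in the tree: 1/3, 1/2, then 2/3 overall
    score = (w_3a * (1 / 3) + w_indexing * (1 / 2)) * (2 / 3)
    print(round(score, 8))  # ~0.11449196
    ```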
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  2. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.08
    0.0783509 = product of:
      0.11752635 = sum of:
        0.10143948 = weight(_text_:systematic in 2547) [ClassicSimilarity], result of:
          0.10143948 = score(doc=2547,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.35721707 = fieldWeight in 2547, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
        0.016086869 = product of:
          0.032173738 = sum of:
            0.032173738 = weight(_text_:indexing in 2547) [ClassicSimilarity], result of:
              0.032173738 = score(doc=2547,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.16916946 = fieldWeight in 2547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The discovery service used at the library of the Technical University of Applied Sciences Wildau (TUAS Wildau), a search engine and service called WILBERT, comprises more than 8 million items. If all licensed publications were recorded in this tool down to the article level, including their bibliographic records and full texts, the holdings would amount to an estimated hundred million documents. Features such as ranking, autocompletion, multi-faceted classification and refinement options reduce the number of hits, but they do not provide intuitive support for a systematic overview of the topics covered by the library's documents. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago we started to develop micro-thesauri for MINT (STEM) subjects in order to build an advanced indexing of the library stock. We use iQvoc as the vocabulary management system for creating the thesauri; it provides an easy-to-use browser interface and builds a SKOS thesaurus in the background. The aim is to integrate these thesauri into WILBERT in order to offer a better subject-related search. The approach especially supports first-year students, who can browse the hierarchy of a subject such as logistics or computer science and thereby discover how its terms are related; it also gives them an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in developing the interface and functionality of iQvoc. As a first step, 3000 terms have been included in our discovery tool WILBERT.
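    The abstract describes building micro-thesauri in iQvoc, which stores them as SKOS. As a minimal, hypothetical sketch (the URIs, labels and hierarchy below are invented for illustration and are not taken from the WILBERT data), this is the kind of SKOS structure such a vocabulary tool manages, here built with Python's rdflib:

    ```python
    # Minimal sketch (hypothetical URIs and labels) of a SKOS micro-thesaurus
    # entry: preferred labels, an alternative label for an established
    # abbreviation, and a broader/narrower hierarchy.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")  # placeholder namespace

    g = Graph()
    g.bind("skos", SKOS)

    g.add((EX.logistics, RDF.type, SKOS.Concept))
    g.add((EX.logistics, SKOS.prefLabel, Literal("Logistik", lang="de")))
    g.add((EX.logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))

    g.add((EX.scm, RDF.type, SKOS.Concept))
    g.add((EX.scm, SKOS.prefLabel, Literal("Supply Chain Management", lang="en")))
    g.add((EX.scm, SKOS.altLabel, Literal("SCM", lang="en")))
    g.add((EX.scm, SKOS.broader, EX.logistics))
    g.add((EX.logistics, SKOS.narrower, EX.scm))

    print(g.serialize(format="turtle"))
    ```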
  3. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.06
    0.055101942 = product of:
      0.16530582 = sum of:
        0.16530582 = sum of:
          0.11145309 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.11145309 = score(doc=1149,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.053852726 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.053852726 = score(doc=1149,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
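    For readers unfamiliar with the algorithm being compared with Garfield's citation indexing, the following is a minimal sketch of the basic PageRank recurrence computed by power iteration on a tiny, made-up citation graph; it illustrates the underlying idea, not Google's production algorithm.

    ```python
    # Power-iteration sketch of the basic PageRank recurrence on a made-up
    # citation graph; the damping factor 0.85 is the commonly cited default.
    def pagerank(links: dict, d: float = 0.85, iters: int = 50) -> dict:
        nodes = list(links)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {n: (1.0 - d) / len(nodes) for n in nodes}
            for source, targets in links.items():
                if targets:
                    share = rank[source] / len(targets)
                    for t in targets:
                        new[t] += d * share
                else:  # dangling node: spread its rank evenly
                    for n in nodes:
                        new[n] += d * rank[source] / len(nodes)
            rank = new
        return rank

    # hypothetical citation graph: paper A cites B and C, etc.
    citations = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(citations))
    ```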
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
  4. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.05
    0.04925008 = product of:
      0.14775024 = sum of:
        0.14775024 = sum of:
          0.08043434 = weight(_text_:indexing in 5865) [ClassicSimilarity], result of:
            0.08043434 = score(doc=5865,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.42292362 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.06731591 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.06731591 = score(doc=5865,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  5. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 4837) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4837,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4837)
      0.33333334 = coord(1/3)
    
    Abstract
    Icon systems could offer an efficient solution for the collective iconic categorization of knowledge by providing a graphical interpretation. Their pictorial character helps to visualize the structure of a text and to make it understandable across vocabulary barriers. In this paper we propose a knowledge-engineering-based approach to iconic representation. We assume that such systematic icons improve collective knowledge management, while text (structured according to our knowledge management model, Hypertopic) helps to reduce the diversity of graphical interpretations among different users. This position paper also prepares to test our hypothesis in an "iconic social tagging" experiment to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project and some of the icons already designed for this experiment in the field of sustainability. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust icon systems.
  6. Mixter, J.; Childress, E.R.: FAST (Faceted Application of Subject Terminology) users : summary and case studies (2013) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 2011) [ClassicSimilarity], result of:
          0.08966068 = score(doc=2011,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 2011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2011)
      0.33333334 = coord(1/3)
    
    Abstract
    Over the past ten years, various organizations, both public and private, have expressed interest in implementing FAST in their cataloging workflows. As interest in FAST has grown, so too has interest in knowing how FAST is being used and by whom. Since 2002 eighteen institutions (see table 1) in six countries have expressed interest in learning more about FAST and how it could be implemented in cataloging workflows. Currently OCLC is aware of nine agencies that have actually adopted or support FAST for resource description. This study, the first systematic census of FAST users undertaken by OCLC, was conducted, in part, to address these inquiries. Its purpose was to examine: how FAST is being utilized; why FAST was chosen as the cataloging vocabulary; what benefits FAST provides; and what can be done to enhance the value of FAST. Interview requests were sent to all parties that had previously contacted OCLC about FAST. Of the eighteen organizations contacted, sixteen agreed to provide information about their decision whether to use FAST (nine adopters, seven non-adopters).
  7. Cecchini, C.; Zanchetta, C.; Borin, P.; Xausa, G.: Computational design e sistemi di classificazione per la verifica predittiva delle prestazioni di sistema degli organismi edilizi : Computational design and classification systems to support predictive checking of performance of building systems (2017) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5856) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5856,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5856, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5856)
      0.33333334 = coord(1/3)
    
    Abstract
    The aim of controlling the economic, social and environmental aspects connected with the construction of a building calls for a systematic approach, for which it is necessary to build test models aimed at a coordinated analysis of different, independent performance issues. BIM technology, with its interoperable information models, offers a significant operational basis for meeting this need. In most cases, however, information models concentrate on a collection of product-based digital models built in a virtual space rather than on the simulation of their relational behaviour, even though these relations are the most important aspect of modelling, because they mark and characterize the interactions that define the building as a system. This study presents the use of standard classification systems as tools for both the activation and the validation of an integrated performance-based building process. By mapping the categories and types of the information model to the codes of a technological and performance-based classification system, functional units and their elements can be linked and coordinated with the requirements of the AEC standards. Progressing with an incremental logic, it then becomes possible to manage the requirements of the whole building and to monitor the fulfilment of design objectives and specific normative guidelines.
  8. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.02
    0.02192013 = product of:
      0.06576039 = sum of:
        0.06576039 = product of:
          0.19728117 = sum of:
            0.19728117 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.19728117 = score(doc=4388,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  9. Gödert, W.: An ontology-based model for indexing and retrieval (2013) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 1510) [ClassicSimilarity], result of:
              0.12869495 = score(doc=1510,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 1510, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1510)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Starting from an unsolved problem of information retrieval, this paper presents an ontology-based model for indexing and retrieval. The model combines the methods and experience of cognitively interpreted indexing languages with the strengths and possibilities of formal knowledge representation. Its core component uses inferences along paths of typed relations between the entities of a knowledge representation to determine hit quantities in retrieval processes. The entities are arranged in aspect-oriented facets to ensure a consistent hierarchical structure. The possible consequences for indexing and retrieval are discussed.
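    As a toy illustration of the core idea, inference along paths of typed relations, the sketch below expands a set of retrieval hits by following only selected relation types in a miniature knowledge representation; the entities, relation names and traversal policy are invented for illustration and are not taken from the paper.

    ```python
    # Toy sketch: expand retrieval hits by following only selected typed
    # relations between entities. Entities and relation names are invented.
    from collections import deque

    # (subject, relation, object) triples of a miniature knowledge representation
    TRIPLES = [
        ("optical_lens", "is_part_of", "microscope"),
        ("microscope", "is_instrument_for", "microscopy"),
        ("microscopy", "is_method_of", "cell_biology"),
        ("microscope", "has_manufacturer", "acme_corp"),
    ]

    def expand_hits(seeds: set, follow: set) -> set:
        """Return all entities reachable from the seed hits along allowed relations."""
        hits, queue = set(seeds), deque(seeds)
        while queue:
            entity = queue.popleft()
            for s, rel, o in TRIPLES:
                if s == entity and rel in follow and o not in hits:
                    hits.add(o)
                    queue.append(o)
        return hits

    # Following part-of and instrument-for, but not the manufacturer relation:
    print(expand_hits({"optical_lens"}, {"is_part_of", "is_instrument_for"}))
    ```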
  10. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others? (2018) 0.02
    0.020983277 = product of:
      0.06294983 = sum of:
        0.06294983 = product of:
          0.12589966 = sum of:
            0.12589966 = weight(_text_:indexing in 4548) [ClassicSimilarity], result of:
              0.12589966 = score(doc=4548,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6619802 = fieldWeight in 4548, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4548)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    I believe that Google Scholar is the most popular academic indexing service for researchers and citations. However, some other indexing institutions may be more professional than Google Scholar, though not as popular. Other indexing websites such as Scopus and Clarivate provide more statistical figures for scholars, institutions or even journals. With regard to publication citations, Google Scholar always shows higher citation counts for a paper than other indexing websites, since it covers most publication platforms and can therefore easily count citations, while other databases only consider citations coming from journals that are already indexed in their database.
  11. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.01586651 = product of:
      0.04759953 = sum of:
        0.04759953 = product of:
          0.09519906 = sum of:
            0.09519906 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.09519906 = score(doc=3582,freq=4.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  12. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.015707046 = product of:
      0.047121134 = sum of:
        0.047121134 = product of:
          0.09424227 = sum of:
            0.09424227 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.09424227 = score(doc=8365,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38
  13. Gödert, W.: Detecting multiword phrases in mathematical text corpora (2012) 0.02
    0.015166845 = product of:
      0.045500536 = sum of:
        0.045500536 = product of:
          0.09100107 = sum of:
            0.09100107 = weight(_text_:indexing in 466) [ClassicSimilarity], result of:
              0.09100107 = score(doc=466,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47848347 = fieldWeight in 466, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=466)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We present an approach for detecting multiword phrases in mathematical text corpora. The method is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo, which identifies words by means of predefined dictionaries for specific word classes such as adjectives, personal names or nouns. The detection of multiword groups is done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
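    A much-simplified sketch of the general idea (not of Lingo itself): tag tokens against small word-class dictionaries, then emit adjacent adjective+noun or noun+noun pairs as multiword candidates. The dictionaries and example sentence below are toy assumptions.

    ```python
    # Simplified sketch of dictionary-based multiword detection. Toy dictionaries;
    # not Lingo's actual implementation.
    ADJECTIVES = {"finite", "partial", "linear"}
    NOUNS = {"element", "method", "differential", "equation", "operator"}

    def word_class(token: str):
        t = token.lower()
        if t in ADJECTIVES:
            return "ADJ"
        if t in NOUNS:
            return "NOUN"
        return None

    def multiword_candidates(text: str) -> list:
        tokens = text.split()
        tagged = [(tok, word_class(tok)) for tok in tokens]
        phrases = []
        # join adjacent ADJ+NOUN or NOUN+NOUN pairs into candidate phrases
        for (w1, c1), (w2, c2) in zip(tagged, tagged[1:]):
            if c1 in {"ADJ", "NOUN"} and c2 == "NOUN":
                phrases.append(f"{w1} {w2}".lower())
        return phrases

    print(multiword_candidates("The finite element method solves a partial differential equation"))
    # ['finite element', 'element method', 'partial differential', 'differential equation']
    ```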
  14. Cumyn, M.; Reiner, G.; Mas, S.; Lesieur, D.: Legal knowledge representation using a faceted scheme (2019) 0.02
    0.015166845 = product of:
      0.045500536 = sum of:
        0.045500536 = product of:
          0.09100107 = sum of:
            0.09100107 = weight(_text_:indexing in 5788) [ClassicSimilarity], result of:
              0.09100107 = score(doc=5788,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47848347 = fieldWeight in 5788, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5788)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A database supports legal research by matching a user's request for information with documents of the database that contain it. Indexes are among the oldest tools to achieve that aim. Many legal publishers continue to provide manual subject indexing of legal documents, in addition to automatic full-text indexing, which improves the performance of a full-text search.
  15. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.01
    0.014187284 = product of:
      0.04256185 = sum of:
        0.04256185 = product of:
          0.0851237 = sum of:
            0.0851237 = weight(_text_:indexing in 3109) [ClassicSimilarity], result of:
              0.0851237 = score(doc=3109,freq=14.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4475803 = fieldWeight in 3109, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs brings multiple advantages derived from rethinking the OPAC anew, since we aim to share concepts within the subject authority data. Images, as carriers of linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on the conceptual level expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, greater conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, which benefits social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs, such as deafness and illiteracy. It raises questions about the paradigm of the primacy of orality in information systems and paves the way for the legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in neuroscience, linguistics and information science would be desirable for further investigations into the nature of cognitive processes in information organization and classification, and for developing assistive KOS for individuals with communication problems such as autism and deafness.
  16. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.01
    0.013931636 = product of:
      0.041794907 = sum of:
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 2810) [ClassicSimilarity], result of:
              0.083589815 = score(doc=2810,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 2810, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    The Library is consulting with stakeholders concerning the potential impact of these proposals. No firm decisions have yet been taken regarding either of these standards. FAST: (1) The British Library proposes to adopt FAST selectively to extend the scope of subject indexing of current and legacy content. (2) The British Library proposes to implement FAST as a replacement for LCSH in all current cataloguing, subject to mitigation of the risks identified above, in particular the question of sustainability. DDC: (3) The British Library proposes to implement Abridged DDC selectively to extend the scope of subject indexing of current and legacy content.
  17. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.013931636 = product of:
      0.041794907 = sum of:
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 3829) [ClassicSimilarity], result of:
              0.083589815 = score(doc=3829,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 3829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably by using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  18. Röthler, D.: "Lehrautomaten" oder die MOOC-Vision der späten 60er Jahre (2014) 0.01
    0.0134631805 = product of:
      0.04038954 = sum of:
        0.04038954 = product of:
          0.08077908 = sum of:
            0.08077908 = weight(_text_:22 in 1552) [ClassicSimilarity], result of:
              0.08077908 = score(doc=1552,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.46428138 = fieldWeight in 1552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1552)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2018 11:04:35
  19. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.01
    0.013270989 = product of:
      0.039812967 = sum of:
        0.039812967 = product of:
          0.079625934 = sum of:
            0.079625934 = weight(_text_:indexing in 1717) [ClassicSimilarity], result of:
              0.079625934 = score(doc=1717,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.41867304 = fieldWeight in 1717, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei, SWD) provides a broad controlled vocabulary for indexing documents on all subjects. Traditionally it has been used primarily for the intellectual subject cataloguing of books. The Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for the automated assignment of subject headings to online publications. This project, its results and its problems are sketched in the paper.
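    As a very rough illustration of what automated subject heading assignment can mean in its simplest form, the sketch below matches controlled terms and their variant labels against a document's text; the vocabulary entries are invented, and the DNB's actual procedures are considerably more sophisticated.

    ```python
    # Very simplified sketch of automated subject-heading assignment by lexical
    # matching of controlled terms (and variant labels) against a document's text.
    # Vocabulary entries are invented for illustration.
    CONTROLLED_VOCAB = {
        "Automatische Indexierung": {"automatic indexing", "automatische indexierung"},
        "Informationsretrieval": {"information retrieval", "informationsretrieval"},
        "Ontologie": {"ontology", "ontologie"},
    }

    def assign_headings(text: str) -> list:
        lowered = text.lower()
        return [heading
                for heading, variants in CONTROLLED_VOCAB.items()
                if any(v in lowered for v in variants)]

    sample = "This paper evaluates automatic indexing for information retrieval."
    print(assign_headings(sample))  # ['Automatische Indexierung', 'Informationsretrieval']
    ```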
  20. Schultz, S.: Die eine App für alles : Mobile Zukunft in China (2016) 0.01
    0.01269321 = product of:
      0.038079627 = sum of:
        0.038079627 = product of:
          0.07615925 = sum of:
            0.07615925 = weight(_text_:22 in 4313) [ClassicSimilarity], result of:
              0.07615925 = score(doc=4313,freq=4.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4377287 = fieldWeight in 4313, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4313)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2018 14:22:02

Languages

  • e 48
  • d 46
  • a 1
  • i 1

Types

  • a 57
  • r 3
  • m 2
  • x 2
  • s 1