Search (163 results, page 8 of 9)

  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  1. Sankarasubramaniam, Y.; Ramanathan, K.; Ghosh, S.: Text summarization using Wikipedia (2014) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 2693) [ClassicSimilarity],
    where weight = 0.03088002 = queryWeight · fieldWeight, with tf = 1.4142135 (freq = 2.0),
    idf = 3.569778 (docFreq = 3384, maxDocs = 44218), queryNorm = 0.0438652,
    fieldNorm = 0.0390625 — recomputed in the sketch below.
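    A note on these scores: every expansion on this page is a Lucene ClassicSimilarity (TF-IDF) explanation. As a cross-check, here is a minimal Python sketch — assuming the classic Lucene definitions idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq), and taking queryNorm and fieldNorm as given from the explanation — that reproduces the score of entry 1:

      import math

      # Values from the score explanation of entry 1 (term "_text_:k", doc 2693).
      max_docs, doc_freq = 44218, 3384
      freq = 2.0               # occurrences of "k" in the matched field
      query_norm = 0.0438652   # query normalization, taken as given
      field_norm = 0.0390625   # stored length norm for this field
      coord = 1.0 / 3.0        # 1 of 3 query clauses matched; applied twice here

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # -> 3.569778
      tf = math.sqrt(freq)                             # -> 1.4142135

      query_weight = idf * query_norm        # -> 0.15658903
      field_weight = tf * idf * field_norm   # -> 0.19720423
      score = query_weight * field_weight * coord * coord
      print(f"{score:.7f}")                  # -> 0.0034311

    The same arithmetic, with the per-entry tf, idf, and fieldNorm values, yields every score on this page.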
    
  2. Savoy, J.: Text representation strategies : an example with the State of the union addresses (2016) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 3042); term statistics identical to entry 1.
    
    Abstract
    Based on State of the Union addresses from 1790 to 2014 (225 speeches delivered by 42 presidents), this paper describes and evaluates different text representation strategies. To determine the most important words of a given text, the term frequencies (tf) or the tf-idf weighting scheme can be applied. Recently, latent Dirichlet allocation (LDA) has been proposed to define the topics included in a corpus. As another strategy, this study proposes to apply a vocabulary specificity measure (Z-score) to determine the most significantly overused word-types or short sequences of them. Our experiments show that the simple term frequency measure is not able to discriminate between specific terms associated with a document or a set of texts. Using the tf-idf or LDA approach, the selection requires some arbitrary decisions. Based on the term-specificity measure (Z-score), the term selection has a clear theoretical basis. Moreover, the most significant sentences for each presidency can be determined. As another facet, we can visualize the dynamic evolution of the usage of some terms together with their specificity measures. Finally, this technique can be employed to identify the most important lexical leaders, i.e., those introducing terms that are overused by the k following presidencies.
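    The Z-score used here standardizes how far a term's frequency in a subcorpus (e.g., one presidency's addresses) deviates from its corpus-wide expectation. A minimal sketch of that idea under a binomial model — the paper's exact normalization may differ, and all names and counts below are illustrative:

      import math

      def z_score(tf_sub: int, n_sub: int, tf_corpus: int, n_corpus: int) -> float:
          # How many standard deviations the term's subcorpus frequency lies
          # above its corpus-wide expectation, under a binomial assumption.
          p = tf_corpus / n_corpus
          expected = n_sub * p
          variance = n_sub * p * (1.0 - p)
          return (tf_sub - expected) / math.sqrt(variance)

      # Hypothetical counts: a term used 40 times in one presidency's 20,000
      # tokens, but only 200 times in a 2,000,000-token corpus, is overused.
      print(z_score(40, 20_000, 200, 2_000_000))  # -> ~26.9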
  3. Lian, T.; Yu, C.; Wang, W.; Yuan, Q.; Hou, Z.: Doctoral dissertations on tourism in China : a co-word analysis (2016) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 3178); term statistics identical to entry 1.
    
    Abstract
    The aim of this paper is to map the foci of research in doctoral dissertations on tourism in China. Co-word analysis is applied, with keywords drawn from six public dissertation databases (CDFD, Wanfang Data, NLC, CALIS, ISTIC, and NSTL) as well as from some university libraries providing doctoral dissertations on tourism. Altogether we examined 928 doctoral dissertations on tourism written between 1989 and 2013. Doctoral dissertations on tourism in China involve 36 first-level disciplines and 102 second-level disciplines. We collected the top 68 keywords of practical significance in tourism, each mentioned at least four times. These keywords were classified into 12 categories on the basis of co-word analysis, comprising cluster analysis, strategic-diagram analysis, and social network analysis. The strategic diagram of the 12 categories reveals the mature and immature areas of tourism study. The social network maps show both the network of the original co-occurrence matrix and the k-cores of the binarized matrix. The paper provides valuable insight into the study of tourism by analyzing doctoral dissertations on tourism in China.
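    The k-cores analysis mentioned above repeatedly strips keywords with fewer than k co-occurrence links until only a densely connected core remains. A minimal sketch with networkx, using made-up keyword lists in place of the dissertation data:

      from itertools import combinations

      import networkx as nx

      # Illustrative stand-ins for dissertation keyword lists (not the real data).
      records = [
          ["ecotourism", "sustainable development", "national park"],
          ["ecotourism", "tourist behavior", "national park"],
          ["tourist behavior", "sustainable development", "ecotourism"],
          ["heritage tourism", "tourist behavior"],
      ]

      G = nx.Graph()
      for keywords in records:
          # Binary co-occurrence: link every pair of keywords sharing a record.
          for a, b in combinations(sorted(set(keywords)), 2):
              G.add_edge(a, b)

      core = nx.k_core(G, k=2)  # keep keywords with >= 2 neighbors inside the core
      print(sorted(core))       # 'heritage tourism' (degree 1) is stripped out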
  4. Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 3221); term statistics identical to entry 1.
    
  5. Järvelin, A.; Keskustalo, H.; Sormunen, E.; Saastamoinen, M.; Kettunen, K.: Information retrieval from historical newspaper collections in highly inflectional languages : a query expansion approach (2016) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 3223); term statistics identical to entry 1.
    
  6. K., Vani; Gupta, D.: Unmasking text plagiarism using syntactic-semantic based natural language processing techniques : comparisons, analysis and challenges (2018) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 5084); term statistics identical to entry 1.
    
  7. Soni, S.; Lerman, K.; Eisenstein, J.: Follow the leader : documents on the leading edge of semantic change get more citations (2021) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 169); term statistics identical to entry 1.
    
  8. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.00
    Score: 0.0034311134 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 990); term statistics identical to entry 1.
    
  9. Herrera-Viedma, E.: Modeling the retrieval process for an information retrieval system using an ordinal fuzzy linguistic approach (2001) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 5752) [ClassicSimilarity],
    where weight = 0.029985385, with tf = 1.4142135 (freq = 2.0), idf = 3.5176873 (docFreq = 3565,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.0390625.
    
    Date
    29. 9.2001 14:00:25
  10. Ibekwe-SanJuan, F.; SanJuan, E.: From term variants to research topics (2002) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 1853); term statistics identical to entry 9.
    
    Source
    Knowledge organization. 29(2002) nos.3/4, S.181-197
  11. Rosemblat, G.; Tse, T.; Gemoets, D.: Adapting a monolingual consumer health system for Spanish cross-language information retrieval (2004) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 2673); term statistics identical to entry 9.
    
    Date
    29. 8.2004 19:12:06
  12. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 5226); term statistics identical to entry 9.
    
    Abstract
    Tseng constructs a word co-occurrence-based thesaurus by means of the automatic analysis of Chinese text. Words are identified by a longest dictionary match, supplemented by a keyword extraction algorithm that merges back nearby tokens and accepts shorter character strings if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced by using a stop list of 70 characters and 2,680 words. Extracted terms with their associated document weights are sorted by decreasing frequency, and term pairs from the top of this list are associated using a Dice coefficient, modified to account for the effect of longer documents on the weights of term pairs. Co-occurrence is counted not over the document as a whole but within paragraph- or sentence-sized sections, in order to reduce computation time. A window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles, and judges were asked to review the top 50 terms associated with each of 30 single-word query terms. They determined 69% to be relevant.
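    The association step described here lends itself to a compact formula. A minimal sketch of the plain Dice coefficient over window-level counts — Tseng's modification for document length is omitted, and the counts below are made up:

      def dice(co_ab: int, freq_a: int, freq_b: int) -> float:
          # Plain Dice association: 2 * co(a, b) / (f(a) + f(b)).
          return 2.0 * co_ab / (freq_a + freq_b)

      # Toy counts over sentence-sized windows: terms a and b co-occur in 15
      # windows; a appears in 40 windows, b in 25.
      print(dice(co_ab=15, freq_a=40, freq_b=25))  # -> 0.4615...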
  13. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 3682); term statistics identical to entry 9.
    
    Date
    16.11.2017 14:00:29
  14. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.00
    Score: 0.0033317097 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 103); term statistics identical to entry 9.
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, such as prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level comprising the three Aristotelian principles of similarity, contiguity and contrast.
  15. Fóris, A.: Network theory and terminology (2013) 0.00
    Score: 0.0033017385 = coord(1/3) · coord(1/3) · weight(_text_:22 in doc 1365) [ClassicSimilarity],
    where weight = 0.029715646, with tf = 1.4142135 (freq = 2.0), idf = 3.5018296 (docFreq = 3622,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.0390625.
    
    Date
    2. 9.2014 21:22:48
  16. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.00
    Score: 0.0032982244 = coord(1/3) · coord(1/3) · weight(_text_:29 in doc 2677) [ClassicSimilarity],
    where weight = 0.029684016, with tf = 2.0 (freq = 4.0), idf = 3.5176873 (docFreq = 3565,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.02734375.
    
    Date
    29. 8.2004 19:29:56
  17. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    Score: 0.0029113963 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 331) [ClassicSimilarity],
    where weight = 0.026202565, with tf = 2.0 (freq = 4.0), idf = 3.569778 (docFreq = 3384,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.0234375.
    
    Content
    "Die Dresdner Wissenschaftler haben die semantischen Eigenschaften von Texten mathematisch untersucht, indem sie zehn verschiedene englische Texte in unterschiedlichen Formen kodierten. Dazu zählt unter anderem die englische Ausgabe von Leo Tolstois "Krieg und Frieden". Beispielsweise übersetzten die Forscher Buchstaben innerhalb eines Textes in eine Binär-Sequenz. Dazu ersetzten sie alle Vokale durch eine Eins und alle Konsonanten durch eine Null. Mit Hilfe weiterer mathematischer Funktionen beleuchteten die Wissenschaftler dabei verschiedene Ebenen des Textes, also sowohl einzelne Vokale, Buchstaben als auch ganze Wörter, die in verschiedenen Formen kodiert wurden. Innerhalb des ganzen Textes lassen sich so wiederkehrende Muster finden. Diesen Zusammenhang innerhalb des Textes bezeichnet man als Langzeitkorrelation. Diese gibt an, ob zwei Buchstaben an beliebig weit voneinander entfernten Textstellen miteinander in Verbindung stehen - beispielsweise gibt es wenn wir an einer Stelle einen Buchstaben "K" finden, eine messbare höhere Wahrscheinlichkeit den Buchstaben "K" einige Seiten später nochmal zu finden. "Es ist zu erwarten, dass wenn es in einem Buch an einer Stelle um Krieg geht, die Wahrscheinlichkeit hoch ist das Wort Krieg auch einige Seiten später zu finden. Überraschend ist es, dass wir die hohe Wahrscheinlichkeit auch auf der Buchstabenebene finden", so Altmann.
  18. Kajanan, S.; Bao, Y.; Datta, A.; VanderMeer, D.; Dutta, K.: Efficient automatic search query formulation using phrase-level analysis (2014) 0.00
    Score: 0.0027448907 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 1264) [ClassicSimilarity],
    where weight = 0.024704017, with tf = 1.4142135 (freq = 2.0), idf = 3.569778 (docFreq = 3384,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.03125.
    
  19. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    Score: 0.0026413908 = coord(1/3) · coord(1/3) · weight(_text_:22 in doc 4217) [ClassicSimilarity],
    where weight = 0.023772515, with tf = 1.4142135 (freq = 2.0), idf = 3.5018296 (docFreq = 3622,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.03125.
    
    Date
    22. 1.2018 11:32:44
  20. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.00
    Score: 0.0024017796 = coord(1/3) · coord(1/3) · weight(_text_:k in doc 3645) [ClassicSimilarity],
    where weight = 0.021616016, with tf = 1.4142135 (freq = 2.0), idf = 3.569778 (docFreq = 3384,
    maxDocs = 44218), queryNorm = 0.0438652, fieldNorm = 0.02734375.
    


Languages

  • e 114
  • d 44
  • ru 2
  • chi 1
  • f 1
