Search (25 results, page 1 of 2)

  • author_ss:"Willett, P."
  1. Artymiuk, P.J.; Spriggs, R.V.; Willett, P.: Graph theoretic methods for the analysis of structural relationships in biological macromolecules (2005) 0.02
    0.0191766 = product of:
      0.0383532 = sum of:
        0.019136423 = weight(_text_:for in 5258) [ClassicSimilarity], result of:
          0.019136423 = score(doc=5258,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 5258, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=5258)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 5258) [ClassicSimilarity], result of:
              0.038433556 = score(doc=5258,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 5258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5258)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
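    The tree above is Lucene's ClassicSimilarity explain output. As a sanity check, the sketch below recomputes the document score from the constants quoted in the trace; the formula (tf x idf x norms, scaled by the coordination factors) is standard Lucene TF-IDF, and only the numbers are taken from the listing.

```python
import math

# Recompute the ClassicSimilarity explain tree above.
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.047278564              # queryNorm from the trace
field_norm = 0.046875                 # fieldNorm(doc=5258)

def term_weight(freq, doc_freq, max_docs=44218):
    tf = math.sqrt(freq)              # tf(freq)
    i = idf(doc_freq, max_docs)       # idf(docFreq, maxDocs)
    query_weight = i * query_norm     # queryWeight
    field_weight = tf * i * field_norm  # fieldWeight
    return query_weight * field_weight  # weight(term)

w_for = term_weight(6.0, 18385)       # ~0.019136, term "for"
w_22 = term_weight(2.0, 3622) * 0.5   # coord(1/2), ~0.019217, term "22"
score = (w_for + w_22) * 0.5          # coord(2/4)
print(round(score, 7))                # 0.0191766, matching the trace
```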
    
    Abstract
    Subgraph isomorphism and maximum common subgraph isomorphism algorithms from graph theory provide an effective and efficient way of identifying structural relationships between biological macromolecules. They thus provide a natural complement to the pattern matching algorithms that are used in bioinformatics to identify sequence relationships. Examples are provided of the use of graph theory to analyze proteins for which three-dimensional crystallographic or NMR structures are available, focusing on the use of the Bron-Kerbosch clique detection algorithm to identify common folding motifs and of the Ullmann subgraph isomorphism algorithm to identify patterns of amino acid residues. Our methods are also applicable to other types of biological macromolecule, such as carbohydrate and nucleic acid structures.
    Date
    22.7.2006 14:40:10
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.5, S.518-528
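    The abstract above names the Bron-Kerbosch algorithm for clique detection. Below is a minimal textbook version of that algorithm (without pivoting); it is a generic sketch, not the authors' implementation, and the toy graph is invented for illustration.

```python
# Basic Bron-Kerbosch maximal-clique enumeration. The motif matching
# in the abstract reduces common-substructure search to finding
# cliques in a correspondence graph.
def bron_kerbosch(r, p, x, adj, cliques):
    """r: current clique, p: candidate vertices, x: excluded vertices."""
    if not p and not x:
        cliques.append(set(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# Usage: a 4-cycle 0-1-2-3 with chord 0-2, as adjacency sets.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)  # maximal cliques: {0, 1, 2} and {0, 2, 3}
```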
  2. Griffiths, A.; Robinson, L.A.; Willett, P.: Hierarchic agglomerative clustering methods for automatic document classification (1984) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 2414) [ClassicSimilarity], result of:
          0.02946245 = score(doc=2414,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 2414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=2414)
      0.25 = coord(1/4)
    
  3. Griffiths, A.; Luckhurst, H.C.; Willett, P.: Using interdocument similarity information in document retrieval systems (1986) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 2415) [ClassicSimilarity], result of:
          0.025779642 = score(doc=2415,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 2415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=2415)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 37(1986) no.1, S.3-11
  4. Perry, R.; Willett, P.: A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 2701) [ClassicSimilarity], result of:
          0.025779642 = score(doc=2701,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 2701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=2701)
      0.25 = coord(1/4)
    
  5. Robertson, A.M.; Willett, P.: Retrieval techniques for historical English text : searching the sixteenth and seventeenth century titles in the Catalogue of Canterbury Cathedral Library using spelling-correction methods (1992) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 4209) [ClassicSimilarity], result of:
          0.025779642 = score(doc=4209,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 4209, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4209)
      0.25 = coord(1/4)
    
    Abstract
    A range of techniques has been developed for the correction of misspellings in machine-readable texts. Discusses the use of such techniques for the identification of words in the sixteenth- and seventeenth-century titles from the Catalogue of Canterbury Cathedral Library that are most similar to query words in modern English. The experiments used digram matching, non-phonetic coding, and dynamic programming methods for spelling correction. These allow very high recall searches to be carried out, although the latter methods are very demanding of computer resources.
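    For the digram matching mentioned above, a common formulation (assumed here, not necessarily the paper's exact coefficient) scores a pair of words by the Dice coefficient over their character bigrams:

```python
# Rank candidate historical variants of a modern query word by
# Dice similarity over character digrams. Illustrative sketch only.
def digrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(a, b):
    da, db = digrams(a.lower()), digrams(b.lower())
    if not da or not db:
        return 0.0
    return 2 * len(da & db) / (len(da) + len(db))

print(dice("warre", "war"))        # 0.67: close to modern "war"
print(dice("publique", "public"))  # 0.67: close to modern "public"
```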
  6. Ekmekcioglu, F.C.; Willett, P.: Effectiveness of stemming for Turkish text retrieval (2000) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 5423) [ClassicSimilarity], result of:
          0.025779642 = score(doc=5423,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 5423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=5423)
      0.25 = coord(1/4)
    
  7. Al-Hawamdeh, S.; Smith, G.; Willett, P.; Vere, R. de: Using nearest-neighbour searching techniques to access full-text documents (1991) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 2300) [ClassicSimilarity], result of:
          0.020833097 = score(doc=2300,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 2300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=2300)
      0.25 = coord(1/4)
    
    Abstract
    Summarises the results to date of a continuing programme of research at Sheffield Univ. to investigate the use of nearest-neighbour retrieval algorithms for full text searching. Given a natural language query statement, the research methods result in a ranking of the paragraphs comprising a full text document in order of decreasing similarity with the query, where the similarity for each paragraph is determined by the number of keyword stems that it has in common with the query
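    The similarity rule described above is simple enough to state directly in code. The sketch below is an illustrative reading of it (the stemming step is elided; the stem sets are assumed to be given, e.g. by any standard stemmer):

```python
# Score each paragraph by the number of keyword stems it shares
# with the query, then sort in decreasing order of overlap.
def rank_paragraphs(query_stems, paragraphs):
    scored = []
    for i, para_stems in enumerate(paragraphs):
        scored.append((len(query_stems & para_stems), i))
    scored.sort(reverse=True)            # highest overlap first
    return [(i, s) for s, i in scored]   # (paragraph index, score)

query = {"near", "neighbour", "search"}
paras = [{"full", "text", "search"},
         {"near", "neighbour", "search", "rank"}]
print(rank_paragraphs(query, paras))     # [(1, 3), (0, 1)]
```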
  8. Robertson, A.M.; Willett, P.: Identification of word-variants in historical text databases : report for the period October 1990 to September 1992 (1994) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 939) [ClassicSimilarity], result of:
          0.020833097 = score(doc=939,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 939, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=939)
      0.25 = coord(1/4)
    
    Abstract
    Databases of historical texts are increasingly becoming available for end-user searching via online or CD-ROM databases. Many of the words in these databases are spelt differently from their modern forms, with a resultant loss of retrieval effectiveness. The project evaluated a range of techniques that can suggest historical variants of modern-language query words, building on earlier work on spelling correction.
  9. Robertson, A.M.; Willett, P.: Use of genetic algorithms in information retrieval (1995) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 2418) [ClassicSimilarity], result of:
          0.020833097 = score(doc=2418,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 2418, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=2418)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the basic techniques involving genetic algorithms and their application to 2 problems in information retrieval: the generation of equifrequent groups of index terms; and the identification of optimal query and term weights. The algorithm developed for the generation of equifrequent groupings proved to be effective in operation, achieving results comparable with those obtained using a good deterministic algorithm. The algorithm developed for the identification of optimal query and term weights involves a fitness function that is based on full relevance information.
  10. Ingwersen, P.; Willett, P.: An introduction to algorithmic and cognitive approaches for information retrieval (1995) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 4344) [ClassicSimilarity], result of:
          0.020833097 = score(doc=4344,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 4344, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
      0.25 = coord(1/4)
    
    Abstract
    This paper provides an overview of 2 complementary approaches to the design and implementation of information retrieval systems. The first approach focuses on the algorithms and data structures that are needed to maximise the effectiveness and the efficiency of the searches that can be carried out on text databases, while the second adopts a cognitive approach that focuses on the role of the user and of the knowledge sources involved in information retrieval. The paper argues for a holistic view of information retrieval that is capable of encompassing both of these approaches.
  11. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 5689) [ClassicSimilarity], result of:
          0.020833097 = score(doc=5689,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 5689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.25 = coord(1/4)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked-output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
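    One of the three expansion methods is based on Soundex codes. A compact implementation of standard (American) Soundex is shown below; the paper's precise variant may differ.

```python
# Standard American Soundex: first letter kept, consonants mapped
# to digit classes, adjacent duplicates collapsed, padded to 4 chars.
def soundex(word):
    groups = [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
              ("L", "4"), ("MN", "5"), ("R", "6")]
    code_of = {ch: d for letters, d in groups for ch in letters}
    word = word.upper()
    out, prev = word[0], code_of.get(word[0], "")
    for ch in word[1:]:
        d = code_of.get(ch, "")
        if d and d != prev:
            out += d
        if ch not in "HW":     # H and W do not separate equal codes
            prev = d
    return (out + "000")[:4]

print(soundex("Willett"), soundex("Wilet"))  # W430 W430: variants match
```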
  12. Furner, J.; Willett, P.: A survey of hypertext-based public-access point-of-information systems in UK libraries (1995) 0.00
    0.004784106 = product of:
      0.019136423 = sum of:
        0.019136423 = weight(_text_:for in 2044) [ClassicSimilarity], result of:
          0.019136423 = score(doc=2044,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 2044, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2044)
      0.25 = coord(1/4)
    
    Abstract
    We have recently completed a survey of the operational use of hypertext-based information systems in academic, public and special libraries in the UK. A literature search, a questionnaire, and both telephone and face-to-face interviews demonstrate that the principal application of hypertext systems is the implementation of public-access point-of-information systems, which provide guidance to the users of local information resources. In this paper, we describe the principal issues relating to the design and usage of these systems that were raised in the interviews and that we experienced when using the systems for ourselves. We then present a set of technical recommendations with the intention of helping the developers of future systems, with special attention being given to the need to develop effective methods for system evaluation.
  13. Li, X.; Cox, A.; Ford, N.; Creaser, C.; Fry, J.; Willett, P.: Knowledge construction by users : a content analysis framework and a knowledge construction process model for virtual product user communities (2017) 0.00
    0.0046035075 = product of:
      0.01841403 = sum of:
        0.01841403 = weight(_text_:for in 3574) [ClassicSimilarity], result of:
          0.01841403 = score(doc=3574,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 3574, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3574)
      0.25 = coord(1/4)
    
    Abstract
    Purpose
    The purpose of this paper is to develop a content analysis framework and from that derive a process model of knowledge construction in the context of virtual product user communities, organization-sponsored online forums where product users collaboratively construct knowledge to solve their technical problems.
    Design/methodology/approach
    The study is based on a deductive and qualitative content analysis of discussion threads about solving technical problems selected from a series of virtual product user communities. Data are complemented with thematic analysis of interviews with forum members.
    Findings
    The research develops a content analysis framework for knowledge construction. It is based on a combination of existing codes derived from frameworks developed for computer-supported collaborative learning and new categories identified from the data. Analysis using this framework allows the authors to propose a knowledge construction process model showing how these elements are organized around a typical "trial and error" knowledge construction strategy.
    Practical implications
    The research makes suggestions about organizations' management of knowledge activities in virtual product user communities, including moderators' roles in facilitation.
    Originality/value
    The paper outlines a new framework for analysing knowledge activities where there is a low level of critical thinking and a model of knowledge construction by trial and error. The new framework and model can be applied in other similar contexts.
  14. Wakeling, S.; Spezi, V.; Fry, J.; Creaser, C.; Pinfield, S.; Willett, P.: Academic communities : the role of journals and open-access mega-journals in scholarly communication (2019) 0.00
    0.0046035075 = product of:
      0.01841403 = sum of:
        0.01841403 = weight(_text_:for in 4627) [ClassicSimilarity], result of:
          0.01841403 = score(doc=4627,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 4627, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4627)
      0.25 = coord(1/4)
    
    Abstract
    Purpose
    The purpose of this paper is to provide insights into publication practices from the perspective of academics working within four disciplinary communities: biosciences, astronomy/physics, education and history. The paper explores the ways in which these multiple overlapping communities intersect with the journal landscape and the implications for the adoption and use of new players in the scholarly communication system, particularly open-access mega-journals (OAMJs). OAMJs (e.g. PLOS ONE and Scientific Reports) are large, broad scope, open-access journals that base editorial decisions solely on the technical/scientific soundness of the article.
    Design/methodology/approach
    Focus groups with active researchers in these fields were held in five UK Higher Education Institutions across Great Britain, and were complemented by interviews with pro-vice-chancellors for research at each institution.
    Findings
    A strong finding to emerge from the data is the notion of researchers belonging to multiple overlapping communities, with some inherent tensions in meeting the requirements for these different audiences. Researcher perceptions of evaluation mechanisms were found to play a major role in attitudes towards OAMJs, and interviews with the pro-vice-chancellors for research indicate that there is a difference between researchers' perceptions and the values embedded in institutional frameworks.
    Originality/value
    This is the first purely qualitative study relating to researcher perspectives on OAMJs. The findings of the paper will be of interest to publishers, policy-makers, research managers and academics.
  15. Li, J.; Willett, P.: ArticleRank : a PageRank-based alternative to numbers of citations for analysing citation networks (2009) 0.00
    0.003986755 = product of:
      0.01594702 = sum of:
        0.01594702 = weight(_text_:for in 751) [ClassicSimilarity], result of:
          0.01594702 = score(doc=751,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 751, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=751)
      0.25 = coord(1/4)
    
    Abstract
    Purpose
    The purpose of this paper is to suggest an alternative to the widely used Times Cited criterion for analysing citation networks. The approach involves taking account of the natures of the papers that cite a given paper, so as to differentiate between papers that attract the same number of citations.
    Design/methodology/approach
    ArticleRank is an algorithm that has been derived from Google's PageRank algorithm to measure the influence of journal articles. ArticleRank is applied to two datasets - a citation network based on an early paper on webometrics, and a self-citation network based on the 19 most cited papers in the Journal of Documentation - using citation data taken from the Web of Knowledge database.
    Findings
    ArticleRank values provide a different ranking of a set of papers from that provided by the corresponding Times Cited values, and overcome the inability of the latter to differentiate between papers with the same numbers of citations. The difference in rankings between Times Cited and ArticleRank is greatest for the most heavily cited articles in a dataset.
    Originality/value
    This is a novel application of the PageRank algorithm.
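    ArticleRank is derived from PageRank, which can be computed by simple power iteration. The sketch below shows plain PageRank on a toy citation graph; ArticleRank's specific modifications for citation networks are not reproduced here.

```python
# Plain PageRank by power iteration. Dangling nodes (papers that
# cite nothing) spread their mass evenly over all nodes.
def pagerank(out_links, d=0.85, iters=100):
    nodes = list(out_links)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            targets = out_links[v] or nodes   # dangling: all nodes
            share = d * pr[v] / len(targets)
            for w in targets:
                nxt[w] += share
        pr = nxt
    return pr

# a and b both cite c; c cites nothing, so c should rank highest.
print(pagerank({"a": ["c"], "b": ["c"], "c": []}))
```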
  16. Robertson, M.; Willett, P.: An upper bound to the performance of ranked-output searching : optimal weighting of query terms using a genetic algorithm (1996) 0.00
    0.0036828062 = product of:
      0.014731225 = sum of:
        0.014731225 = weight(_text_:for in 6977) [ClassicSimilarity], result of:
          0.014731225 = score(doc=6977,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.16595288 = fieldWeight in 6977, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=6977)
      0.25 = coord(1/4)
    
    Abstract
    Describes the development of a genetic algorithm (GA) for the assignment of weights to query terms in a ranked output document retrieval system. The GA involves a fitness function that is based on full relevance information, and the rankings resulting from the use of these weights are compared with the Robertson-Sparck Jones F4 retrospective relevance weight
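    A minimal sketch of the approach described above: chromosomes are query-term weight vectors, and the fitness function exploits full relevance information by rewarding weight vectors that rank the known relevant documents highly. The document representation, the average-precision fitness proxy, and the GA parameters are illustrative assumptions, not the paper's exact setup.

```python
import random

def fitness(weights, docs, relevant):
    # Rank documents by weighted term score, then reward rankings
    # that place the known relevant documents near the top.
    scores = sorted(((sum(w * tf for w, tf in zip(weights, doc)), i)
                     for i, doc in enumerate(docs)), reverse=True)
    hits, total = 0, 0.0
    for rank, (_, i) in enumerate(scores, start=1):
        if i in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)

def evolve(docs, relevant, n_terms, pop_size=30, generations=50):
    pop = [[random.random() for _ in range(n_terms)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, docs, relevant), reverse=True)
        survivors = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_terms)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # point mutation
                child[random.randrange(n_terms)] = random.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, docs, relevant))

# Toy collection: term-frequency vectors, docs 0 and 2 are relevant.
docs = [[2, 0, 1], [0, 3, 0], [1, 1, 2], [0, 0, 1]]
print(evolve(docs, relevant={0, 2}, n_terms=3))
```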
  17. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.00
    0.0036828062 = product of:
      0.014731225 = sum of:
        0.014731225 = weight(_text_:for in 760) [ClassicSimilarity], result of:
          0.014731225 = score(doc=760,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.16595288 = fieldWeight in 760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
      0.25 = coord(1/4)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
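    The abstract does not spell the method out, but a common way (assumed here, not necessarily the paper's) to compare recall across engines searching non-identical collections is pooled relative recall: the union of relevant pages found by any engine serves as the recall base.

```python
# Pooled relative recall: each engine's recall is measured against
# the pool of relevant pages retrieved by any of the engines.
def relative_recall(results, relevant):
    """results: engine -> list of returned URLs;
    relevant: set of URLs judged relevant."""
    pool = set()
    for pages in results.values():
        pool |= set(pages) & relevant
    if not pool:
        return {}
    return {engine: len(set(pages) & pool) / len(pool)
            for engine, pages in results.items()}

results = {"AltaVista": ["u1", "u2", "u3"],
           "Excite":    ["u2", "u4"],
           "Lycos":     ["u1", "u5"]}
print(relative_recall(results, relevant={"u1", "u2", "u4"}))
# {'AltaVista': 0.667, 'Excite': 0.667, 'Lycos': 0.333}
```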
  18. Willett, P.: From chemical documentation to chemoinformatics : 50 years of chemical information science (2009) 0.00
    0.0036828062 = product of:
      0.014731225 = sum of:
        0.014731225 = weight(_text_:for in 3656) [ClassicSimilarity], result of:
          0.014731225 = score(doc=3656,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.16595288 = fieldWeight in 3656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=3656)
      0.25 = coord(1/4)
    
    Abstract
    This paper summarizes the historical development of the discipline that is now called 'chemoinformatics'. It shows how this has evolved, principally as a result of technological developments in chemistry and biology during the past decade, from long-established techniques for the modelling and searching of chemical molecules. A total of 30 papers, the earliest dating back to 1957, are briefly summarized to highlight some of the key publications and to show the development of the discipline.
  19. Wakeling, S.; Creaser, C.; Pinfield, S.; Fry, J.; Spezi, V.; Willett, P.; Paramita, M.: Motivations, understandings, and experiences of open-access mega-journal authors : results of a large-scale survey (2019) 0.00
    0.0032551715 = product of:
      0.013020686 = sum of:
        0.013020686 = weight(_text_:for in 5317) [ClassicSimilarity], result of:
          0.013020686 = score(doc=5317,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 5317, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5317)
      0.25 = coord(1/4)
    
    Abstract
    Open-access mega-journals (OAMJs) are characterized by their large scale, wide scope, open-access (OA) business model, and "soundness-only" peer review. The last of these controversially discounts the novelty, significance, and relevance of submitted articles and assesses only their "soundness." This article reports the results of an international survey of authors (n = 11,883), comparing the responses of OAMJ authors with those of other OA and subscription journals, and drawing comparisons between different OAMJs. Strikingly, OAMJ authors showed a low understanding of soundness-only peer review: two-thirds believed OAMJs took into account novelty, significance, and relevance, although there were marked geographical variations. Author satisfaction with OAMJs, however, was high, with more than 80% of OAMJ authors saying they would publish again in the same journal, although there were variations by title, and levels were slightly lower than subscription journals (over 90%). Their reasons for choosing to publish in OAMJs included a wide variety of factors, not significantly different from reasons given by authors of other journals, with the most important including the quality of the journal and quality of peer review. About half of OAMJ articles had been submitted elsewhere before submission to the OAMJ with some evidence of a "cascade" of articles between journals from the same publisher.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.754-768
  20. Robertson, A.M.; Willett, P.: Generation of equifrequent groups of words using a genetic algorithm (1994) 0.00
    0.0032224553 = product of:
      0.012889821 = sum of:
        0.012889821 = weight(_text_:for in 8158) [ClassicSimilarity], result of:
          0.012889821 = score(doc=8158,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 8158, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8158)
      0.25 = coord(1/4)
    
    Abstract
    Genetic algorithms are a class of non-deterministic algorithms that derive from Darwinian evolution and that provide good, though not necessarily optimal, solutions to combinatorial problems. We describe their application to the identification of characteristics that occur approximately equifrequently in a database, using two different methods for the creation of the chromosome data structures that lie at the heart of a genetic algorithm. Experiments with files of English and Turkish text suggest that the genetic algorithm developed here can produce results superior to those produced by existing non-deterministic algorithms; however, the results are inferior to those produced by an existing deterministic algorithm.
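    One plausible chromosome encoding for the equifrequent-grouping problem (an assumption for illustration, not necessarily either of the paper's two encodings) is a set of boundary positions that cut a frequency-sorted word list into groups, with fitness penalizing unequal group frequency totals.

```python
# Chromosome = sorted cut positions over a frequency-sorted word
# list; fitness = negative squared deviation from equal group totals.
def group_totals(freqs, boundaries):
    cuts = [0] + sorted(boundaries) + [len(freqs)]
    return [sum(freqs[a:b]) for a, b in zip(cuts, cuts[1:])]

def fitness(freqs, boundaries):
    totals = group_totals(freqs, boundaries)
    mean = sum(totals) / len(totals)
    return -sum((t - mean) ** 2 for t in totals)  # higher is better

freqs = [120, 90, 60, 40, 30, 20, 10]  # word frequencies, sorted
print(group_totals(freqs, [2, 4]))     # [210, 100, 60]
print(fitness(freqs, [2, 4]))
print(fitness(freqs, [1, 3]))          # higher: groups more nearly equal
```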