Friday, December 11, 2015

On the difficulty of finding peer-reviewers

I have recently become an Associate Editor at PeerJ. I had several motivations for this:
  • I strongly believe in their mission, and am very happy with my three publishing experiences with them.
  • I mostly work alone, and therefore my papers will, in the long run, not be profitable for them. I felt that I should give them some extra support in exchange for their extremely low APC, which scales with the number of authors.
  • As a mid-career researcher at a little-known teaching-based institution, I reasoned that this opportunity might increase my visibility and improve my CV.

I am enjoying my run as an editor. So far, I have shepherded seven papers through the publishing process: one of them was published a week ago, I rejected one "on arrival", and five are undergoing review. I target my peer-review invitations at people who have recently published work using the same methods, or studied the same question, both for the obvious expertise and in the hope that they will find the paper interesting. Still, I was quite surprised by how hard it is to get people to accept reviewing papers: for two papers, I managed to secure two reviewers with around 6-8 invitations, but my latest assignments required more than 15 invitations each! I understand that everybody is busy researching, writing papers, applying for funding, etc., but I never thought that the acceptance rate for peer-review requests would be < 15%. I do not get many peer-review requests myself, but I do believe I have an obligation to accept as many requests as possible (and to review them promptly), and I thought this was the "common" mindset... Maybe the people I target with my invitations are simply too senior and therefore swamped with review requests, but the emails of "non-senior" members of a lab are too often hard to find, due to the common practice of listing only the lab leader as "corresponding author".

Any thoughts/suggestions/gripes?



Wednesday, July 8, 2015

GAMESS (US) frequently asked questions. Part 7: How to distinguish alpha from beta orbitals in the $VEC deck

Each line in a $VEC group contains the coefficients of up to five basis functions for a given orbital. These lines are formatted in a special way, with up to seven numbers in each. These numbers are:

1st) the number of the orbital to which the coefficients belong (written with at most two characters, so that 1 means orbital 1, ..., 99 means orbital 99, and 00 means orbital 100). This number is repeated at the beginning of every line until all the coefficients for that orbital have been written

2nd) this number tells the program how to assign the coefficients to the basis functions. "1" means that the coefficients are for basis functions 1-5, "2" means that the coefficients are for basis functions 6-10, etc. In general, the number "n" directs the program to assign the five coefficients present in the line to basis functions 5*(n-1)+1 to 5*n.

3rd to 7th) the coefficients of five basis functions (the last line of an orbital may hold fewer, when the number of basis functions is not a multiple of five). A parsing sketch is given after this list.
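
To make the layout concrete, here is a minimal Python sketch of how a single $VEC line can be decomposed. It is only an illustration of the description above; the fixed-width fields (2 characters for the orbital number, 3 for the line index, 15 per coefficient) are an assumption based on the standard GAMESS punch format:

    def parse_vec_line(line):
        orb = int(line[0:2])           # orbital number modulo 100 ("00" -> 100, 200, ...)
        idx = int(line[2:5])           # line index n within this orbital
        first_bf = 5 * (idx - 1) + 1   # line covers basis functions 5*(n-1)+1 to 5*n
        coeffs = [float(line[5 + 15*i : 20 + 15*i])
                  for i in range(5)
                  if line[5 + 15*i : 20 + 15*i].strip()]
        return orb, idx, first_bf, coeffs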

BETA orbitals are punched as a group immediately after all ALPHA orbitals.

This format entails that in molecules with more than 100 orbitals the $VEC group contains several blocks with the same 1st number. For example, in a molecule with 200 orbitals, alpha orbital 27 is described by the first block of lines beginning with "27", and alpha orbital 127 is described by the SECOND block of lines beginning with "27".

I usually find the beginning of the BETA orbitals by repeatedly searching for the string " 1 1": if that string is preceded by a block beginning with "00 1", it usually refers to orbital 101, or 201, etc. (the exception being those systems with exactly 100, 200, etc. orbitals). If the string " 1 1" is NOT preceded by a block beginning with "00 1", you can be sure you have found the beginning of the BETA orbitals.
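
If you would rather script this than search by hand, the following sketch splits a $VEC group without relying on the " 1 1" heuristic: it counts orbitals by watching the line index reset to 1, then cuts the list in half. It assumes vec_lines holds only the lines between $VEC and $END, and that equal numbers of ALPHA and BETA orbitals were punched, as in a standard UHF run:

    def split_alpha_beta(vec_lines):
        orbitals = []
        for line in vec_lines:
            if int(line[2:5]) == 1:    # a line index of 1 starts a new orbital
                orbitals.append([])
            orbitals[-1].append(line)
        half = len(orbitals) // 2
        return orbitals[:half], orbitals[half:]    # (ALPHA, BETA)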

Tuesday, April 28, 2015

How does OA benefit my research?

Jan Jensen has written an interesting post describing how his decision to publish only in Open Access outlets has influenced the way he tackles research questions. One of the benefits he points out is that choosing a journal which performs a "scientific soundness-only peer-review", instead of a "sexiness/interest and scientific soundness peer review", allows him to focus on "truly challenging and long-term research questions without worrying whether or where I will be able to publish". I think that option already existed before OA and the advent of the mega-journals: we simply had to decide to be satisfied with publishing in IJQC or Theochem whenever the Editors of JPC, JCP, JACS, Angewandte et al. pronounced our research "too specialized and not of enough interest to our broad readership", and to accept the derision of peers who look down on papers published in those and other low-impact journals. (I admit I am often guilty of this.)
To me, the true advantage does not lie in OA itself, but in the open review model (used, e.g., by PeerJ), which allows authors to publish the reviews at the same time as the paper. I feel this functions as a much stronger "validation" of the quality of the work, as readers immediately have access to a truly independent measure of the strengths and weaknesses of the manuscript.
How does OA benefit my research? I am not sure it benefits my research methodology and/or choice of research questions since, as one of only two computational chemists at a small teaching-driven University, I long ago decided to research whatever obscure subtopics catch my fancy, given the obvious lack of resources to compete against larger/better-funded groups working on sexier topics/enzymes. My decision to embrace an open science model (e.g. figshare), in contrast, has benefited me more directly by forcing me to archive my results in a more transparent way, with proper "understandable" filenames instead of idiosyncratic names chosen on the fly... That is something I should have done anyway, even without the open science model, but it was the nudge that brought me to the "Light" side.

Wednesday, April 22, 2015

When the description of methods in a scientific paper becomes optional

I have just read a paper describing some very interesting tailoring of enzyme specificity in a P450 enzyme. I was, however, surprised to find that no description of the experimental methods was present in the paper itself: it was only available as Supporting Information. Upon examining the journal's instructions for authors, I learned that, despite being online-only (and therefore lacking any space constraints), this publication enforces a 40-thousand-character limit on published papers and specifically states that the experimental section is optional. Traditionally, Supporting Information includes accessory data which would be cumbersome to include in the paper. In this journal, it functions instead as a cumbersome way to access a vital part of the information, which should be part of the paper. I cannot even begin to understand why any reputable publisher would, in the absence of any printing costs, force their authors to split their manuscripts and "demote" the potentially most useful portion of the paper to the Supporting Information.
That's ACS: proudly claiming to "[publish] the most compelling, important primary reports on research in chemistry and in allied fields" while making it difficult for readers to have access to that same information.

Thursday, March 19, 2015

My new preprint is up

As part of their undergraduate training, our students are required to write a short thesis. Usually, due to the paucity of research funding, their theses take the format of a literature review. A few years ago, however, I proposed a computational study to the student I had been assigned. Despite no previous acquaintance with the subject, she eagerly took on the task and performed some computations on possible reaction mechanisms of the organomercurial lyase MerB. She only had time to compute a few of the possible pathways and therefore, after she had written her thesis with the data she had managed to gather, I completed the analysis of the other pathways we had thought of at the time, as well as a few we had not envisaged. Writing it up as a paper took me much longer than I had anticipated, mostly because I kept postponing it due to the thrill of running computations on other enzymes and projects. I have now managed to finish it and have submitted it to PeerJ, where it is undergoing review. I have made it available as a preprint, and would be thankful for any comments about it.


Addendum: the paper has been published

Saturday, February 7, 2015

On the wrong use of expressions such as "evolution's null hypothesis"

A new paper published in PNAS has been in the news lately, claiming to have found 2-billion-year-old fossils of sulfur-metabolizing bacteria indistinguishable from modern specimens. The abstract is somewhat cautious: "The marked similarity of microbial morphology, habitat, and organization of these fossil communities to their modern counterparts documents exceptionally slow (hypobradytelic) change that, if paralleled by their molecular biology, would evidence extreme evolutionary stasis." (emphasis added). In the press release and in their talks with the media, however, the authors of this study have been much more forceful and hyperbolic: they directly claim that these organisms have not changed at all! As any microbiologist worth their salt would attest, it takes a lot more than morphological similarity to establish that two microbial communities are composed of the same species. Otherwise, metabolic tests with dozens of substrates would not be needed to distinguish microbial species: we would simply need to throw the little bugs under a microscope and see what they looked like! How could the authors possibly be sure, simply from their tests, that microbial adaptation to the environment had already reached that of modern bacteria by the time their sample fossilized?

More than this extraordinary leap of logic, what grated on me was the authors' claim that such a lack of evolution would be in agreement with evolution's null hypothesis of no biological change in the absence of changes in the physico-chemical environment, and that it therefore strengthens the case for evolution... How is it possible to cram so many errors and inaccuracies into so few words? How could the peer-reviewers let such inane nonsense appear in the title of the paper? Let us start unravelling the many mistakes in this formulation:

  • What the authors call "evolution's null hypothesis" has NOT (as far as I have been able to ascertain) ever been claimed as "evolution's null hypothesis" at all: it has been well known, at least since the seminal work by Kimura, that the strongest driver of genetic variation is not the positive selection of advantageous mutations but the random fixation of neutral (or nearly neutral) mutations. Indeed, in humans only ca. 400 of the estimated 16500 genes show strong evidence of positive selection, even though all of the genes show variation from those of closely related species. It is therefore NOT at all expected that genomic stasis would be observed over a long period of time. Stating (as the authors do) that observing no change in these organisms is a confirmation of the mechanisms of evolution reveals a shocking lack of knowledge regarding molecular evolution. And the authors have not even proved that there was no change: that would require establishing that the fossils' ATP-producing metabolism was as efficient as that of their modern counterparts, that they were able to use the same substrates, that they contained all the same enzymes, etc.
  • By claiming that an unchanging environment leads to an immutable species, the authors commit a further logical fallacy: after all, there was a time (let us call it t0) when the ancestor of that community first entered that unchanging environment. If an unchanging environment leads to evolutionary stasis, then the authors are claiming that the species at time t0+1 million years would be equal to that at time t0, and likewise at t0+2 million years, and so forth. But of course adaptation to an environment is not instantaneous, unless the ancestor already possesses all the enzymes needed to thrive there (which is most unlikely, as there had been no selective pressure for that). An unchanging environment therefore does cause evolutionary pressures, at least on the first cells which venture into it.
  • When an observation is compatible with different theories, it cannot be used to further any of them: after all, seeing no change in 2 billion years could also be used to argue for the immutability of species. It is therefore logically fallacious to present it as proof that Darwin was right. Evolutionary theory has, moreover, developed a lot since the writing of "On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life". Shouldn't other workers, like Kimura, Felsenstein, Farris and Gould, be acknowledged?
  • Why would any scientist need to claim that "the findings therefore provide further scientific proof for Darwin's work"? Do astrophysicists need to state "the findings therefore provide further scientific proof for heliocentrism" every time a new comet is found, its orbit is computed, and it is found to move around the sun rather than around the earth? Do anthropologists working in the Balkans need to point out that "the findings therefore provide further scientific proof that human societies do not all resemble hunter-gatherer groups"? The curious insistence of American-based media on framing biological discoveries as a supposed debate/beauty contest between "evolution" and "creationism/immutability of species/intelligent design" is completely mind-boggling to any European, whether religious or not. This insistence was also on display in Neil deGrasse Tyson's "Cosmos", which, unlike Sagan's masterpiece, seemed more interested in scoring debate points against a sub-section of its domestic audience than in presenting the astounding amount of knowledge mankind has gathered in the few millennia since the dawn of agriculture.



Claims unwarranted by data, exaggeration and PR stunts: all of these are usually ascribed (rightly or not) to politicians, polemicists, salespeople and shady companies seeking to attract capital. Do we really want science to be tarred with the same brush?

Monday, September 8, 2014

Making good on my "Open Access" pledge


My most recent paper has just been published in PeerJ. It was a LONG time in the making, to the point that my 12-year-old daughter once told me (only half in jest) that I should "cut my losses and forget about it". I am quite happy with how it turned out: besides describing an analysis of a reaction mechanism and the influence of the redox state of a hard-to-converge Fe-S cluster, it also contains the first computations including the weighted contributions of 1.2×10¹³ protonation states of a protein to the reaction it catalyzes. The computational approach described there is relatively simple to perform, provided that one has a good estimate of the relative abundances of those protonation states, which can be obtained through Monte Carlo sampling once the site-site interactions have been computed with a Poisson-Boltzmann solver. To my mind, this is clearly superior to the usual approach of considering only the "most likely" protonation state (which may often not be the state with the most significant influence on the electrostatic field surrounding the active site). What do you think of it?
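
To illustrate the weighting idea in its simplest form, here is a hypothetical Python sketch. The function name, the transition-state-theory averaging choice and the numbers are mine, for illustration only, and are not taken from the paper:

    import math

    R = 1.987e-3  # gas constant in kcal/(mol K)

    def effective_barrier(abundances, barriers, T=298.15):
        # Each protonation state contributes a rate (the transition-state-theory
        # exponential of its barrier) weighted by its Monte Carlo abundance;
        # the weighted sum is then converted back into an effective barrier.
        rates = [a * math.exp(-b / (R * T)) for a, b in zip(abundances, barriers)]
        return -R * T * math.log(sum(rates) / sum(abundances))

    # Usage with made-up numbers: three states with populations 0.6/0.3/0.1
    # and barriers of 14.2, 15.1 and 13.8 kcal/mol.
    print(effective_barrier([0.6, 0.3, 0.1], [14.2, 15.1, 13.8]))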


Programs needed to use this approach:
MCRP, by Baptista et al., ITQB, Lisbon
MEAD, by Don Bashford, currently at St. Jude Children's Research Hospital
Any molecular mechanics code, to compute the change of the total electrostatic energy as each individual amino acid is protonated/deprotonated