07 October 2015

Fragment finding smackdown: 2015 edition

Our current poll (right-hand side of page) asks about NMR. But of course, there are lots of other ways to find fragments, and the question often arises as to which ones are best. This is the subject of a recent paper in ChemMedChem by Gerhard Klebe and collaborators at Philipps University Marburg, Proteros, NovAliX, Boehringer Ingelheim, and NanoTemper.

Long-time readers will recall that the Klebe group assembled a library of 361 fragments, some of which violated strict “rule of 3” guidelines. These were screened in a high-concentration functional assay against the model aspartic protease endothiapepsin, resulting in 55 hits, of which 11 provided crystal structures. The authors wondered how other techniques would fare. In the new paper, they retested their entire library against the same protein using a reporter displacement assay (RDA), STD-NMR, a thermal shift assay (TSA), native electrospray mass spectrometry (ESI-MS), and microscale thermophoresis (MST). To the extent possible they tried to use similar conditions (such as pH) for the different assays, though the fragment concentrations ranged from a low of 0.1 mM (for ESI-MS) to a high of 2.5 mM (for TSA), while protein concentrations ranged from 4 nM (for the biochemical assay) to 20 µM (for ESI-MS).

All told, 239 fragments hit in at least one assay – a whopping hit rate of 66%. The effective rate is even higher since, for various reasons, not all fragments could be tested in all assays. And yet, not a single fragment came up in all of the assays! Overall agreement was in fact quite disappointing: most pairwise overlaps between methods were below 50%, and often below 30%. This is in contrast to a study from a different group highlighted a couple of years ago.
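For readers who want to run the same kind of comparison on their own screening data, here is a minimal sketch of one way to compute pairwise overlaps between methods. It is written in Python with made-up fragment IDs and hit sets, not the actual data from the paper, and it normalizes by the smaller hit set (one of several reasonable conventions).

```python
from itertools import combinations

# Hypothetical hit sets per method; the fragment IDs are illustrative,
# not identifiers from the Klebe dataset.
hits = {
    "biochemical": {1, 5, 9, 12, 33},
    "RDA":         {1, 5, 14, 33, 40},
    "STD-NMR":     {5, 9, 21, 40, 57},
    "TSA":         {3, 5, 33, 57},
}

# Overlap relative to the smaller hit set, for every pair of methods.
for a, b in combinations(hits, 2):
    common = hits[a] & hits[b]
    overlap = len(common) / min(len(hits[a]), len(hits[b]))
    print(f"{a} vs {b}: {overlap:.0%} ({len(common)} shared hits)")
```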

What’s going on? One clue might be solubility, which was experimentally measured for every library member. In general, hits tended to be more soluble than the library as a whole, emphasizing the importance of this parameter not just for follow-up studies but for identifying fragments in the first place.
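Since the measured solubilities are published (see below), that observation could be checked directly. Here is a sketch of one way to do so, assuming a hypothetical CSV export of the supporting information; the file name and the columns `fragment_id`, `solubility_mM`, and `is_hit` are my assumptions, not the paper's actual format.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Assumed layout: one row per fragment, with a boolean is_hit column
# marking fragments that hit in at least one assay.
df = pd.read_csv("fragment_library.csv")

hit_sol = df.loc[df["is_hit"], "solubility_mM"]
miss_sol = df.loc[~df["is_hit"], "solubility_mM"]

# One-sided test: are hits drawn from a higher-solubility distribution?
stat, p = mannwhitneyu(hit_sol, miss_sol, alternative="greater")
print(f"median hit solubility:     {hit_sol.median():.2f} mM")
print(f"median non-hit solubility: {miss_sol.median():.2f} mM")
print(f"Mann-Whitney U p-value:    {p:.3g}")
```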

Another possibility is that some fragments bind outside the enzyme active site, and thus would not be picked up in a biochemical assay or the RDA. Some evidence for this comes from follow-up NMR studies in which hits were competed with ritonavir, which binds in the active site. Ritonavir-competitive binders overlapped more with biochemical and RDA hits, while binders not displaced by ritonavir overlapped more with hits from methods such as ESI-MS, TSA, and MST that rely solely on binding. (This could also explain similar observations made earlier this year.)

If a picture is worth a thousand words, how many of the 11 hits that had previously yielded crystal structures would have been identified by the other methods? Here the numbers vary significantly, from 27% for ESI-MS and MST to 100% for NMR, though these statistics should be taken with a grain of salt: only 7 of the 11 crystallographically confirmed hits could actually be tested in the NMR assay, for example. Also, it is possible that some hits from these methods might have yielded new crystal structures for fragments not identified in the initial biochemical screen.
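Back-calculating from those percentages, the recovery arithmetic looks like the sketch below. The NMR numbers (7 of 11 testable, all 7 found) come from the post; the ESI-MS and MST counts of 3 out of 11 are my inference from the quoted 27% figure, not numbers stated in the paper.

```python
# Recovery of the 11 crystallographically confirmed hits by other methods.
# ESI-MS and MST counts are inferred from the quoted 27% (3/11 ≈ 27%).
recovered = {"ESI-MS": (3, 11), "MST": (3, 11), "STD-NMR": (7, 7)}

for method, (found, testable) in recovered.items():
    print(f"{method}: {found}/{testable} = {found / testable:.0%} "
          f"({testable} of the 11 crystallographic hits could be tested)")
```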

One admirable feature of this paper is that the authors provide all their data, including structures and measured solubility numbers for each component of their library. This should provide an excellent dataset for a modeler to use in benchmarking computational methods.
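As one example of where a modeler might start, the sketch below reads hypothetical library entries and computes simple RDKit descriptors that could be compared against the measured solubilities. The SMILES strings and solubility values are placeholders, not entries from the actual dataset.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

# Placeholder entries: (SMILES, measured solubility in mM).
library = [
    ("Oc1ccccc1", 1.8),
    ("CC(=O)Nc1ccccc1", 0.9),
]

for smiles, sol_mM in library:
    mol = Chem.MolFromSmiles(smiles)
    logp = Crippen.MolLogP(mol)   # calculated lipophilicity
    mw = Descriptors.MolWt(mol)   # molecular weight
    print(f"{smiles}: MW {mw:.0f}, cLogP {logp:.2f}, "
          f"measured solubility {sol_mM} mM")
```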

All in all, this is a thorough and important analysis, and a sobering reminder that a fragment that fails to hit in orthogonal assays is not necessarily a poor starting point. On the other hand, artifacts are everywhere, and paranoia is often justified. The art is deciding which hits are worth pursuing – and how.

2 comments:

Dale Cameron said...

It would seem to me that the real measure of a method is not how many hits it provides but rather how many useful leads it provides. Since that is not just tied to the method but also to the library (although in this study the same library was used across methods), I expect that "metric" is not really measurable yet, even if we could all agree on what elements it would have. For example, if you measure success on whether downstream medicinal chemistry was able to make a drug candidate, then you're introducing a lot of extra chance and expertise into the metric. If HTS has taught us anything, it should be clear that hit rate alone doesn't necessarily correlate with overall downstream success.

So, I'm thinking that although there wasn't good agreement between methods, they all provided hits to start with, which was their purpose, and each starting point could yield downstream success, depending on a lot of other factors. So, could a take-home message be "Any fragment finding method you can do is better than not doing any at all"? If so, then I see this report as being quite positive and supportive of any effort in the area. Otherwise, I think I'm still left wondering which method to choose; my decision would likely default again to "whatever one I can do / afford / have access to / etc."

Anonymous said...

Where's the SPR?