Scientific and Regulatory Considerations for the Analytical Validation of Assays Used in the Qualification of Biomarkers in Biological Matrices – DAY 2

– Good morning everybody, we're gonna go ahead and get started. I'm Greg Daniel, I'm Deputy Director in the
Duke-Margolis Center for Policy, welcome to the second
day of our public event. Scientific and regulatory
considerations for analytical validation of assays
used in the qualification of biomarkers in biological matrices. It's a wonderful title. We had a very engaging and
very productive discussion yesterday and we very much appreciate all of the thoughtful
comments and feedback throughout the course of
the day on the framework and all of the work that has
been done over the last year. We look forward to continuing
that productive conversation today with where we’ll
be exploring our second real world example in
applying the framework and then turn the focus to
next steps in terms of refining and disseminating the
framework more broadly into the scientific community. Once again we have a lot
of things to cover today so I’m not gonna spend too
much time on background but I will go through a
bit of the agenda today. We'll be kicking off with some opening comments from John-Michael Sauer
at C-Path, who will provide a brief recap of yesterday's conversation and frame up the goals for today. We'll then go directly into session four, which will offer a case study of the framework using the GLDH liver biomarker
as the real world example. Session five following that will allow for discussion of the framework
from an industry perspective and panelists will provide
their initial thoughts and then we’ll open it up
to a broader discussion, Q and A with the group here
and we invite any comments from the online community as well. After session five we’ll break for lunch. Then at 1:30 we’ll
reconvene for session six which is intended to generate
discussion on methods to disseminate this framework
to a variety of audiences. And then finally for session
seven we’ll reflect back on the major themes from
both days and prioritize next steps in order to finalize
and ultimately disseminate this framework. As a reminder from yesterday
for those of you on the webcast you can ask questions in two ways. Use the comments or livechat
section on your YouTube, we have staff monitoring that
and we’ll bring the questions to the group, or you
can email questions to [email protected] And again we’ll do our best to incorporate all of the webcast questions and comments throughout the discussion and again today we are livetweeting
all of the things today with our hefty hashtag which
is #biomarkerassayvalidation. Please look at that,
engage your organizations to participate in the
online discussion as well. Again for today lunch will be on your own. There are local eateries
and food trucks nearby and then we’ll be happy to point
you in the right direction. So with that, I’ll turn things
over to John-Michael Sauer. Thank you. – Thanks a lot Greg. Wow, you guys were great yesterday. I really enjoyed the
conversation that happened and I hope today that’s
exactly what we can get into a little bit deeper. What I’d like to do is first
of all remind everybody about the email where to
not only email questions but also comments on the white paper. So scribe that down. So what I’d like to do
is just to recap really what the scope of this meeting is, right, just to kinda make sure we’re
going in the right direction. Like I talked about yesterday
in the final session, there is this relationship
between the development of the IVDs and getting a
cleared assay from the FDA and also the use of
biomarkers during an IND and the assays that are required to achieve both those goals. We’re somewhere in the middle, right? What we really want to
talk about are what are the expectations from an
assay validation standpoint for the qualification of biomarkers. So we want to stay in
that space if we can. We keep kinda weaving to
the side but that’s okay, we’ll do that a little bit. But I think really coming
out of this meeting what we need are really to
define those expectations. And so what are the core
expectations around assay validation for the
qualification of any biomarker? I think there are a set of core properties that need to be defined and
we went through that a little bit, I think we need to
refine that a little bit more and have further conversation about that. I think also what we
need to do is agree upon how the context of use,
the biomarker's properties, the patient population
actually define additional expectations for these
assay validation approaches. And then the whole objective
is to then codify this in the white paper and
so we need your input. This white paper’s not done. Our goal as Steve Piccoli nicely laid out was to get us 90% there,
to do the easy part. You guys have to do the hard part. Have to get that last
10% for us so we have all the right things in that white paper. And of course so we’re not overreaching, we’re not asking for too much. So I think the sessions
yesterday went really well, I think we got some great
input, some great questions, some great conversations that went on. I took, I dunno, page upon page of notes and I'll share some of those
high level observations that I saw. I think we did come to agreement
that the context of use really drives the assay
validation for the qualification of a biomarker. I think also the idea that these core expectations are there. Now although the approach
that's used for biomarkers in an IND is very similar to what we're talking about, I don't think the
expectations are as high. I think we heard that yesterday, right? We’re not talking about the
same level of expectations so I think that’s really
important to note. I mean it was said by
multiple stakeholders from industry, from FDA. Sure enough there’s expectations
but they’re not gonna be to the level that
we’re gonna be expecting for biomarker qualification. You know we also talked
about this as a group effort, it’s not done in isolation
as far as the qualification of a biomarker assay for qualification. Or, I'm sorry, the validation
of a biomarker assay for qualification. It’s gonna require the clinician,
biologists, statisticians, and yeah, of course, the
bioanalytical scientists to drive those assays. I think we also agreed upon the fact that we just can’t have a
checklist at the end of the day where we’re gonna be able to go through and just check the box
and finish these assays. Instead it’s gonna be a framework. You’re gonna have to
use the science to drive what that assay needs. Finally, and it was an
interesting point counterpoint around how the CLSI guidelines
or documents should be used. And what I got from that
conversation is that the CLSI guidelines are made
for in vitro diagnostics, we know that. That’s what they were designed for. But I think there’s clues
in there on how we can solve problems and issues that
we run into in the validation of biomarker assays so that’s
what I got out of that. We can talk a little bit more about that as we move forward. And I think also in the last session we
had, I think there's an expectation
for these biomarkers when we engage the FDA for qualification that they’re regulatory ready and I think we need to
flesh that out more, there’s no doubt about it,
but it sounds like by the time that we get to the biomarker plan phase or the qualification plan
phase for a given biomarker, we need to have a validated assay. So that’s gonna take a
lot of additional work that I think we
weren't offering the agency before when we engaged in assay qualification. I think there's also the recognition that there are several
areas in the white paper that we can still add and enhance which I think is a good thing. I think we also really
need to clarify the fact that for the early exploratory
studies for a novel biomarker and trying to understand
whether this biomarker’s useful or not so prequalification, you don’t need to have
a fully validated assay. And I think that was the
whole objective of table one. Although I think we need to
couch it a little bit differently so that's definitely an
area in the white paper that we need to modify. I also like the idea of this
process map or this road map to qualification. Embedding that in the white paper. There’s several other points,
I couldn’t list them all out, but we have them in notes. So I think one of the
other subjects that came up several times and I think
this is absolutely key to the white paper is
are we asking too much? I know bioanalytical scientists,
you guys want to have really good assays, you’re
proud of your assays, of course you are because
you want to generate data that can be used. But the question is do
we need that much rigor and robustness around these
assays for given biomarkers? So I think we need to be
careful that in the white paper we don’t overdesign this
and ask for too much. There’s no doubt that the
key elements that are shown below are needed, but the
question is how far do we dive into that, how robustly do we characterize those various aspects? So again, I'd like to make that
a part of the conversation today as well as we move forward. So, I mean, Greg’s
already gone through this. We have an exciting agenda here I would say. I'm really looking forward
to the further conversation in driving this forward but I
think what everybody’s hoping is that we can anchor it at certain points and come to agreement around
certain ideas and aspects because that’s really
gonna drive into the white, that’s really gonna drive
the white paper itself and allow us to complete our task. So with that I’d like to
go ahead and introduce Yuri Albrech from Pfizer. He’s actually the codirector for the PSTC. He’s gonna talk to us today
about the GLDH assay validation and a little bit about the qualification. Yuri? – Yep, thank you very much John-Michael. Hi, good morning, thank
you everybody for coming. And we’ll spend some time
discussing this example which, it might be a little
unique because it allows us actually to go the whole way. Let me just see whether I know
how to do these things. Alright. So we’ll have a panel and the
panel discussion after that and there’s gonna be Shelli
Schomaker, Juliane Lessard, and John-Michael Sauer joining me here to discuss the topic. Okay so now we talk about an example, in this case it’s a
glutamate hydrogenase as in liver specific biomarker
for hepatocellular damage. So wanting to first talk
about what is the unmet need, what are we actually solving? So the unmet need is
that the gold standard for hepatocellular damage,
the leakage alaminotransferase is not really specific for liver. And I don’t know people don’t
appreciate that actually that biomarker was
discovered as a biomarker of myocardial infarction because
it’s present in the muscle and that after that
was kind of thought oh, it’s actually good as a biomarker
of hepatocellular damage. And that’s how we know that marker today. But the problem there is
that once you have a muscle injury, you get a release of
that ALT enzyme and you’re not able to differentiate
muscle and liver injury. You’re not able to diagnose liver injury on a background of muscle injury. Using that enzyme. So the case, why is it
needed, in that table you see the situation in subjects with
hereditary muscle diseases. Those subjects have high
levels of ALT and AST. So if you imagine then
you got a child with Duchenne dystrophy and
has a fever and the mother gave him acetaminophen and
then gave him cough drops with acetaminophen and so on. So you can overdose that
child without even knowing. You will figure it out when
the bilirubin will go up, but that’s gonna be too
late and you actually are losing the time for administering
(mumbles) as an antidote and so on. So this kind of a case,
kind of for consideration. The other one is a subject with acquired muscle impairments. Rhabdomyolysis, myositis and so on. And those are people,
it’s pretty prevalent. Some drugs cause it. And then it’s difficult to say okay is it the liver or is it muscle? So on that left side you
see that the correlation between alanine aminotransferase
and creatine kinase, which is the muscle specific biomarker, so basically they correlate together. Which indicates in the
subject with the acquired muscle impairments the
ALT’s not really useful. So the answer to that is the GLDH and the GLDH I call is the
gift which keeps on giving. It’s a very interesting story. It’s been around for quite some time. It’s a simple enzyme which
is involved in the metabolism of glutamate and it's been
looked at as a biomarker of liver injury for a long, long time. But what has happened is it
never really caught on. There are assays and we'll talk about it, but actually there are really no reports and nobody really systematically
studied that enzyme and what is the (mumbles) of that enzyme and how it fits into the
program of diagnosing liver injury or liver disease. So one can think about it, if one
would qualify or use that enzyme: it's like ALT, but instead of a cytoplasmic enzyme like ALT, this
is a mitochondrial enzyme. But the story's the same. The hepatocytes don't feel good, burst open, you get the leakage out, and that's how you can
measure it in serum. So an alternative
biomarker like GLDH, the impact of it would be
to facilitate development of therapies for muscle impairments. So you have a drug, you're
testing drug in a target population, you don’t know how you assess that the drug doesn’t
cause a liver injury. The only thing you got for onset of liver injury is ALT, and you already have it. So that'd be one thing. The second piece is the public
health or the patient care which I'm talking about, and it's the child with acetaminophen and muscle injury who gets overdosed and
how do we diagnose it. So there are two kinds of impact which are quite important
and worth investigating. So just to demonstrate how
the GLDH actually works then this is the same table,
I just added the GLDH data for the subjects. As you can see the ALT
and AST as it was before, it’s the same, it’s
increased in the DMD boys which are the boys with Duchenne dystrophy and as you can see the
GLDH, the levels of GLDH are exactly in normal range as normal boys or normal adults so it’s not affected by the muscle injury. The other graph on the right side, it’s a time course of a
subject with rhabdomyolysis who went through a
hypoxic crisis which leads to secondary liver injury
and as you can see throughout the hospitalization,
development of rhabdomyolysis, you see the increases of ALT, AST, and CK. But the yellow line is the
GLDH and the GLDH is normal until that hypoxic shock and
that secondary liver injury where you can see shooting it up. So that’s kind of a
clinical case with documents that you can detect onset of liver injury on a background of
muscle injury and those. And that would be very
helpful for these people because those cases get undiagnosed. In this case the rhabdomyolysis
and the liver injury resolved itself but in many
cases knowing that there is a liver injury would influence
the treatment options and improve the care of these patients. So that was all kind
of to paint the way, what the enzyme can do, and
that kind of leads me to the context of use which we
outline for our qualification and the risk and benefits. ‘Cause I think people need
to look at context of use and risk and benefits as the outline in our evidentiary consideration
for biomarker qualification and basically we have
a decision tree there and what the context of use
basically is saying is that the elevated GLDH levels in
subjects with muscle injury indicate a liver injury and
can be used in clinical trials, in conjunction with care of the patients. So the risks and benefits
is we don’t believe that there are really
additional risk associated with application of GLDH
because if we don’t have GLDH all these cases are undiagnosed. And if the GLDH would fail,
then at some point we’ll get bilirubin, we get the
bilirubin today so we are not adding anything more risky to this. So what we are doing
we’re actually providing a benefit to the subject
that we see the onset of liver injury. Just that so you understand the context. So now, now that brings me to the assay. Which what we are using. So we were lucky in this
project because we didn’t need to go through a really assay
development optimization and so on so this is really
an assay validation exercise. We didn’t move the assay. So the assay is actually
manufactured by Randox or Roche has the assay as a
research purposes only assay. It runs on these wonderful machines and costs about a dollar an assay. So the assay is manufactured
under pretty standardized conditions and it’s, in terms
of mechanics of the assay, it's very simple. It's a spectrophotometric
assay of enzymatic activity. So what the validation is
is really characterization of that assay in light of the
impact of that context of use, bring that into alignment in understanding the limitation of it. So that’s where we are. So that brings me to kind
of giving you the continuum. And I think this is really
in my mind, in my view, the most important thing if we talk about analytical validation. This, in my view, is the analytical validation continuum. So we've got an assay,
I've got it, thinking that I have a biomarker. What do I do, and how do I go through that, that I get from the exploratory
studies through application, through qualification,
application in clinical trials, and potentially, in this
case, you can actually walk it the whole way
to in vitro diagnostics. Because there would actually
be benefit for humanity. Have that assay, all
these subjects, available in every hospital. But thinking about that
and I will kind of go over these individual parts of the validation and seeing the validation
as sort of a living document because in my mind if I
have an assay which is made, we are not changing it. So what I am doing, I am characterizing the performance of that
assay, I’m using it. So I’m always adding it based
on how I’m using into it, making it more detailed
but it’s the same assay and I’m using it in light of
what I know about the assay. That’s the important piece. I’m using the assay, what
I know about the assay. Know how the assay performed. ‘Cause that’s important for
the interpretation of the data. So how do you start, right? So you got an idea to develop
biomarker for liver injury so you start with some
exploratory studies. So that’s how we started. So you validate your assay
and so the goal of it is to understand, get an idea
how the biomarker perform and write a paper about it. So I call that basically a
validation which everybody does in academia everywhere for a good science that I have reproducible
data that I’m using it in a single laboratory with
the goal of publishing a paper. And that’s what we have done. So you need to characterize
all those things. Actually all those
seven categories anyway, but it’s for one laboratory,
it’s one set of subjects or samples I’m getting
and that’s how to do it. And then I have the paper
out of it that we publish in 2013 so then you get
the understanding of how the biomarker behaves with your assay, and now you just say okay, I really think it's gonna be useful, so on to
some context of use studies. Can we confirm with a variety of subject, a variety of diseases,
muscle, and so on so forth. So for that again I’m
running in my laboratory again I’m building on
that initial validation which I made for the paper and
then maybe I’m using samples which they’re frozen longer so
I just need to do stability. Did I cover that part
for the freeze stability or samples if they’re handled differently? So I need to have that. To me it’s more of a thinking guide of looking at where the samples
are, where the subjects are, what these things, what
these conditions of use are so that I’m addressing
them in my validation. And in my validation I do it the way because if I know that say, for example, I have samples which are
frozen for two months and if I don’t, if I do the
two months freeze stability and it doesn’t pass, I know
I cannot use these samples. So it’s kind of a common sense in my mind, just kind of adding to it. So then in our case, then comes
this kind of green rectangle and then we are going to the qualification or application of the
biomarker under the IND. So in our case we actually
went first to get the biomarker into clinical trials under
the IND because of its utility for developing therapies for
subject with muscle diseases. We did that first and now
we’re doing the qualification. And that’s the wonderful thing
which once you go that route of using that biomarker
for decision making. We're not talking about
a biomarker which is used for internal decision making. Of the company, of exploratory marker. We’re talking about marker you’re bringing to a clinical study where
you are making decision of discontinuation of therapy. We’re bringing the marker
which is really characterizing a liver safety of a subject. So that bar for that
particular trial under the IND, it’s pretty high and you need
to really have that assay
validated in a way that you add to it the requirements which CLIA asks you for, which are the proficiency
reason for the proficiency testing there is that you have
a trial which takes a year so we measure these
samples on basis as samples come to the laboratory
and you want to make sure that your data in January are
the same as if you measure them in March or June. So that’s why you need to
have the proficiency testing and so on so you are
adding to it and again it’s how you’re using it. So in my mind that bar is higher
than the qualification bar because the qualification bar
is saying I’ve got an assay which supports the context of use. And I don’t need to run
it in CLIA laboratory. I’m having it be good
enough that I can interpret the data for that context of use. Once you go and once you
apply, then the sponsor needs to bring it into a validation
status that he can make medical decisions on it. So that’s my kind of a view
on it how I’d like to see it. And after that if you want
to go to in vitro diagnostics this is a totally different
can of worms but I would like to see all this continuum
sort of fit together that you can work and
use at least parts of it even though you may need
to do some other studies and address it a little bit
differently in different stages but at least you can
capitalize on that experience and expertise you develop in the data throughout the continuum. Okay, does that make sense to everybody? Alright. And you may disagree, might take it apart during the discussion. But I think something
which would provide us as a guide of thinking
and considering that every situation is different but
we need to have a guide of thinking how to move
from place A to place B. So then what I've done,
looking at this progressive validation of biomarkers,
I put together basically a little more detail on those seven categories
that John-Michael showed us and just put them together as
looking at what do you need for the exploratory in say the paper, kind of a validation assay for a paper because it doesn’t necessarily
mean that for a paper you have a crappy assay. I just need to have a good
assay for the paper, right? (laughing) So but again you can see I
just made it these examples if I’m using samples
immediately or I don’t keep them on room temperature, just
freeze immediately or put them in the fridge, I don’t
need to have stability at room temperature. Because it’s not a condition of the use. So that’s kind of for illustration. I don’t need to have a method
comparison or proficiency testing, it's for the paper. So then we move to the
biomarker qualification. Where it will be used more, it will be used in a variety of conditions
so I need to fill out the blanks in it but I
probably don’t really need proficiency testing on that
because it’s not gonna be that lab who’s gonna be measuring it, so I think the lab which
eventually will be measuring it will need to know these things. But it’s not really necessary
for the qualification. And then we’ll go to the
laboratory developed test, that’s the CLIA standards for applying it for decision making in the clinical trials where you got to have
the proficiency testing and so on and then there’s another box with the IVD validation
which I'm not gonna go into but that's the detailed kind of a way. How you validate the assay. And in my mind it all kind
of goes along the lines of, you know, from an assay which is
done by a single laboratory, trained personnel, to an IVD which
is done in all laboratories, it has to be really foolproof
that anybody can do it routinely so that’s why
you need to characterize the assay more and more in
order to be able to know the limitation that you are
able to assure that the assay can be interpreted in every
laboratory around the globe. So that’s kind of how I
kind of think about it when I think about the level
of effort you need to put in to characterize the assay. So the question here is we
can spend and haggle here all we want about what it actually consists of, and this is for illustration
purposes from the white paper if you think about the
variety of effort you put in in the different stages. You can go from 60 samples
to IVD 2160 samples, to achieve precision. In my mind, precision doesn't change. The precision of an assay, or how the assay works, doesn't change. What changes here is how well
characterized I have it. I'm not making the assay
better, it's not wine. An assay doesn't get better if I do more, you know, keep wine longer,
it maybe gets better. An assay does not. It's the same. What does it do here, it
might be that these will become tighter to a point and then
if I do two million samples it's not gonna be better than these anyway. So it's about a level of
characterization of the assay. The assay does not change but
in how much I know about it and I need to use that
information in terms of interpretation of the data. Okay? Hopefully it makes sense to everybody. Alright so now again this
is kind of a flashback to these categories,
what we are looking at. And then I have here two slides. I don’t want you to read
it, but just kind of, I took it from a validation
report for the GLDH assay and how we address it. The beauty about this test is you can skin the cat in
many ways so there's always another way how one can
do it in terms of the acceptance criteria and I
think we need to get to a point where we'll get some
guide and thinking about what those could be
and how we're gonna do it. The important piece I want
to show here is the feedback which we got from the agency
which I think was fantastic. Because in terms of precision. We got, oh you guys, we had
precision, but did you really do the precision around your cut points? Because we had one low and
one high and the one cut point in the middle we didn’t
have precision on that. It’s linear. But I think it’s a great point, again, signifies the confidence
in your cut points. Right? So that I think was cool and revealing, and then of course you can
talk about the accuracy and what is the clinical truth
and comparing it to another FDA clear assay and how
you can compare something to another FDA clear
assay if there is none. So you know it’s the thinking process. It’s all about the thinking process. So linearity and us. So addressing all these issues
and we can have a debate and I think the white paper
should give us an idea how do I go about it. Right? In my mind not really prescriptive, do I need 300 samples, do
I need three samples? No, you have to sufficiently
address this so you know the variability does not
influence the interpretation. And it's up to you to figure it out. It can be calculated, right? Hopefully that makes sense. (laughing) So this is the rest of it
and it comes down to the CLIA proficiency testing because
we’re using it and running it in a CLIA certified laboratory
and make a decision on it. And I think I had, here,
oh, I had here another one. When we look at the interference,
we did not maybe look at hemolysis, icterus, lipemia and all these
things but a very interesting point is the interference
with exogenous substances. So is there interference
with other drugs or something like that? And it might seem kind of
(mumbles) but think about it. ALT by itself is affected by
a lack of vitamin B6. If you don't have vitamin
B6 it doesn’t work. Okay? And believe it or not
there are other enzymes and companies are
working on them which are transaminases, and I've seen in my life an inhibition of ALT by
an off target, because with transaminases it just
goes to that active site. And just zero, zero activity after this. So that's a really good point. So the point is that you
address it in the qualification that you’re working on
a specific population and you got to make it
in my mind that exercise, thinking exercise. Do I have subjects that treated something and all these things. But it’s a good point to think about it. So the question is where
it should be and people should think about it and
address it appropriately. I don’t think there is a
prescribed thing that would take the PDR and take every single
drug and try to figure out whether that's gonna happen
and hamburgers or whatever. So fruits, like grapefruit
juice and inhibition of cytochrome P450 3A4. It's about thinking. Okay so I think I'm at
the end of my monologue. And I hope you at least
enjoyed it a little bit and we'll go to the panel
session so I would ask the members of my panel
if they can come here. And then you can start
asking questions, okay? I’ve got some questions
here, kind of a guide to discussion, but so… So with the panel, so we all
know John-Michael Sauer here so we’ll, Shelli, can
you introduce yourself? – Is this on?
– Is it on? – Yeah? I’ll take yours. I’m Shelli Schomaker from
Pfizer and I’ve been involved with the GLDH qualification
from the beginning and even way back about 10
years ago when we were doing kind of the preclinical
evaluation within the PSTC so I've kind of been along for the ride. – And Juliane? – I'm Juliane Lessard, I
work at the FDA in the office of in vitro diagnostics,
division of chemistry and toxicology devices and
I’ve been involved in the GLDH qualification process as well. – Alright, so now you heard
kind of my monologue about it and where we are going
with this so I put here a couple of questions or thoughts. What people think, it’s
not limited to this, but it’s about all the validations. So how we put data, should
we be rigid, should we be really kind of thinking
about where are we going, where it’s all bringing us. And then the differences
between drug development tools. The LDTs, the applications
and so on that we understand and really don’t do too much,
we do definitely support it. Okay? And also about the CLSI
guidelines and their role in where we are. So I just leave it up to you. Is there any question or do
you want me to shoot this? Oh, please, go ahead. – [Man] So actually a
very very nice story. It’s probably valuable to
consolidate that into a paper to give people that linear
path, I really recommend that that gets done. I guess that’s a job for you
or me then, John-Michael. So what I think is a question
though is there’s a couple of different components
to what you described relative to I guess many in the audience. Method comparison studies. So you’re talking about
an instrument and a kit. Did you do method comparison
studies with split specimens back to the manufacturer,
or to a different laboratory with a different regulatory structure? Perhaps a CLIA lab to a
CLIA lab or your single lab to the manufacturer. Maybe you could just dive
into that little piece ’cause that’s a little nuanced. – It was split samples
to another CLIA lab. – [Man] Right, did you check
that you’re on the same lot of materials for calibration? – Yes. – [Man] So you actually did
a pre-check on calibrations, lot to lot, QCs, so that you were preloading the success rate?
– Yeah. – [Man] That’s very very
cool and that’s very very important, thank you. – Did that. – [Woman] So Yuri, a question. So I may have missed this
but can you tell me how early in the process did you really
assess bias in the system? ‘Cause I noticed in your method comparison you tried to compare two
methods to try and understand if there is a bias using human samples, but early on when you’re
going through this continuum, when did you start
thinking because you had, I don’t know what your
reference standard was, I missed it because I had to step out. How did you know the bias component there? ‘Cause that’s a key piece
in what we’re talking about as total allowable error. I think it’s a good thing
for you to highlight when in the process you
understood that parameter. – So our calibration
standards were purchased from Randox I believe and
they are CLIA approved so they have verification standards. And there was a range on
those and as long as we met that range we said that the
QC had passed so I’m not sure we addressed bias actually
in the way you’re talking about it early on. – [Woman] But do you know
what the characteristics of those standards were
that were cleared by CLIA? Like whether endogenous
material or recombinant– – Yeah, they were endogenous
materials that they had qualified for a number
of assays, not just GLDH, that they use in the
clinical labs, CLIA labs. – [Woman] Okay so that’s
definitely a better situation than where most of us
are when you’re thinking about exploratory markers. – [Yuri] Yeah, that’s what I’m
saying, this is an advantage. – Yeah.
– That we can actually work through it, I see it as
everyone talking about GLDH as the gift which keeps on giving. It’s a good example to walk
through because then you can do exactly what you are saying
and we can kind of see the impact of it and how we
can kind of maneuver with it relatively simply. – [Woman] Right so I just want
to throw that question open for anybody who has experience
to come and talk about when do they think it’s appropriate to be thinking about bias? Mostly they are focusing on
precision but the component, the important component of
bias, when does it really become real in terms of having
the right sorts of materials to assess that? – [Man] Actually I think
that was a good segue to what I was about to say
to Amina so maybe we can go back and forth. Look I think great work,
okay, beautiful work. But I do think that 99% of what we deal with is beyond this. Let me explain that. You had a system that is fully automated. Almost fully automated. You worked with a
manufacturer, I know Randox, that they qualify that
standard inside all calibration so you had that. You had an assay that has
been around for 50 years plus. That assay you can buy any
component and you can buy it from five different
manufacturers and you can actually get the same results. You had a system that is
made for CLIA laboratories to give it to operator, put
the sample, push a button, and walk away, and the numbers come out. So that is not the world that 99% of us are living in, so kudos to you, it was great,
(laughing) more can use it. So as we talk about putting
this in a white paper, I just want to make sure
that we’re realistic with what we deal with. Beautiful work but I’m
not sure it’s applicable to a lot of things we’re doing. – So you know the answer
to that is I think you still have to
address all these issues. You still have to address
and go through precision, accuracy, and all these things. It may be easier because
you’ve got that laboratory equipment and you really
don’t have changing antibodies or anything like that. I agree with that. But the CLIA system is the simplest system with which we can show what we’ve done through this whole process and get to the point that the assay has to be validated for the qualification too, right? – [Man] Yeah, yeah. – Does that make sense, what I’m saying? – [Man] It does, and I think
I just want to encourage the team on one thing. I think what this audience, and the people calling in, are looking for: you say it makes sense, you brought a great example, you said you’re gonna go ahead and store the samples for two months. So it’s a no-brainer. Go ahead and do the
stability for two months. I think what this audience
is looking for, okay, I know I’ve got to do
two months stability, but is it that I take
three samples, do I take a low med high, it is
endogenous, then do I, you know, put in a freezer for 24
hours and then say time. I think what we’re
needing in this industry for what we’re talking
about is something specific. That again I go back,
there are white papers, there are (mumbles) papers,
there are other papers. What I’m hoping for in our white paper, and what is missing, is some specifics. Not telling people you have to do this, but some level of guidance on what the best practices are, and hopefully get the approval, the buy-in from FDA to make this into a guidance. So I’ll just leave it at that, thank you. – That’s a great point. – That is kind of what
the CLSI guidances do so these are consensus
documents between industry, government, academia
to kind of give an idea of some of the studies and
how to set up the studies to answer some of these questions. – [Man] I want to go back again. I don’t want the CLSI guideline numbers to be thrown at me, it’s been
thrown at me for several years. (laughing) I want to be specific. In the CLSI guidelines, show me exactly where they talk about sample stability, not calibrator stability. Show me. I mean, there
are aspects we keep talking about CLSI guideline, show me
exactly how it is applicable to the work that we do. – [Man] Anytime. – [Man] We’ve heard
that for several years. We need specifics. Get an example from drug development and show it in there. – It’s difficult to do because
it depends on the assay, it depends on the technology,
it depends on the context of use, the intended use. – Right, I think that’s where
talking to the agency early has really been critical. So even for our assay that was developed
test and we went to the FDA, we got feedback that we
had to go back and do the precision and the
recovery spike recovery with our medically relevant
samples so even with a test that’s supposedly
developed to the criteria for laboratory developed tests, that back and forth with the
FDA was really critical and I think you have to be
flexible and open to that going forward. – I think one thing that we
need to be careful about too and Ash, we talked about this, right? How prescriptive do we want
to be in this white paper? I mean if you look at the
guidance, even for bioanalytical, I mean it’s not super prescriptive. There’s room to maneuver, there’s no doubt it’s a solid guidance but
there’s room to maneuver. – [Man] I am not advocating
to tie everybody’s hand. I’ve actually been an advocate
saying do not tie my hand because I gotta use my
brain, I gotta use my science to deliver good quality
validation to support whatever that I need to do. At the same time we gotta
provide some general boundaries, right, that this is what
the industry standard is, that’s what we’re doing,
get the FDA buy in that this is the minimum
core that’s what is needed. Okay rather than throwing CLSI. I can go in, if you
leave it alone, for me, I continue on doing good validation work. But for the new generation that comes in, do we want them to go reinvent
the wheel every single time having the same conversation
we have right now 20 years from now, or
do we want to go ahead and anchor at some point
based on the experience that we have gained and then
for them to build up on it. So again, I’m just getting
a little philosophical, but rather than having this discussion over and over, we gotta anchor at some point, put a stake in the ground, and move forward from there. If it’s CLSI, if it’s GLP,
if it’s somewhere in between, we gotta do it. – So do we use our examples to do that? I mean and that’s what
we’ve been trying to do because we’ve been building
more and more examples into the white paper and I think those are the learning points. So we need to find more
real world examples. – [Man] And what I want
are examples we all put our pretty picture of what worked. What I’d like to see and
actually I would encourage FDA. Put examples of things that didn’t work. Because we can also learn
when a validation went wrong. When something was submitted
and it was not approved, why was it not approved? Because the lab could not have
produced it, or is it because there were fundamental issues with it? So my point is that at some of these meetings I think we should have a
two day meeting of examples of everything that did
not work and FDA rejected. It would be so valuable. – Well the interesting
part is thinking about the qualifications that the
PSTC’s been involved in. The FDA has never said nope,
this is the wrong assay, you can’t use it. Instead it’s been more
around we need additional characterization around X Y and Z. And so we’ve never gotten to
the point where it’s been said, sorry, your assay’s not gonna
work for this biomarker. – [Yuri] Alright, so over there. (laughing) – [John] Thank you. John Ellington, OGC. So maybe address that point
in part but also a question regarding quality control
based on your comments which were very good. First of all you said
it was a (mumbles) assay but clearly the assay you outlined
is measuring enzyme activity and many of these analyzers
and the typical gold standard for that is to use zero order kinetics. Rather than a fixed incubation. I just wondered what
type of method that was and the reason for the question
is that if you’re using zero order kinetics, the
validation exercise for things like prozone is quite different to the way that you would do it in
a standard immuno assay. – Okay I have to plead
the fifth amendment. (laughing) – [John] Okay, that’s fine. And I guess two options– – Shelli, do you know? – I know it is a kinetic
assay, it’s all done on instrument, I know they
did look at the substrate depletion and that when they
were doing the validation. – [John] Yeah okay so the
other one is things like you can also get cofactor
and coenzyme restrictions simply because of high values. But to Ashton’s point as
well, you showed it was a fully automated clinical
system but these assays start out as microplate
or test tube assays. They don’t go immediately
on fully automated systems and again I’ve been around
long enough to be able to say that that’s how we did
ALT when it first came out, it wasn’t on an automated
clinical platform. As to the QC, you mentioned
about getting the QCs at the right level. I just would be interested
in your comments because I find very difficult
in the bioanalytical world using standard PK quality
control in terms of either 4-6-x acceptance batch to batch, or we don’t have enough QCs on there, they’re not at the right concentration, which goes to your cutoff level. But also typically
looking at whole studies start to finish where we’re using the same quality control material to try and assess true assay performance
because it’s very interesting looking at the different levels
of validation that you do at different stages. I think one of our biggest
challenges is actually getting our validation data to
mimic what will happen in the sample analysis stages. A lot of companies put their
best teams on validations, they get great data,
throw it over the wall to the production team
and it all goes belly up. And so your validation exercise
typically may not predict how the assay will work
in a routine environment. And there’s ways that we can improve that: multioperator, multiplatform, all of that. But I just wondered about the idea of critically accepting batch to batch based on the criteria that is used in PK, which I don’t find applies in
the clinical world at all. It just doesn’t add up
statistically or clinically. So I just wonder what your
comments were in terms of how you think we can actually
give some leeway to people to look at their quality
control in a truly analytical quality control
manner which wouldn’t necessarily restrict accepting batch
to batch as you go along but perhaps retrospectively
looking at the data overall to show that we’ve not
inappropriately either accepted or rejected data. – Those are all good points.
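The retrospective review John describes, judging QC across a whole study rather than by rigid batch-to-batch acceptance, can be sketched as follows. This is a minimal illustration, not anything the panel prescribed: the 1-3s rule and the six-point run window are assumed Westgard-style conventions, and the numbers are hypothetical.

```python
# A minimal sketch of retrospective QC review over an endogenous pool QC:
# compute Levey-Jennings limits from an established baseline, then flag
# 1-3s outliers and sustained one-sided runs (possible drift or shift).
import statistics

def lj_limits(baseline):
    """Mean and SD from an established baseline of QC pool results."""
    return statistics.mean(baseline), statistics.pstdev(baseline)

def review(results, mean, sd, run_len=6):
    """Flag 1-3s violations and runs of run_len results on one side of the mean."""
    flags, side, run = [], 0, 0
    for i, x in enumerate(results):
        if abs(x - mean) > 3 * sd:
            flags.append((i, "1-3s"))        # gross outlier beyond 3 SD
        s = (x > mean) - (x < mean)          # +1 above, -1 below, 0 on the mean
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_len:
            flags.append((i, "shift"))       # sustained one-sided run: possible drift
    return flags
```

Real Westgard multirule schemes (1-2s, 2-2s, R-4s, and so on) are more elaborate, but the idea is the same: the whole time series of pool results, not a single batch’s PK-style acceptance, tells you whether the assay is drifting.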
(chuckles) Yeah please. – [Jean] Is that John? Hi.
– Hi Jean. Good to see you. – [Jean] Yeah I understand
what you say very well and actually in practice
I make up QCs using the recombinant and I use
those QCs as part of pass/fail to accept the run. But in addition, I have
so called sample control. Those are truly endogenous
sample pool that I collect and pool and subdivide into thousands of aliquots. And I use that to learn. And so along the span, I
still remember one assay that we had like a three year program and I used them and you
can see the chart, like the (mumbles) chart, that there are not a lot of differences. But, you know, you can clearly see the biases due to lot differences. Number one. Number two is that I
also use that to look at stability trend. So I mean that’s a lot of
wealth of knowledge you acquire from the endogenous sample. So I still say that for
stability you have to use endogenous sample. – That’s exactly how we did it. You described, with the pools. – [John] And I do exactly
the same, and have done all of my life, so using sample QC is important. I guess my question was directed at this: if we’re putting a white paper together to give guidance
how does that sort of thing translate into recommendations
for accepting the data of the study as opposed to the typical way that laboratories are going
to when they’re doing PK work which I don’t think
applies in the same way. We don’t get the right
information from that. – Yeah, I mean in this case it’s even more so, right? The assays run on a routine basis because you monitor the safety of subjects, so it’s not that you have a study and you finish the study; it’s basically continuous. That’s why the proficiency testing, and using these pools and so on, is just making sure that you’ve got quality assurance over a long time, and that even the two samples you measure on a Wednesday are not gonna be drifting over time. So it’s what the people in clinical labs deal with when they have a laboratory developed test in hospitals, right? – Yeah. (mumbles) – [Man] That’s the message I
think would need to get out to the community–
– Yeah. – [Man] That we’ve not done
that type of thing before. – Right, and I think we
have the benefit of doing this assay in a CLIA proficiency-tested lab; they have all those procedures in place already, but we still need to at least make it clear in the white paper that people do need to have QC samples. Even if they aren’t available you need to create them and monitor them over time so you understand the trends. Because I think that happened
with safety in some cases we didn’t have the QC that
we needed over five years. – But I think we need to
really clearly distinguish the thing because I don’t
want to have everybody who goes through the qualification do that. And I don’t think, in my mind, it’s the goal for qualification to be treated like an LDT for safety being applied at that stage. I think people, when they would apply it, have to figure that out; each individual lab has to figure out how to do the proficiency testing and do all this so they produce reasonable data. But for the qualification, I don’t think we need that. – But we have to be careful
because I think what we’ve learned is these
qualifications take a heck of a lot longer than we plan. Right? I mean I think the
original estimates for the kidney safety program, oh it’s
gonna be done in two years, it’s gonna roll quick,
you know, we’ll be okay. I know, it’s taken a
lot longer, right Steve? (laughing) – I’m switching back and forth, it’s okay. (laughing) – [Woman] Okay, so I’ll quickly go. I wanted to come to
your second point there. First of all I liked
the presentation a lot, it’s a perfect case study of
everything going wrong, right, and I liked the analytical–
(laughing) – Yeah, we made it up. – [Woman] The analytical
continuum, the concept we’ve talked about but you laid
it out really nicely as well but I’m wondering if we
need sort of a parallel analytical validation
continuum for the 90-95% of the biomarker assay–
– Mm hmm. – [Woman] That are implemented
in the clinical trial. So that’s your second point there. How do (mumbles) integrate this
to biomarker implementation and I think maybe we want
to clarify that a bit by what we mean by
biomarker implementation in clinical trials. Are we talking about the quote
unquote 90% which include (mumbles), target engagement,
proof of concept biomarkers that you never every have
intention to take them to predict something other
than internal decision making? So this is where it getting
confusing a bit in the audience because I think there is a
premise that these criteria or what we are discussing,
the validation requirements, would then translate down
the road into the biomarker implementation in clinical trials as well. This is where I think
you’re hearing slightly differing opinions about
what is appropriate. I think Mina’s comment about
the bias, reference interval, you never would have a
cut point to begin with when you actually are trying
to quantitate a biomarker. So called relative accuracy still. But you want to use it for
PK/PD modeling, QSP modeling, so there’s a variety of
uses for this biomarker so it goes back to
context of use but I think it’s required some clarification. – I mean I would go to
Chris’s comment from yesterday and Chris jump into
whether I’m misrepresenting what you said. This, to me, is, and I
think I’m on the same island as you are in this, if you
use it for, if you have a biomarker and you use it for internal decision making, you’re not really deciding about dosing, you’re not really influencing patient safety, then do apply it as you wish. What we are talking about here is a biomarker on which you make decisions on dosing, decisions on inclusion, exclusion, and so on. So Chris please, can you clarify that? And that I think needs
to be done this way. – [Chris] Well exactly and
her comment was what I’ve been trying to articulate but
she did it much better. (laughing) That around context of use,
especially we’re all trying to advance scientific understanding. And qualification is a way
to work collaboratively towards that. So your context of use may
actually be what she’s describing which is proof of concept,
exploratory, that is a valid context of use if a
collaborative body wants to voluntarily gather science for that. So what I still challenge
the group on since that is a valid context of use, what
is the level of analytical validation for that context of use? Because what is being proposed
is if this is the minimum for any context of use, and
that context of use is valid, then you’re saying for that
low level of context of use you still have to do all seven
analytical validation steps. And I just want to make sure that– – [Woman] More, not even
just all seven, right, actually some of these
concepts may have to be thought about really
differently as well, right, where you don’t have (mumbles). So some of these, and we are
getting to this in the next session perhaps but I want to
make sure that we are clear about these categories. – No I think you’re absolutely right. But I think there’s some basic
things we do for any assay, even if we didn’t know what
the subject variability was, what the cut points were. We want to generate data
so we can get there, so you need some type of
validity around that assay in order to be able to answer
these questions because again it’s just like what’s in the
lead paper, it’s cyclical. We learn and confirm, learn and confirm. I mean it’s a beautiful model. – [Woman] Yeah, yeah, thank you. – [Man] Yeah so I’m getting concerned over where we’re going.
(laughing) Because we’ve been here before. We keep rehashing the same
thing about being prescriptive and what we’re gonna
specify, and how a lot of it could be left freewheeling as we move on. I don’t know if you
guys have been following and I made this point at
APS, have you been following the reproducibility crisis in science? It’s being publicized across our field. There are people that are
throwing out 10% numbers, there are people that are
throwing out 40% numbers, people can’t reproduce our research. Okay and I’m gonna tell you
that’s a biomarker problem. Because all this data that
we’re trying to reproduce is biomarker data. So I would say be very
cautious even early on to say that we can all just
freewheel and make up parameters as we go. I think that there’s
a strong argument here for being at least slightly
prescriptive even early on. And so just kind of keep
that in mind that I feel like we’re kind of moving away again
from specifying things. People are gonna look to this
paper as the gold standard for reproducibility in our assays. And I’m telling you, this
is the reason why we’re not able to reproduce our
data because even early on people are like well, you
know, it’s good enough, and how much do we really
need at this stage, and the differences could
be plus or minus 1000%. So just be careful about this. People will look. This should be the gold
standard in this paper. I do not want there to be any place in here where it’s ambiguous what assays are necessary, and maybe we could even
put a little bit of technology based criteria
in there that represents good acceptance criteria. – I think the important part
is identifying when you need that gold standard acceptance criteria. I think that’s what
we’re trying to get at. There is no doubt that when
we conduct early assays, that’s not 100%, it doesn’t need to be. ‘Cause we’re asking questions. – [Man] You should err on
the side of gold standard. – Well no hang on, hang on. When somebody conducts
the very first assay, there is no way that they can
have a fully validated assay to be able to equal what we’re gonna say is the gold standard. You have to learn and confirm. It’s just the way it is. It’s science! – [Man] They should be able
to look at this white paper and they should be able to
see where their assay is weak. There should be no– – So they should know where
their gaps are, right? When they look at this white
paper and they’re conducting their first assay, they should
know what more they need to do to get it.
– Yes they should. – That’s the standard, right?
– And you have to take a stand on what is a good assay
and what is not, okay? – [Man] So (mumbles) paying attention. GP44.
(laughing) Straight up. And, and I know this and
this may be a shock to you, there is a CLSI guidance
document called C62 and I wrote a big chunk of it.
(laughing) It is a development and
validation of biomarker assays using LC-MS/MS technologies; it’s one half of this, and it dives deeply into calibration stability, which uses surrogates, diverse pools, specimen stability, and sample and post-processing stability. CLSI C62, and on October 14th
last year, CDRH recognized that as an appropriate analytic standard for endogenous biomarkers with
LC-MS/MS, and there is a ton of stability in there, and I
think you owe me a beer. – [Man] Okay this is Washington,
I’m gonna have my lawyer call your lawyer,
(laughing) and we’ll settle and
I’ll leave it at that. That’s all I’m gonna say about
CLSI for the rest of the day. (laughing) Go read it. – [Man] C62 is one half
of what we’re describing and I will reiterate–
– It’s being recorded. – [Man] I got it, I got it, I got it. – Secretly. – [Man] I got it, recalcitrance
isn’t appropriate, I got it. But it does indeed involve
a lot of what we’re beating to death here, these
things have been done. I actually had a different
question for you. GP44, C62.
(laughing) I’m kind of interested
on even though it was a manufactured assay and as a system, you didn’t really point out
which one of the columns that you followed through on validation. You pulled data out of the
table at the end of 20 days, two replicates. Could you just maybe
deconvolve whether that chart that you showed was actually
mapped on top of your timeline for me? You understand what I’m asking? – I don’t have it here. This will be actually a good way. Because that would show
what was done when. So what I showed the table
with, let’s go back to this one. This table. It’s basically the status now. Okay? And that status now, it’s
kind of a collection of what we have done over time
and so I don’t have that but Shelli can comment on it. We had a quite comprehensive
validation from the get go. You know more about it. – Yeah so we started way back like I said with the preclinical validation,
the assay’s the same, and we had pretty set
criteria, kind of a standard precision, three or four
days, most of those criteria probably not interference and
a couple of those other ones. And then as we moved on we went
from preclinical to clinical we added more precision data
and then we went to the LDT standard we added 20 days
and the method comparison and interference and again
interference depends on what population you’re looking at
but we added the typical ones that you do for serum assays. – [Man] I’m gonna push one step further. Did you try and force
failure by forcing lot changes, forcing calibration changes,
and induce, in a short term timeframe, a lifetime concept
of complete reagent switching in your validation studies? – So no, not in the
validation but we’ve done that within the lab because
it’s been so long now we’ve been running these
on clinical trials, but no, not within the validation. – [Man] Congratulations,
because again, one of the things, and I think Ash made this point, is that we are drawing a phase two, phase three axis of evidentiary data. That’s where we’re sort of dialoguing, and we’re trying to look backwards, but let’s remember we’re doing this pinch point at phase two, phase three because after that, these assays and these assay systems can live in clinical medicine for 25 to 35 years. So it’s the time concept, the control over time, that the clinical laboratories and the CLIA labs have really got nailed. And I apologize that I had thrown out these numbers, because these problems have been solved, yes, in a subtly different way, but it is still control of precision and control of drift over time in the morass of humanity coming in and out of the laboratory. That’s an example of where the clinical laboratory has really directly derived this framework, and I want to acknowledge the fact that you thought of that. And the thinking exercise you talked of, Yuri, I want to compel other people to sort of have a glass of whatever it is you drink and have those thought exercises: what if? What if, what if? Because it also becomes
a fascinating journey. Of discovery. – You need appropriate QC
across the whole journey. – And then also if you look
at the data over time right so we’ll look at the historic
performance of the assay over the years so it’s
evaluated for trends, and if there’s a drift or
something like that we would hope to see it. – [Man] Are you guys
aware of the megapools? So we worked, the industry
worked a few years ago with a company called Golden West Biologicals; actually (mumbles) wrote a paper. They make 400-person pools every three months, and have been doing so for the last four years. So the size of the pool normalizes the marker concentration ’cause it’s 400 people. They make them every three months, so you’ve got four time-point or stability studies banked per year, and they’re about four years old now. So that is another way:
the size of a pool controls the average marker
concentration as a longitudinal control for drift. – Yeah. It all makes sense, but again, right, we’re talking about qualification and then we’re talking about clinical application, which is a little bit different, and I think we need to keep that in mind, because otherwise we just kill ourselves if we’re gonna do everything like GLDH at this stage. But I think what GLDH is good for is as the example both for qualification and for application in a clinical trial. We can see on that one example the difference, and be able to consciously say what I need here and what I need there, because in fact I would like to put it in a clinical trial, I would like to make a decision; I think that’s the holy grail of what we are going after. So Steve please. – [Steve] Been here so long
I think I forgot my question. (laughing) You actually brought forth
a can of worms in your talk and John literally took the bait by going to zero order kinetics. Looking at this as an
enzyme activity assay I’m gonna open the can of
worms and let ’em out now. – Okay. – [Steve] The difference
between a mass assay and an enzyme assay is a very serious one in terms of how this is
performed, but you mentioned that there are endogenous
potential inhibitors to GLDH as there are for our liver
transaminases as well. We measure those by
activity rather than mass because we know we can
have denatured material but if we measure activity in
the presence of an inhibitor, we’re going to have a
false negative result with potentially serious
consequences to the patient. Right? So was anything done,
did you weigh any ideas on doing things in, say, a spike recovery fashion where you would not actually spike with the enzyme material itself but, with zero order or first order kinetics, spike with the substrate in a substrate deficient matrix and examine that the activity
is proportionally increased to the amount of substrate that you add? So this would control for the
false negatives, and I say this full well realizing that in
our kidney safety qualification NAG has exactly the same issues. So how can we look at
that combination to reduce the level of potential
false negative results by inhibitors in enzyme assays? – That’s a fantastic
bioanalytical question. We haven’t done it. And really not. And the question is when one would do it, it’s important to characterize the assay. We haven’t done it. I’m not aware of endogenous
inhibitor of GLDH, I think there must be one
because there are some for everything but I’m not aware of those. – I mean we did do a
literature search, right? – Oh yeah. – Right, we looked across
it, and we came up with very little. – Nothing. – [Steve] But true confessions,
we did nothing of a similar nature for NAG in the
nephrotox panel either. – Yeah. – [Steve] And these all
apply to that as well. So it’s a different way of
looking at how we have to do the validation for our
individual situation which plays into a lot of the
questions that have been here even though you start with
a manufacturer’s platform and appropriate IVD to do this,
you still have to validate that as working in the situation at hand in which you’re working. So whether or not you
have a kit to start with doesn’t matter, I’m not so sure
there’s a 90-95% difference here in how we do these things. We still have to take whatever
it is that we’re working with and show that that meets our
needs for the context of use. Now it might be a lot easier with an IVD than anything else, but
it still has to be done and so I think we’re all in
the same boat at that point. – Yeah, I mean how did we
figure out the ALT inhibitors? I think what happened was after
those compounds were dosed we saw all zeros across ALT– – [Steve] That’s right and you had– – But then we did an investigation. – [Steve] Denatured ALT which was inactive and you had to go from mass
assays to activity assays to be able to show that you
were actually releasing ALT. Right? – And the vitamin B6 was
the people who ate the processed rice, they had
a low level of ALT, right? – [Steve] Mm hmm, mm hmm. – So it’s all post marketing. I don’t think that…
(laughing) – Yeah I think that’s the
question is how do we, I’m sorry, how do we handle
that in a qualification. I mean we’re not gonna
have all the answers. And so I think we need to
do the best we can and– – And we need to think
about the risk, right, so we’re adding this on in addition to everything else that we’re running. Without GLDH, these kids don’t have a way to monitor liver injury, so we’re not really risking these kids, and probably if
we get a high or a low, we’re gonna look at that
value, we’re not gonna just stop dosing immediately or… – And I think another solution
to it is really monitoring your data. You gather your Levey-Jennings charts and you look at your performance of that
while you are in production over time–
– Retest. – And that way you start
seeing, start catching the people with low ALT who were just on that processed rice, and you start seeing trends in those. So I think the whole laboratory process, how it works, how we apply it, is important for that. – [Steve] I think that’s an
incredibly important point and we touched on that
yesterday and I’d like to recapitulate that just because
it comes off a CLIA platform and it works really well
doesn’t mean that you don’t have to look at the data. You still have to examine
the results as they come out and make sure that they’re
trending appropriately with everything else that’s going
on and that’s the part of this that we really need to
put in place for looking at these things longitudinally, right? So you said, January, June,
July or whatever month you pick, yeah, are they working
the same, are we doing the same amount of good for
the patients in the testing at those times? So we can’t afford not to
examine each one of those when they come out; if you
have other reference data to base a judgment on, you
have to use it and look at each and every individual point. – But again we need to, I
totally agree with this, this is really important
for the application. But now we talk about a
qualification biomarker or validation of an assay
for a qualification. You may not have this whole
wealth of data and everything so I think–
– I know that. (laughing)
I know that. – So I think it’s important
to say what really, because the qualification
is one end point or outcome which gives you confidence in applying the biomarker in certain contexts of use. But then I think you need to have an assay validated for that. But then you take it and
start applying it in a clinical trial for decision making, then you need to have all
these things in the control in order to be able to
rightfully apply it, but I don’t think you
need to have it a priori for this kind of a comparison
over time and I think for that qualification. Juliane, what do you think? – Yeah but the more robust
your analytical validation is ahead of time, the more
confidence you can have that you’re not going to
run into problems later on. – Absolutely.
– Absolutely. – [Steve] Absolutely, that’s it. – Makes sense. Who’s next?
(laughing) – [Man] Well I’ll be
honest I’m pretty woozy. (laughing) I’m gonna ask the
organizers if they can maybe get those Friday buzzers
so I can sit in my chair for 20 minutes and when it
buzzes red I’ll come to the mic. (laughing) Okay. Takes a little while to sink in, I know everyone’s not a gem. I’m your worst nightmare,
I’m another Russ, sorry. (laughing) I want you all to relax,
I have nothing to ask, I just want to make a couple statements and people were telling me
yesterday I was too quiet and I was quiet ’cause you
guys were doing a great job arguing but that side of
the room is just terrible. I’m sorry.
(laughing) Okay, you guys, a pleasure to sit with. I want to make sure that
again it’s gonna be based on the same statement that I made yesterday and I did forget everything
and Lauren helped me remember and I wrote them down. People will use this paper to
see where their assay is weak. Use this to drive what they do. Use this to understand
what more they need to do. That’s true for a qualified assay. I want to make sure that
again we didn’t sneak back to the exploratory hypothesis
driven because this is not the paper I would give those people. I would give them Jean’s paper, I would give them Crystal
City, pick the number, I can’t remember which one, one of six. Six, and probably a paragraph
or two out of the next one. But again you gotta separate those areas. And 44 C62 the guidance
that Russ didn’t fully read. I think that’s a great one. I didn’t read it but man
if you read most of it, that’s fine by me. – [Russ] I wrote most of it. – [Man] So again really
whatever you’re doing and when I’m watching this
and I’m watching the other kidney ones it all makes sense to me and we need to do it,
patients are involved, but it really doesn’t
apply to when I need to put 12 or more biomarkers
into an oncology study and I’m gonna use ’em once,
maybe I’ll take a couple to the next stage. I don’t even need a
plate reader sometimes, I could just hold that,
yep, there’s yellow there. And that is a hint okay
so again just really focus that down, I’m
concerned that this paper has a sneaky way of showing
up in a guidance document for bioanalytical method
validation for biomarkers and that scares me a lot. And now I’ll sit down and
your Russ nightmare is over. – [Russ] Not quite. (laughing) So I would ask and Russ points well taken. But I would ask to consider
that there’s an I in community, there’s no me in community
but there’s unity in community, I know, pretty good huh? (laughing) And anything that, even in early discovery, if we can get away from the 4-6-15 or 4-6-20 mindset and just generate a little
bit of this intra individual, this biological context
or biological result data to just support, that’s
just quality information. If it’s technology like mass
spectrometry or well qualified standard or even a standard
somebody else can buy to rebuild an amino assay, we
can provide some continuity for the growth and
translation of biomarkers. So I would just ask and I
agree, it is out of scope. The exploratory biomarkers is not within the scope of this document. This is a pinch point to
phase two phase three kinda but anything that we
can glean as a community to help in that translation
is truly value added and I really, really
would like to see CVi, CVg, and that concept a little
bit baked in early. So we can work off each other’s strengths. – [Man] What we’ve learned is
that you didn’t get my joke. So I really don’t hold an
ELISA plate up, I look at it. 4-6-20 doesn’t even
enter my thought pattern. The assays are developed fit for purpose: those that need it get 4-6-20; those that need biologic variability, which is pretty much every single one we do, we always look at biologic variability, we look
at biological variability, analytical
stability, we do it all. My concern is it goes
to the Nth degree here. And if I have a biomarker
in an early oncology study and even though it’s an early
biomarker and in an early study if it’s gonna be
a go no go decision, that assay’s more looking
like this because I know I’m gonna have to kill a project. – [Russ] Well so that’s
a very very clever point and that is risk return. I have the fortune to
work at a couple of great pharma companies, Lilly
in my career particularly. More work, make a kill
decision but make it smart. Which is I think what
you just alluded to Russ. – Exactly what I said, yeah.
– Okay thanks. – [Woman] Like Steve I kind
of forget what I was gonna say but two comments. First is a continuum of what the Russes have been talking about. I think everybody really
was responding well to the idea of the continuum
of the context of use and the assay validation
but I did want to echo what Russ Wiener said in that
very clearly articulating that there are assays that
will never enter this continuum and I think where I’m
seeing confusion in the room is at the exploratory stage
you often have biomarkers where you can see, if it starts to hit it out of the park, absolutely, that’s where
you’re gonna want to go and so that’s an
exploratory marker you’d put on this continuum in
the exploratory space. But then there are a whole
bunch of other exploratory markers that have no
interest in ever going there. They’re for internal
decision driving and I think if there’s a way that you
can make that really concrete in the paper so people are
really clear and I can give sort of a related
example so we do a lot of target engagement assays
early in development. We just want to convince
ourselves to make more spend on our drug. Yes our drug got where we wanted it to be. That’s a go no go for our
program internally we’re done. Right? So that never needs to go
anywhere near this continuum. But you can imagine a
scenario where we might say not only do we want to
derisk that decision but we actually see there’s
potential that the extent of target engagement is
maybe going to inform what our dosing paradigm is
going to be and could ultimately be a tool for physicians to use
to determine when to redose. Well that’s the same assay
but that’s an assay that you’re thinking if it’s gonna
go that way it’s gonna get on this continuum. So I think if we can be
really clear about that because those of us who
are worried about creep are worried about those assays
that we’ll really only use for ourselves, that we’re
never gonna ask anybody else to ever reproduce ’cause
we’re making our internal, our risk, our spend decision. Should be clearly excluded from this. While also emphasizing the great value of if you see that you have
a biomarker you may want to qualify in the future
the earlier you kinda get on the train the more
you’d save yourself time. So first comment and then
second comment quickly which– – Can I quickly respond–
– Will come up later. – [Russ] To that one, Lauren. – [Lauren] No you can’t because
that was absolutely logical so no you can’t rebut. (laughing) I’m teasing Russ, of course you can. – [Russ] I’m feeling the
love, thank you Lauren. I want, page five, the first
introduction, fourth paragraph it says clear, one end
of the clinical continuum and I’m gonna quote as well as assays used for measuring exploratory
biomarkers in clinical drug development are outside
the scope of this document. – [Lauren] So yeah Russ, I read it. I completely agree. I think the point that I’m
making is you really can’t overemphasize that enough. And so if you have a figure
that has this continuum. Another statement that
says this is a continuum for biomarkers you envision
going down the qualification route, reminder, reminder,
this isn’t for, you know, come on, so we all know how to present. You tell them what you’re gonna tell them, you tell them, and then you
tell them what you told them. In this instance I think we
can’t be too careful about that. I think anywhere you
can embed in a figure that anything for your own purposes
would never have to go past point X, I just, I think
it can’t be overstated. – [Man] So I just want to
emphasize I wasn’t arguing against this continuum,
I’m not saying that we need all validation all the time. I liked how the white paper
handled this because for me it was a progression of quality. The only thing that
still worries me though is are you willing to
publish this early data that you’re generating? That still bothers me a bit. And so I was kind of hoping that we would at least leave
some cautions for people ’cause I know that data
is all being published. Right? All this early biomarker
data people are dying to put it out there and
just again, I am not arguing against this impression. I believe in it, I liked
how you guys handled it in the white paper and I agree with you. – [Lauren] So I hear
you on the publishing. In my personal experience it’s
not the people in this room whose data I have difficulty reproducing. It’s all the academic
labs out there, right? (laughing) I mean that’s the truth so. – [Woman] So may I just clarify what I heard from you, Lauren? You are not necessarily saying
when we take internal risks and when we make decisions
on target engagement, your assay does not necessarily
fit that purpose, right? What you’re saying is
it is a quality assay, you can stand behind a scientific publication. It may not address all of these criteria to a T, right? – [Lauren] Exactly. – [Woman] So I think we
need to be very careful because I think the
quality of the assay is not what we are questioning, it’s
the extent of validation. It’s very different. – [Lauren] And whether we
want to get on a prescriptive continuum with it, that’s all. The second point really
quick ’cause I’m sensitive to the fact I’ve been standing here a bit is around the examples
that go in the white paper I think are really helpful
but also given the historic context and how long it
took to generate those data to get those examples in the white paper. Some of the practices
are gonna be inconsistent with what our recommendations
are for approaches like for example when it
comes to all the analyses that should be done with
endogenous analyte up front versus a lot of spike recovery
experiments that were done and I don’t know if you could put any sort of additional text about
if it were 2017 and we were starting this same exercise
of this qualification of this biomarker it might
look a little different in this particular ways. – I think that’s a really good point. I think the other thing that
we need to talk about also is the fact that the expectations
of the FDA have evolved. Right and so you can see
what a large validation that the kidney safety
biomarker validation was. I mean, soup to nuts basically, right? And so the question is if we
had approached the FDA today, where would that conversation go? It might be very, very different and so we need to capture that, thank you. – [Man] So just one
follow up, one request. So to page five, just kind of
playing like Russ over there. So I was reading this
introduction and I’m like yes. It doesn’t apply to the vast
majority of the work I do and it actually says that. Assays in measuring exploratory biomarkers in clinical development are outside the scope of the document. And then fear went into my blood. However,
(laughing) the general analytical principles outlined in this document for biomarker
assays may also be applicable to biomarker methods
in clinical development of pharmaceutics and
that’s the key and the line that gets people to go hmm. So I would consider removing the however and, right, and moving the
paragraph that’s pretty much towards the end of the
introduction up front so that when people are
reading it before they start reading all this they
know what the purpose of the document is. It’s for these qualified
biomarkers and not exploratory and then the rest of the
document’s beautiful. It was a good read, I
enjoyed it, and being in the companion diagnostic space
as well lots of it applies. So thank you. – Send us your red line. (laughing) – [Man] And just to clarify my comment much like you had done. I’m all for trying to have
reproducibility of the data. We have to have that. But in the biomarker
qualification space we also have a lot of academics coming to us. And for areas where we have a drug development need for which we have no standards at present, like vascular injury as a safety marker, we have to somehow get
ideas out into the space to have them be used and then from there we can decide do they have value and take it to the next step. Those academics, in this
case it’s non-academics but we do have others, they have no intent to market or develop an
assay but we have to give at least the drug developers
that want to use those markers enough details about the
assay that they can at least voluntarily incorporate
it into their programs if they see that there’s value for it. So all I’m questioning
is this is not my area of expertise by any stretch. All I’m saying is when you
put forward these principles just try to put caveats around
them so that for the non, you know, pharma device person, an academic person can still do this. Maybe not with the same caliber or rigor, but at least make it an
attempt so that we can get more information. – Again Chris as you know,
we have the letter support process and with that
letter support process we also put a data package
together around that, right, which includes how the
analytical was done. So that’s what we’re trying
to do is how do we push the community forward? I think there was a question. – [Woman] Can I make a comment? Actually I disagree a little
bit with what was said before. Because I actually especially
like the introduction as it is written
(laughing) because I mean if I start
with an exploratory assay and I have to think about
how I am going to do precision whatever and then
you know if I have a guidance for not this assay but for a different one the same principle can be
applied so general rules like no spiking, la la la. So this is so much easier
than I don’t have to reinvent the wheel but I just can take this out, what would apply then to the
assay and can just leave out what I cannot do because
some things are not there, blah blah blah. But I can just take out
whatever you know is suitable for me and the other point
is if this election of an exploratory biomarker
is made because we think something is there so you
would never, ever select a biomarker to test even
though it’s exploratory if you don’t think it
could show you an effect. And what is so different with
a so-called valid biomarker in the end? You want to show that this biomarker makes a difference and this is what you use it for. So why not apply things as they are? And I think to be honest
I like the introduction as it is written. – [Jean] I want to comment. I think previously some
comments have already been made that we started the discussion of biomarkers more than 10 years ago, and we actually had a group that met over 18 months and carved out this white paper that
was published and it has all the details of how
to do it starting from the very beginning. So I really encourage
people who have not read the white paper, please apply
it because I have also met other people who came up to me and say I use that white paper to
follow through and it works for my exploratory phase
and also they understand it’s a continuum and then
they can go to a pilot study and gather some more data and information and also endogenous sample. So that sets the stage for now. For the qualification phase. So I hope that we don’t get mixed up what is a proper use of
an already-existing white paper, and also, not only that, there are many, many publications cited in this white
paper that you can use and learn from that. – [Man] This was perfect. I was actually about to
ask, Dr. Lee, don’t leave. We have it. So the question I was gonna
bring up actually a comment was that I wanted actually
Dr. Lee so that’s great, you just came. Dr. Lee and John Allison. Some of the people that were
in the original white paper. I’d like to hear from you guys. With these documents, the documents which I’m not gonna name, already out there, right, why did you guys actually
put the original white paper together? What was it missing for
the drug development world that you guys decided to come
up with this white paper? Right? – [Jean] Well actually
to tell you the truth I was the president of
clinical (mumbles) society. I really try to encourage
the collaboration of the clinical system and I
approach them and I say your machine is wonderful. Can we use your machine? And they say it’s a
closed system, you cannot. And that’s really how it
started, (mumbles) and I then said
we get no help because they did not realize there’s a good market in the pharmaceutical world. And so that’s how we started
and had the discussion. And so we got a group and we
have our AAPS ligand binding focus group. That’s how we produced this
white paper 18 months later. – [Man] Yeah I agree Jean. I think my participation
in large part was coming at it from a clinical scientist perspective. And I’d been publicly speaking
and giving case studies of where bioanalytical
laboratories following the PK guidance were actually
generating nonsensical clinical data. And they didn’t know it. So part of it was actually
recognizing which laboratories were doing biomarkers that
didn’t have sufficient knowledge. In my opinion to be doing
it and also from a clinical perspective where that
may impact upon subject welfare and safety. Which doesn’t happen only
with safety biomarkers. So I think part of it
was to try and start that move away from the bioanalytical
guidance and document what was critical and how
it could be done differently and I think we did a good job there. I think that was okay. To Jean’s point I think
there wasn’t and still isn’t sufficient collaboration
between the clinical diagnostic and research arenas and
back then the top five or five of the top pharmaceutical
companies in the world also had global diagnostic arms. And they didn’t talk to each other. So whilst some of these
systems are closed systems, we can actually still use
them in drug development by opening up how we generate the data. It just means that you need
to understand the systems and how you can do that, but some of those systems
can generate very good PK and PD data, and we can fit the actual documentation and the data to what’s expected in
the bioanalytical world so we can get the value out of them. So I think this particular
meeting encouraged me and I think I agree with Mina
that we do need to be careful and that particular paper
is very much applicable to the exploratory phase. For me it’s just great
that I think we’ve now got a consensus that we’re
not doing PK assays. – [Man] At the time
that we wrote the paper, the Jean Lee paper, the DeSilva paper was being generated too, which we used for most of our PK work. And the Mire-Sluis paper
and this (mumbles) paper came out of another work
stream that (mumbles). We recognize that these
were very diverse questions that we needed to answer. So from the beginning this
was a biomarker paper. We weren’t trying to
retrofit PK into biomarker. So we acknowledge that it’s endogenous. There’s a continuum. Your case study is a perfect
example of the gold standard going all the way to
making clinical decisions. We acknowledged that in the paper, but we recognize that most of the work we do
is with exploratory biomarkers that have an impact on the
clinical study and hopefully we say hey, they correlate
with the outcome of the patient for efficacy and we get an
earlier read that this drug is working and if we have
that good correlation, we want to present it to the
FDA so that they say hey, this is a great drug. We want you out there with it. I mean there’s motive behind this. Let’s face it. And I think one thing I
would like in the paper is some examples of efficacy
markers rather than just toxicity markers
because I think there’s an overall need for a kit,
an instrument based platform for something that’s widely
distributed for all sorts of indications but I think
when you’re talking about efficacy markers it’s
usually pretty targeted to a patient population, certain disease, and those are not gonna
be put on a platform. It just economically doesn’t make sense; there’s not the market. – [Man] There are markers, right, that are used for efficacy. I mean, lipids. – [Man] Oh yeah, okay. But I’m just saying that
if you use some of those straight off a clinical assay,
you’re gonna get questions back from the FDA. How does the impact of your
drug on some other things and there’s a lot more,
I always wonder sometimes if you have a biomarker
that is highly upregulated, why a CLIA lab would be
a little bit concerned that if it would potentially
there could be some carry over. You may diagnose a patient, the next sample coming into that analyzer and because it’s never seen this before. Just a little few caveats
for even CLIA labs need to be a little careful about. I’ve had CLIA labs say
we don’t want to run your experimental samples on our analyzer ’cause we don’t know what it’ll do to our, what patients we might put in risk. – [Man] So just one comment
to add to my other coauthors on that paper. My goal when I went there
and that was 10 years ago and we still have that picture
and it’s a great picture of the group and I actually
think I was taller, less gray, and John was amazing. But I came in there with
one goal and that is to get the word fit for purpose
to not be a dirty word. Fit for purpose doesn’t
mean a crappy assay. It means the assay is made for the purpose and we go from something that is a yes no up to something that
is highly quantitative. And that paper is written
exactly like that. And I was able to come back
to management who before would say no, I want
a PK type of an assay, I want all these criteria and
I said no we’re gonna do it fit for purpose and that was
not a comfortable discussion. After the paper was
published and they read it they understood it so that’s
kind of one of the goals that we had as well. – [Man] I guess as another author I think, you know I was at an
early stage in my career and we were bringing biomarkers along. There really were no standards
or ways about doing it so I think it really was a starting point and I think what we’ve
seen as we’ve talked about this continuum now is it
really does serve as a place or a benchmark for how
to do exploratory work. So I think we can really focus
this paper as they’ve said on the qualification aspect. My question though Yuri. You had I think what looks
like a really good continuum figure, if you will. But if you could go to that slide. Back to some of the
questions people have asked. At what point would you say to engage the statistician to find out what your cut points are and
therefore that would have guided you as to what your
control setup would have been. How could we overlay some
of these key timings? – So I tell you that and
Shelli can comment on it. I mean we did the cut points
after the exploratory studies. And then confirmed the cut
points with the exploratory context of use studies and
now we are doing actually confirmatory studies of
everything where we can confirm it, but by the time we first got a statistician we were basically done, right Shelli?
publication for the early exploratory work we’re just
looking at GLDH as a liver biomarker, we probably should
have had David on board earlier than we did but as it turns out it’s such a good marker it
didn’t really change anything. But yeah during the IND submission, that’s when we kinda got
the statistician on board and kinda looked at the cut
points and now like I said we’re confirming them. – [Man] So this was a
biomarker that you identified preclinically and then
you implemented it into the phase one setting? – It’s been around for a very long time so like I said the PSTC looked at it maybe starting 10 years ago preclinically when kinda the PSTC was
more of a preclinical focus and we could show that
GLDH is better than ALT at predicting liver
injury based on histopath. So we had a really good
foundation so yes I guess you can say we did but then we
knew it was a good marker so then we started looking at the clinical translation piece. – So I’ll just comment
on the (mumbles) GLDH, how did you guys come up with
that, it’s been around forever. So yeah, it’s been around forever but nobody actually had a clear context of use. When people were looking
at it they were thinking it’s better than ALT. It’s really not, it’s about
the same, it might be better in case of metabolic diseases. Because we do now population studies. It’s really cool what
you can do with this. But as kind of a marker it’s like ALT. Where it becomes really cool and has an impact is specificity. Liver specificity. It’s basically a liver
specific ALT and that solves lots of problems, even in drug development where you have these increases of ALT, which could be for metabolic reasons. It could be a (mumbles) action. It could be anything. And maybe those are all extrahepatic. Right, so with GLDH you can address it. Fairly simple. So we got into it because of addressing the specificity, and then we kind of realized, wow, this is really interesting. And then you start talking
to the patient advocates and saying like okay,
what’s actually really good for the general practice? Right so then everything
kind of snowballs and now we are talking about looking at Nash and all these things in
large population studies. Thousands of subjects. And it looks really cool. So that’s how kind of the
history for that marker is. So it’s been around, overlooked. – [Man] So I guess the
other question is did you, using the data that you have,
did you go back and compute say the performance
standard using the cut point in the total allowable
error as we’ve talked about? And if so, if you didn’t
I guess then it’s– – I don’t believe we did. – No, we didn’t ’cause the
way we came up with the cut points was actually a
really interesting approach. It was basically because
clinicians know how to use ALT. We hung GLDH on ALT. There’s an unbelievable
correlation between the two as you’d expect, right,
if you’re just looking at liver injury. If you throw muscle injury in, that’s where you get the discordance. So that’s the way we came
up with the cut points and it’s a really tight assay. We looked at the difference
between the intra-subject variability, the variability in general between ALT and GLDH, and GLDH is tighter. So we need to go back and
do that just like we need to go a little bit further
with the kidney safety biomarkers and do the same exact approach. – That’s why this framework
which we are discussing is actually important because and I agree, I don’t know who said it,
it just should be written the way that a biomarker
scientist, not necessarily an expert in validation,
can read it, enjoy it, and know where to go, whom to talk to, and how to do this. So this is extremely difficult to write the way that it’s readable
for general audience, scientific audience,
provide enough information not too (mumbles) but it just
kind of make people think that framework, what do I need to do, in light of all the things
with context of use. Well all these things
that we are talking about. And validate, characterize the assay. It’s characterization of an assay. – [Man] Alright, sorry to
hog the mic, last question. So now if I want to bring forward a GLDH, it’s already, assuming
it’ll have a context of use application, but I don’t use
it on the Siemens platform. I just chose not to do
that for whatever reason. – Mm hmm. – [Man] Hypothetically. Now do I have to, again, is
there a comparability approach that I can take or do I
have to go through and do, in essence, demonstrate
context of use all over again with now the new test? – Perfect, thanks Shashi. – [Shashi] Yeah in our
qualification determination letter we do say or the guidance
we say that alternate assays can be used. We are qualifying the
biomarker, not the assay. That said, when you use a different assay, you may need a different cut off and so on so we realize that and so
we actually have copped out and said to talk to the review division and deal with it in that context. But generally I think you can
address this a little better but something like a bridging
study might be the ticket. But basically that’s what
we say, talk to the review division, ask them what
more is needed and they will probably consult CDRH and so I think– – I mean in the end it’s
a question of, yeah, you want to make sure that
the assay you’re using is giving you reliable information so if you’re changing
the platform, even if, this happens in in vitro diagnostics all the time, you’d have to internally
validate that assay for your context of use for
your instrument to make sure that you’re still getting
the same information. – And even like we’re
gonna start looking at the kidney assays that
the PSTC’s qualifying and even internally you need to run some kind of validation
so maybe not 20 days but you know we would do a
pretty complete validation even on the same kit internally just so we have confidence in it. – Please go. – And there are certain
parameters that would be affected when you move it from a
platform to a platform. Maybe not sample stability
per se but precision might be affected or
accuracy might be affected so really it should be
like a risk based approach like what can you envision
to be affected by this change and then you validate accordingly. – So I’d like to say, I mean
this is a bit out of scope for the white paper. We decided not to address
this because we’re gonna have to have a larger conversation around that. We as biomarker submitters,
as PSTC, as CPath, we want these biomarkers to be used so we’re trying to come up
with constructs by which these biomarkers can be used. For instance, for the
kidney safety biomarkers, they’re being run at PBI. And so anybody could work with that specific bioanalytical lab and go ahead and qualify
and use these biomarkers. The issue is when we did
the GLDH is it was Pfizer who actually was a single laboratory that ran all these samples. So I dunno, you guys don’t
do GLDH analysis for people, do you? – Well Shelli and I are
buying the instrument and running it in our garage, no. (laughing) – [Man] So just to further
John-Michael’s comment that to one of the points
of the white paper example is that all of that validation
work will be published and made available and
then you can reference that so I realize that yes, if
you need to go to another platform and do this with
another test just as Shashi said, you need to show that it
is fit for what it is doing on the new analytical system. We at least have all of
those published because again as Pfizer you’re not
releasing all of those details and most of the biomarker
qualifications that have come through already are not
public, there are only a limited number that actually have data available. The (mumbles) one comes to mind because it was also an approved test for a different purpose. So this lets us set our
goals a little bit better if you see that all of
the assays that we’ve used in this qualification have
15% precision every way we measure it and your assay has 25%, you ought to be able to assume
that we’re gonna have to do a lot more work to get
it to the same place. – Yeah, well these kind of
criteria depend very much on the context of use
and on the assay, right? – [Man] Absolutely, absolutely. – And on the analyte that
it measures so I don’t know if just looking at other
qualification validation activities gives you exactly
an idea of what you need to do. – [Man] No, but I think it’s
helpful as a starting point. – Definitely. – [Man] But it should not be exact. – You can see the type
of validation assays but the details are…
– Right. – [Woman] I just want
to have a simple comment to follow up on the question Chad raised. I mean can you retroactively use the data to calculate
the performance standard and then that is estimated
at the cut-point concentration, and then say
that this assay is good. – Well I think we have to– – [Woman] In the white paper? – Yeah I think we need to
prove that to ourselves, right? We laid this out but I
don’t think we’ve tested it and so I think that’s a good thing to do. – You can use this example
and just (mumbles) everything. We got all the data. – [Woman] I mean retroactive
calculation is okay but I’m just really setting out an example to say, oh, it’s really useful, that’s my comment. – You can also do it in
the confirmatory phase. – So I just want to make a
very short comment on this. I don’t know whether
we are overthinking it. Because there is in every
hospital in the clinic as people are coming with
assays they are measuring these LDTs, they’re applying
them and the doctors make a decision on it. They are governed by that whole CLIA thing, right. So now in what we’re
doing and I don’t know whether that is as stringent
as what we are talking about here, I think it’s less stringent. Do you agree with me? (laughing) Oh good, good. So I just kind of think that
we need to have a standard for the qualification
which gives the reviewer, us and the reviewer, a
confidence in the biomarker so the assay is just the tool
and I’m sorry if I’m offending anybody okay with these things
but that’s what we need and we need that
characterization of that assay that will allow us to
assess that context of use. Then comes the application. That’s gonna be done in
different laboratories. Shashi said it, we’re
qualifying biomarkers. You’re not qualifying an assay. That’s different from CDRH where, if I understand it right and correctly, Juliane, you are basically IVD, it’s an assay. Which needs to work a certain way but it’s an assay. It’s a different way. So now if I as a sponsor
want to use an assay, then I can use this context
of use and the characteristics which is done through the
qualification and then I go to the lab and need to
validate my assay platform the way that I can do all
the work if I want to do for decision making I need to
do the CLIA piece on it too and boom I can use it. I think that’s as simple as that. Am I mistaken? Okay. I’ll leave it up to you. – Go ahead.
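The step the speaker describes, taking the qualified biomarker’s context of use and then validating your own assay platform against it, can be illustrated with a hypothetical platform-bridging check. This is a sketch only: the sample values, the ordinary least-squares fit (used here in place of a Deming regression), and the acceptance limits are invented assumptions, not numbers from the GLDH qualification.

```python
# Hypothetical sketch of a platform-bridging check: run the same samples
# on the qualified platform and on your own platform, then compare
# slope, intercept, and mean percent bias. All values are invented.
from statistics import mean

qualified = [12.1, 25.4, 48.9, 75.2, 101.6, 148.3]  # e.g. GLDH (U/L), qualified platform
in_house  = [12.8, 26.1, 50.3, 77.0, 104.9, 152.0]  # same samples, your own platform

# Ordinary least-squares fit: in_house = slope * qualified + intercept
mx, my = mean(qualified), mean(in_house)
sxx = sum((x - mx) ** 2 for x in qualified)
sxy = sum((x - mx) * (y - my) for x, y in zip(qualified, in_house))
slope = sxy / sxx
intercept = my - slope * mx

# Mean percent bias of the new platform relative to the qualified one
pct_bias = mean((y - x) / x * 100 for x, y in zip(qualified, in_house))

print(f"slope={slope:.3f} intercept={intercept:.2f} mean bias={pct_bias:+.1f}%")

# Acceptance limits would come from the context of use; as an invented
# example, require slope within 0.9-1.1 and mean bias within +/-10%.
acceptable = 0.9 <= slope <= 1.1 and abs(pct_bias) <= 10.0
print("bridging acceptable:", acceptable)
```

In practice a Deming or Passing-Bablok regression is often preferred when both platforms carry measurement error, and the actual acceptance criteria would be set by the review division for the specific context of use.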
– I’m going to go back to something you said a
few minutes ago around creating the paper so that
the biomarker scientist can read and understand what
is needed for validation and obviously the core of
it though is for the lab scientist who’s going to be
saying this is how I have to specify the validation and conduct it. I want to put out there that there are, and I’m coming from having
spent many years in pharma now being on the CRO side,
there are many smaller companies out there developing
drugs that want biomarkers that don’t have the biomarker specialists and are coming to the
CRO industry and saying we want an assay for this
but they can’t give us all of the details and so
I think this paper should spend some time emphasizing
either the CRO has to create that biomarker scientist role
or somehow there has to be that input into the process
for the smaller companies and the CROs that are being
asked to do this because yes, we can develop assays for
a molecule, but for this you really need to know so
much more around the process going into that. – Very good point. I think hopefully what we
can do is in the white paper is do both. I mean clearly we need
to speak to the biomarker scientists and speak to the
bioanalytical individual in the smaller companies,
it’s a great point. – [Man] So I’m gonna try to
bring some of the concepts that we all discuss home a little bit. And I think you’re gonna see that I think we’re all driving at the, you know, in the same direction.
– Mm hmm. – [Man] The reason I
asked the five or six authors to come in and talk about this great paper that they put together, which we know as the Jean Lee paper which at the time I think
I was in kindergarten but, you know, at the time.
(laughing) And I learned, right,
these guys are the experts. They put it together. And so again I go back to
a comment I made yesterday. That’s an anchor that we are
using for fit for purpose, exploratory biomarker. That’s what we said, right? We’re all using it in one form or another some are more rigorous,
some a little bit less but it’s there. On the other hand we’re
talking about the biomarker qualification and we’re trying
to set an anchor right there. What I’m hoping that we accomplish. Okay there was a reason
why we have all adopted the Jean Lee paper. So we have two anchors. How much validation are you gonna go and do to move from the exploratory
eventually if you decide to go into biomarker qualification, that’s somewhat company-dependent, culture-dependent level of risk. We established that. So then the question is, not a question, the comment that I’d like to
make is that whatever we end up for the biomarker qualification,
let’s not reinvent the wheel in terms of acceptance criteria. It goes back, if you go back
to the slide that you had that shows the number of wrongs you right. Oh you’re wrong, we already talk… To me again it’s gotta be
a continuous process. We all agreed on that. So to me again this precision,
your route of accuracy, the parallelism, all
this stuff, it is there. How early do we decide to move
from doing an exploratory. So again I want to make
sure in whatever we learn from CLSI guidelines to add to this biomarker qualification, great. Some of the folks may want to
take some of this information goes back to Russ’s comments
that you said you remove it, but maybe we need to
either be even more clear on what we’re saying is
that to move it and say here it is, you got the two anchor point now if you want to go
ahead and use some of this for exploratory be my guest. Maybe you want to address that earlier. But just let’s not reinvent
the wheel at the biomarker qualification because it’s
gonna have a disruption between these continuous processes. So I hope that makes sense
but that’s what I hope that we can achieve in this white paper. – [Man] And I just want to be
clear about what we’re calling that one anchor biomarker qualification. Are we talking about
biomarker qualification from a CDER point of view,
or are we talking about biomarker qualification for a manufactured or marketable assay? ‘Cause that’s not what
biomarker qualification is. – [Man] CDER, exactly. – CDER point of view. Yeah it’s a good point. – [Man] So just a couple of
quick points of clarification. I think what we’re talking about
here in terms of validation exercises will be very
similar to what the Lee paper calls an advanced validation. And to the point of analytical platforms I think automation in
clinical labs started in 1957. Now whilst I was around
when that was the time I wasn’t working with them until 1972 but since that date I can
tell you that those clinical systems when we think
about all of these issues that can happen analytically have all been thoroughly thrashed out
to the point of carry over to Jeff’s point, that’s a
critical part of all of these. And it was interesting
there was a focus group in the ligand binding group of the AAPS called the 21st Century Lab and they put together a list of criteria and I know Chuck was
very active in this role of what their wishlist was
that they would like to see on automated technologies
in the bioanalytical arena. And I remember going to
one of those meetings and there was a whole list of things and (mumbles) said well you do realize that virtually all the clinical systems have had all of those for 15 years now. So it’s very well advanced. Now I usually do biomarker
analysis on 18 different platforms at the moment, six of which originate
from the clinical arena. And I choose the best platform
for the right biomarker for every project to get the best results and for things like carryover
it’s typically very easy, we do a prozone study with a blank directly after the prozone samples. That will tell you exactly
what sort of carry over issue that you have. So I don’t want to sort
of drive regulators away more than they already
are from clinical systems. I think they have a great role to play and they can give us some great results. – Thank you, makes sense. So anybody else? (laughing) – [Woman] Does GLDH (mumbles) beforehand? – That’s an interesting question. Shelli, please. – So yeah so preclinically
we’ve looked at like 40 plus just at the PSTC different
(mumbles) and it definitely depends on the (mumbles), the
type of injury where it is in the organ so it varies
so you can have examples where GLDH goes up first
and you can have examples where ALT goes up first. Clinically we’ve seen it
go up in the small samples we’ve done it pretty much
the same and I don’t know if that’s just, it’s more drug
induced but minor changes. But yeah, it varies. – While we are on that subject, what is interesting about it is actually the half-life. So the half-life of ALT is about 48 hours and the half-life of GLDH is about 14 to 18. So what is actually interesting, if you look at subjects recovering and not recovering from acetaminophen poisoning, you can see early whether the subject is recovering or not. Because they start dropping
the GLDH levels much faster if they go to recovery. Anyway. – [Man] Just a last comment quickly. I want to make sure maybe
we put a nail in the coffin of this reproducibility conversation. (laughing) So you know again let me use the example. If you send the same reference material, okay, same reference material that we buy from the same vendor and
you send it to the 10 pharmaceutical laboratories
and to 10 academic laboratory, all the 10 pharmaceutical
labs gonna get it the same, almost very similar. Five out of the 10 academic
lab may get it right, the same as what we’ve got. Now you send 20 different
reference material to all of our laboratories,
we’re all gonna get different results. So I want to make sure this
idea of reproducibility that we talk about. We heard yesterday, some
today that it has to do, we need to, you know,
worry about our people. Part of it is that we all
have system suitability in our companies, we hope we have. The people I work with they
have system suitability. So there is a training
program, there is other things, so it is usually not the
analyst or the scientist, most of the time. So the true, the reason we
come back and say oh my gosh, 80% plus of the biomarker
data in the literature is not reproducible, it is not because they’re bad laboratories, most of them. Okay, especially the pharma and biotech ones, they produce those results,
(mumbles) scientists doing that work. It is truly because we do not
have a gold reference standard for 90% of the biomarkers. Out of the 700 biomarkers that we go after, only a handful of them have reference standards. And that’s the world we’re
gonna continue living in so let’s deal with it and get used to it because if you take WHO,
NIBSC, all the organizations out there that are gonna
go out and can come up with a gold standard, they
put all their resources, their entire department,
they can put no more than 10 gold standards, biomarker gold standards, a year out. So that will take 700 divided by 10, 70 years to get it all. So again I want to make sure
that this reproducibility, it has to do truly in my
opinion the majority of it has to be the reference
material that we all use. – One thing that we
didn’t point out overtly is across both the
kidney safety biomarkers and GLDH, we only ran
it at one laboratory. We did that on purpose. Because again it’s not
about the assay and the qualifications, it’s about
showing that GLDH is responding. So that was a good thing we did. We thought about that
ahead of time and then we were worried about the reproducibility and the additional work
it would take to go ahead and cross validate assays. So that’s another thing to think about when you’re qualifying a biomarker. If possible, use one laboratory. – Yeah and you got the
other extreme is if you look at these clinical assays on
these automated analyzers, we reran a couple hundred samples for ALT which was measured at
University of Michigan and we reran them at our
lab and they use the same Siemens machines and the
statistician was saying like you guys got the same
data, it’s not possible. (laughing) It’s perfect to the nth degree, right? So you would think that the CDRH and the FDA do something right. – Well for in vitro diagnostics you know it’s the manufacturer really that’s anchoring the assay ’cause there’s a lot of
in vitro diagnostic assays that do not have a reference
standard, it’s very common actually and then the
performance of that assay is linked to a clinical outcome. That’s how it’s initially,
that’s how it comes through FDA and that’s really what
tells you what the number that your assay gives you
means in your intended use population, right, it’s anchored by the clinical performance. And then the manufacturer
has mechanisms in place to make sure that years down
the road in different lots in different people,
in different locations they still when you get
a number five you know what that means based on their original clinical performance. It’s a bit more difficult to do you know, and you don’t have that
manufacturer anchor. – Yeah that’s, I think
it’s really important, this journey, understanding the journey to develop the test at the end. So what I’m kind of hoping is that with this simple assay, which somebody already makes and so on, we can really walk the learning which we make on the way into in vitro diagnostics. That’s actually the
interesting part of it because that could be then applied. Because now we can
discuss here this the CLIA and using it because we have an assay, we have an example, right? I agree with you, the
example is too simple maybe on one sense, but on the sense of process it’s actually a good
example because then you can think about it okay and when
you have the proficiency testing how do I do it? Yeah it’s easy to do it
with a GLDH assay if I have and then it needs to be
done differently right but still you have, we
have a kind of a process which at least somebody
walk through and publicly and we all see it and then
we can adjust this kind of a conversation around it
and the guidance, general guidance I’m hearing from
people here talking about the document has to be readable
or needs to be good for biomarker scientists and
bioanalytical scientists. So they have to, because we need to talk. We are all biomarker scientists. But I’m crappy bioanalytical scientist. (laughing) But we have to get together
and the outcome is a biomarker or interpretation of a parameter
in light of human health. Make sense? – [Man] Absolutely, I just
want to caution and I think it was brought up. This is a beautiful, if
you take this and we put it in a paper as a flow
diagram, as a process, I’m okay with that. But if you’re gonna go in there from, I’m an enzymologist by
training so I look at that and I say specificity,
where’s your kcat over Km and where is it. I mean there’s all kinds of
stuff that I can go ahead and throw at this. So what I would say is
great process, let’s map it. Well for example how you
said of specificity for a binding assay versus a
potency assay or activity assay, very different. There’s a reason why 90%
of the clinical assays are out there, they like
to measure things based on potency and activity
and not on binding. So agreed?
– Absolutely. – [Man] Let’s be careful with it. – Yep, I know. I’m seeing five. – Well I think we did a
pretty good job on this one. Clearly we drove a lot of conversation. So maybe you guys deserve
five extra minutes of break. (laughing) – I think so. (applauding) Thank you very much. – We’re gonna come back
in about 20 minutes. (low murmuring) – [Man] You wanna throw
the feed on the microphone? That is actually mine. – [Man] So everyone our 20
minute extended break is up and not being extended to 25. – Okay. Thank you all for getting back
and ready for another round of participation, I’d like to
introduce the next session. This was put into this
program at my suggestion and I’d like to really
give you a little bit of the history why. Since we’ve had a number
of divergent white papers and consortia and groups
who have tried to do fairly similar sorts of things,
I felt it was very important to involve people who had been
doing this in the industrial concern, generating these
data, validating these tests, and have been working
with a regulatory agencies over the past dozen or so years to actually have input
into the white paper. None of these people were authors on that but they are experts in the
field and their feedback on this process is actually
very important to our balancing and understanding in between
the industrial perspective and the regulatory perspective
and maintaining continuity with those efforts that
have been done in the past. So I would like to turn this
over to Mark Arnold who has a great deal of history
in doing this kind of work and thank you very much for
putting together this session. And good luck. (laughing) – I almost feel like a Gallagher
audience at this point, that we all need coveralls
for what may be coming at us. Well you know with Steve
saying that he was the cause for us to be here. I really also want to
thank the committee because they recognized that they
were working as a closed group and they needed the industry
input as well as having this open session and really
soliciting a lot of input from the community. Which I think will go to make the most robust paper possible. So with that, I will allow
the rest of the panel to introduce themselves. – Is this on? This is Lakshmi Amaravadi
from Sanofi Genzyme and I’ve been working
in the biomarker space for more than 15 years and
looking forward to discussion. – Chad Ray from Zoetis and likewise. I’ve been in the field
for roughly 20 years. Almost the whole time in
biomarkers of some sort. – Lauren Stevenson from
Biogen, been in the field for 12 years, in
biomarkers for the last 10. – I’m Steve Lowes from
Q Squared Solutions, 25 years in regulated bioanalysis. I actually come from the LC-MS angle of things so hopefully I can bring that perspective out a little bit more in this discussion here. – Okay. Okay so moving on. This group all had the
opportunity to read the paper ahead of time and had a couple
meetings and we’ve collected some of our thinking here
about the points that have been discussed pretty thoroughly
but we want to emphasize and then kind of use
this to set the stage for any other questions coming
in from the audience here or the audience on the web. And we really would like to
have more audience on the web participation in this
portion because this is the opportunity where
industry is really getting the opportunity to give that feedback. So please, send your
questions in, let’s get them and allow this panel as well
as the rest of the audience to provide input. So we all went through the paper. We thought it was a very good framework. We’ve liked these aspects of it. It’s framing what we’re looking to do, it’s giving us that context
and it’s telling us not only how it’s gonna be used but
what the expectations are for the assay and so the
critical topics are there. It gives you the analytical
characterization. The point that I like the most
which is what I championed when I was in pharma with my
group and now that I’m out at a CRO is you really have
to understand the biology of the biomarker and from
that you can build the assay specification and it is going
to be an iterative process as you go through things that you learn, you take that learning about
the biology because it may be different between healthies and patients, and you go from there. So in this case what
we really want to do is have that framework and
then be able to apply it to each new biomarker so that
it is answering that context of use question. We were pretty much in agreement around the scope of the paper. Confirming disease modifications. And limiting it to the
qualification of a biomarker for a disease. You know a lot of discussion
has been here today on and yesterday what is that
scope and what’s the creep going to be, how do we keep
the exploratory biomarkers from having a rigid framework
or rigid requirements? And so I think the scope
setting we’re all in agreement, that’s the way to set this,
later on we can figure out how to apply it to the other space. And then keeping it to the
two technology platforms. We think was the best approach right now. Those are two of the ones
that there’s a lot of experience with and a lot of opportunity. The overall strategy that
you’re gonna go through for the validation is gonna
be similar and we do want to point out and it’ll come
out later that there are some differences in the types of testing that you’re going to do just because of the technology differences. So with that, we’re gonna start going into a little bit more of the particulars. And I’ll turn it over to Lakshmi. – Thanks Mark. So I think what we are
going to do is just briefly take different points in
the document that we thought we should highlight where we agree and some that need clarification
and we’ll go through that very quickly for a few minutes each of us and then we’ll go to the discussion. So we appreciate the
context of use concept. You know some of us feel like
we’ve been doing this already with fit for purpose, they
are sort of one and the same. However, certainly as
Russ I think alluded to, fit for purpose was somehow
seen as a dirty word by some so we are very happy
and we want to associate a positive connotation for context of use. This means what makes sense
and it makes sense for us from a drug development perspective for other areas of biomarkers as well. So we appreciate that. And this also maybe touches on Mark’s point. It’s important to understand
this before launching into assay development so I
personally don’t really like to think of us into buckets
of bioanalytical and biomarker or whatever we want to
call our scientists. I mean these are biomarker
scientists and if you are indeed separating the
two, you want to make sure that both parties know enough
about the other component so they’re able to do the right thing. The iterative process of
analytical versus clinical validation was well-highlighted
in the white paper. And it was highlighted that
these were very distinct and I want to make sure we
also clarify that they’re interlinked so this is, you
know I think it’s clear to us based on the discussion,
but I think it’s important goes back to the impact within
our companies even right. So you don’t want to silo
people who do the assays versus people who think
about the biology then you don’t get the right result
and we see this internally as well as I’m sure I’ve
heard from some of our FDA colleagues and that’s why they’re
emphasizing thinking about analytical validation up
front before you just go all the way through. So I guess I spoke to the next point. And then coming to the
specifics of the paper. Full validation versus partial validation. While I understand what it
means, some of us feel that this makes sense, I also want to highlight this could be misconstrued
where when you go into the industry in organizations
that are very big, there are a variety of
stakeholders who get involved and it’s a cross functional
effort when it comes to biomarkers and there’s a notion, just like with fit for purpose, that full validation means that’s the gold standard and that’s what you have to do. And if somebody else, some
other department is offering to do full validation
versus some other department does partial validation that
maybe isn’t good enough. So I don’t know if there
is a good, some other term we can use, I think there was
a translational validation that was put in the white
paper, something like that in the table one, and maybe
something to think about. And then table one. This came up yesterday and
I was thinking about it last night and I looked at it again too. I think it is causing
confusion especially in the exploratory section. While we understand in
the white paper it means exploratory phase of biomarkers
that you intend to qualify, there’s a whole bunch of
other exploratory markers as many of us talked about. The 90% of the biomarkers
that are being worked on in the industry fall in this bucket. That our internal decision
making, it’s clear that that’s not what you’re referring to, but I think it’s important to, we cannot underestimate the
importance of highlighting that as well and make sure
that that’s appropriately covered or they can have it
all together in the table in the exploratory aspect of it. So specificity. So I do want to mention when
I was looking at the paper there’s a section there. It talks about for large molecules or LBA, ligand binding assays, that’s
not possible to achieve. I think Afshin talked about
this as well yesterday that… I want to make sure people
don’t walk away thinking large molecule assays do
not have to be specific or you cannot achieve them. I think it’s worth clarifying
what we mean by this. I think respective of whether
it’s a large or small molecule it’s essential to know
what we are measuring. So we have to consider
the context of use again. What is the specific molecule
you’re trying to measure? Maybe there are multiple
isoforms that your assay measures and you don’t care which
isoform it is that you want to detect, then that’s okay. So you’re making a conscious decision that you don’t need to measure it. For example, the IL17 story right with IL17 (mumbles) versus (mumbles), which ones do you want to measure? And maybe you don’t care,
or you do care in which case you want to make sure that
your assay is specific. So I want to make sure people
don’t walk away thinking you don’t need to think about it. And the way you achieve
specificity with the large molecule LBA is how you generate your antibodies. And you screen for what
you are looking for. So next slide. So PK assays are not
equivalent to biomarker assays so we’re all on the same page on that. I think when you read
the paper and as you know Russ Weiner again referred
to there are places in the paper that keeps going back to this PK assay validation. Many of us prefer that
we go at it (mumbles) thinking as fit for purpose
paper did apparently I just learned today, (mumbles)
thinking about how do we qualify or validate
these biomarker assays. I think that would flow much
better and we would eliminate a lot of the controversy
if some of the aspects of CLSI guidance applies, some
of the aspects of PK apply. Please incorporate them as
fit, but don’t have to keep going back to them other than, you know, citing them in the references
would be one way to deal with that. Again, going back to the
risk of misinterpretation, I think some place in the
paper it talks about starting from the PK assay and then sort
of working down from there. So the reference material
versus calibrators. So the calibrator rather. So the paper did a good job of
highlighting the limitations of protein biomarkers
and how to address them, these differences between
reference material and calibrator and how you address that in your assay, the limitations of it. As we alluded to yesterday, the term and the practice of commutability is a missed opportunity
that hasn’t been addressed in this paper. I do want to highlight within
AAPS one of the subgroups
biomarker discussion group there is a paper that
will be coming out soon, it’s accepted, on
biomarker protein standard identification and characterization. And that really has a good
deal of relevant information including number of case studies. Again as somebody pointed out earlier, 90% of the problems
really with the biomarker comes from the fact that you may not have the appropriate standard and how do you deal with that? There they also address the commutability using the statistical
approach of domain residuals and I don’t want to go into
that but I want to make sure at least we highlight that
and that can be referenced. So when the calibrator is not equivalent to the endogenous analyte, right, that’s when
we want to go back to endogenous samples because
that’s the biomarker you’re measuring and assess
your validation parameters, analytical validation based
on your endogenous sample. So rest of my colleagues
will address each of these parameters as relevant, but I want to come back,
talk about stability very briefly because that
won’t be covered later. So stability I would caution
that we give a hard look at using endogenous samples
for stability assessment. In the table in the
paper it does talk about reference standard or endogenous sample. So I want to make sure
that people understand the liability associated with
not using endogenous samples and I want to quote Stephanie’s
paper where it clearly highlights two different biomarkers. One behaved where the reference
standard stayed stable for a year and a half whereas the endogenous samples were only stable for four months or something like that. I may not be right on the numbers. And with the other biomarker
the endogenous samples were actually stable for more than a year whereas the reference standard was not. So they were able to
rescue their clinical study and go back to generate
a lot of valuable data even though the reference
standard did not reflect that stability. So I do want to make sure
that this paper is a good opportunity to highlight
that and, you know, avoid the misleading implication that you could just as well use the reference standard. Because that could be
a potential liability. – [Mark] Okay, moving on to Chad. – Yeah so I had the opportunity
to talk about statistics that were discussed in the
paper and from where I stand I thought the paper did a
really nice job of incorporating the concepts of total allowable error which I’ve highlighted here and I guess I think I brought it
up through the discussion, I think what we have to do is figure out how to now operationalize
these types of measures incorporating statistical support
and when is the right time to engage and build that in. So I guess just a couple
points at the top there. I think they did a really
nice job of describing an experimental design
that would allow you to parse out the various
variance components both within subject, intersubject,
and also the analytical, the CVA. So I think that was done very well. The concept of bias, I’m not
gonna cover it on this slide ’cause I think Steve will
cover it with much more detail and I think it’ll lead to
a fairly robust discussion but purposefully use the word
bias as opposed to accuracy. I guess where the panel was a bit split, not everyone may agree with
my perspective on the stats but so some of the comments
that came from the panel were concerns that maybe we were too prescriptive with the 0.5 times and 0.25 times factors: where do we, how do we really set those benchmarks? And then, what was needed: we need to take into account biological and analytical variability
and I think all of us were in agreement on that. It just makes sense. It goes back to the
learn and confirm model that we’ve stated from
the Lee paper all the way through this so. I think go on. – And so I’ll take this one. Yeah so relative accuracy bias. Obviously there’s a lot
of still perspectives that remain in our community about this. It’s kind of key to this discussion. This fact that you don’t have a true value of the endogenous level
is a fundamental challenge to the bioanalytical community that has been used to working with xenobiotic drugs. But that said I think
we’re starting to get a healthy discussion around it. I think we’re having the
whole bioanalytical community recognize the problem
and come to the table to discuss all of this
and I really do hope that some of the comments that
are probably being thought of and discussed on the other
side of that camera there get into this discussion. Obviously measuring
accuracy of spike standards has limited utility if
it’s not connected to the endogenous analyte. We’ve had that discussion
here over the last day and a half. It’s going to continue. This concept of the
continuum of working from an accepted assay all the way through. That’s a good one. I think it’s a little
bit dangerous to think that we have to set this
gold standard early on. That as a gold standard
you’ve got to do better than all of the time. That seems to be a
conflict of terms: if it’s a gold standard then it’s a gold standard, but we’re talking here
about establishing something that we can build upon as we
go through this continuum. So I hope that that is preserved. Relative accuracy or bias
being linked to the context of use in itself precludes
prescriptive criteria. But let’s also keep in
mind that it’s gotta be relative to something. Even with an LCMS assay
where we’re dealing with a well-characterized reference material, it’s relative to that white powder. So it’s always relative
so I don’t think that should be something which scares
the bioanalytical community we just have to think
about it in a little bit of a broader sense. We’ve mentioned that
commutability isn’t addressed, I think, well enough in
this paper at the moment. And it is in itself I
think predicated on having a good, a decent handle on accuracy. Without that accuracy,
how can we be able to say that we have an assay that we
can compare to something else? Knowing what it’s relative
to is the fundamental aspect of that commutability. Defining accuracy bias
of a bioanalytical assay, there is a consensus. But it’s developed a lot of confusion and we need
to get through that, we need to get past that
and it’s a real fundamental of this discussion. That bottom bullet point of protein versus small molecule biomarkers. I would suggest that if
we’re considering LCMS and in the white papers
it stands at the moment there are certain sections
that make reference to the fact that, in cases where
we do have a well-characterized reference standard then
let’s take advantage of that. I believe that there’s a
lot of the bioanalytical community now working with
biomarkers where we do have a well-characterized reference standard, and when you have that available, let’s take full advantage of it. And I think that that
needs to be fleshed out a little bit more because
there’s too many of us working with lipid biomarkers,
with peptide biomarkers, where we can do a very good thorough job around getting closer to that
absolute accuracy concept. So I’ll leave it at that. – [Lauren] Okay so I’ll pick
it up here with parallelism and dilutional linearity. We discussed a fair bit of this yesterday. We really just wanted to highlight here to make it abundantly clear in the paper that dilutional linearity is
related to control samples which are spiked with reference material, whatever reference
material you have to hand. And I won’t go into the details
about how you assess it, but it does require that
an expected concentration is known so therefore as
Steve just referred to if you don’t have a
well-characterized reference standard that you’re fairly
convinced is very much like if not exactly like your endogenous, then the relevance to your
biomarker assay is going to be more limited. Parallelism as Steve clarified
very nicely yesterday requires incurred samples
meaning it can only be done with endogenous analyte. And that is where you want to demonstrate the parallelism between your standard and your sample dilution curve. Here the relevance to
biomarker assays is absolutely fundamental, right? I think we had good discussion yesterday where we all agreed
whenever possible absolutely endogenous analyte is the gold standard. So of course there is complexity
in practice with that, right, because often as we
spoke a lot about yesterday you don’t necessarily have
samples with high enough concentration of endogenous
analyte to perform the assessment the way you would like to, but you need to be really
careful when you think about spiking on top of that
endogenous analyte and that was a little bit of what I
was referring to related to our case studies is that
this concept has really been discussed quite robustly
in the last couple of years especially since Crystal
City six and I think there has been a change or an
evolution I guess I should say in the scientific thinking here. And so what I heard yesterday
was we all agreed that if that’s all you can do at a given time, it’s very well understood
that’s all you can do. But that there should be,
perhaps, some clearer statement in the white paper that says
when you do in fact have access to endogenous samples of
appropriate concentration that that’s something that
you want to go and revisit. And I just want to turn it over to Steve in case he has additional
comments regarding, in the context of LC-MS, he may have a better understanding. – With regard to parallelism, yes, yes. I mean that is one point. Where we do do a parallelism
experiment with the surrogate matrix because we’re doing
that with a standard addition and Lauren will point out, well, if you’re doing standard addition, what you’re spiking it with, and it’s not your endogenous analyte, which it isn’t of course,
then it brings into question what are you doing there. So we acknowledge that that is something that we have to address,
but where we do have a good, again, a good, well-categorized
reference material. Certainly in the LCMS
community we’re doing it. We still have to do a
parallelism experiment of showing that the surrogate
matrix that you’re using, blank matrix, spiking into, behaves the same way as the actual sample matrix. So that is one example
and just at the break I was talking with Chris
Evans that another example of an LCMS experiment that
we can do, we have to do, is post-extraction sample stability and that’s not something that
the ligand binding community are doing, but it is
something which is a nuance and a well-established
experiment that we do and we can do and we should do
with LCMS and it needs to be added to the discussion here and it needs to be added to the paper. – [Mark] Okay. – Okay and then as I touched
on earlier but I wanted to highlight here is that
again the relationship between parallelism versus
spiked recovery and we really wanted to highlight from
the industry perspective the rich data that we can mine from these parallelism experiments. It’s not only the parallelism test itself, but as Afshen highlighted
yesterday, use it to identify your MRD and I know he
spoke about prozone. But what wasn’t discussed
was a means to also estimate your LLOQ in relation to
your endogenous analyte. I think all the examples
I’ve seen over the course of the last couple of days
when they were speaking about determining LLOQ that
was still with a spiked reference material and not
with the endogenous analyte. It would be nice to see if
we could incorporate that into the white paper and
also that when you assess parallelism in multiple
individuals, this is actually a really robust way to assess selectivity. This is going to be directly related
to that endogenous analyte as opposed to a spike recovery experiment. So really just a way to
highlight that there is certainly value and there’s a time and
a place when that’s the best you can do to do a spike
recovery experiment but it doesn’t replace the
true parallelism experiment and it really doesn’t tell you about the selectivity in relation
to your endogenous analyte. So if that’s the space you’re
in, just sort of a caution to say that needs to be
interpreted conservatively. Accurate recovery of
spiked on top of endogenous may, it certainly has helped
me sleep better some nights in the past to feel like, okay, certainly in a ligand binding
assay I’m reassured that my critical reagents
are seeing my endogenous and my recombinant in a similar fashion, but I’ve also had
perfectly viable biomarkers for measuring the endogenous
analyte where that addition experiment A plus B does not
equal C, it equals something like W. So it doesn’t mean that
the endogenous analyte can’t be measured well and
when you do have a case where it adds up, it
doesn’t automatically mean everything’s perfect so. That’s it. – [Mark] Okay. Steve are you gonna
cover this one or was I? – [Steve] I think you were taking this one but I can take it. – [Mark] Okay. – [Steve] So the framework is
what we’ve started with here. That is in my mind the
take away that we need. That we need something
that as a community, whether you’re a
bioanalyst, whether you’re a biomarker specialist,
whether you’re a statistician that we can pull out and we can use. I’m a little bit concerned
with the references that we have in the document
now that say see the CLSI documents, which many of us don’t have access to. Hopefully something can
be done about that but at the moment there’s a
lot of references to those. So Russell pointed out that
there’s a CLSI document related to commutability,
just go and look at that. Well that’s not easy for
me to do at the moment. And worse so it’s not easy
as a CRO to be able to go to my clients and say hey,
there’s the reference document that you need to look at so
we all get on the same page. Either we have to extract
that out of the CLSI documents or we need some mechanism
of being able to get all on the same page. I like the concept that
was mentioned this morning about this being kind of
a thought map that we need to go to, we don’t want
to be too prescriptive, but we do want to be able
to still talk to a process with kind of a common
language across our community so that we can all be
having the conversation, we’re not talking different languages. And this point that the
expectation that there’s gonna be this continuum, it’s gonna
be this iterative process of moving through the pipeline of a drug. Obviously that’s something
I think that we seem to have universal agreement on. That there is the expectation
that it’s gonna be that continuum. And then the last context
point differentiating drug development from
surrogate end points. I think we’ve pretty much
hit on that one already and I think we’re on the same page. – [Mark] Yeah. So we’re to the kind of end of our input and just some of the things summarizing what we’ve put out there. So it’s really turning it over now to the audience here and
hopefully some of the folks on the webinar who may
have sent in questions. If you don’t have anything
we’ll start talking amongst ourselves and
discuss some of the issues. (laughing) – [Shashi] Hi, Shashi Amura. About the surrogate end points. One can theoretically
come to the biomarker qualification program also
for qualifying biomarkers as surrogate end points. But that’s not the only
mechanism as you know. Just wanted to point out that
that option is also there. – [Man] So as world class
experts you clearly demonstrate content knowledge, I really
thank you for your thoughtful considerations and I know
we will certainly welcome prescriptive input, not just,
I don’t like this section but rewrite a section for us. We certainly look to
you guys as leaders in the community to do so. I wanted to just dig
into a few little things. So Steve we talk about
small molecule biomarkers being exactly the same
as the endogenous form and I want to make sure
at least people are aware that biology in endogenous binding space is different versus
surrogate calibration matrix and so even isotope
dilution mass spectrometry we do have to be very conscious
of binding the internal standard like before with
the analyte in a specimen. So we did downplay the
accuracy piece and please tear this apart and say you don’t like it. We downplayed accuracy
because the end result we aim for is a reference
method, a high-order reference method with higher-order, value-assigned reference
materials in true human matrix as truth. That’s our anchor point for true accuracy and so we didn’t want to get
into this accuracy I can spike and everything looks good, I’m a spiker like Brad Ackerman, if
you’re listening, you’re still a spiker. (laughing) Sorry Brad. We didn’t want to really
use the word accuracy because in true metrological space for us it is reference method
but it’s commutable, i.e.
this sample, commutable reference materials which
made us back off that because as we’ve
described, Steve described, so very few have reference materials. So I just wanted to say that
that’s how we flavored it but if you disagree
certainly we would welcome the feedback to recreate that in some way. I have a few other points but
Steve I’ll just (mumbles). – So just a comment on
what you were saying. I think one of the points that you raised may be missed somewhat in the traditional small molecule PK world, which is that binding event. And with carrier proteins
and the like for these types of molecules and
so people tend to spike and to do an extraction for their recovery rather than spiking, doing
some sort of equilibration, and then doing it. And I think with biomarkers
because of their complexity in interactions, so many different ways, we have to pay a little bit
more attention to that aspect of it and whether you’re
using the endogenous molecule or the well characterized
reference standard that you’re spiking in. We’ve got to make sure
that’s part of the process. – [Man] And actually what you just said, spike and recovery versus
spike in equilibrium and then recovery are two different things and they should be noted as such. So they will give different answers. But actually– – [Man] I just want to touch
on one of the bias points. But Steve, you go for it. – [Steve] One comment
on reference material and commutability. There was a deliberate
omission of commutability from the white paper out of
respect to our colleagues who are publishing that paper which is due very, very soon having
seen it pre-publication, had the opportunity to
review it, there’s a lot of good material in there
that would be appropriate but we did not want to
reproduce that or attempt to reproduce that and
that will be certainly a very valuable reference
to add into here. So when that is published and available we will amend this appropriately to go along with parts of that
that we think are necessary. Also having been at some
low level a part of that there are a number of
very pertinent suggestions about reference material and commutability of
reference material such as we can do in the biological field in there that we would of course adapt into this. – So I was a big
proponent of incorporating commutability but after
hearing the conversation for the last two days I’m
not so sure it’s as important because the way I had
envisioned this process again would be the biomarker
was qualified and then all you needed to do as
the next lab coming forward was show some degree of commutability but I don’t think that’s
what I’ve heard here the last two days. It’s really there’s an
absolute linkage between the test and the biomarker qualification. I don’t think, from what I’m
hearing we cannot differentiate those two, they’re inextricably linked. Am I wrong on this? – However maybe within
this qualification right it took eight years for kidney markers and I do not know what type
of issues you had to deal with in trying to bridge variety
of lots over the years and things like that. Even within one assay and
let’s just say one lab. – [Man] So commutability would
not necessarily be important in the first round of
qualification because assuming you’re doing this at one
facility, you have no need of a commutability concept to do that. It’s when you get beyond
that point and go to the next context of use or actually
put this into a production environment that that
would become important. And that makes perfect sense here. And although there is a
linkage between the test and the qualification of the
biomarker as we have heard, that is not a direct one to one link. And this is what happens
when we get to the next step of how do we qualify that
biomarker for a slightly different context of use or does someone
want to use that biomarker in a different facility
and how can they generate data on that biomarker? So those are questions which we’re not quite answering here, right? – And that last point
is something that we were discussing at lunch. Especially where if you
have different capture and detection reagents
than the original assay, how similar is what you’re
measuring at that point and so what hurdle do
you have to go through to use it in that context of use? – [Man] A more detailed
question is if you have the same capture and detection
reagents but somewhat of a different system and we
know that things are different in different places, what
do you have to do as well? I think everybody here realizes
that the knee jerk response is yeah you change your antibodies. Oh, we go back to square
one and start over and that’s probably the appropriate answer, but it is an iterative
process, a continuum, as to how you have to go back to do that and our colleagues at
the FDA have said that there’s likely to be
similarity but not concordance between the two methods or
the two facilities doing it. And so I don’t want to speak for Shashi but I’ve heard that remark
enough that we’re qualifying the biomarker and not the
test and in other situations you have to show that the test is suitable for the new context of use. – Or the same context of use. – [Man] Or the same context of use. And I would go back to the
principles in the paper and say here’s what we
think we need to demonstrate or redemonstrate or change how it happens. – [Woman] I can add a little to that. So we also describe the
performance characteristics of the assay that was
used for qualification of the biomarker so there
is some point of reference if you will to see what
was achieved by that. And to be fair we only
have a qualification of three clinical biomarkers, 10 of them are non-clinical biomarkers, and one of them is an imaging biomarker, so if you take that out, the other two are prognostic biomarkers, well, one was diagnostic, one prognostic. But for both of them we
had FDA cleared assays that were used. Even so, a different cut
off had to be determined for the intended context
of use so that was the main challenge. One challenge we had with
one of the submissions, plasma (mumbles) was to have
multiple FDA cleared assays and this was retrospective
data which was generated from others so it did
bring up a challenge. I’m not sure it addresses your problem but just trying to tell you our thinking when that happens. What we thought we
would do is we looked at it qualitatively: is it
going in the right direction? Addition of plasma
(mumbles), is it benefiting? So qualitatively some of
them were enzymologic assays and some were immunologic ones. It’s not easy to think of
bridging that kind of scenario. So we chose one as a confirmatory data set and we gave performance
characteristics of the assay that was used for that
confirmatory data set. So that’s how we have, it’s complex. Even in the world of FDA
cleared assays it’s complex so when it’s not I don’t understand. But we do give performance characteristics and what’s acceptable. – [Man] So I wanted to touch
on a couple of concerns you described and one, I think
Chad that you commented on and I think one you talked about Lauren. Which is the easiest one. For Lauren you described the
hard math or the locked in math of desirable,
minimal, and, what is it, desirable, minimal, and something. CVA, CVG, the bias and the imprecision. I just want to echo those
are really quite interesting clinical medicine in terms
of the desirable imprecision and desirable bias and the bias question’s gonna come back in a sec. But did the language make
sense in terms of why those are designed the way
they are, the false error on the two ends of a reference population? They’re designed around two
ends of a reference population. Did so many of you call
something normal or abnormal inaccurately based on
the analytic measure? Did that come across at all
in the documents or not? – Yeah I understood it. – [Man] You’re an ex
clinical chemist mate. (chuckles) – Well again I think one of the concerns that the panel brought
forward was maybe there was too much detail
and so you could go to the reference but then
again, I think you need access to it in a distilled
manner and it was there. I didn’t have a problem with it so I think you’d have to
ask the other panelists. – So I got feedback around that. The clarification there is not
so much what the content was, it was more about the placement and the extent of the content so it may be a stylistic
comment more than anything else is that embedding that level
of detail in the document in that fashion disrupted the flow of the these are the things
that you want to look at and how you might want
to approach it and so a lot of comments that I
heard over dinner last night and in informal conversations
was a lot of this material fits perfectly as an appendix
so those details are there and the main body of this you
say you need to consider this. There’s a really good way to
do this and it’s right here for your reference. So it wasn’t about the
feedback I heard was not about not understanding it, not
thinking that it wasn’t a good approach, it was two fold. One was being careful
because this is meant to lead to the basis of guidance
to make sure that it’s not interpreted as this is the
one and only way to do it and then to provide the
right level of detail in the main body and
give people the access to all the details in the appendix. – [Man] I would add just
from personal experience that is the easiest way to
do it because the other way is 120 reference population
in every single breakdown of the phenotype of a biomarker
so that is the simplest published form we know about
as a, it looks like an elephant but it’s only a small bite
of an elephant kinda thing. – [Woman] I was just gonna
make a comment related to how people reacted to
the positioning of that statistic section. The first question people
wanted to understand is we place it before validation
and how many people really operate their
methodologies by defining total allowable error, right? The first thing they do is to assess what the assay’s precision is, and potentially bias, even bias is looked at later
on and there are adequate number of biological samples
available to even understand whether you’re generating
a false positive or a false negative response. So I think the authors
of the paper really need to think about where is
the right place to put this and do people really
operate by thinking about how much precision can I allow
before even understanding what the biological variability is? So there’s a conflict in, you know, it looks like we are prescribing
that they should think about it in the assay
design phase when really they don’t even have the
adequate number of samples to assess variability. So I think that is also
sending some mixed messages. – [Man] Steve I’ll try and
address one piece of that. So you may recall Crystal City Six. I said 4-6-X is
arbitrary and capricious and I still believe that. I think we positioned something because this was a very hard
section to write I know ’cause I had to do it, thank you Steve. And we try to flavor it two ways. One is a priori
measurement and then one is how can I use my validation
statistics and get confidence in fold change? So it was hard to write. We tried to position it
up front as an either or or both construct and then
did really try and say but go and talk to a statistician
because you’re getting very, very deep into the weeds
of noise in biological signal and then signal over noise
with noisy measurement superimposed upon top of that
so that is very, very hard to put down in a few paragraphs. We said it there. I mean rip it a new one, I wrote it. Rip it apart, rewrite it, I don’t care. – I mean I commend you for
taking this approach to it. I think and you probably do
a poll here in this audience and you probably got a
lot of people saying yep, makes sense, I like the
way you step through it. You know what the dilemma
is though don’t you? It’s the great unwashed out there that are gonna be dealing
with this and that that is something that we’re really
gonna have to step them through it and I think
as long as the leading community groups such
as what the focus groups that we have in AAPS can
get this messaging out and help explain all of this
through the bioanalytical community as a whole I think
that is gonna be a requirement. Otherwise to put this out as it is it’ll cause some to
panic and it’ll cause… It can’t just be a
standalone document as it is without the followup
that I think it deserves. – [Man] Okay understood, but I want to move to the other points behind this. It was actually, in looking at bias, which tends to be a component of this. There’s a distinct
tendency towards looking at a false bias or
looking at no bias at all. Ignoring the bias. And by false bias I would
mean you are using this relative to an inappropriate
well not the best standard that you have so there is
some bias that you are not measuring unless you’re going
back to in vivo incurred samples to do this. So again all along that
space from ignoring the bias to having the correct
bias in your assay for an in vivo sample. Where can you set the error
goal to go back to the clinical conclusion that you want? And that really was maybe an
underemphasized part of this but that’s why this was there
because until we understand we’re pretty good at getting
the other half of this imprecision equation,
we’ve been doing that well for a long time but really
determining what the bias is and is this the correct
determination for bias and can we plot this into
the calculations and do that? Well that’s a major part
of this because that’s frequently ignored and if we
are doing this on an end point of a clinical context,
then we have to understand what that difference in
bias is or it may become very important to those patients. – [Man] I actually want
to leverage that and point a question back to you guys as experts. So we tussled in the absence
of a reference material or reference specimens
metrologically anchored, high order reference method,
we struggled with that to get you a bias number. Or how you get a bias number. So we posited, individual
QCs or calibrators at the end which you’re
used to doing in BMV. We also posited spike and
recovery as a surrogate for a bias number. What other ideas would you bring forth as a way of getting, trying
to get close to truth in the absence of truth? ‘Cause we struggled
with that one, honestly. – I thought you did it right. The three tiered approach
you had to assigning bias, that’s really the only way to
do it but in an exploratory setting those first two are irrelevant. We don’t have that information
so you’re left with the idea of spike and recovery,
all know what’s that– – [Man] Right, but understand
that then you’re missing those two pieces and that
bias number is as good as you can get but it’s not hard and fast, it’s not what you want to live with in the clinical arena. And that’s really what we’re saying here. – [Woman] Right but the
question is is there a continuum of how we can acquire this data? So you start off by defining
error, the total error with the supposition and then
you move on to acquiring bias as you are getting more
of the relevant reference standards available and
you eventually end up with your allowable error,
the total allowable error which takes into account
the bias component as well as the precision. So really, in the evolution, where we truly are, which is the state of development of the methodology, are we able to assess bias? So we just need to be realistic about it. – And I think that goes to
what was mentioned before is to take this to a practical approach that people can pick up and follow. And so having the context
and things that in the body of this is what you should be checking, that appendix that gives you
more, but interleaving it to things broadly. – [Man] So I’d like to see
it as one of the case studies because I’m already confused
about how I’m gonna apply this and I’ve been doing it for how many years. And I know people who
don’t have the experience to look at it and go what the heck. So it would be nice, we
were talking about the value of case studies, I think
a case study that shows exactly how to go
through this step by step and has actual numbers would
be a great deal of help and I can’t remember if
the ones we have in there have this already. Do they? Okay. – You’re asking for an example. – [Man] What I hear from
you two is that you want us to be more prescriptive in
the process of how this is to be applied–
– Right. – [Man] But the danger in
the case studies is always that those numbers tend to
become the prescriptive part of the process and so
we’ll have to make sure not to have that happen
because nowhere in here did we suggest that this
amount of error is appropriate or required at each and
every step of the way. That has to be individually determined. But I can see the general
application for a more well laid out process of how
to apply this sequentially to get to the end when you need it. – I think what Mark is
suggesting is sort of an example. – [Man] Yeah yeah yeah. – It could be mock numbers. – [Man] I just want a concrete
example, I want actual numbers filled in. You know when you guys
start conceptualizing well we’re gonna do–
– Algebra. X, Y.
– We’re gonna use this program over here
and that’s kinda the way I saw it the last time around. Didn’t Procter and Gamble have a program for calculating total error? And everybody was using it, and we were using it, but then no one understood exactly how to explain what they got out of it, and kind of all of this. And that was the case, and so I would just like, again, simple numbers, make ’em up if you want, I don’t know, but show the iterative process. – [Man] Okay. – [Man] I’ve decided at the very beginning on my assay, here’s what my bias needs to
be and here’s what I did next. And here’s what I did next
and show actual numbers so people can follow through. And if they want to, they’re
just gonna put their numbers in and hopefully they’ll get
to a place where you want them to be. – I mean I think I said
it earlier Yuri’s example should be a perfect one
to be able to do that. And at those different
points along the way compute the performance
standard when you knew what the cut point was
and I think it could be a really good example. – [Man] So I just have one little thing and then I’ll sit down, I promise. So the table of computed
CVI and CVG, I ask that we generate that as an Excel document where you can plug your numbers in and follow that experiment. Would that be good to share? It’s locked except where you need to put in your numbers and your alpha factor. Just to share that, and it just does all that calculation. – Any tools, I think, that can be shared with the community around any of the things that have been discussed would be valuable. – Okay.
warning, we have some people who would like to… – [Man] Just a quick aside,
were there any questions from online? Okay.
– Okay. – [Man] So did you guys see
he asked all these questions and then he goes, do you have any questions online? (laughing) Just two quick comments, okay guys? With regard to specificity
I want to make sure, again, to clarify this, and if it’s not clear in the paper we will do it. We’ve obviously put specificity as an important factor, one of the seven core factors, that’s one. Number two is that I want to make it clear, we will never, ever be able to be 100% sure about assay specificity for large molecules. For example, the antibodies we generate are usually raised against a recombinant fusion protein. The second thing here is, even if it’s against the endogenous protein, there are post-translational modifications that may happen. You have a different diet, you have an argument with your wife, and then you take the blood and you’re gonna use that too, so there are changes, right? So I want to make sure again
we have (mumbles) there about yes, go try to find, make
sure the assay’s specific, but the reality is we’re
never, ever 100% sure until 20 years from now
somebody else finds out that there were four
isoforms and not three. So that’s one. The second thing is with the parallelism. Points all well taken, Lauren, and if we’re not clear in the paper we gotta be more clear, but we imagined a tiered approach. We have put parallelism, not dilutional linearity... well, we’re gonna remove it then; it says parallelism, dilutional linearity, spike, okay, I don’t think so. What we need to do, parallelism is the one thing, of the seven key factors, probably the second thing I look at for my lot. Even at the early stages, to establish that. So basically, for what kind of sample we use, we have taken, as we said in the paper, a tiered approach. What we mean by that is, go use an endogenous sample exactly as you said it, but again, when the endogenous sample is not there you gotta go spike on top of it. We have also included in the paper that, remember, that’s the best you can do, you can sleep better at night, but that’s not a true representation. So when the sample is available,
add it to your report. So I want to make sure
it’s there, but given that you guys are providing the feedback, we’d probably better go back and provide some clarification. – Exactly, it was more about
that because that spiking experiment is not actually parallelism. So I think it would–
– Exactly. – It would help if it said: this is what parallelism is, and when you’re in a situation where you can’t actually do it, you can do this other experiment, which is really dilutional linearity on top of whatever endogenous analyte is there, as a hybrid approach. But with regard to selectivity, I didn’t see anything about parallelism informing selectivity for the endogenous analyte; I only saw the spike-recovery approach. So again, if that’s all you
can do, clearly articulating that would be helpful. – [Man] And the only
question for the panel. There are instances and I
think we saw one example, there are times that you can
actually stimulate a cell line right, get the endogenous
form, at least you think that’s the endogenous
form, and then you spike it into a proper matrix. Do you consider that in your laboratory a true parallelism study or not? – We consider it a close facsimile
but it’s not parallelism. – [Man] So still under those
conditions you try to go back and find a sample? – Once I have it I would, but I have to tell you, currently I don’t have any in-study samples like that. But I do have samples that we created in that way.
– And when we have samples of sufficient concentration
we would absolutely go back and do that again. – [Man] Great, thank you. – Okay, so we’re down to like one minute. – [Man] Yeah just a
comment ’cause I don’t know where the organizers are, but I did get a text that they’re having difficulty using the chat, so during the break if the organizers can just check, that may be one of the reasons why we’re not getting any questions. Over to you. – [Man] Thank you sir. So Steve, one of the
comments that you made about diluting out an endogenous specimen with a calibrated matrix surrogate, that was conceptually in the five-point admixing section in the document, but it was apparently not clear that you mix a low calibrator with a high sample as a five-point admixture, or even the reverse, depending upon what you have on hand. That wasn’t clear. That certainly was part of the flavoring there, again, subtlety I guess. Wasn’t clear? – No, I didn’t pick that up as clear. I think I could maybe
pick that up with you. – Right I just want to make,
sorry, make one comment to Afshan. Since we’ve been talking about
specificity, parallelism. I think many of us are on the same page. You did a great job
yesterday with those figures with the parallelism. Also showing how to
accept your parallelism. But I do think in the
paper, tweaking that wording so it doesn’t lead to misinterpretation. ’Cause as somebody said
10 years down the road you want to be using this paper. And there’ll be a fresh
batch of scientists who are using this paper. – [Man] You know guys
actually Amina and I, we were talking about this over the break. The challenge we face is that the parallelism section was this big and the stability section is this big. And it’s not that we’re putting more emphasis on one versus the other; we were challenged as to how we could go ahead and put this paper together to fill the gaps that we saw in certain areas. And we’re struggling with it
to be very frank with you. How much detail do we
want to put in there? I’m hoping that later on
as you guys were talking later in the afternoon we
provide guidance from everybody here as well as over the
phone as to what level of detail you guys want us to put in there and what is adequate, or do we just refer to a bunch of white papers? Even the document we’re talking about, it’s a valid point. For me, to begin, I had to go in there, first of all get permission
in order to get in, then I had somebody after
a few days tell me yes, we checked your credentials and we think you’re gonna add value to this society. So then I paid a fee, and then I had to go in and pay for, so, is it gonna
be that we’re gonna have a thousand people now go
to this one secretary, one admin that is set
for that organization to go out and qualify every
one of us to get the documents? So you see there are the
challenges we’re facing, right? And I think we’re gonna have
more discussion this afternoon. – Okay. So I’m gonna give John the last question, hopefully it’s quick, and
then we’ll leave it up to the committee to sort
out and solve that problem for everybody.
(laughing) – [John] Thanks Mark. Just a quick observation
on Afshan’s mention about using stimulated cells or even stimulated whole blood to create endogenous molecules. Certainly I use them for parallelism, but it doesn’t replace it; it’s definitely better than spiked dilutional linearity. I do use them a lot to create QC samples though, which is definitely better than spiked samples. Caution: don’t do your stability on that, because you will find there are a number of biomarkers where the stability is different.
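The hybrid experiment described in this exchange, dilutional linearity on top of endogenous analyte standing in for true parallelism, can be sketched as follows. The 80–120% recovery window and all numbers are assumed, illustrative values, not limits from the paper.

```python
def dilution_recovery(neat_conc, measured, dilution_factors, lo=80.0, hi=120.0):
    """Flag non-parallel behavior in a serial-dilution experiment.

    measured[i] is the concentration read back at dilution_factors[i];
    recovery compares the dilution-corrected result to the neat sample.
    The 80-120% window is an assumed, illustrative acceptance range.
    """
    results = []
    for df, m in zip(dilution_factors, measured):
        corrected = m * df                       # back-calculated concentration
        recovery = 100.0 * corrected / neat_conc
        results.append((df, recovery, lo <= recovery <= hi))
    return results

# Hypothetical endogenous-like sample measured neat (200) and at 1:2, 1:4, 1:8
for df, rec, ok in dilution_recovery(200.0, [98.0, 51.0, 19.0], [2, 4, 8]):
    print(f"1:{df} recovery {rec:.0f}% {'pass' if ok else 'fail'}")
```

In this made-up example the 1:8 point falls outside the window, the kind of low-end hook that a spiked-only experiment can miss.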
thank the organizers again for having us and thank the
panel for all of their work. (applauding) – Okay everybody thanks very much. Please assemble back here at 1:30, lunch is again on your own. We do have some maps for
nearby locations for lunch. If you’re joining us for
our working lunch however, if we could just kind of congregate here so we can all head upstairs
that would be great. (low murmuring)
Yes. – Okay welcome back from the break. Hope you all enjoyed your lunch. We’re gonna go ahead and get started on the next session. So in this session we’re
gonna focus on strategies to disseminate and ensure
uptake of this framework. The discussion will
include potential processes for providing feedback, as well as the development of a communication strategy
to socialize the framework to the broader community. As a summary of the immediate
next steps I’ll quickly run through that but
before that I’d like to ask the panelists for the
next session to come on up and go ahead and get seated. So just as a summary for
the immediate next steps for the framework, the working
group will be accepting comments through the end of
August on this framework. If you don’t have access
to the framework right now it is currently available on
the Duke-Margolis website. We have a page for this particular event and the actual framework is there. As I mentioned the working
group will be accepting edits to that so it’d be really
helpful as many of you know to make sure that as you’re
commenting and have feedback on this particular
framework to please do so with direct edits into the document. General comments are somewhat
helpful, but they make it really hard for the working
group to really distill that into what that means for
the actual framework. So direct edits into
the framework would be much appreciated. If you do have comments
I would say make them as specific as possible
and make sure you indicate exactly what would need
to change and where to make that change in the framework. Those kind of comments
and direct edits would be of most help to the working group. You can send all of that
back to the email address that we have which is the
[email protected] Later there will be a
specific email to go directly to the working group but
for now please use that. Between now and the end
of the calendar year the working group and the
writing group specifically will continue to meet. We’ll review and go
through all of the comments and edits that are submitted
and we’ll make changes as necessary to the actual document. So for this particular
session I think it would be really helpful for our
panelists and all of you to give us very specific
ideas on two things really. One is what additional groups
should we reach out to now to make sure that we’re
getting a broad range of feedback on this particular framework? As well as, once this thing is final, towards the end of the
year as I mentioned, what is the best strategy
to make sure that this framework is widely disseminated and what’s the best strategy
for encouraging adoption of what’s in the framework
and using the framework? So I appreciate comments
from the panel on that. With that I will go ahead
and turn it over to the panel and ask each of you to
introduce yourselves although you have multiple
times over the last two days. And give any thoughts that you have on these specific questions. – So I think I was the
one that was supposed to lead us off. So regarding the comments versus edits, my joke when we met during
lunch was comments will be ignored, edits will be considered. (laughing) So certainly as part of
the evidentiary criteria framework documents from the
workshop about a year ago being part of that writing group, comments are just really challenging. Especially for a group
effort where you’re working on the phone. So when you write a
comment saying unclear. What’s unclear, what
would you like to change? So again please, as
much as you can, provide specific edits, especially
for people that have suggested that the white paper
is missing certain sections. And then the other comment I’d make is around the dissemination of information. Certainly the two goals are
one, for this white paper to be as good as it can be. Certainly the working group
has done a tremendous job putting together a first draft
and certainly the discussions over the last two days
are gonna help that draft to be even better. Not everybody was able to attend today. Maybe people didn’t even
know this meeting existed. But there are groups
out there that can have a valuable input into this space because they’re end users if you will. And so to the extent
that you know of groups that would find value
or would use the information in this paper, please let
the working group know who those folks are so that
we can reach out to them and get feedback in the form of actual edits. That would be particularly
helpful from academic groups. They’re probably the least
represented here today and it’s certainly folks
that we engage with as part of BQ on a regular basis. Once the guidance is
finalized, not the guidance, I keep saying that ’cause
that’s where I come from. Once the white paper is
finalized, we’ll probably do something analogous to what we had done with the evidentiary criteria framework. Ultimately we’re gonna put it
out on the various websites: Duke-Margolis, Critical Path,
FDA, the various communication teams can put together an
opportunity to do some type of an announcement so that ultimately people know that it’s there. From my perspective at
the FDA and certainly as part of the BQ team, we
will be working internally to socialize these concepts
once they’re finalized with the staff that are
the end users on our side. So these would be folks that
go beyond the qualification development efforts as well
as our more general staff that are encountering
biomarker development in the IND space. Not that this would be a direct
application to that space but certainly it’d be something
for them to think about and it would be relevant. So those are the things
that we would be focusing on from an FDA perspective. – So my name is Joe
Menetski, I am the director of the biomarkers consortium at FNIH and deputy director of
the research partnerships group there. So my reason for being here and I guess the discussion topic is
very germane actually to much of what we’ve been talking about. In the consortium over the past year and not just in terms of the workshop but in terms of everything
that we do in the consortium in terms of how do we
communicate what we do to folks that would actually be able to use it? So that goes from we have
a really great marker for measuring kidney safety
which we heard about. How do we get that outside
of the project team? And in the past biomarker
consortium’s been going on for 10 years like CPath and it’s a… In the initial stages of
these projects you worry about getting the projects
done, get the science, and now I think as time goes
on and the area has matured we really have to start
thinking about the next step and making sure that this
stuff actually gets out and it doesn’t take 10 years
between the time that you start talking about a really good marker and it’s actually used all the time. So the one thing that I
did bring and I thought it would be instructive at
least to have a little bit of discussion around
was, it’s not that one. Somewhere in there! – [Man] These are my slides
from session three yesterday that got lost.
(laughing) – Here they are! I’m probably at the end
’cause that’s normally where I am. There we go, that was it, that was it. No, the next one. Alright so a year ago
the FNIH, the FDA, CPath was involved, there was
a lot of people involved, had a workshop, April, middle of April, and the whole point was
very similar to this except for maybe a
little broader which was what do we do, what kind
of evidence do you need to qualify a biomarker? And putting together a framework
to place that in context. Many of you I met there,
we were there Yuri, and Frank and others
were all part of that. And I thought it was instructive ’cause at that workshop there were
200 people in the room, more than half of whom were from industry. A large number were from the FDA. And there were a number
of others and so there was a starting base of a
large number of people who were actually there
and enthused and involved and just like here, the
conversations were just as active if not more so, at least as active. And so I thought it would be a little bit of a nice case study in a
different way of communication of the outcome and so the
workshop was in April. In October we finally got to
the okay let’s send this out, this is what we think it is. So we had the workshop just like this, we had many months of
reiterations afterwards to revise the document
because the workshop like this gave something to react to
and we needed to react to it. So in October it came out. We sent out an email to the
bunch of people who had gone and half of the people we sent it to opened it up and read it. So that’s really good, 50%
of the people that were in the room or mostly in the room, good. In December we were talking
about the little bit more broad view of sending
it out to different people. We did an FNIH eblast so
this is a communication term that I learned over the last year. And that was also placed the same time as we sent out things
from the FDA listserve, we sent out a bunch of
things through CPath, we kind of blanketed the
area and of those about 20% were opened so at least from the ones that were sent out from here. So that’s not as good. I think it’s good, it’s
actually, to be honest, it’s more than, it’s several
percentage points higher than industry standard
for sending out emails. But you know there it is. At about the same time we
started realizing that again, the communication was
important and we were maybe falling behind. We completely redid our
web pages and the website for the consortium and
this was part of it. And so in this case we
had at the same time we started being able
to measure who was going to the web page and
who was looking at this and we had 300 or so web
views on that day anyway and they lasted for a long time. So they were actually there. That’s really very high in
terms of industry standard, so I’m thinking, I’m a metrics guy. That’s the other thing you gotta understand. I may not be the analytical
biomarker scientist but I like metrics. So that’s like six times the norm. And then if we followed
that out you can see it was picked up at
another bunch of places and some of them came from the FDA site as well as others. One of the interesting things
that I want to point out about this was that we
didn’t directly talk to the regulatory affairs
professional society or RAPS but they picked up the
story and they actually had a pretty nice article on their website. So thank you RAPS. Did a very nice job. And then I also found a reference to the Myotonic Dystrophy Foundation. So those two were picked up
by doing nothing essentially but sending it out through the FDA. I think we can do better at that. I mean that’s two. We have it on the FDA
site, we have it on CPath, we have it on FNIH. These were kind of
where they started from, but I think we should be working harder at getting it into some
of the other sites. The other thing that we
did which is not on here because my communications
folks didn’t want me to publicize it too
much was we tried things like LinkedIn and Twitter
and things like that and that was, from my
point of view, pretty bad. It really did not go well
and we didn’t get a lot of retweets–
– What was the hashtag that you used?
(laughing) – So you know it might
not have been as specific as the one that’s being used today. But particularly like LinkedIn
I was a little surprised we didn’t get a lot of
reposts and things like that and I was just curious
like this is something that we all care about. I would assume a lot. Just for my own edification,
could you raise your hands: if I sent you a LinkedIn
note saying here’s something, how many of you would
actually repost that? Yeah. So maybe two. So that’s about right, yeah. (laughing) How many would actually
know how to repost? No don’t say.
(laughing) I had to be taught. I actually had to go back
and find out how to do that. So at any rate that was
something that you know if we try to do that again
that we would have to approach that a different
way so I think from this point of view I look at this and I say we’re doing it and maybe
we should be doing better. We should be taking this on as a much stronger and much bigger part of what our daily life is, and
you know this communication process actually depends
really on everyone and I think in the past we’ve
all had the situation of well, FNIH will handle the communications. Right, that’s their
job, that’s their thing. And I think what we really
need to get to a point where it’s we all do this. This is everyone takes a very active role in reposting and in making
sure that people are out there listening because
otherwise it’s not consensus. It’s not consensus until
it’s absolutely clear that the groups who were
involved in it all buy in and are all promoting. And so that may sound a
little bit like I’m trying to convert you but I think
we need to do this. So that was my perspective
and I’ll pass it off to Lisa. – [Man] Oh back, that’s right. Got a quick glimpse of (mumbles). I can do it for you. – [Lisa] Yeah, you can do it for me. – [Man] Sure. – I can’t hold two things
and talk at the same time. (laughing) – [Man] I’ll help you. – So as was already
noted, there probably are not that many people representing academia in the audience today. If you consider yourself
primarily an academician, raise your hand. Okay so just a handful. So part of the reason I’m here is that the community I deal with
are the academicians. And so I wanted to kind
of give you a perspective for where they see this
whole biomarker qualification effort and what role they can play and how we could encourage
them to do things in a way that would be more helpful
to some of these big qualification efforts. And I guess the first thing
I would observe is that I see many people in academia
use the term qualification, they use the term validation,
they use other terms related to biomarkers like surrogate. They have no clue what
they’re talking about. Okay? It’s all sort of the same
mush of things to them. So I think that’s an important
role that we can play to make them understand
what is the essence of a qualification program
and how is that different from validation in the
many different forms of validation we have,
and how is that different from getting an FDA device
cleared or approved? So it’s all, I’ll tell
you it’s very, very muddy in the minds of many of
the people in academia. Next. But we should point out
that people in academia still are making very
important contributions to drug discovery and
identification of biomarkers that might help us to
better use those drugs, target them to the right patients. And one thing we have found
to be a very big challenge is that the integration of laboratory and clinical sciences is
suboptimal, to put it lightly, in many academic institutions. There are people who are
doing the clinical trials, there are people who
are doing the biomarker laboratory work, you might
have a clinical laboratory in a big institution, and those three parties do not necessarily talk well to one another. And so we have found as
we have gotten more into targeted therapies for
which there might be a biomarker used say in an
early phase one slash two kind of trial where they’re
gonna enrich on a biomarker for entry into the trial. We began to realize that some
of these biomarker assays had like no clinical validity
assessment ever done on them and in fact some of these
had maybe never even been performed, so a few years ago we instituted a process called a biomarker review committee, where we had a little checklist and said, tell us first of all how you do your assay, and
tell us a couple of basic pieces of information relating to analytical performance, tell us what reproducibility
studies you’ve done. Have you assessed the
sensitivity or specificity? Reproducibility across different readers, different labs, tell us what you’ve done. We’re not expecting you will
have done all of these things, but we were sort of hoping
they would have done a couple of them. And so we started using
these forms and were, I have to say, horrified at the answers we were getting back on these forms and I have some funny
examples that I’ve given to some of you. I won’t repeat all of
those ’cause I’m not trying to make fun of people but
it just really told us that people do not even understand
some of the basic terms. So what was happening is your clinical, often a junior clinical
person was put in charge of an early phase trial and they were told we gotta put this
biomarker in and they’d try and go find someone in
a lab who was willing to talk to them about doing
the assay but that line of communication and getting
the time of that person who had the experience
in assay development and assay analytical
performance evaluation was really tough for the clinical people. So by instituting this new review we were kind of forcing
a marriage between them and it wasn’t always a happy marriage and so I think that we’re
continuing to do more education to say you
really do need to have this conversation and we don’t want you to do your early phase trial
enriching on a biomarker that nobody else will
ever be able to reproduce. And we’re not asking a lot here. We’ve heard a lot about the
nice, full analytical validation process, we’re asking for
really basic stuff like can you do 10 samples, split
’em in half and run ’em on two different days? And what’s the concordance? I mean this is the
level we’re starting at. And so it’s really basic
stuff and we need the help of all of you to get the
word out so people understand what are some really minimal
criteria you need to do to be sure that it’s worth
even doing this biomarker. And in our early phase
trials we’re talking about patients with advanced
disease, you’re doing a core biopsy in most
cases to get that sample. It’s unethical to waste those samples, put the patient through
that, to run biomarker assays that give you crap results. So we really need to take very seriously what we’re doing here. So somebody I think it
was the gentleman here had mentioned about the
attention we’ve put on rigor and reproducibility in
research and NIH has been put on the hot seat and
said what are you doing to make sure that all of these
grants that you are funding using taxpayers’ money
are resulting in research that is usable and reproducible? So about a year ago, I guess it
was after many deliberations amongst the leadership of
NIH they actually put out some guidelines on the
grants website saying what they’re going to be
doing and how they’re altering their grants review process
to hopefully improve the reproducibility of the
research that we are paying for. And that’s what you see there. This is pulled right off
of the grants website and just a couple of things
that are of particular relevance to what we’ve been talking about over the last two days. First of all, and this is another hat I wear, I’m very much into reporting
guidelines for research. So there are many papers you can pick up and even good journals and
you read through the paper and the method section
and you still are kind of scratching your head like
well, how did they really do this assay? And how do I know that that
antibody they used in their assay was any kind of a decent antibody? And so we’ve been giving
these instructions to our grant reviewers
that they need to really look carefully at the
background information provided in the grant to assess how
good the evidence really is and people have different
levels of grantsmanship skill and some can write volumes of
stuff that sounds beautiful but when you sit down and
actually say what did they say, there’s not a lot there. So we’re just being a
little more prescriptive about the kinds of things we want to see. And so some more specific
things are rigorous experimental design, so
if you’re doing a study, early drug study, maybe
even a preclinical study, how did you decide you
were gonna do five mice in each group? And tell us more about how you
picked some of the reagents for your assays. Did you validate them? If it’s a cell line did you validate it, if it’s an antibody, did
you actually validate it before you used it in your experiment? Because it’s the old
garbage in, garbage out. If you haven’t made sure you’re working with good basic materials, and
you haven’t sized your study to actually be able to answer a question or designed it in a way that avoids bias so that you can answer a
question, then we don’t want to proceed with this but
often that information is not given so again having more
information we’re requesting to hopefully do a better
and more fair evaluation of the research proposal. And I think do I have one
more slide or was that it? Okay. So the goals of all of
this effort and I think what we need to accomplish
here is to raise awareness of the importance and the impact of assay analytical performance. And I get to see some
of the crash and burns that you guys never get to see. I mean I’ve seen the grant
where they had an assay and somebody switched a
reagent halfway through and the investigators
contacted us and say well we just did a plot over
time of our assay results and it’s a totally different assay as of January 2015. And I’ve actually seen people give back their grant money saying I’m
just throwing in the towel. So I get to see that stuff
that you guys never get to see. We do need to educate
all the stakeholders. There really is a big need. Even in academic institutions
they have terrific clinical labs, terrific
research labs, but the research people don’t necessarily
know how to put an assay through all the paces that you
guys have been talking about. And even if they did know,
they frankly don’t want to spend their time doing that. People who are running
the clinical labs know that they have to meet CLIA requirements, and they have more attention paid to that, but it’s often the research
labs that academic clinical people are working with who are doing an early phase trial so
we need to have better communication going there. And so we do need to
incentivize this because it takes time and it takes resources. And some of the feedback
we get when we ask for our biomarker checklist
to be filled out is like I dunno where I’m gonna
get these specimens, who’s gonna pay for me to do these assays, and it’s a big issue. So we as a funding institution
also have to think about how we can not only
incentivize but also provide resources so that people
can do some of this important work. And finally, I think our
goal needs to be to share this biomarker data that we’re generating. Now data are only going to
be valuable if, number one, they’re any good, and
number two, you need to have accompanying those data
adequate documentation of the assays, the reagents, et cetera, preanalytical factors,
so that we can judge if the data are any good. And any of you who have been involved in these biomarker
qualification projects know when you’re off and
going out searching for any data you can find that might be useful for your qualification
effort and it would be a real shame to not make use of these data that are being generated in
academia and other places to really bring it all
together and even if we see differences in the data we
learn from the differences. But observing differences
doesn’t do us any good if we don’t have other
variables recorded with it to understand what might be responsible for those differences. So it’s really a matter of
trying to do our science in a more rigorous way,
documenting in a better way what we’ve done, what we’ve used, and hopefully we’re all
gonna benefit from that, not just academia but
the entire community. Thank you. – Great, thank you. And Lisa I quote you often
’cause as part of conversations we’ve had in the past when
the outside party says look you’re trying to tell
me what to do, Lisa says no, I’m not trying to tell you
what to do, I’m trying to give you the tools to tell us what you did. So.
(laughing) – We get that response a lot
with our reporting guidelines. Say I don’t like reporting guidelines, I don’t want to have to
follow what somebody else is telling me to do and I
give exactly the response that Chris said. You can do whatever you
want to do and people doing different things
is actually a good thing ’cause we can learn from it. But if you didn’t tell me how you did it it’s not useful to me. – [Greg] Great, thank you. Martha? – I’m Martha Brumfield, CEO
of Critical Path Institute. And really appreciate all
the comments Lisa made because we deal with a
lot of educational aspects within CPath as well. So, the slide come up? – [Greg] You want the slide to pop up? (laughing) – It would help but it’s not–
– Steve you figured this out. – That’s okay. – [Man] What’s going on? – [Man] Next slide please. – Well not to waste time, I’ll
go ahead and start talking. – [Man] Is that the first one? – No back up, back up one. – [Man] I did hit it a number of times. – It’s okay. Go forward one.
– Okay. – Sorry. One thing that we are
poised to do because of the nature of our work is to help get the word out about the broader framework document and
also this white paper once it’s ready to be
further disseminated. We run a lot of consortia and I listed in the first bullet just the
number of biomarker programs that we’re working on as
of today and that’s not the total number of biomarkers
but the number of programs. So it’s Alzheimer’s disease,
Parkinson’s disease, Duchenne, safety biomarkers,
tuberculosis and on and on. And we have found whenever
we start a new consortium typically they’re constituted
of scientists from academia, scientists from the regulated
pharmaceutical industry, occasionally from device
companies, government agencies, FDA and EMA also very much
support through a liaison role, but we probably spend
the first year educating that whole group of people
about what qualification is and what we’re going to need
to do to get a biomarker across the line. So it’s educating them
about the importance of clinical validation but
certainly with the assay validation and I would say
we have found the amount of time it takes to get a
biomarker across that goal line really hinges on the quality of that data and how much validation
has been done up front. Just in the last month a
new consortium we started we’ve been trying to get
access to a data set for which we were told it’s perfect,
it’s got all the biomarkers you need, it’s longitudinal
data and we get the data and it’s not even half of what the individual said was there. So and I don’t mean to
imply people are trying to misrepresent information. There’s a fundamental
lack of understanding of what kind of data one
needs to make a regulatory decision as opposed to
publish in a medical journal. So I think we all have
a lot more work to do and as others have said,
we need your help to get the word out to your communities. We certainly will work
with our communities, we will disseminate information
throughout all of our consortia and even to
groups that we choose not to work with. Multiple times in the course of a year an investigator will approach one of us and claim I’ve got the best
biomarker in Alzheimer’s disease you’ve ever seen,
I’ve done it in 10 rats. How do I qualify this? So this is where the conversation becomes is it regulatory ready,
a term John-Michael used earlier, it’s a term we use a lot. Because we all know
there’s a very long journey from discovery in the laboratory looking at animals and then
translating that into humans. So documents like what the
working group has prepared and we’ve been discussing
the last couple of days are really gonna make a
difference in being able to get biomarkers and hopefully
other kinds of drug development tools and novel
methodologies out there that ultimately will
derisk decision making, both for those developing
the drugs but also for the regulators that need to ultimately make a benefit risk decision. So I’ll transition to next slide. I just wanted to point out something, I know Shavri spoke to this yesterday and I think Chris has
mentioned it, but it came up in the discussion earlier
today about the importance of having real examples
out there where others can learn from. So under 21st Century Cures
with this new provision for transparency, this is
gonna happen much faster than it has in the past. With the biomarkers that
have been qualified, the information is made
available at the end of the six or 10 year process. But now information is
gonna be made available much earlier in the process. Now I can’t speak to exactly
how much but for example at step two where a qualification plan would be submitted to the
agency, it’s going to articulate what does an analytical
validation look like? So there’s going to be at least reference to what your goals are and
that’s gonna be informative. And it’s certainly my hope that
as more of this information becomes public, we as a
community are going to learn and we’ll get better at this
so that we can move things through the cycle much faster. So it provides an opportunity to learn and another of my personal
hopes is it provides an opportunity for us to
sort of combine forces and work more collaboratively
because the last thing that the agency needs
or any of us needs is five different groups working
on the same biomarker and coming up with five
different ways to approach that biomarker so we
really need to think about enhancing our collaboration. I think there’s one more slide. And I’m just gonna summarize
what Greg started with is that for now we want
comments, particularly edits, very specific edits sent
to [email protected] We’ll be changing this
to another email address shortly but you don't
have to worry about that. If you send them here
they’ll get forwarded. I’d like your input on are
there other professional societies that we should be circulating the document to now or other
networks that some of us may have where we can reach
a broader group of folks that have a vested interest
in what we’re discussing here? We needed to set a
timeframe so we’re gonna say by end of August we would
like comments and edits on this white paper. The writing group,
you’ve done a ton of work but you’ve got a lot more
to do, I’m sure you know. (laughing) So they will decide at
what pace they’re gonna be able to work but they
will review the comments and then I think it will
be up to them to decide how best to address comments
and get another version or a final version or whatever
it’s gonna be so too soon to tell you exactly what
that will look like. But certainly we want to
make sure that everyone that’s participated in this
workshop both here in person and those who are online
that this comes back to you so you can see how it’s been refreshed and incorporates the amazing discussion that we’ve all had here today. So I think with that I will stop. – Okay great, thanks
for all of the comments and I just want to reiterate
the point that although that email might be
changing going forward, the actual website on
the Duke-Margolis site that captures the video
and discussion from the last two days will
stay up for I think we said not quite eternity,
(laughing) but for a really long
time, much longer than will probably be needed. – As long as Duke is in
existence, the website will exist. – That’s what we decided. So that website will stay
up, the draft document will be on there
accepting direct edits and very specific comments
through the end of August. So I do want to ask, we
have some time left on this and so to the group, questions
about which groups or networks should we be reaching out to and how to drive adoption? But one just quick question
to the panel if you can just answer, one or two of
you, in like one minute, is that Joe, despite all
of your efforts on Twitter and LinkedIn, you’ve shown
that it’s really hard to get people to look
at things and to sort of actually adopt things, and
Lisa you mentioned just how hard it is to get academic
researchers to use the tools that actually can help
them be more transparent. But to Martha’s point
that this is really about derisking decision making
in drug development so what can be done to
help encourage adoption of this framework in
like a few words only? (laughing) – Make it readable for
the academic biologist. – Okay. – [Man] Can I ask just a really simple question? Can you make it available in Word format so that everybody does
it in track changes mode? That's gonna make your life so much easier than setting it up as PDF. – Okay. We'll make sure it's in that mode, yeah.
add is that everyone here has to be involved. – Yeah, not just the individuals
but the organizations– – Yes. You have to be actively reaching out and pushing the envelope
of maybe some scientists' social skills. – [Man] Procedurally I
would strongly recommend that you include line
numbers in the draft version that you put out. I’d hesitate about putting
track changes in Word depending on the number of
comments you anticipate. It would be really hard to merge. But line numbers as a PDF
and make people comment on what line, this works
in standards organizations and other places. Practical recommendation.
to your question about how do we disseminate the information. So the one thing you
notice here is there are a number of members from
the American Association of Pharmaceutical Scientists. Many of them are technical
experts so we can definitely reassure you that we will take the message to the technical experts,
but what we do need to have in the mix are the academic
and the medical experts. We cannot ignore them. So my question is and
it’s just a suggestion, like Michael J. Fox Foundation. Which really funds a lot
of biomarker initiatives and efforts so is there a
possibility like we have members from the Huntington’s
Disease Foundation in the audience, but can we
take it and provide it to them when it’s in near final form to say whenever you are providing a
grant, can you provide this as a guidance to all those people who are receiving grants? Similarly to the Alzheimer’s efforts. I worked in so many different
consortia for Alzheimer's and fortunately for us we've
had very strong clinical partners, nevertheless
it's taken such a long time for any of these
biomarkers to see the light of day and
the issues are all to do with technologies and how
people have approached assay validation and clinical
validation so I think that is something for us
to consider with regard to how do you disseminate the information, take it to the American
Society of Neurology? Just ask them to post it on their website for anybody who’s embarking
on research activities. Would you consider
following these procedures because it'll simplify your life later? – It's not a secret society
with a special handshake. (laughing) So once this comes public,
especially once it’s near the final form please,
share frequently and often. Put it in your holiday gift certificates, whatever you wanna do. But certainly get the word
out and all of those groups that do work with
especially patient advocacy, many of which raise funds
for biomarker development, it’s important for them to know not only they want to develop the
biomarker but then what they're facing so that they have
realistic expectations. – And when I’m talking
about pushing out the work that we’re doing here at this workshop. We see a number of biomarker
projects each year. And I have to remind all of those teams every time I talk to them
almost to go to the BEST toolbox and use the correct
name for the biomarker. Is it a response marker, is it… Because everybody has
these things in their head and they’re all different
so it’s not just, I mean, it’s a package. It’s this, validation,
it’ll be the statistics, it’ll be the framework
that's already out there, it's the BEST glossary. All of that has to be
reinforced by the groups that are here. So yeah put it in your Christmas list. – If I could add, I’ll
certainly make the commitment on the behalf of CPath
that we will push this out to members of all our consortia
and Michael J. Fox is one as is CHDI so we will, we
can’t tell them they should use it, but we can tell
them this represents the best thinking of this
collective group of experts and that we want their comments
and they should be aware this is the direction we’re moving. So we can certainly do that. – [Man] So you guys are
speaking my language right now because this is why I’m arguing for having some transition for this document, making it accessible,
and I’d love to know now what you guys think over there
because this doesn’t sound like what you’re wanting
this document to be but again I don’t want it to
be all validation all the time but my whole thought process
was the fact that I do work with academics, I do work with
kinda the small institutes. I do work with all sorts of
people who aren’t experts and I don’t want it to be
all validation all the time but I do need some
transition for the people who aren’t big pharma or big
biotech to use this document and figure out how it applies to them. And it can’t be the very end,
in FDA submission, it can’t. I know it has to be somewhere before that where they jump into the
document and they can access it. – Question over there. – [Man] I think it was
unanimous, we want you to stand in line, Mark. (laughing) So philosophical question,
guys, I don’t like to get into this philosophical discussion. I think Lisa this more
toward your comment, I’d like to get your input
and the rest of the panel’s. Look we all know the
basic research science, the brain, the power behind
a lot of the drug development comes from academia. There are pros and cons
of what academia can do. Let me give you some examples,
you pointed to something. I inherit as a CRO clinical trial samples that have gone through
nine freeze thaw cycles because the lab kept running
out, I have to (mumbles). The samples are missing. I send academic lab water
and they report one mg per ml of (mumbles). Okay, I go in, I was a
graduate student myself and we know you get that
pipette and you basically calibrate it when it
comes, and 10 years later after five graduate students
drop it on the floor you throw it in the trash. That was the level of calibration
of the pipette, right? So I want to make sure
that again we come back to planet earth and
not in a utopian world. This document, right, we
want to get valuable input in order to make a decision as
to what to do and move forward. So the stakeholders. I get it, this is again,
if you are again suggesting this paper, we’re gonna send it out, and we’re gonna go ahead
and through this paper and even if it’s finalized
we’re gonna train our academic colleagues okay
on what is the difference between how to develop a
biomarker, what was the difference between precision accuracy,
I don’t think that’s the way to go. And I have some ideas, if you
want we can talk over a beer of what NIH need to do
and hold them accountable for some training program, training files, I think even grants. But so for this one though,
again, and this is again we’re talking about
bioanalytical assay validation for clinical biomarker qualification. So to me, no disrespect
to any of the academics over the phone, this needs to be done, we have to finally get
some of these consortiums, we gotta go, this gotta go
to every pharma, biotech, CRO in the world as much as we
can see and get their input. For somebody who still does
not know the difference between (mumbles) accuracy,
even if they’re expert in oncology and Alzheimer’s
and whatever else, I’m not sure what value it will add. Okay so that’s just my count. – I still think you and
I are having a disconnect about what we mean when we
say biomarker qualification. – [Man] Okay. – Because we have a lot of
academics that engage with us for which we can qualify a context of use if they have the data and
it’s not to the extent that there’s a marketable assay
or anything to that caliber and that’s okay for us
depending on the context of use. – [Man] Right but what
that means here is that 99% of FDA’s time is spent
on correcting and teaching again some of these laboratories, right, versus using those resources
for the biomarker work that comes out of the
pharma and biotech that is a lot more tidied up. I’m not saying, no? – No, I don’t disagree. What I’m saying is if we’re
going to hold early phase, non risk, no change to
patients in the clinical trial, and we’re gonna hold them
to a level of standard for analytical validation
that would be the same as a CDRH clearance or
approval, we will never learn anything in the science
space because it’ll never get incorporated in
clinical trials where we can collect data to learn more. That’s my point. – [Man] Point’s well taken but again. – So you guys, we do have
like five minutes left. So important point, but let’s– – [Man] This is another
four or five beers. – Over here. – [Man] I just have a very
simple point that I think that the hashtag help us
promote biomarkers for biomarker qualification
would be a good one. That was a joke. (laughing) I saw your face. So I’ve got it qualified. No the real point is
with all these kind of communication tools, we left out one. And the one is the peer
reviewed publications. Because we need to get it in literature. We had issues getting our
one in but we need to really concentrate on that because
as you said who posts something on LinkedIn and I said, I can post something on LinkedIn? (laughing) So that’s you have the audience. If that’s your metrics
it’s really really bad. But who read a paper? Everybody, right? So we need to get it into the literature, and that’s it, I’m resting my case. – There is a comment
behind you, why don’t we? – [Man] Can I just say the
end goal of the white paper was to publish it. – Okay, next please. – [Jen] Hi, I’m Jen Madsen,
I’m with Arnold (mumbles). A law firm, I’m not a lawyer. I was thinking about this a
little bit just in context of having done a lot of
work in the clinical lab in pathology world that you’ve got sort of two totally different
populations of people. The kids doing research in graduate school in the research lab and
the kids that are going to get the MT technologist
certification from the ASCP are not the same people. And they don’t have the
same training and so I think that some of that
is in order to kind of get the research lab and the
clinical lab to sort of understand each other better
you’ve got to sort of start with the thought that we’re
not even talking about the same populations of college students. But then further from that
I think when you look at the academic setting and where papers need to be published, the Archives
of Pathology and Laboratory Medicine, engage with the Association
of Pathology Chairs which represents all the chairs of all the departments in the country.
oncology and pathology and what have you that
that would probably help a good bit actually. ‘Cause I just keep finding that drug folks and device folks and
diagnostics folks and lab folks just don’t all speak the same language and that seems to come up at every one of these
meetings that I attend. – Okay, alright, thank you. So another comment? – [Cindy] Hi, I’m Cindy
Lancaster, I work in regulatory affairs at AstraZeneca. I've worked with the Critical Path
Institute on some projects in the past and TransCelerate
and so I'd like to offer up. I don't have as technical a
background as some of the folks in here that do validation
work, but I’ve spent a lot of time working
with development folks who don’t necessarily know about device and biomarker qualifications. And part of what you could
probably help them with is having a community of
practice to kind of implement this once you get it published. I certainly agree with all the comments that have been made about
things that would be helpful, but a lot of times people
don’t know what they don’t know until they need it and so
the other thing you could do is publish a list of low
hanging fruit because I think FDA said that
they've had two examples of prognostic biomarkers. I read the BEST paper, got
a little bit of education. My background. I cut my teeth on hypertension
and heart failure studies. Long time ago and so you know
I saw blood pressure there. Maybe people need just
a little bit of guidance about aha, here’s a list
of low hanging fruit. If I’m working in this
area, who do I contact to get a little bit of education? Now it’s a lot of work to
have a community of practice. But if you’re gonna publish
you need to do something to help people implement because again I’ll go back
to people don’t necessarily know what they don’t know until
they’re in over their heads and if you want basic
data from the scientists, they have to know how to
get it to you and they need to know that they have what you're
talking about. So that would be my thoughts.
the issue of will the paper that’s already been drafted
serve the academic community? And I agree, I don’t think
that your average academic, unless they're really attuned
to biomarker qualifications is gonna sit down and digest
that paper word for word. But what we can do is put
it out there as this is a destination where some of the biomarkers you’re working on could wind
up and if we appropriately reference things, have
a lot of references to more basic papers that explain
some of the fundamentals and there are such ones, you
know, I could tell you some for like the statistical concepts
and analytical validation that they might at least
find that valuable. So write it in such a way so
that they get the concept, they can say okay, I’m not there yet, but oh
gee look at all these things I’m gonna have to worry
about and here are a couple of references that might
give me the background that I’m currently missing to understand this situation better. – [Man] So just a 30 second response. There are great white papers
out there that will do a better job serving
that audience for exactly the purpose you mentioned. If an academic lab doing
biomarker work is not aware of the Jean Lee paper, we got a problem. That lab gotta go start
from scratch and start moving forward. I get it. I don’t agree with it. – I’ve not read Jean
Lee’s paper, but I think one of the biomarker
qualification advantages is that you can put it in that whole
framework of understanding the big picture and so it’s
not the same as reading a paper, I don’t know
what Jean’s paper says, but if it’s focusing
primarily on the analytical validation part it would be
extremely useful for that. But is it putting it in
this bigger framework of understanding your
clinical context and all those other things that you
need to be thinking about simultaneously to gather
the right evidence? – Great, so that about
wraps up this session but I know the very specific
question that we had was ideas on additional
groups to sort of reach out to and bring in comments so
I think throughout some of the discussions I heard
quite a few sort of societies like American Society of
Neurology, Michael J. Fox Foundation, so societies and
patient groups, et cetera. I also heard industry,
reaching out more to industry and getting feedback from
those groups too which I think is a good– – Howard Hughes. – Howard Hughes, okay. – I think Howard Hughes
would be a great institute for the clinical community that
are interested in research. Especially since many of
them become academics. – Yes so the other is pharma. The Pharmaceutical
Manufacturers Association. And Bio. DIA.
– DIA has a broad reach. – And there are very
specific people in that. (mumbling) – There we go. Okay, now we're flowing. (laughing) – Yeah NCATS is a good. – [Man] Medicine Initiative. – Okay that's right. – [Man] GCCEBF. JBF. – Okay great. Also line items, line numbers
instead of track changes is really gonna be helpful. We’ll make sure that as I
mentioned the web address and the email is broadly
communicated too so with that I’d like to thank the
panelists for another great discussion and all of your for comments. (applauding) We’ll move now directly into
our next and final session. Thank you. Does Mark need this? (low murmuring) – [Mark] I’ll use this one for
now unless you want to stand. I’m thinking maybe you
can sit here and I’ll… – [Shavri] I’m actually
good with standing. – [Mark] Okay. – [Shavri] Yeah, I’m good with standing. We can kinda share. – Alright if all of our
panelists for the final session would come on up, we’re
gonna go right into it. So I want to add my thanks to… Are we good here? Good here, can you all hear me okay? Alright. I wanna add my thanks
to Greg’s to all of you who have contributed to the efforts over the last two days
and of course everything that went into the white paper
in preparation before then. It’s great to have such
a distinguished group and it’s great to see people
so committed to sticking with this effort through
two full days of meetings. For this session Shavri
and I are going to moderate a review of some of the
major themes and topics that came up over the
course of the last couple of days and have a chance to talk about some of the major takeaways,
prioritize some of the next steps towards the
completion of the white paper and how it fits into this
very important pathway towards better evidence
related to biomarkers. So we’re gonna try to make
this a back and forth session. Our panel has had a chance to
reflect over the lunch break on some of the themes that
they thought were most prominent and so we’re
gonna start with that but we very much want the
participation from all of you and people who are still with us online. We’ve had a lot of participants online over the last couple of days. So Shavri, maybe I’ll
turn to you first for any opening comments and we’re
gonna pull up a slide in just a second too that
describes some of these main takeaway themes. – Sure. Thank you, and I wanted to
echo our thanks to everyone, especially the writing group
who’s done a tremendous job of making this something
tangible for us to react to. I do want to make one clarification. I’ve heard we use the word guidance and we kinda throw different words around. So there’s big G Guidance
which is when us at FDA we put things out, we
think of big G Guidance, there’s little G guidance
and we kind of confuse those terms. This white paper that
we’re developing is not big G Guidance. Okay? I think it is an important
component of a variety of documents that we will take a look at and think about as we
develop big G Guidance. So I want to make sure
people are clear about that and we're not confusing terminologies. But I cannot stress
enough how critical it is that the timing on this is
so important as it informs where we’re going to need to go. Biomarker qualification is
a big deal at this point. Not only because of 21st
Century Cures legislation, but also because of the
upcoming hopeful PDUFA IV reauthorization where it
clearly speaks to the need to develop guidance for
evidentiary considerations for qualification. So I’d like us to do
introductions maybe and then we’ll go from there. – Yeah if you all wanna go ahead and do the introductions yourselves. – Sure, so my name’s Ross Grant, LabCorp. For those of you, this is the
face that you’ve been hearing, sorry about that. I’ve been fortunate enough
to be at the other side of this conversation
in diagnostic medicine. So 15 years actually
validating, deploying, transferring FDA-approved tests and also LDTs in the direct care of patients. So obviously I'm coming
from a different angle back to this interesting conundrum. I’m personally thrilled
with the positive feedback and I’m very, very thrilled
with the engagement that we’ve had over the last two days. But I’m also a little
perturbed and so I’m gonna use the words of…
(chuckling) You don’t know what’s coming
yet and you’re still chuckling. This is good. So galaxy’s greatest
philosopher, Master Yoda, and he said, “Fear leads to
anger, anger leads to hate, “and hate leads to suffering. “Fear is the path to the dark side.” – [Steve] Always with you bias it is. – You got that right.
(laughing) – [Man] Now was that rehearsed? Yes. – So what I want to communicate
is there should not be… Worthy not being there.
(laughing) There is lots of support
and lots of reaching back across a boundary and
I would just recommend that if you know clinical chemists, we are talking a different
language to some in the room but a very common language
in that context so feel free to talk to a good old clinical
chemist or even a young one ’cause they’re a chatty bunch. And it’s been a pleasure so far. – My name is Yoda and I’m short and green and have large ears.
(laughing) Steve Piccoli, I have a long
history in clinical chemistry, diagnostic and pharmaceutical industries, and actually even a stint
at the FDA on a medical devices panel so I have seen
all sides of this argument that exist. I’ve spent much of my
career designing tests to do very specific
things in the diagnostic or drug development continuum. Sorry. – No problem. My name is Sue-Jane Wang, I’m at the FDA for more than 20 years. Prior to joining the FDA I was at the Medical Genetic Institute
so I was doing a lot of (mumbles) stuff. Coming to the FDA I then do
a lot more on the clinical trials, I see both
sides from the academia, semi-academia to regulatory
but not industry. But through this discussion
today I finally realized that when I see a phase one
protocol and I will see a statement that says oh the
assays are fully validated. But there is no number
or anything like that. But we don’t question further
that it was a phase one. In terms of the biomarker
qualification program, I joined the group and start
from the very beginning until now and I am pretty happy
that the statistical team, although I heard a lot
of statistician comments during the meeting, but I’m
pretty sure our stat team really contributed a whole
lot to get the three critical biomarkers qualified. So for that I hope to finally
hear more statisticians’ participation when we
receive any biomarker qualification submissions. For today’s takeaway for me, because this is for
biomarker qualification rather than for drug approval, I thought I heard from the various parties and I come down to this
one takeaway message. I hear that the analytical
validation is a continuous process so at the high
level I think it may be good to have an analytical validation road map at the high level. And there you will then be
honest with what you have done: for a regulatory-ready
assay, design your assay with biological variability,
very well-characterized. So give all those information
to us in a way that is representing the major
information or major result that you want to share with us rather than just say it’s fully validated. So we see those numbers
and we know what’s going on and we have confidence in the assay itself so that we have confidence
of the biomarker that’s going to be measured. – Am I going to limit
myself to introduction or comments too? – [Mark] I think introduction
and brief comment if you want but we’re gonna have
plenty of time for that. And any impressions you’d like to do. – Okay.
(laughing) No, I’m not good at that. Steve, you’re it. Okay now just to give you my background, after my post doc work at UCLA I joined a diagnostics company and then
I was in a biotech company, then in a nonprofit
organization before joining FDA so I have seen it all except
the pharmaceutical industry I think but we have
indirectly dealt with that. So that’s my background and
one thing I wanted to clarify is that we have never denied qualification of a particular biomarker
submission because the assay was bad when they came in. We would say hey, you know,
you need to do this, this, and this to improve the assay so
it’s not such a dealbreaker if something is missing
we’ll let you know. Just hope that you come
to us with that early. And with the new process you
will have to come to us early with all the analytical validation. And I echo what Chris said
about how the document should be more readable and
we heard from others also in the audience, and we do have
about 70% of our submissions come from consortia. We also have submissions
from academia, CROs, disease foundations, and
diagnostics companies too. So there is a wide variety
of people who send in their submissions so it’ll
be great if the document is something that’s readable
by any biomarker developer. – [Mark] Great, thanks Shashi. – Good afternoon everyone. My name is John Catival
and I’m a team lead with the office of study
integrity and surveillance. I am one of the coauthors
of the 2013 revised draft guidance for
bioanalytical method validation. Lot of familiar faces
here from Crystal City and WRIB and other meetings of course. OSIS, as we would call our
office under the acronym, we are the custodians for
bioanalytical and clinical inspections for bioequivalence,
bioavailability, immunogenicity, ADA assays,
as well as biomarker assays. So our touch point is very
different from a lot of the discussion that you’ve
heard over the last 48 hours. It hasn’t been quite 48 hours. Unless you include–
– Seems like it. – Yeah it seems like it probably. So it’s a little later in
that stage so by the time I and my colleagues see a
biomarker assay for an inspection it’s very critical for me
to be here to understand the series of events to
connect from qualification to the stage where it might be serving in a phase three safety
and efficacy study. So that’s why the dialogue
here’s been very important to me to understand that and also, and I’m pretty sure I’ve read
about 3,000 of your comments for the bioanalytical
method validation guidance. I’m not exaggerating.
(laughing) ‘Cause we had 6,000. So I got half of them.
(laughing) So the important takeaway
is we saw the evolution of the need for biomarker
attention when it comes to analytical assay validation
but for the last two days it was important to see how
it fits in qualification and also, not that we have
to talk about it right now but down the road how’s it
gonna fit with further drug development under IND NDA stages. Thank you. – Hi, John-Michael Sauer,
Critical Path Institute. I’m gonna use my time to
really thank the team. I think you’ve seen a
demonstration of what this team has been like working
with them for a year. It’s actually been a lot of fun
putting this paper together. And I really look forward
to finishing it up with everyone’s comments. – [Shashi] Edits not comments, right? – That’s right, edits, edits, direct edits into the document. So really look forward to that. I mean, we still have more work to do. I mean we created the framework, that was a great step
forward for qualification in general. I think here we can come
out with a bioanalytical framework, a validation framework. I mean there’s more to
do around qualification. I mean we’re gonna talk
about in the future about imaging biomarkers and
what’s the validation required there? We still have a statistical working group, that’s going on. Sue-Jane plays an important role in that. And there’s a little bit more to do. And once we get this
all established I think qualifying biomarkers is
gonna be relatively easy because we’re trying to
find that consensus now across industry, across
academia, across FDA and other regulatory agencies
on how to move this forward. – [Shavri] John-Michael, do
you want to take the first two bullets on the slide and then– – I can say we’ve got because
of the work of this group in putting together key
themes and takeaways we’ve got four slides that
we’re gonna go through and we’re gonna do this as kind
of a back and forth between different people on the
panel and then we are gonna try to leave plenty of time
for comments and questions from all of you. I think we’d like to go
through all of the main points but if there is a burning
question or comment or clarification that you’d
like as we go along please let us know and
we’ll pause for that. – [Shavri] Mm hmm. – Perfect, I mean the
first two are basically word for word what I
discussed this morning. Right around the fact
that there have to be some core expectations
regardless of the biomarker and I think we all agree with that. I dunno, there might be a
couple lingering thoughts out there for qualification of biomarkers and the assay validation. And then I think the other
piece that we really aligned around was the fact that
that context of use, the behavior of the biomarker,
the patient population is going to then drive
what additional validation expectations it’s gonna be associated with that final validation. The bottom one we modified
during the discussion right– – [Shavri] Russ wants to take that one. – That’s mine.
– Oh I’m sorry. Oh no no, go ahead. – [Russ] That’s what we
mean by back and forth. This is my Russ to Russ moment. So again. Hello brother. Exploratory biomarkers
are outside of the scope of the white paper, as clearly
listed in the intro section, and the however statement, we just wanted to refine
what we mean in the however statement, so we've added here just for a bit more clarity. The principles of the
white paper may be applied at the discretion of the investigator in exploratory situations. Let us at least just briefly
explore the idea there. We had a beautiful word
earlier which was derisking. So a little bit more work up front, the principles apply maybe earlier. I know we’ve talked about creep, I get it. And Russ, honestly when I asked you do you do a little bit more work earlier to make an informed kill decision? I graciously accept that you said yes and that is not a bad thing
’cause that’s derisking with a little bit more evidentiary data. So we’re trying to leave
the latitude in there even though the flavor is
really pointing to that phase two phase three pivot. We are trying to generate
a principled document to process that doesn’t apply but at least is something to lean up
against or at least a framework or even a milestone. That’s where Oz is, you’re
on a yellow brick road. Are you going that way or are you going to be taken by flying monkeys? Up to you. But that’s where you’ve gotta go. It’s really to try to give
some feel for a waypoint or a marker post. And that’s the intent. So I thank you for your
honest answers, Russ. – [Mark] And before we
go on to the next slide any other comments on these
themes from the other panelists? – [Shashi] I just wanted
to ask the crowd– – [Man] Microphone, microphone. – I just wanted to ask if you know there was a lot of
discussion about whether this should move to the front of the paragraph or the back of the paragraph. If we bolded it would
that make everybody happy? Irrespective of where it lives? – Okay. – Who wants to take this one? – Whose turn?
– Steve you wanna? – Key themes and takeaways. Biomarker assay validation in
the biomarker qualification space cannot be done in isolation. It requires input from other
scientific disciplines. The clinicians who want
this test to be done to define the clinical
statement that needs to be tested, biologists
to allow us to examine the underlying systems
biology of the patient, and statisticians to let us
know when we have done enough and we can make assertions about the data. It is impractical to create
a universal checklist for the validation of a biomarker assay. I think we’ve all agreed to that. But a framework can be derived
to guide the development of assays for biomarker qualification. This is what we have
taken the first stab at in the white paper. And lastly, although the
CLSI documents provide approaches to solving many of the issues associated with assay
development and characterization for biomarker assays, they
need to be appropriately adapted for use in the
biomarker qualification space. They are not designed for the
biomarker qualification space. They do however contain a
great deal of information that can be very helpful
to someone who is looking for an answer to an
analytical biomarker problem or a validation problem. Comments? – Other panelists, all
clear enough on these? Yes, please, go ahead. – I have a comment but not on this slide. But in following up with
this phase two three assay that you mentioned. I wonder maybe that’s
possibly the confusion because biomarkers when you
come to ask for qualification it could be just for proof of concept. So it’s not a phase two three thing. And using the term phase
two three make people think confirmatory, but we’re
really talking about what particular context of use is this biomarker proposed for? – If we’ve learnt nothing
in the last two days we’ve learnt that there
are no absolutes, okay. Everything is shades of gray.
(chuckling) Point taken. – And it all depends
on the context of use. – Correct. – Touche. (laughing) – Okay, who’s up for this next one? John-Michael?
– I could do it. – Do you wanna go ahead
with a comment first? – [Woman] I just wanted to
offer an edit, not a comment. (laughing) – We’ve learned that
is a good takeaway too. – You were paying attention. – [Woman] Solving seems a bit concrete in an area of gray, addressing maybe? – Fair enough.
– And the other thing maybe we need to highlight at
the current moment is they are currently not
universally available. So we can’t appropriately adapt them– – They are for money. We are going to work on that. We realize that they are
purchasable documents and not necessarily freely
available but they are– – [Woman] Freely available,
perfect, even better edit. – They’re not yet, but we’re
passing the hat, right? – Yes, yes. That’s why we’re doing this act up here. We expect money at the end.
– That’s right. (laughing) So actually was it you that
accepted the responsibility to reach out and see if we
can do something about this? – Yes, yes. – So point taken. – Does anyone else have any edits? (chuckling) To this slide or the previous? Okay. – You know after the
final discussion yesterday I think what we saw is
that there is a different expectation that is around the new path to qualification and
that a final validation of your bioanalytical method
is really gonna be needed for the qualification plan. And so that means that
we’re gonna have to come with a bit more data. And understanding of the
biomarker before we begin qualification, thus regulatory ready. – Just wanted to add
that we will prepare you for that in the letter
of intent step itself we’ll ask you if your assay
is analytically validated and we’ll also ask you if
you have a statistician on your team. – Among a host of other questions. (laughing)
– Yeah, yeah. – Yes sir. – Absolutely. – [Man] You know I like
your comments, your opening statement yesterday that you
said you’re an action driven person and I love the way
okay, we gotta go ahead, what is the next step, right? So if I may ask, what
is the next step for FDA in terms of what is a timeline
to get a guidance out? – Excellent question. So if you look at our
PDUFA IV commitment letter, pending PDUFA IV reauthorization, I just reviewed it right
before this session, and it says that before FY 2020 I believe and I’m looking at others in the audience who can confirm but I’m pretty
sure in the commitment letter it says before FY 2020 we
are issuing draft guidances in this evidentiary consideration space. So that gives you a
target date right there. If PDUFA IV is reauthorized
and FY 2020, that means it starts October 1st of 2019. And what are we, what's today? 15th of June 2017. So it's saying right there,
that puts some parameters around it, trying to get some concreteness. Does that help?
– It does, thank you. – Okay. – [Woman] Let’s have
a quick enhancing edit to that first bullet point. We talk about full validation. I know there is a space
in the document where we refer to partial and all
of that but it’s better to be very explicit to
say that the validation and all those seven core
elements identified in the paper, that’s what we mean by a full validation. So it’s not subject to interpretation. Shavri could come and
tell you I’ve done these three parameters fully, right,
that’s not what you mean. – So we should substitute
validation of key parameters as outlined in the document. – [Man] Or round up
because as we’ve all said the context of use drives
the level of validation, so we can say the appropriate validation
based on the context of use. – Sure, I just hope we
can have that conversation to identify exactly what
that is before we do the qualification plan. – [Man] We heard that loud
and clear, we got it, right? – Yes, yes. – [Woman] Right, and also I
think it’s important not to say that some of these
parameters are optional. What we are saying is the
acceptance criteria for them may be variable depending
on the context of use, but you have to have
a strong justification if you’re not planning to
demonstrate selectivity or specificity. So either way you have to address it. – Mm hmm.
– Right, thank you. – Good point. – I think this last
discussion has illustrated the next bullet.
– Mm hmm. – Can I just make a comment?
– Oh yeah please. – Sure, and just again, if
it wasn’t perfectly clear last time in the last session, what will make certainly
the edit, editing, is that the right word we’re using? Editing. What will make our editing
process easier is not strikethrough line by line;
copy chunks, provide content with appropriate reference
details as a direct replacement insertion, in a full manner. Not a word here and a word
there, ’cause we can only imagine a fraction of the pain
that John went through and quite frankly I don’t
want to have to deal with that so please, we ask that you
make our life a touch easier by sectional rewrite with
appropriate references as we contemplate all of the feedback if you’d be so gracious. – Can I ask the writing group a question because I heard two
different schools of thought. And one school of thought
said give us the Word document so that we can track change it. I heard another school of
thought which was give us the PDF with line numbers. You’re the working group, tell us how you want to disseminate it. – So we have to date
within the writing group been doing it all with
track changes in Word. And that works well with a
limited number of people. With a much larger document,
it is probably a better idea to go out as PDFed line
numbers and then have people respond with line number X
to Y, this is what it says, this is what it should
say or should not say and that would be a much
easier way to deal with hundreds, hopefully not
thousands, of comments. – Okay, that’s helpful. – Is that acceptable to everybody? I think that’s manageable from our side. – And actually from the
writing group standpoint if we could have a couple of
days to actually get this out in the format that it
needs to be on the website, that would be very helpful. – Right. – So early next week. – Okay, and with clear
distinction of how you expect to get comments.
– Yes. – And also, what did we
say the deadline was? I think Martha said the end of August? – August.
– August for commentary. Did you want to do the big reveal? – Go ahead. – No you do it. – So actually we
in our section were going to have comments open for
a period longer than August but it doesn’t really matter
if we think we can get the comments faster. If people need to socialize
this to their professional groups it may take longer than an August 8th date to do this. – Let’s ask the folks
here what do they think. Is August too ambitious? – So this is what happens
when we have two groups deciding on the same
thing in noncommunication with each other. We got two different dates. Surprise. – Are there objections
to the end of August? (murmuring) Okay. Okay, we’re hearing an early fall. – End of September?
– End of September? – [Man] I would just
offer for those of you ’cause some of you were
involved in the workshop, the framework discussion, it will take longer than you think. And you will get most of your comments in the first two or three weeks. And you need to understand
what you want to do with those comments before
you go to pharma, LD kit, or the medical council or
whatever because they are gonna want to see something
that’s mostly done. So personally, so Joe’s
suggestion would be go early, get it to something that’s almost final by the end of the summer or something. Then go to your bigger
associations like pharma and others and get, kinda say,
this is like the 95% thing. Will it cause an uproar
and then work from there. I think that’s just, and
then you’ll end up having it before the end of the year. – So did I hear the August 8th date? – I heard end of August. – End of August?
– We heard end of August. – Okay, well let’s keep it
earlier rather than later and that point, that’s
still two full months and two weeks on the Jersey Shore. (laughing) – Okay, anything else on this slide? Okay. But we may not be able to advance. There we go. Okay, we took the first bullet right? – So we’ve sort of thrown
out a putative execution date for a cleaner version
which was the end of year. Now you just made us do a
reset, we thought we’d be closer to finishing but
yeah okay, that’s a bucket of cold water but a really
harsh dose of reality I guess. So our goal is comments
and then a clean version circulated certainly well
before the end of the year. So one set of feedback,
recirculate, and then end of the year make a decision if
there’s any five five whether we go, whether we don’t. I guess it’s an open question
then is it follow up, is it more workshops? I guess that’s the open
question, what do people want? – Well I think it’s
where it is at that time. So we may or may not
need another workshop, we may or may not need a teleconference with appropriate people,
we may be able to go directly from there. – [Man] Maybe it’s the work,
again it’s a continuous process like a validation. So we’ve got to get the
feedback from everyone. What if again you know
we don’t need to serve another consortium but I like the idea that was just proposed, but on top of it to get one more final feedback from people and hopefully it’s not two steps forward three steps back. Then hopefully it’s more confined. We can take advantage of these other APS, the NBCWR but the other
meetings that are coming up to at least solicit one
more round of discussion hopefully with a similar,
open up the WebX and all that to get some more feedback. One final round to either
say what you want to say or that’s it, let’s move forward. – So we’ve contacted a lot
of those groups already. We’ve made overtures to 80%
of the analytical societies, professional societies that
were talked about today. So there has been some contact with them and they have all been
encouraged and as far as I know, they are all participating
in this meeting either here or online so they should be
available to provide comments in a more rapid fashion
without going to the meetings. As you well know we did
that at the AAPS NBC WRIB in the last couple of months as well.
this round of discussion right, after that one we
do one more quick hey, here’s a clean up version. – As you said, your own
clients would not permit an iterative validation to go on forever. We need to set a deadline at some point and deliver the goods.
(laughing) – Touche. – I agree with that 100%. We have to set that deadline and deliver. – So we’re loosely
considering the end of– – End of August. And then a date before the end of the year as a 95% completed document. – Perfect.
– That is cool. I just wanted to add, I think
Shavri you were gonna actually get the last bullet,
but I want to say again as we talk about this being
one of a body of documents that will perhaps engage
and support big G Guidance, I want you to remember I
am in community, me is not, but unity is in community
and I ask all of you, we all have to live with this. I thought that was good. – [Man] Flying monkeys. – I had you at flying monkeys. – It’s the second time. – So I think it’s important
that we take advantage of a change. We’ve had wonderful documents
that have served us, BNV 2001, the Lee papers,
many of those 2006, 2009, the white papers from WRIB. We have had a change in
context, change in language subtly here because I think
it’s a 21st century problem and there are some other
aspects of clinical medicine that are following through
from this so this is a bit of a chance to take a different step and a different path, but it
needs to be done in a manner that’s not painful for everybody
so I really look forward to letting Steve to do all the hard work and edit all of the comments
and I get all the glory. (chuckling) – I second that. (laughing) – Sounds good to me. – Okay, any other comments
on the publication piece? White paper?
– No. – I think we’ve articulated that, and I’ve already
articulated at the beginning and with your ask, I
cannot emphasize enough the analytical validation
issues we all know are building blocks to
anything else we want to do. I enjoyed Lisa McShane’s
comments at the last panel where she said this impacts
ethical use of samples and ethical use of
patients in trials and so we’ve got to get this right
and with this important piece of legislation,
21st Century Cures, it gives us an opportunity
to shine a bright light on something that we all know, not only for the academic community, but for industry and for
consortia, this is a critical piece so I appreciate everyone’s
time in helping us get this right. – Other comments from
the panel, themes, ideas that we haven’t already
covered as next steps? – One thing that I heard loud
and clear was more examples are needed. We’ll have to come up
with hypothetical ones because we are limited by
the clinical biomarkers that are qualified and we
could probably take one of them but you know. Would people be comfortable
with hypothetical examples? Does anybody want to
contribute their case studies as an example? So that’s something else. You could hide the name of the biomarker. – [Man] I’ll offer a comment I saw. Something very rare I think
happened at a (mumbles) group a few years ago, which
was a company. So FDA, we can't reveal any
confidential information. So you wanna go hey, tell
us about the failures, they’re really educational. And we go, we can’t talk about that. But if a very noble company comes forth and talks about a failure,
then we can comment on it. And it takes a lot of
guts to do that, a lot. So I don’t underestimate
that, but in terms of examples there is nothing like that for education. So I just put that forth, if someone takes up that challenge. – Thank you. – [Man] Yeah it’s a
great point and I think that there will be some people
that do have case studies, real examples of both
positive and negative outcomes and I think they both have
a place, a part to play. Certainly before and since the Lee paper, some of us have been giving
educational workshops along those lines for quite a long time. I certainly have case studies
I’d be very willing to share. – Great. – [Man] Of real data. One of the things I would
encourage the community in general to do is when we’re working
as service providers or pharma companies is not
to be afraid to share data. Just to give you a sort of
idea of how bad it can get, you know about 2006 I
revealed and reported at AAPS of a commercial assay
that was then being used on practically every
Alzheimer’s study in the world that was totally inappropriate
to measure Aβ 1-42. And within two weeks I had
several contacts from certain companies developing drugs all admitting that they knew there were
issues and they never told anybody about it. So that’s the bad side of it. When I do get really
interesting data I always ask our clients if we can share
that even in an anonymous way. If it's practical data that demonstrates how these experiments tell us
how to characterize the method, then it should be out
to help eventually benefit patient care and welfare. – Can I just respond to that briefly? So I want to just add a point. So Clinical Chemistry the
journal has very, very, very many examples of the example we
showed, a method comparison study; a lot of inter-instrument, inter-platform method comparison studies do
and have always been published in Clinical Chemistry the journal. That is actually a wonderful
journal and perhaps a friend in this particular setting. Many of the markers we’ve
described today across many platforms with six-fold changes and slopes. Even though they are harmonized
to get the same number from a reference standard, you get six fold changes
and slopes when they measure real people so I would
recommend Clinical Chemistry. Thank you for bringing
that to our attention John. – [Man] Go ahead John,
did you want to respond? – [John] I was just gonna
say you and I come from the same background so I know
where you’re coming from. – [Man] Okay you guys can
have a beer together later. So but I want to mention
exactly what John said is where we need your help. Okay I am sitting on hundreds
and hundreds and hundreds of case studies, okay? There are companies that
are looking for the same biomarker for several different indication that boy, some have worked,
some have not worked clinically. My request is that I think the
exact same thing John asked is that when we come to you the pharma and if we can go ahead and
get permission from you to share that and put
some of that information in the paper in an anonymous
fashion and make a biomarker, whatever, to biomarker ABC,
there’s tremendous amount of data we have. But at least we met some of you guys here, we know we work with your
colleagues in the pharma, so we’re gonna come to you
and ask permission from you to go work with your
colleagues and with your legal so we can release some of this. It would be very helpful. So John, I agree with you. – So I had one other question actually to posit to all of you. I’m actually very encouraged
that we're talking about augmentation and not tearing it down or burning the house down and rebuilding. So augmentation is great. But how big do you want this document, are you worried about the size? There were some comments
yesterday it’s a 50 page document. If we augment everything we’ll
be talking about chapters, not a document, so do we
have to worry about the size of this thing, really? Please. – So a good question
follow up to that is why, if the document is not
complete, it’s not going to be of use. If it is 80 pages long and
it fits the bill perfectly, then it’s the right length. – And the examples can
be added at the back as appendices. – Yeah, but I mean they
count as pages to a document. So this should not also
be the CliffsNotes guide to biomarker assay validation with a bunch of appendices for you to figure out. But it should be the scientific guidance, little G guidance, scientific advice on how to do this properly.
certainly the working group will have to figure out what
they’re comfortable with. I will say on our side we have a history of creating long guidance.
(laughing) – White papers, white
papers, white papers. – [Man] No no no, and we’re
being encouraged to go forward with short guidances actually
focus on the key points so that people can take
them and actually use it rather than try to find it
buried within the 50 pages. So to the extent that you
guys can work together to create one or two pages of
bullets of the key take homes that might be a way to have
your cake and eat it too. – So let me rephrase the
question then in light of his answer. Should this be complete
with the information that is needed to do that? (mumbling) – No no no, but brief
is an arbitrary term. Should this be comprehensive
enough to allow someone to fully understand how to
validate a biomarker assay for biomarker qualification? Yes or no? – Yes.
– Yes. – Then if we have that
complete, the document is the right length whether
or not it’s five pages or 100 pages. We already know that
five isn’t gonna do it, I personally hope we don’t get to 100. But if it may take that
with all the examples, then shouldn’t we do that? And if it’s well-written
enough we can go to the point in the document that addresses
the point we need help with, look it up in the index,
and there it should be. – [Woman] So can you generate a summary that you publish independently
so that people actually know that your document, your 80 page document, has the information they need? Because many labs, when they
download your 80 page document, are going to say no,
there is no way I can find what I need in this document. However, if they read a
summary and say oh look, in this document is something that I need, they’re far more likely to go look it up. – So to recapitulate, we
could certainly provide an executive summary of the
document in brief of really what the document should
cover and then let people go to the detail. – Can I just change one word? It’s our document.
– Yes, thank you. – Please. Microphone? – [Man] It should be
a peer reviewed paper. – It will be a peer reviewed paper. – [Man] That’s important. – Yes. – [Man] I’ve been trying
to put my head around this for the last two days trying to figure out what this paper actually
is targeting and I kinda, we’ve talked about that
there's 90 or 95% of the work from biomarkers that doesn't go to an in vitro test. They don't end up being
on a clinical analyzer. They’re used for other decisions. So it’s sorta like
taking, saying to people, we have 100 people and we want
to send them to Washington, we say but five of you are
gonna go to Antarctica. Everybody put on your
snowshoes and let’s go. I mean in this time of year too. I’m a little concerned
that this is being driven for the extreme as far as the
papers being 80 pages long and we want to be sure that we get the 95% to the right place. – So let’s be careful of our 95%. This paper is targeted
at one specific thing, validation of biomarker assays
for biomarker qualification. That's it. Anything else doesn't have to fall in. So we want to be nearly complete on that particular aspect. Period. And anything prior to that or subsequent to it is out of scope. – [Man] But what concerns me is what I'm hearing from FDA representatives here: they're really leaning toward that end of the spectrum, toward putting this together in a big G Guidance. And I'm afraid we're gonna leave 95% of the people, and how they're using biomarkers, out of the picture. – But I hear what you're
saying, but remember if we're looking at the 21st Century Cures Act and the PDUFA VI reauthorization goals, those specifically speak
to biomarker qualification. So yes, it would be wonderful if, once these principles are articulated, they could be extracted and used by the academic community in a much more understandable way for that group, or for other communities at NIH or what have you. But just to be clear,
kind of where we’re headed with all of this is just
really that, like you said, that perhaps 5-10% that’s
really about biomarker qualification specifically. Right? And what level of
credibility and reliability of assays we need for just that piece. So I think the principles are valuable, but what’s going to inform
perhaps future guidance is really gonna be that small subset. Does that help clarify? – I think so, but also– – [Man] It really comes down to what your comment is here, because you get our stuff. – So yeah. So take the 2001 guidance. That was a PK guidance. Then the 2013 revised draft. It was still PK assays, and
remember those discussions we had at these bioanalytical forums where you had the chromatography group there
and the LBA group there and they were arguing well ours is harder, ours is more difficult? There was this stereotype that PK assays, that was more straightforward. How did a 2001 guidance then become almost double the size in the revised? Because there were more
scenarios that had to be captured as a result of
publications that happened over a span of 12 years. The reason I’m bringing this up is because regardless of the size of
what we, we talked about the big G Guidance. Regardless of the size of
that, there are way too many stakeholders involved in biomarker assays. I don’t care if it’s academia,
CROs, or other research purposes, that, and I think you're alluding to this, may go to this document and do a control-F to find: well, what do they say when it comes to bias or precision? So there seems to be, and
I don’t think anybody here explicitly said this, but I’ve heard it at the watercooler
conversations or wherever else, people are yearning for this
all-encompassing document. However, that’s a
difficult one to write up. Now given what I said about
the PK example, imagine all the scenarios for biomarker assays. A lot of you that are
familiar with generic drugs you heard of product specific guidances. So for this particular
drug, or that particular drug, there are specific
guidances that’ll talk about in vitro tests that you should do. Arguably you could actually come up with, and I am not suggesting
this, but what I’m saying is biomarker specific guidances
because each one may have their own characteristics
when it comes to precision, the technology used. The challenge here is just
what we saw with the PK realm. There are more scenarios
and it's very difficult to predict, if there is a big G Guidance that results from all this, how robust it'll look. But what I'm trying to
emphasize is that there are a lot of stakeholders that
will do a specific search to see well I just need to
know how to address stability. How many samples? Some people were talking about that. We do need to have specific approaches. The reason is because of
the lab-to-lab variability in the approach, which is why somebody from Merck yesterday, Frank I think, was talking about what about the number of samples, what about the number of repeats, what about the number of days, how many? That is something we
really have to think about if we really need to go
down those specifics. Some people say they don't wanna be prescriptive. But please understand why sometimes that is the outcome: because people just want a baseline, a place to start. – [Woman] Sorry, just one comment on this. I mean I hear a lot of concerns about it and we hear that we have a
lot of different stakeholders here and to be honest I think
the first version will not be 100% perfect, but why not look at the good side of it? I mean, just get it finished fast, (laughing)
put it out there– – Are you listening Steve? – [Woman] And then maybe
meet like one year, two years later, whatever,
and discuss how it was working and then maybe refine it
then, get comments on okay, this didn't really fit and this was a pain to do, whatever. But I think we will only know once it goes live what really is appreciated and what we may need to refine, rather than trying to address every concern up front, because we will not be able to fit all concerns in that document. That's just a comment.
better is the enemy of good enough, and of course good enough is an arbitrary term, like 4-6-X. – Did you have a follow up? – [Man] I just wanted to say that I agree with what John said. To have a document that
is totally prescriptive of all the scenarios I think,
I would be very frightened if I was you. Think about all the different
biomarkers, the combination. We have protein, peptide,
straight mass spec, digestion with mass spec,
captured digest mass spec, immunoassay, and this is all singleplex. What happens now when you wanna take it to the next step of having multiplex assays? I think it's nearly
impossible to have something that covers all scenarios
and if you did, it would be a 1,000-page big G Guidance. – So what I think you
just said, I'm sorry Frank, you're not in the room, we may have put a stake in that one. Multiplex, I hope you can
appreciate the dimensionality in the conversation just
talking about two technologies and a singleplex right now
so I hope you’ll give us a bit of a pass on
multiplexing in this one. I’m not saying that those
concepts cannot be translated from whatever comes out
of this, but, you know, that’s not a problem doubled,
that’s a problem powered. You know?
– Yeah, agreed. – Alright any other comments? Yes. Or edits, I’m sorry. – [Man] It’s a comment. If you don’t address
exploratory biomarkers but you also want to address
the academic community, be aware you have to tell
them that even for exploratory biomarkers we have to have technically validated assays. So it's also a type of education, that assay validation is not only for the biomarker qualification program. – So you make a
great point and there are a number of authors here,
I’m looking at Russ, I don’t see where Jean is. But John. Can we refresh the Lee
paper and just translate some of the CVi/CVg? Because I think that
framework fits that problem but we have a disharmony in language. Can we think about that
as an update mechanism feeding into this and then
whatever comes from it? Is that possible? (mumbling) – [Man] And the other comment. I wouldn’t mind if there
are dozens of examples added and the document is getting
several hundred pages because let’s say
examples of whatever type of biomarker qualification
helps to understand at least the academics much
easier and you don’t have to read all those, you can pick. – It’s a fair point. How we gonna keep our wives happy when we’re writing all this up, mate? Appreciated. If you have any specific
feedback on the (mumbles) talked about, safety
biomarkers, but we haven’t talked about different
functional biomarkers or use of, again, if there are some
specific examples really rigorously listed in literature,
that helps us deconvolve and translate and the more
crystalline the examples the better the quality of the data, the more open those examples
the better we can convert that into a story, really, if
you want to think about it. – [Man] And maybe you
can also add, let’s say, examples, the links to our
letters of support material which we’ve submitted to the FDA. That could also be possibility
to have a list of those links added in the appendix. Otherwise it’s great. (laughing) – Can we stop now?
– That’s good, thank you. – Thanks.
– I think maybe one or two more comments. – [Man] I’m sorry I don’t speak for Jean. I would leave Jean as the lead author to respond to that, but I
think what would be good is if we can keep the
text of this document as brief as possible but I would encourage more case project studies, real examples in an appendix and preferably
as many as possible categorized by those seven core components that if you get this right or wrong this is the potential outcome in each of those seven categories. I think that would be
helpful for people to address some of the comments that have been made. – I don’t doubt we’re going
to get that in a clear, concise manner from you
John so I appreciate that, thank you. – Any others? Okay, Shavri, any final
comments, thoughts? – No, just a big thank
you to everyone who stayed til the bitter end. This is really, really helpful. I know this is a lot of
work but we’re committed to helping move this forward. – Yeah and I too really wanna
thank the writing group, the presenters, the
panelists, the commenters, or, I’m sorry, editors.
(laughing) Who all contributed to
the discussion here and what sounds like will be
some valuable refinements to a very important white paper. This has been a lot of
time and effort on a very serious topic and I think
the interest from diverse perspectives and all the people
who joined us in the room. We had hundreds who joined online too. Shows just how important
this issue is for this overal goal of evidentiary considerations of biomarker qualification. So it is foundational for that work. There are more steps to
come, but very important progress here. I particularly want to
thank John-Michael Sauer, Martha Brumfield, and Nick King at CPath and Shavri of course and
all of the FDA colleagues who have put a lot of time
and effort and collaboration into making this work
as relevant as possible and making this meeting a success. And a final thanks to
my colleagues at Duke who have worked to help put
this together with CPath and FDA, Meredith Free, Joanna
Higgison, Jessica Bernell, Katherine Frank, Grant
McCoors, Elizabeth Murphy, and Liz Richardson. So thank you all for your participation. Please enjoy the rest of the afternoon and safe travels back,
thank you very much. (applauding) (low murmuring)
