Prevention Profiles: Take Five - Dr. Peggy Glider (University of Arizona)


This week, the University of Arizona's Dr. Peggy Glider, who serves as coordinator for Evaluation and Research at the school's Campus Health Service, is our guest. Topics include: the importance of program evaluation, challenges around the process, her advice for campus drug prevention professionals, and more.
Rich Lucey: Hi, this is Rich Lucey with the Drug Enforcement Administration's community outreach and prevention support section.
 
And welcome to the next episode of "Prevention Profiles: Take Five."
 
This is our new podcast series where I get to interview people at the federal, national, state and local levels talking about current and emerging issues around drug abuse prevention in higher education.
 
My guest today is Peggy Glider.
 
Peggy is the coordinator for Evaluation and Research at the University of Arizona's Campus Health Service.
 
She has a PhD in educational psychology from the University of Arizona.
 
Peggy's work focuses on evaluation and research to determine the effectiveness of nonclinical, non-pharmaceutical programs, primarily those that are grant funded, in a variety of wellness areas.
 
So Peggy, welcome.
 

Dr. Peggy Glider: Thank you.
 

Lucey: I'm gonna start right in with the first of our five questions.
 
So you and I met when I started my federal career working at the U.S. Department of Education in 1999, a few years ago.
 
And part of my job then was overseeing prevention-related grants to schools, both K-12 and college.
 
And in fact we had a grant with the University of Arizona and that's how we met.
 
Looking back it seems to me that many of the schools writing those grants viewed evaluation as an afterthought.
 
Do you think that's changed in the past 20 years, and that people now see the importance of including program evaluation at the start of a project?
 

Glider: I do think it's changed somewhat as funding agencies have really stressed using more evidence-based programs and practices, and you see data-driven decision making being talked about a lot.
 
However I think there's still a fairly long way to go unfortunately.
 
Many agencies doing prevention work are really not comfortable with or knowledgeable about evaluation, and that makes it difficult.
 
I think they often focus on process evaluation, like counting the number of people who came, or feel-good measures, like how much participants liked the program, versus looking at outcomes: things like knowledge, attitude, and behavior change.
 
And those are parts of evaluation but that's not the place to stop.
 
But I think because that's where the comfort level ends, that's where the evaluation ends.
 
So I think some things are being built in, but not necessarily the needed outcome measures.
 
If they're not built in at the start, obviously it's impossible to know if your program was effective.
 
So I think we have a lot of work ahead of us.
 

Lucey: Yeah, I'm glad you brought that up.
 
I mean, there are essentially, I want to say, only two parts of evaluation, and you mentioned both: process evaluation and outcome evaluation.
 
And understanding the number of people that have come to a workshop or the number of trainings you've done in a year, that's important.
 
But that doesn't speak to the outcome evaluation.
 
 
Glider: Right.
 

Lucey: So I'll segue into my second question then.
 
So what do you see as one or two of the biggest challenges that people have around evaluation?
 

Glider: Well, I think the first one is what we just talked about in question one: the people doing this work are preventionists, not evaluators.
 
And they often don't know what good evaluation looks like or where to find help when they don't.
 
There are also often limited dollars put aside for evaluation unless the funder specifies that a sufficient level has to be included.
 
So I think that that poses a problem.
 
If you just assume, again, that evaluation is counting participants or asking for feedback, that's pretty easy to do and inexpensive.
 
If you're not looking beyond that to see what actually is working, you don't really need an external evaluator.
 
So I think that's one issue: people just don't really understand the whole topic.
 
The other, I think, and I've encountered this a lot when I've worked not so much at the university but more with community-based agencies doing prevention work in schools, is a fear of being judged.
 
They see evaluation being equated with judging how good they are.
 
And I mean, that certainly is part of it.
 
You're looking for, “Are they doing a good job?”
 
 But it's not meant to be judging them.
 
You're really just judging and looking at the outcomes.
 
And so preventionists are very dedicated.
 
They're caring people and they want people to see that they're doing a good job.
 
They're very invested and in their hearts they know they're making a difference.
 
So I think that they feel that's good enough.
 
When it feels good for them, then they know it's working.
 
So I think the issue is that evaluators really need to work with prevention programs to help them understand that they're there to point out their strengths as well as the areas that need improvement.
 
Nobody's perfect.
 
No program is perfect.
 
So it helps to work as a team rather than having someone from the outside looking in and judging.
 
I think we just need to shift that mindset a little bit.
 
So to me those are two big challenges that preventionists and evaluators face.
 

Lucey: That's excellent that you've been able to identify those two, and I'm intrigued by the second one.
 
I hadn't really thought of it that way, this fear of being judged.
 
But because you've brought it up, that's an excellent segue into our third question.
 
So I want to talk about the front end of strategic planning, so conducting a needs assessment.
 
And I certainly know that in addition to evaluation you've also been involved in conducting surveys.
 
One of the things that I continue to hear from people is that, you know, they've administered a survey among their students and the numbers aren't so great.
 
So the administration puts a lid on releasing the numbers for fear of the school getting a bad reputation.
 
What are your thoughts on that?
 

Glider: Unfortunately that is an all too familiar scenario.
 
School reputation is extremely important.
 
It's important for enrollment.
 
It's important for funding levels.
 
So you want to have the best, most positive image you can have.
 
That said, survey numbers, even if they're negative, can be released in the context of what you're doing to improve student behaviors, rather than just focusing on the negative.
 
So I would never put out numbers without the context.
 
And when you provide that context, it can remove some of that negativity.
 
I think it's also important to look at the larger picture.
 
I cannot imagine any case where you have an entire survey that is negative.
 
There are things that the majority of students are doing well.
 
So you need to point out the balance between the positive and the negative; that helps build the true story from the data.
 
Burying the data, or not collecting it at all, which I also hear because of that fear, doesn't help anybody.
 
You need data to move forward.
 
So I think those would be my suggestions.
 
It's just really looking at the larger picture and the context.


Lucey: That's a really important point that you brought up at the end there, and I know that when I've done presentations and talked to people around the country about the importance of doing needs assessment, I mean, I can certainly understand the fear, if you will, of not wanting to release numbers that on the surface don't appear to be so great.
 
But you know, the flip side to that is, all right, so don't release the numbers but you need the numbers to drive the program that you're gonna develop and ultimately evaluate.
 
So how do you know what you're going to address if you don't do the survey?
 
 
Glider: Right. That's exactly it.
 

Lucey: So let's now just flip this completely to the other side of strategic planning and look at program evaluation.
 
What advice do you have then for people who do an evaluation and they don't get the results they were hoping for and now they're discouraged going forward?
 

Glider: Well, I think there are actually two critical parts there.
 
When things don't turn out, which is pretty common, especially when you're just starting off using a new program, I think the main thing for me is that it points out the necessity of collecting both process and outcome data.
 
If you don't get the outcomes that you want or expect, it may be that something in the process needs to be improved.
 
So it may be the dosage of the program; you need to do it more frequently.

Or the length of the program; it's not long enough to get the content across and actually start to be internalized.

Or it's too long and students are just tuning you out.
 
It may be the profile of people that are participating.
 
They may not even be the audience that was targeted, but they're the ones that come.
 
Often you see the healthy students, at least on campus, are the ones that go to presentations about health because it's of interest, not that they necessarily need the information.
 
It also may be that the program isn't a good fit for the population.
 
You see this especially when people look at lists of evidence-based practices, or they go to a conference and see a presentation on a program and they think, oh, that's really cool.
 
I'm gonna take that back to my campus.
 
But they don't really investigate.
 
Is it a good fit?
 
And so sometimes you don't get the data you want because the program itself was not a good fit for the student population, maybe not culturally appropriate for your group.
 
There could be all kinds of reasons why the program just isn't appropriate; it might be adaptable, but just using it as is, I think, isn't helpful.
 
So I think the key is, again, to collect that process information.
 
If you know what you did, you can go back and explore.
 
That's not gonna take away the negative results obviously, but it can give you the positive energy to try again from a data-driven perspective.
 
 
Lucey: I have witnessed, been part of, and talked to people about this nuance on this very issue.
 
You mentioned adaptability.
 
And I think sometimes people get hung up on that. I've said for the longest time that prevention is not a cookie-cutter approach.
 
And so a lot of times you simply cannot take a program off the shelf and implement it exactly as it is on the campus or in the community or in a K-12 school.
 
Some adapting may be necessary, but then there's that fine line between adapting and fidelity, being true to how the program was designed.
 
I mean, you cannot take a program that is intended to be 10 one-hour sessions, for example, and change it to three half-hour sessions.
 
And then you're wondering why you didn't get the results you were hoping for.
 
It's because you've changed the program significantly.
 
So what are your thoughts, or what advice do you have for people, around that whole issue of adaptability and being true to the fidelity of the program?
 
 
Glider: I think it's kind of a two-way thing here.
 
First of all, you could try it the way it's written.
 
Do the process evaluation and then see if that actually works.
 
And then you've got true fidelity because you're true to the model and it did work for your population so you're good to go.
 
If you do it with fidelity and it doesn't work for your population, that's why you need the process data.
 
So you can go back and look at what needs to be tweaked.
 
And it may be minor adaptations that really don't hurt the fidelity of the program.
 
So you're covering the same content, using the same methodology as much as you can, the same length, those kinds of things, but it may just need to be tweaked a little bit from a cultural perspective.
 
Or, instead of having 10 one-hour sessions, because you can't get people to come, you may try doing five two-hour sessions and see if you have better turnout.
 
But again you need to measure those things.
 
If you change it too much so that there's no fidelity to the model, it's not the model anymore.
 
You now have a new program.
 
And that's okay too.
 
But you can't claim it as an adaptation of the other if there's no fidelity at all.
 
Does that make sense?
 
 
Lucey: Yeah, absolutely.
 
That's really good advice for people that are listening as they move forward with taking a look at what their efforts are and if they're working or not.
 
So I'll close out and move on to our fifth and final question.
 
So for our listeners, what's the best piece of advice around evaluation that you can give to the professionals who are working to prevent drug abuse among college students?
 
 
Glider: Build it in from the start, keep it going and ask for help when needed.
 
It isn't exactly just one piece of advice.
 

Lucey: No, that is nice and succinct.
 
Why don't you say those three again so people really get it?
 
 
Glider: Build it in from the start, keep it going and ask for help when needed.
 
 
Lucey: That's excellent.
 
I really appreciate your time, Peggy.
 
Really good advice, very clear and concise.
 
And I'm sure that our podcast listeners are really going to appreciate it.
 
So again, Peggy Glider from the University of Arizona.
 
Peggy, thank you again for your time.
 

Glider: My pleasure.
 
Thank you for asking me.
 

Lucey: And stay tuned for another episode of "Prevention Profiles: Take Five."
 
Again, this is Rich Lucey with the Drug Enforcement Administration.
 
Take care and have a good day.