Sunday, July 6, 2025

Experimental Research Design (Part 2): Interview with Bruce Thyer, PhD, LCSW

Photo of Bruce Thyer
[Episode 145] Today’s episode is the third of a three-part series on research design (and the second of a two-part series on Experimental Research Design) with Dr. Bruce Thyer, Distinguished Research Professor and former Dean with the College of Social Work at Florida State University.
 

Download MP3 [19:16]

Bio 

Dr. Thyer is a Distinguished Research Professor and former Dean with the College of Social Work at Florida State University. He is also an Extra-Ordinary Professor with North-West University in the Republic of South Africa and an adjunct faculty member with the Tulane University School of Social Work. Previously he held the position of Distinguished Research Professor at the University of Georgia. Dr. Thyer received his MSW from the University of Georgia and his Ph.D. in Social Work and Psychology from the University of Michigan. He is a Licensed Clinical Social Worker in Florida and Georgia and he is a Board Certified Behavior Analyst. Dr. Thyer has been a long-term promoter of the evidence-based practice model within social work. His work is largely informed by social learning theory and has taken a recent turn in the direction of exposing and discouraging pseudoscientific theories, interventions and assessment methods within social work practice. 

Dr. Thyer is the founding and current editor of the journal Research on Social Work Practice, and co-edits the Child and Adolescent Social Work Journal and the Journal of Evidence-Based Social Work. He is one of the original founding board members of the Society for Social Work and Research. He has produced over 290 journal articles, over 130 book chapters, and over 35 books in the areas of social work, psychology, behavior analysis, and psychiatry. He is a Fellow of the American Academy of Social Work and Social Welfare, the American Psychological Association, the Association for Psychological Science, and the National Academies of Practice.

Transcript

Introduction

Hey there podcast listeners, Jonathan here. Today’s episode is the second of a two-part series about experimental research designs with Dr. Bruce Thyer. Dr. Thyer is a Distinguished Research Professor, former Dean with the College of Social Work at Florida State University, founding and current editor of the journal Research on Social Work Practice and author of the 2023 Columbia University Press book Experimental Research Designs in Social Work: Theory and Application [https://cup.columbia.edu/book/experimental-research-designs-in-social-work/9780231201179/]. In Episode 144 Dr. Thyer gave a historical overview of social workers and experimental design. In today's episode, Dr. Thyer unpacks how social work practitioners can think about experimental research design.

We explore why you don't need to be embarrassed if you find experiments intimidating, but why critically appraising research, including randomized experiments, is a core expectation for all social workers. Dr. Thyer shares practical tools like the CONSORT-SPI checklist to help you evaluate studies and encourages you to respectfully challenge your professors about the empirical evidence behind therapies they teach. We'll also tackle some of the challenges and biases in social work against experimental designs, the inherent risks of empirical study where results might not confirm expectations, and the ethical considerations when designing interventions, especially for serious issues like suicide prevention. Plus, we'll clarify the difference between true experiments and quasi-experiments (often mistakenly called 'natural experiments') and understand why precise research is vital, sometimes with its full value recognized much later.

If you want a refresher on some of the terms Dr. Thyer uses in this episode, you can check out his book, Experimental Research Designs in Social Work: Theory and Application, or go to socialworkpodcast.com for a glossary of terms used in this episode.

And now, without further ado, on to episode 145 of the Social Work Podcast: Experimental Research Design (Part 2): Interview with Bruce Thyer, PhD.

Interview

Jonathan Singer: So, Bruce, you and I are both PhD-trained researchers and scholars, and I think a lot of what you're talking about applies really well to doctoral-level social workers. But most of the folks listening to this podcast are MSW students, BSW students. They're going to be practitioners. What words of wisdom do you have for them? What advice or cautions do you have if they're listening to this thinking, "Wow, experimental designs… can I do them?" They're doable. They've got a lot of value. What would you say to them?

Bruce Thyer: I would say, "Don't be embarrassed if you think that RCTs or other types of experiments are beyond your capacity." You know, we don't expect our family medical doctor to do randomized experiments in his or her practice. We don't expect our dentist to do that either. But what we can and should expect of all BSW students and MSW students is that they have learned to read and critically appraise different types of studies: descriptive studies, survey studies, all kinds, including randomized experiments. And as you know, there are some checklists that are publicly available. One is called CONSORT-SPI, which stands for Consolidated Standards of Reporting Trials - Social and Psychological Interventions. If you Google CONSORT-SPI you can find it, and it's a 25-item checklist that a reader of an article can use to see whether the writers of the paper covered all the major points. Think of a pilot going through a checklist before setting off in their plane, or a surgeon doing a checklist before he or she begins surgery. The author of an experiment should follow CONSORT to write it up and make sure no crucial information is omitted. And a reader of an experiment can use the CONSORT checklist to guide their analysis: how were people recruited, how are the participants described, were measures taken properly at pre-test and post-test? So no, BSWs and MSWs are not expected to routinely do experiments. But by golly, they really need to be able to read them, just like your medical doctor needs to be able to read the latest experimental research in his or her discipline. So become familiar with it, because experiments give you the very best judgment about what the true effects of a treatment are.

And I would encourage students in class, particularly in practice classes, to raise their hand respectfully and say something like, "Dr. X, I'm really interested in learning more about this therapy. Can you tell me of any randomized trials that have tested this therapy and shown it to have positive effects?" And see what your professor says. A good professor will have that information at their fingertips and will be able to provide it to you. Another good result is for them to say, "That's a great question. I'll get back to you next week in class," and then give you the citations. What you don't want to tolerate is a professor who says something like, "Well, you know, we shouldn't worry about that. We should just rely upon the authority of the field or our clinical intuition," or "How dare you challenge me as the professor." You don't want to be challenging. You want to be friendly. You want to say, "I'd like to learn more about these different types of treatments," let's say mindfulness, which does have a good body of research behind it. A good professor will acquaint students with the available evidence about the effects of the interventions they teach, and, I would hope, will preferentially teach in practice classes about interventions that do have a good degree of empirical support, and not about things that are not well supported.

Jonathan Singer: So let's say a student uses this CONSORT checklist, goes through it, and the article checks all the boxes. What might a clinician, a practitioner, still want to be thinking critically about? What are some of the downsides of experiments, even when they check all the boxes?

Bruce Thyer: Well, sometimes experimental studies can be pretty expensive. If you're doing an analysis of long-term psychotherapy, that can cost a lot of money, particularly if you have a lot of people assigned to your different conditions. So that may be one reason why we don't see as many experimental studies on long-term therapies. That could be a downside. It doesn't mean they don't lend themselves to such analysis, and RCTs are still the best way to evaluate the outcomes of therapies, long-term or short-term. But certainly if you're doing something that you expect won't have an impact until six months or a year of weekly sessions have gone by, that's going to lend itself less well to a true experiment. So that's certainly a downside.

Another downside can be your colleagues, whether in your BSW or MSW internship or your doctoral training. You say, "I'd like to learn more about experiments. I'd like to read them." Or as a dissertation student you say, "I'd like to do an experiment," and maybe a faculty member says, "Oh, no. Those are too hard. You shouldn't do those." You should give them my book and say, "Look, there are over a thousand that have been done. We can do this together. And I'm really interested not in a survey, not in a correlational investigation, not in a secondary data analysis. I want to do a study that really tries to help people and see what the effects are." So be prepared to be a little assertive. If you do have a true interest in doing an experiment, don't let anybody deter you. Try to persuade them that it is possible, and maybe write up a simple proposal of how you might do an experiment for your dissertation. Way back in 1987, I wrote an article in the Journal of Social Work Education encouraging all doctoral students to consider doing outcome studies and experiments for their doctoral dissertations in social work. Now we see the rise of the DSW program, where students have to do some type of capstone project. We don't think of the DSW as a research-intensive degree, but if you go on the website of the University of Pennsylvania DSW program, you'll find that their capstone project is actually called a dissertation, and they provide examples of the types of projects students can do. At the high end is a randomized controlled study of a social work intervention. So even DSW programs are encouraging students to do this type of research, and I think that's a good move.
Jonathan Singer: Another thing that I thought about while you were talking, though, is that you might be pinning all your hopes on treatment A being better than treatment B, but then it turns out they're not, right? That they're the same. That's something that you also have to be aware of, and okay with, as a researcher: maybe your experiment will show that there's no difference.

Bruce Thyer: You're absolutely right. You put your finger on an important point. Doing experiments is risky. Doing any type of empirical study is risky. And we need to adopt the mindset, or the behavior if you will, that we're not trying to prove that something works or doesn't work. We're trying to see if it works. We're trying to find out what the effects of an intervention are, and not be wedded to proving that something works. Now, if you've had extensive training in one type of therapy, there's going to be a natural bias on your part, but you can help control for that by using independent evaluators to do the pre-test and post-test with your participants. You can have an independent statistician do the analyses. So you can try to control for things using techniques like that. And you really have to be committed to finding out what nature tells us, not what you already decided you want to find.

Jonathan Singer: Now, what about situations where withholding a treatment could be dangerous? For example, you know, my area is suicide, right? If there's a suicide prevention program that has been shown to reduce suicidal ideation, or to increase the number of youth who report ideation to adults who can then refer them for treatment, how would you do an experiment where half of the kids don't get the intervention? That seems ethically dubious.

Bruce Thyer: And you're absolutely right. We would never want to design an experiment that would put people at risk in a situation like you've described. First of all, I can't imagine a social worker would want to do such a study because of the risk element. And I'm also quite sure that your local institutional review board would not approve something like that. So some things may not lend themselves to a no-treatment comparison. But if the treatment was brief, you could compare immediate treatment to delayed treatment. That might get through the IRB and might not be too risky for people. And I think an experiment may well be called for, even if the problem is serious, if the intervention you're testing is of completely unknown usefulness. If you don't know whether an intervention is going to work at all, I don't see it as unethical to use a waitlist control group, for example. But you're absolutely right: where people are going to engage in self-injury or hurt other people, a no-treatment control group would not be desirable at all. We'd have to do something else. We might do a quasi-experiment, for instance. Those can be extremely useful as well, and in some cases provide causal inference. But generally speaking, true experiments are better for causal inference than quasi-experiments.

Jonathan Singer: Now, during the pandemic, a lot of people called what happened in schools, in workplaces, all across society, natural experiments, where they were able to say, this is what people did before the pandemic, and then there were shelter-in-place orders, or people wore masks, that sort of thing. Let's use schools again as an example, because it's an institution where, at least for two weeks, just about every school in the country was shut down, and only a small group of kids kept going to those schools, typically kids with severe disabilities whose providers were in the schools. If you were measuring something up to March 13th, and then you had data from March 13th to, say, April 1st, would that be considered a natural experiment? Or is that not considered an experiment in the way you've been talking about it?

Bruce Thyer: Generally, experiments involve the deliberate manipulation of an intervention or some other type of independent variable. The term natural experiment is used when, for example, one state raises its minimum wage and an adjacent state doesn't, and you want to look at the effects on unemployment. You know, some people say that if you raise the minimum wage, unemployment will go up among low-cost service workers. Those types of natural experiments have occurred, but the intervention was not randomly assigned to one group or another. They can be very informative, and they don't lend themselves to true experimental studies; we can't randomly assign half the states to get one thing and half the states to get the other. But I will point out that I think people are becoming pretty sensitive to the value of experiments, because early on there was pushback against government mandates about some things that didn't have adequate experimental evidence, like the use of cloth masks. There were people who said, this is stupid, there's no evidence that these help people, and they were publishing that, and it was causing controversy. And the fact is, cloth masks don't help; N95 masks can help. But another thing, and this is January right now. It's January of 2023.

Jonathan Singer: [interrupts] 2024.

Bruce Thyer: 24 [laughs]. Sorry. Dr. Fauci said that the six-foot distance mandate that was put in place in schools and other places - remember the little dots on the roads and the sidewalks and things like that? - had absolutely no solid scientific evidence to support it whatsoever. And I think that now that that is known, if there's ever a return of an epidemic, there's going to be a lot of resistance to going back to a six-foot distance mandate, because think of the incredible inconvenience it was to schools. They had to reduce class sizes and things like that. These are not decisions without consequences. So the best way we have to make decisions about wearing particular types of masks, or distancing, or something like that, is to do experiments whenever that's possible. And sometimes we have to be nimble and work really quickly to do those types of studies, and that was done in the case of masks. So yes, natural experiments are a good thing to do, but they're actually more properly called quasi-experiments, because people weren't deliberately assigned to different conditions.

Jonathan Singer: That's really helpful. And as you're speaking, it makes me think that sometimes the real value of an experiment comes later on. If somebody had done experiments on aerosol projection, asking how far it travels and what the exposure is at a given distance, in the moment somebody might say, well, that's kind of esoteric, fundamental research. But then a pandemic hits, and suddenly the CDC says, "We need experiment-based guidelines to tell people how far away they can stand without being at risk of contracting COVID," right? And so the real value might come later. That doesn't diminish the importance of it being an experiment in the first place, but the value of it actually comes later. And I think that's an interesting thing for social workers to think about as well.

Bruce Thyer: I agree with you completely. And to give an example of the usefulness of quasi-experiments: the country of Denmark has a really wonderful health care system that keeps meticulously detailed health records for everybody. In the 1990s they looked at a cohort of about 400,000 kids. In the natural course of parenting, about 300,000 of the kids got vaccinations when they were very young and about 100,000 didn't; their parents didn't give them vaccines. They then followed this cohort of 400,000 or so, and they found that the incidence of autism was no higher in the group that got vaccinations than in the group that did not. That is very strong evidence, of a quasi-experimental nature, that vaccinations do not cause autism spectrum disorder. So by no means am I minimizing the value of other types of research like quasi-experiments, because they can be tremendously informative under certain conditions.

Jonathan Singer: That's amazing. Well, Bruce, this has been, as always, informative, fascinating, and inspiring. Any final words for the listeners about experimental design, other than, obviously, to get your book?

Bruce Thyer: Yes. I would encourage BSW and MSW students to be sure to ask their professors to talk about experimental research in the classes they teach, and to use the standards of good-quality experiments to critically evaluate the research studies they read in the social work and other disciplinary literatures. I would really encourage doctoral students to consider doing an experiment for their PhD or DSW projects. And I would really encourage social work faculty to consider applying for grants that enable them to do experiments, because that's really one of the best ways we're going to be able to see whether or not the services we provide as a profession actually help people, and that's going to determine the long-term survivability of our discipline.

Jonathan Singer: That's great. Thank you so much.

Bruce Thyer: Thank you, Jonathan.

~~~END~~~

References and Resources

Glossary of Key Terms

Randomized Controlled Trial (RCT): A type of scientific experiment, commonly used in clinical trials and social science research, where participants are randomly assigned to either an experimental group (receiving the intervention being tested) or a control group (receiving a placebo, standard treatment, or no treatment). Considered the "gold standard" for determining cause-and-effect relationships.

CONSORT-SPI (Consolidated Standards of Reporting Trials - Social and Psychological Interventions): A publicly available 25-item checklist designed to improve the reporting of randomized controlled trials, specifically adapted for social and psychological interventions. It helps authors ensure all crucial information is included and guides readers in critical appraisal.

Critical Appraisal: The process of systematically examining research evidence to assess its validity, results, and relevance before using it to inform decisions. For BSW and MSW students, this includes evaluating experimental studies.

Quasi-Experiment: An empirical study used to estimate the causal impact of an intervention without random assignment. Instead, it relies on other methods to establish a comparison group that is as similar as possible to the intervention group, such as through natural occurrences or pre-existing groups.

Natural Experiment: A type of quasi-experiment where the "treatment" or intervention is applied due to naturally occurring events or policy changes, rather than being deliberately manipulated by researchers. Researchers observe and analyze the effects of these naturally assigned conditions.

Causal Inference: The process of drawing a conclusion about a causal relationship between two or more variables, meaning that a change in one variable directly leads to a change in another. True experiments are considered the strongest method for establishing causal inference.

Institutional Review Board (IRB): A committee that reviews and approves research involving human subjects to ensure that it meets ethical guidelines and protects the rights and welfare of participants.

Independent Evaluators/Statisticians: Individuals who are not directly involved in delivering the intervention or collecting the initial data, brought in to conduct assessments or analyze data. Their independence helps to minimize bias in the research findings.

Dissertation/Capstone Project: A substantial piece of research work, often required for doctoral or DSW programs, that demonstrates a student's mastery of their field and ability to conduct independent scholarly inquiry.

Empirical Support: Evidence for a practice or theory that is based on systematic observation or experimentation rather than intuition or anecdotal evidence. Interventions with strong empirical support have been shown to be effective through rigorous research.

 


APA (7th ed) citation for this podcast:

Singer, J. B. (Producer). (2025, July 6). #145 - Experimental research design (Part 2): Interview with Bruce Thyer, PhD, LCSW [Audio podcast episode]. In Social Work Podcast. https://www.socialworkpodcast.com/2025/07/Thyer3.html
