Tuesday, July 1, 2025

Experimental Research Design (Part 1): Interview with Bruce Thyer, PhD, LCSW

[Episode 144] Today’s episode is the second of a three-part series on research design (and the first of a two-part series on Experimental Research Design) with Dr. Bruce Thyer, Distinguished Research Professor and former Dean with the College of Social Work at Florida State University. Today’s episode is all about experimental research designs: what they are, why they matter, and how social workers have been using them.
 

Download MP3 [21:37]

Bio 

Dr. Thyer is a Distinguished Research Professor and former Dean with the College of Social Work at Florida State University. He is also an Extra-Ordinary Professor with North-West University in the Republic of South Africa and an adjunct faculty member with the Tulane University School of Social Work. Previously he held the position of Distinguished Research Professor at the University of Georgia. Dr. Thyer received his MSW from the University of Georgia and his Ph.D. in Social Work and Psychology from the University of Michigan. He is a Licensed Clinical Social Worker in Florida and Georgia and he is a Board Certified Behavior Analyst. Dr. Thyer has been a long-term promoter of the evidence-based practice model within social work. His work is largely informed by social learning theory and has taken a recent turn in the direction of exposing and discouraging pseudoscientific theories, interventions and assessment methods within social work practice. 

Dr. Thyer is the founding and current editor of the journal Research on Social Work Practice, and co-edits the Child and Adolescent Social Work Journal and the Journal of Evidence-Based Social Work. He is one of the original founding board members of the Society for Social Work and Research. He has produced over 290 journal articles, over 130 book chapters, and over 35 books in the areas of social work, psychology, behavior analysis, and psychiatry. He is a Fellow of the American Academy of Social Work and Social Welfare, the American Psychological Association, the Association for Psychological Science, and the National Academies of Practice.

Transcript

Introduction

Hey there podcast listeners, Jonathan here. Today’s episode is the first of a two-part series about experimental research designs—what they are, why they matter, and how social workers have been using them more than you might think. My guest is Dr. Bruce Thyer. Dr. Thyer is a Distinguished Research Professor and former Dean with the College of Social Work at Florida State University, and founding and current editor of the journal Research on Social Work Practice. Dr. Thyer was my guest on Episode 143, where he talked about single system design. And, for episodes 144 and 145, he will be talking about experimental research designs, the topic of his 2023 book Experimental Research Designs in Social Work: Theory and Applications from Columbia University Press.

In today’s episode, Bruce walks us through social work’s deep roots in research—from early survey studies to Jane Addams’ community-based investigations—all driven by a desire to make our work more effective. He explains how the field evolved from using simple descriptive statistics to more sophisticated analyses like correlation, t-tests, and ANOVAs, which allow us to track real client change and understand what interventions actually work.

We talk about different research designs—pre-experimental, quasi-experimental, and true experimental designs—and spend time on one of the most misunderstood: randomized controlled trials. But wait, social workers don't do experiments, right? That's psychology. Dr. Thyer sets the record straight, highlighting that social workers have published over a thousand true experiments since the 1940s. He outlines how random assignment and sample size affect group equivalence, and why that matters for making causal claims.

We also cover specialized methods like vignette studies, which are used to study clinical decision-making, and we touch on ethical issues like IRB approval—because even cost-effective, small-scale studies need to be done responsibly. Finally, Dr. Thyer encourages practitioners to adopt what he calls an experimentalist lens—a way of seeing opportunities for simple, real-world research in everyday practice.

If you’ve ever thought experimental research was only for academics in ivory towers, this episode will change your mind. Oh, and if you want a refresher on some of the terms Dr. Thyer uses, you can check out his book, or go to socialworkpodcast.com for a glossary of terms used in this episode.

In part 2, Bruce and I talk about how BSW and MSW social workers can think about experimental research, the difference between quasi and true experimental design, and how to think about research in areas like suicide prevention.

And now, without further ado, on to episode 144 of the Social Work Podcast: Experimental Research Design (Part 1): Interview with Bruce Thyer, PhD.

Interview

Jonathan Singer: Bruce, thanks so much for being here on the podcast, talking with us today. You have this new book, Experimental Research Designs in Social Work: Theory and Applications. So, what is experimental research and why should social workers know about it?

Bruce Thyer: Well, Jonathan, thank you for having me back.

As you know, from the very beginnings of professional social work, we have been intensely concerned with research. From the early surveys of big cities in the UK and North America, to the settlement houses and Jane Addams' work with the Hull House papers and the mapping of neighborhood statistics, we've had a long-standing concern with research, because we believed that with proper research we could make social work a more effective discipline at helping people.

In the early 20th century, we saw the emergence of statistical methods that were more advanced than simple descriptive information like frequencies, percents, and means. We saw the emergence of things like measures of correlation that permitted more quantified associations between different measures, different variables. This was a big advance. We could not just recognize that there's a link, say, between drinking and spouse abuse, or between lack of parental supervision and juvenile delinquency, or between being raised in a deprived environment and how that affects kids; we could actually quantify that link in a way which really aided our ability to do good quality research.

Social work began to adopt correlational investigations along with the pre-existing robust body of survey studies. Then measures were developed in the field of statistics permitting analysis of differences between and among groups. For example, the early t-test allowed us to quantify differences within one group from time one to time two, or between two different groups. This allowed us to see, in a quantified way, whether or not people actually improved. And then a new type of statistic was developed, called the analysis of variance (ANOVA), that enables us to look at more than two groups and more than two points in time. So you could look at one group measured pre-treatment, post-treatment, then at follow-up. Or you could have two groups, one group that got one intervention and another group that got another intervention, measure them both pre-treatment, post-treatment, and follow-up, and so forth.
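To make the two analyses Dr. Thyer mentions concrete, here is a minimal sketch in Python using scipy. All of the scores are invented for illustration and stand in for whatever outcome measure a study might use:

```python
# Hypothetical outcome scores for two groups of clients (lower = better).
from scipy import stats

treatment = [12, 9, 14, 8, 10, 11, 7, 13]
comparison = [16, 15, 18, 14, 17, 13, 19, 15]

# Independent-samples t-test: quantifies the difference between two groups.
t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# One-way ANOVA: extends the comparison to more than two sets of scores,
# e.g., pre-treatment, post-treatment, and follow-up measurements.
pre = [20, 22, 19, 24, 21]
post = [14, 13, 15, 12, 16]
follow_up = [13, 14, 12, 15, 13]
f_stat, p_anova = stats.f_oneway(pre, post, follow_up)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```

One caveat: when the same clients are measured at three points in time, a repeated-measures ANOVA would be the stricter choice; f_oneway treats the three sets of scores as independent groups and is shown here only for simplicity.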

With analysis of variance, you can run very large studies. So we began to incorporate these techniques into our field, and we saw the emergence of what are called pre-experimental designs: formal studies involving systematic measurement before and after treatment with just a single group. That's the defining characteristic of a pre-experimental study.

Then people began to do what are called quasi-experiments, where you have more than one group. You have two groups, three groups, maybe more than that, measured post-test only, or perhaps pre and post, or even pre, post, and follow-up. Quasi-experimental studies are characterized by having groups that get different exposure to interventions, or non-intervention, or alternative treatments, and so forth, but the groups aren't created with random assignment. So we began to develop a robust body of correlational and pre-test post-test studies involving t-tests and ANOVAs and so forth.

The limitation of quasi-experiments is that because the groups are created through non-random assignment, there might be some type of pre-existing differences between the groups that aren't readily evident and that might cloud the results. However, we can use these designs to investigate all kinds of very interesting things. Groups are created naturally, non-randomly. For example, some clients in an agency might get brief treatment, while another group of clients might be assigned, non-randomly, to get long-term treatment. And we can look to see if the effects of long-term treatment and brief treatment are similar or different. We could look to see if Rogerian counseling versus psychodynamic therapy produced any different outcomes, and so forth. Some clients are going to get Rogerian counseling, some are going to get psychodynamic therapy. But again, the limitation of quasi-experiments is that there might be some type of pre-existing difference. Perhaps more intelligent people would be assigned to get psychodynamic therapy and other folks would get Rogerian counseling.

To control for that, the technique of random assignment was created. Here, you'd have a group of people who qualify for an intervention study and who agree to be randomly assigned to different conditions. You tell them honestly, "We're not sure which is better, and you could help us by agreeing to receive one or the other. And if we find out that one is better when the study is completed, and you were assigned to the group with the less positive outcome, we will then provide you with the treatment you did not get."
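Mechanically, random assignment is simple. Here is a minimal Python sketch, with hypothetical client identifiers, of shuffling a consenting pool and splitting it into two conditions:

```python
import random

# 40 hypothetical consenting clients.
participants = [f"client_{i:02d}" for i in range(1, 41)]

# A fixed seed is used here only so the allocation could be reproduced
# and audited; a real trial would specify its allocation procedure
# in advance (e.g., in the IRB protocol).
random.seed(42)
random.shuffle(participants)

midpoint = len(participants) // 2
intervention_group = participants[:midpoint]
control_group = participants[midpoint:]
print(len(intervention_group), len(control_group))  # 20 and 20
```

The shuffle-then-split approach guarantees equal group sizes, which flipping a coin for each person individually would not.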

So we began to do experiments, but not a lot of them. In the early 1970s, Joel Fischer published a review of quasi-experiments and true experiments in social work, and he found only about 17 in his big book on the effects of social casework. So there wasn't, apparently, a large body of true experiments being done in our field.

When I was a master's student and a PhD student, I was explicitly taught that experiments were very rare. They were hard to do. They might be unethical. They were too difficult. And that there were somehow better ways to do evaluations of outcome. I've always been concerned about evaluating outcomes, because that's what the field is all about: really helping people.

While I was sort of marinating in this literature that said experiments are rare and difficult and hard to do and maybe unethical, I kept encountering experimental studies done by social workers. And I slowly began building up a bibliography, if you will, of these studies. And much to my surprise, I found an immense number of them.

When I ask students in class, or other faculty at gatherings, how many true experiments involving random assignment they think have ever been published in English by social workers, they often give small numbers: 10, maybe 50, or 100.

Well, in 2015, I published a bibliography of randomized experiments in social work, and at that time I had accumulated, and actually listed in the bibliography, more than 740 randomized experiments published in English-language journals by social workers. That is an immense number, and it puts to bed the idea that these types of designs are somehow rare or unusual.

In 2013, I published a small book called Quasi-Experimental Research Designs. As this bibliography grew, I realized maybe I could write a new book on experimental designs in social work where the defining characteristic is people were randomly assigned to different conditions.

So I signed a contract with Columbia University Press and began writing a book, which took about three years of part-time work to do. As you say, it's called Experimental Research Designs in Social Work: Theory and Applications. With this book, we have an online bibliography of these experiments, which I ordered by year; the earliest one I've been able to find is from the 1940s. I found over a thousand experiments, which is a remarkably large number.

In the 1940s, when we began doing experiments as a social work discipline, that's about the same time that clinical psychology began using randomized experiments in psychotherapy and medicine began using randomized designs to evaluate medications. So we were right there at the very beginning of randomized evaluations of different types of practices. We have nothing to be embarrassed about there.

With a body of well over a thousand studies, and I'm very sure I've missed a lot, we can be very proud of this incredibly large body of knowledge that we've accumulated. In my book, I wrote about what an experiment is and why we need them, and I wrote about the philosophy of science behind experimental studies. We don't have a large literature on philosophy in social work, and an even smaller literature on the philosophy of science.

So, I devoted a very long chapter to that topic. And then I talked about designs that use post-tests only.

I can give you an example of an experiment that, unfortunately, a student chose not to do. She was the office manager of her husband's dental practice, and she didn't know what to do for her DSW capstone project. I asked if her husband saw children, and she said, "Yeah." I asked if some of the children came from families who received Medicaid, and she said, "Yeah." I said, "Well, aren't kids who are on Medicaid eligible for two dental cleanings and exams every year for free?" And she said, "Yeah." I said, "You have some families who don't do this." And she said, "Yes."

I said, "There's your project. Pull out all the files of the kids who are eligible for a free cleaning and dental exam. Then randomly assign half of them to get a message encouraging the families to bring their kid in for the exam, while the other half don't get that message. Then follow up three months later to see if you had an uptake in the free cleanings." That would be a randomized experiment, post-test only. One group gets the intervention, which is the prompting message or letter or email to come get your kid an appointment, and the other group gets nothing.

I said, "This could be free. It wouldn't cost you any money. You don't need a grant. And it wouldn't take too much effort. You could have the staff do all of this." It would have been a very beautiful, elegant study. Unfortunately, she chose not to do it. But it just goes to show that if you view your environment through the experimentalist lens, you can find low-hanging fruit for experimental opportunities everywhere, in every single field of practice.
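For readers who want to see the mechanics, here is a minimal Python sketch of how that post-test-only design could be randomized and analyzed at follow-up. The number of eligible files and the follow-up counts are invented purely for illustration:

```python
import random
from scipy.stats import chi2_contingency

# Hypothetical: 120 eligible children's files, shuffled and split in half.
eligible_files = list(range(120))
random.shuffle(eligible_files)
reminder = eligible_files[:60]      # families who get the prompting message
no_reminder = eligible_files[60:]   # families who get nothing

# Three months later, tally who came in for the free cleaning
# (invented counts):       came in   did not
contingency = [[27, 33],   # reminder group
               [14, 46]]   # no-reminder group
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```

Because assignment was random, a meaningful difference in uptake between the two groups can reasonably be ascribed to the reminder itself.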

Jonathan Singer: So that's a really interesting example of something that's low cost, and I imagine would have made a difference, right? Getting the nudge to come in and get the cleaning. And you said that was a post-test only experimental design, right?

Bruce Thyer: Yeah, there are different ways that we can classify true experiments, and one type is the post-test only design, where you have measures only after treatment was received. These are very elegant, because if you have a sufficiently large sample size divided randomly into two groups, and by sufficiently large I mean a minimum of 20 to 30, more is better, you can be pretty sure that pre-treatment, pre-assignment, the groups are equivalent on all relevant dimensions: severity of psychopathology, demographics, age, race, gender, all that stuff. So we can just assume pre-treatment they're the same. And then when you do the post-test, after one group has received the intervention and the other has not, any differences at post-test can logically be ascribed to the intervention the one group got.
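The equivalence claim is easy to check with a toy simulation. The sketch below, using an invented baseline severity score, randomly splits 60 hypothetical clients into two groups of 30 and compares the group means:

```python
import random
import statistics

random.seed(1)
# Invented baseline severity scores for 60 clients (mean 50, SD 10).
severity = [random.gauss(50, 10) for _ in range(60)]
random.shuffle(severity)
group_a, group_b = severity[:30], severity[30:]

print(f"Group A mean severity: {statistics.mean(group_a):.1f}")
print(f"Group B mean severity: {statistics.mean(group_b):.1f}")
```

Any single randomization can still be unlucky, but on average the group means land close together, which is the logic that lets a post-test-only difference be attributed to the intervention.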

But then we can also do true experiments where we have pre-tests and post-tests, and we can add follow-ups at one month, three months, six months, a year, or maybe even years.

And we also have a type of experimental design called a vignette study, which is used to study clinical decision-making. Here we could write up a case scenario describing, let's say, a single-parent family in which the mother is described as either white or Black, with everything else in the one-page printed vignette identical, and then give it to child protective services workers on a random basis. They read only one version and are asked to make a decision: should this child be removed from the home or not? That way you can see if race plays a role in influencing social workers' decisions about child removal. There is a large literature on clinical decision-making using these true experimental vignette studies. They're not outcome studies, but they are still true experiments.
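Here is a minimal Python sketch of the vignette mechanics; the scenario text, worker identifiers, and group sizes are all hypothetical. The key step is that each worker is randomly handed exactly one version of an otherwise identical case:

```python
import random

# Hypothetical one-page scenario; only {race} varies between versions.
VIGNETTE = ("A single mother, described as {race}, has twice this month "
            "left her 8-year-old home alone overnight...")

workers = [f"cps_worker_{i:03d}" for i in range(1, 101)]
random.seed(7)
random.shuffle(workers)  # shuffle-and-split yields two equal groups of 50

version_white = workers[:50]   # these workers read the "white" version
version_black = workers[50:]   # these workers read the "Black" version

# Each worker reads only their one version and answers a single question:
# "Should this child be removed from the home?" Comparing removal rates
# across the two versions tests whether race influenced the decision.
print(VIGNETTE.format(race="white"))
```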

So, I talk about post-test only studies, I talk about pre-test post-test studies, and I talk about different refinements in experimental designs. Then I have separate chapters on recruiting participants, particularly recruiting participants from diverse and historically underrepresented groups, which is very, very important. I talk a little bit about types of designs that permit causal inference without using a true experimental design, such as interrupted time series designs, and then I talk about the ethical considerations that go into doing true experiments. Usually, you need to have institutional review board approval to do these types of studies.

So, it's a substantial body of literature, one that every social worker, certainly at the doctoral level, should become familiar with, because it's an amazingly valuable tool. If you look at the bibliography of over a thousand true experiments authored by social workers, you're going to find that they address many interventions: psychosocial treatments, community interventions, family work, group work. You can look at social policies through the eyes of an experimentalist. I found a surprisingly large number of social workers involved in conducting pharmacological trials of psychotropic medications and medications for other bodily ailments, and also other types of interventions like ECT, electroconvulsive therapy, and massage therapy (not together).

But that's just an example of the point that if we have an intervention that is expected to produce a change in people, it can usually be the subject of a true experimental study. There have even been true experimental studies looking at the effects of prayer on people's health and well-being. So again, if you expect it to have an influence, you can test it with a true experiment. We also look at lots of different problems: certainly all the DSM conditions, and then non-DSM conditions, domestic violence for example, child abuse, spouse abuse. Virtually any problem that social workers are involved with has been subjected to some type of experimental research, and these studies have been done in many, many countries.

A lot of what I found in my bibliography comes out of China, out of India, out of Africa, as well as Europe, and some from South America. So as for the idea that these are somehow Western-oriented methodologies: much of the methodology was developed in the West, but there have also been huge contributions from people all around the world, both to experimental methodology and to the actual design and conduct of experiments.

And as I alluded to, sometimes these studies can cost virtually nothing. You don't need a grant to do them. And if you're not really adept at statistics, you can always hire a statistician; they'll work for you for pay, or in return for co-authorship, so you can be sure that your statistical analysis is really up to snuff and includes proper measures of effect sizes and advanced topics like that.

So I think that experiments are one of the very best ways to determine the real effects of an intervention. If we have, let's say, a placebo control condition of some type, we can subtract the positive effects of the placebo from the positive effects seen with the real therapy to get an appraisal of the genuine effects of the real therapy minus placebo. If you have a no-treatment control group, you can subtract any improvement seen in the no-treatment control group from that obtained by the people who got the real treatment, to figure out how much of people's improvement comes from the passage of time, and so forth. So you can have all kinds of comparison conditions: different treatments, short versus long, medicine versus psychosocial interventions, and so forth. This is just a tremendously valuable tool, and I hope that my book generates some traction for people becoming encouraged to design experiments.
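The subtraction logic is simple arithmetic. Here is a toy Python example; the improvement rates are invented and do not come from any real study:

```python
# Hypothetical proportions of clients who improved in each condition.
therapy_improvement = 0.62       # improved with the real therapy
placebo_improvement = 0.31       # improved with placebo contact alone
no_treatment_improvement = 0.12  # improved with the passage of time alone

# Genuine effect of the therapy beyond placebo:
genuine_effect = therapy_improvement - placebo_improvement
print(f"Therapy beyond placebo: {genuine_effect:.0%}")                      # 31%

# Effect of the therapy beyond spontaneous improvement:
effect_beyond_time = therapy_improvement - no_treatment_improvement
print(f"Therapy beyond spontaneous improvement: {effect_beyond_time:.0%}")  # 50%
```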

We have a lot of people who do surveys, or who do secondary analysis of existing databases and so forth. Just last week I came across a phrase some researchers use to describe secondary analyses. I don't call them this myself, but the term I found was "parasitic research," because you are relying upon the work of other people: you do no original data collection of your own, and you have no control over how variables are measured and so forth. That article spoke of secondary data analysis in a disparaging way, and I personally don't see it that way. But if you're truly interested in evaluating outcomes, then by all means consider doing an experiment. It is not beyond the reach of most people.

And as you know, Jonathan, for 33 years I've been editing a journal called Research on Social Work Practice. It's produced by SAGE Publications, and we have an almost exclusive focus on publishing outcome studies: pre-experiments, quasi-experiments, and true experiments. Another virtue of doing an experimental study is that true experiments are usually the building blocks for an approach called systematic reviews and meta-analyses, and I think you have some podcasts on those topics in your series here.

So doing an RCT, a randomized controlled trial, on some type of intervention will allow somebody else in the future to come along and add your individual study to a meta-analysis or a systematic review. And by aggregating these, we can have a more robust appraisal of what the real effects of treatment are in many different contexts.

~~~END~~~

References and Resources


APA (7th ed) citation for this podcast:

Singer, J. B. (Producer). (2025, July 1). #144 - Experimental Research Design (Part 1): Interview with Bruce Thyer, PhD, LCSW [Audio podcast episode]. Social Work Podcast. https://www.socialworkpodcast.com/2025/07/Thyer2.html
