Monday, April 14, 2008

Social Work Research for Practitioners: Interview with Allen Rubin, Ph.D.

Allen Rubin, Ph.D. [Episode 37]

In today's podcast, I talked with Dr. Allen Rubin about research and social work practice. You might recognize the name Rubin from the widely used social work research text "Rubin and Babbie," or as it is officially known, Research Methods for Social Work. In addition to the Rubin and Babbie text, he has authored well over 100 publications, most recently focusing on evidence-based practice.

Download MP3 [27:10]



Since so many of us have learned research from the Rubin and Babbie text, myself included, I thought it would be appropriate to interview Allen for the first social work podcast on social work research. I'm excited about offering a series on social work research because research is essential to good social work practice. Most practitioners I know have an impressive command of assessment, diagnosis, intervention, and the myriad factors that go into providing services to clients. These same practitioners get fairly lost in even the most basic research articles and can't distinguish an ANOVA from a logistic regression to save their lives. So, I thought I would take this opportunity to find out what research concepts Allen Rubin thought were essential for social work practitioners to understand.

During our interview, Allen made it clear that there is at least an entire textbook's worth of research concepts that social workers should know. He was kind enough, though, to identify and define a few essential concepts that he thought social workers needed to understand in order to be informed consumers of empirical research. My friends will be vindicated knowing that he did not include ANOVA and logistic regression in his list of key concepts. He did, however, suggest that social workers should understand the difference between reliability and validity, how to identify sources of error in measurement, and how to recognize researcher and respondent bias. He talked about these concepts within the framework of evidence-based practice, and he distinguished the process of evidence-based practice from evidence-based practices.


Dr. Rubin is the Bert Kruger Smith Centennial Professor in the School of Social Work at The University of Texas at Austin, where he has been a faculty member since 1979. He has served as an editorial reviewer for 14 professional journals, was a founding member of the Society for Social Work and Research, and served as its vice president from 1996 to 1998 and then as its president from 1998 to 2000. He has received many awards: he was co-recipient of the Society for Social Work and Research Award for Outstanding Examples of Published Research, the 1993 recipient of the University of Pittsburgh School of Social Work's Distinguished Alumnus Award, and the 2007 recipient of the Council on Social Work Education's Significant Lifetime Achievement in Social Work Education Award. A native of Pittsburgh, Pennsylvania, Dr. Rubin continues to be a big fan of the Pittsburgh Steelers football team.

Dr. Rubin can be heard sharing his advice for young social work researchers [Episode 38] and his current research and publishing projects [Episode 39] at the Social Work Podcast. This series of interviews was recorded using Skype.

Transcript

Interview
[02:22]
Jonathan Singer:  So Allen, thanks so much for being here and talking with us today about social work and research.  And first question, how did you get into research?

Allen Rubin: Well, I got my master’s degree a long, long time ago with a major in community organization.  I wanted to be like Barack Obama.  [laughs]  I did.  I didn’t know Barack Obama in those days, of course, but I wanted to do pretty much the same thing that he was doing when he first started out.  I ended up getting a job in a community mental health program, and I was doing various community outreach mental health kinds of work.  They also wanted me to learn how to be a family therapist, and they were training me to be one.  We were being told at that time—this was back in the very early 1970s, right around 1970—that we should view all mental health problems as family problems; that there were no sick individuals, there were only sick families.  There were no exceptions to that, I mean, even for things like schizophrenia.  As we now know, that’s way off, that’s wrong, and that’s harmful, but at the time that’s what we were being taught, and I was being trained in that.  I remember sitting in the in-service training sessions with two very prestigious psychiatrists.  After being in the training for quite a few weeks, if not months, and hearing many of their lectures, seeing their videos, reading their materials, and so on, I got the courage, amidst a room full of psychiatric residents, to ask a question, which was basically what scientific evidence they had that the stuff they were telling us to do was really effective.  One of the psychiatrists responded by simply gazing at me, rubbing his beard, being silent for a moment, and then sort of looking around at the others and then at me and saying something to the effect that I should look inward to think about what personal insecurities I had that would prompt me to need such certainty.

[05:10]
Jonathan Singer: So he tried to do a little psychoanalysis on you there in the moment.

Allen Rubin: Right.  He tried to defend his own lack of evidence for what he was teaching by trying to make me the problem.  So that was one of quite a few things that I was experiencing and observing in those days, a year or so after getting my MSW, that made me feel that, you know, I really wanted to go back and get my Ph.D. and learn how to do evaluation research, to see what really helps people and whether the stuff that we were being taught was really helpful, and so on.  Of course, as most people in our field now know, the research that was just beginning to come out at around that time, and that really blossomed throughout the 1970s and 1980s, showed that calling the families of people with schizophrenia the “cause” of the schizophrenia was not just unhelpful, it was harmful.  I don’t need to elaborate on that.  I am sure that people already know about that stuff but—

[06:31]
Jonathan Singer: We actually did a podcast—I interviewed Carol Anderson, yeah, about psychoeducation.

Allen Rubin: Right, yeah.  At any rate, I think that’s kind of interesting that the question I asked was on that topic and then all that research came out since showing that stuff.  So that’s how I got into research.  I went back and got my Ph.D. and learned how to do research.

[06:55]
Jonathan Singer: Okay, and you said evaluation research—could you clarify for somebody who might not know what evaluation research is, what that means, what some of the parameters are for what evaluation research is?

Allen Rubin: Yeah, it’s pretty broad.  In fact, in my book, I make the point that in its broadest definition it’s hard to distinguish evaluation research from social work research in general.  Back in the old days, most people thought of evaluation research as outcome research: evaluating the effectiveness of programs, policies, interventions, and so on.  I still feel, just personally, that that’s the most important part of evaluation research, but it’s much broader than that.  It has to do with any kind of research that could be useful in informing practitioners, administrators, and so forth about how to improve what they’re doing, whether that means monitoring who’s using their services or whether services are being implemented as intended.  It might have to do with qualitative studies on processes in programs.  It might have to do with needs assessment: What do clients need?  What are the needs in the community that aren’t being met?  So in other words, in addition to being used to evaluate whether programs are effective, it can be used for the purpose of planning programs (What services should we provide? What needs are out there?), for monitoring program implementation, and so on, beyond just evaluating outcomes.  And naturally, if you only evaluate outcome and don’t evaluate implementation, and it turns out that things aren’t as effective as you had hoped, the question remains unanswered as to whether the lack of effectiveness has to do with the idea itself, the theory of the intervention, or whether it was a great idea, a great intervention, that just wasn’t implemented properly.  So evaluation research covers all of that, and some of the best evaluation research looks simultaneously at implementation and outcome.

[09:25]
Jonathan Singer: So it sounds like it serves at least those two purposes and probably others as well.

Allen Rubin: Yeah.

[09:32]
Jonathan Singer: Now, you mentioned that evaluation research, as it is now defined, and you mentioned in your book—and I’m assuming you’re referring to “Rubin and Babbie,” the research text that most schools use—

Allen Rubin: Right.

[09:44]
Jonathan Singer: —that evaluation research, the way it’s defined now, is almost synonymous with social work research.

Allen Rubin: Yes.

[09:52]
Jonathan Singer: And I was just wondering how you might distinguish social work research from research in other disciplines?

Allen Rubin: Well, the easy answer is [inaudible 10:01]. [laughs] You know, and what that means is that social work research is guided by the needs of social work practice.  It’s meant to guide social work practitioners, to give them the information they need to provide better services.  It is not generated by abstract theoretical questions but rather by the practical needs of social work practitioners and agencies.

[10:33]
Jonathan Singer: Okay, so really very practical research, as opposed to some of the research done in other disciplines that might be considered fundamental research, research that doesn’t have specific applications people can think of off the top of their heads.

Allen Rubin: Right, but the boundaries are ambiguous; people in psychiatry, psychology, nursing, counseling, and social work could all be doing their research on, for example, whether a particular form of intervention to treat sexually abused kids is effective.  So that would certainly be social work research but would also be psychiatric research.  It would also be psychological research.  I think that the main difference is that in fields like psychology, there’s a greater likelihood that research might not be guided by the practical questions.  So I guess what I’m saying is that there’s a lot of overlap.

[11:45]
Jonathan Singer: Okay, sure, and that makes sense because I think ultimately all disciplines would say that we’re just trying to help people.

Allen Rubin: Yeah, definitely.

[11:53]
Jonathan Singer: Yeah.  Now, you mentioned that one of the things that seems to distinguish social work research or is a hallmark of social work research is that the research is there to help the practitioners to do their jobs better, and I was wondering if you could talk about what you think of as some of the most important research concepts that a social work practitioner should know about.

Allen Rubin: Okay, let me answer that in the framework of evidence-based practice.

[12:27]
Jonathan Singer: Okay, and what is evidence-based practice if you can define that?

Allen Rubin: Evidence-based practice is a process that involves approximately five steps.  First, the practitioner identifies a question and a need for information that they have: What’s the best way to intervene with a particular type of client?  How do I improve the utilization of my shelter by homeless people?  And so forth.  I mean, there are an infinite number of possibilities.  So it begins with a question.  Then the second phase is searching, maybe through the internet and professional literature databases, to find solid research evidence to guide the answer to the question.  The third phase is to appraise the quality of the evidence you find in terms of things like its validity and so forth.  The fourth phase is integrating that appraisal of the evidence with your practice expertise and your knowledge of idiosyncratic client attributes, because it just might be that the stuff with the best evidence isn’t going to fit your client, or maybe it doesn’t fit what they’re able to do, and so you have to say, “That’s nice. That’s Plan A, but it’s not feasible, or my client won’t go for that, so I have got to go to Plan B and hope for something else.”  So in other words, it’s not just the research evidence itself that guides what you do; it’s the integration of the best evidence with your practice expertise and client attributes, and based on the integration of all of that, you make decisions: “Can I go with Plan A?  If not, what’s the next best thing I can do that might be more feasible, that may not have quite as great evidence as Plan A but still has some good evidence?  I can do that.”  Or you might even have to go to Plan C.  Then the final stage of the evidence-based practice process is monitoring the outcome, whether it’s an intervention, or a policy, or an administrative procedure, or whatever it is: monitoring what happens to see if the desired outcome is achieved.  It may be, for example, that the intervention with the best evidence supporting it isn’t going to work for your particular client, so it’s important not to just implement the intervention and say, “That’s it.”  You have got to see what happens, you have got to evaluate, and then perhaps modify what you are doing based on the outcome of the evaluation.  So that’s evidence-based practice, and that often gets confused with the plural term evidence-based practices, which refers not to the process but rather to a list of specific interventions that have been found to be supported by good evidence.

[15:56]
Jonathan Singer: Like for example, Dialectical Behavior Therapy is often cited as an evidence-based practice for women with self-harming behaviors who have a diagnosis of borderline personality disorder.

Allen Rubin: Right.

[16:09]
Jonathan Singer: That would be an evidence-based practice on the list, but that’s not what you are calling the process of evidence-based practice.  Picking that is not necessarily the process of evidence-based practice; it’s merely implementing something that has been identified as an evidence-based practice.

Allen Rubin: Exactly, and it’s the process that you go through that leads you to pick that or something else that is the evidence-based practice process, and that process definition reflects the original intent of the evidence-based practice movement.  But over the years, it’s gotten confused, particularly when managed care companies come to practitioners and say, “Here’s a list of seal-of-approval interventions, evidence-based practice interventions, or evidence-based practices that we’ll reimburse you for.”  And then practitioners often misconstrue evidence-based practice as being that list, as opposed to a process that, if they went through it, might lead them to go back to that insurance company, that managed care company, and say, “Hold on, the evidence supporting this particular intervention was based on studies that didn’t involve any clients with co-morbidity, and with co-morbidity there’s evidence that maybe something else might be better,” and so forth.  It’s really important to distinguish the process from the seal-of-approval lists, the lists of approved interventions.

[17:35]
Jonathan Singer: And it really sounds like what you’re talking about—about the evidence-based process—I mean, it sounds like the research process.  It sounds like you ask a question, and you try and gather the evidence to find out what’s been done before.  And then you try something, and you evaluate whether or not that’s worked.  It sounds like the research process.

Allen Rubin: Right, except that you’re not doing the original research.  The process that you’re going through is finding and appraising the research that others have done, and then, of course, you’re implementing research techniques in the final stage when you’re monitoring what happens after you implement whatever it is that you decide to do.  Now, that then sets the framework for your earlier question.  You know, there are many things to know, and here are the things I think are really, really the most critical within the evidence-based practice context.  That would be, one: understanding measurement, validity, and reliability; understanding sources of error in measurement, things like social desirability biases, the way interviewers or researchers can bias respondents in the answers they get, how they can be biased in their ratings, and how people can be biased in what they tell you.  So understanding measurement bias, sources of error in measurement, particularly bias, and understanding the concepts of reliability and validity, so that if you are engaged in the evidence-based practice process and you read a study, you can accurately and critically appraise whether its results should be taken seriously or not, given the measurement procedures that were used.  For example, one of the early studies on EMDR was based on a therapist who was continuously asking the client, on a scale of zero to 10, how much distress they felt, and over a 90-minute session, having a therapist ask you that umpteen times is going to put a lot of pressure on you eventually to say you’re not nearly as distressed at the end of the session as you were at the beginning, even if you feel very distressed, because to say otherwise would be to insult the therapist and to make yourself look like a failure as a client, like you’re wasting your money.  So that’s one example of how both the therapist and the client could be experiencing all kinds of bias in responding as to whether the therapy is helping.

[20:24]
Jonathan Singer: And that’s an example of social desirability.

Allen Rubin: It’s social desirability bias on the part of the client, and there’s another term for what’s happening on the part of the therapist, which I think is called experimenter expectancies, where the person that’s collecting the data is, if not explicitly then certainly implicitly, pressuring the client to give them the answers that they want to hear.  So there’s bias going on at both ends, and I cite that as an egregious example.  Here’s another egregious example: there was a study published not too long ago evaluating the effectiveness of an intervention with kids who were delinquent, and one of the things they looked at was whether the intervention was successful in motivating the kids to do their homework.  These were kids who were at risk for delinquency; maybe they were already referred for delinquency, I’m not sure, I can’t remember, but there was obviously reason to suppose that these kids probably weren’t going to do their homework very often without any intervention, and part of the intervention was to pay them a monetary reward for doing their homework.  Now, how do you think the researchers collected data on whether the kids were really doing their homework, these kids who were being paid to do their homework?  Well, I couldn’t believe my eyes when I read the article: they simply asked them.

[21:56]
Jonathan Singer: [laughs] It was self-report with these delinquent kids who were getting paid to be a part of the study.

Allen Rubin: Yeah, and they were saying, yeah, well, you know, they were doing their homework more after the behavioral intervention was implemented than before, when they weren’t getting paid.  And that’s another example of an egregious bias.  Now, I think it’s important for social work practitioners to learn to distinguish between egregious flaws, what I would call fatal flaws, and minor flaws that are more acceptable.  And so that’s one area, the measurement area, that I think is really important: that they understand these concepts of measurement error, bias, and so forth, and reliability.  If you are asking people from another culture questions in language that they don’t fully understand, you’re going to get haphazard responses that don’t mean anything, because they don’t understand the language.  Same thing if you’re using big words that kids don’t understand.  They might say, yeah, I agree, or I disagree, or whatever, just to say something, without really knowing what they’re answering.  So that’s the reliability concept, the validity concept, and the bias concept with measurement—really important.

The other really critical concept that I think they need to learn is this concept called internal validity, which has to do with what’s really causing what.  When a study is done to evaluate whether an intervention is effective and clients improve from pre-test to post-test, what really caused the improvement?  If you go to survivors of Hurricane Katrina in the immediate aftermath of the disaster and assess the trauma symptoms they’re experiencing at that time, and then compare that to how they’re feeling a year or so later, you don’t have to do any intervention to find that with time most of them are going to be doing better.  Now, some of them are going to get full-blown, chronic PTSD and not be doing better, but if you’re taking a representative sample of all Katrina survivors, not everybody is going to get PTSD, but most of them, perhaps all of them, are going to have trauma symptoms in the immediate aftermath of the disaster, trauma symptoms that, if they don’t develop into full-blown PTSD, are going to ameliorate over time.  Now, if you just do a pre-test post-test study and you supply some kind of trauma intervention, you’re going to find that they’re doing better after your intervention, but what really caused it?  Was it your intervention, or was it merely the passage of time, or was it other contemporaneous events, such as family members, friends, and so forth from other areas coming to their aid?  And then there’s this concept called statistical regression that you can read about in my book; I won’t go into it here because time does not permit.  So there’s a whole slew of things that could explain why people are doing better after they receive an intervention, a program, or a policy than they were doing before.  And so when you, as a practitioner engaged in evidence-based practice, are looking at the research to see, “Hey, what really works? What’s really effective?” you need to be able to distinguish between those studies that adequately controlled for these alternative explanations and those studies that did not.  That means looking at whether they were internally valid: Did they really control for those things?  And, as I said earlier, what measurement procedures did they use?  How reliable, valid, unbiased, and objective were those measurement procedures?  Those are the things.  I mean, there are many, many things to learn about research, but if I had to name the two or three most critical things for practitioners to know, that’s what I would say, and that’s what I emphasize when I do continuing education workshops for practitioners on evidence-based practice.

[26:20]
Jonathan Singer: I appreciate you distilling the thousands of concepts out there down to perhaps the most important ones for practitioners to know, because I know there are a lot of practitioners that listen to this podcast, and I think sometimes the sheer number of research concepts can be overwhelming to folks in the field.  So, I appreciate that.

[END]
APA (6th ed) citation for this podcast:

Singer, J. B. (Producer). (2008, April 14). Social work research for practitioners: Interview with Allen Rubin, Ph.D. [Episode 37]. Social Work Podcast [Audio podcast]. Retrieved from http://www.socialworkpodcast.com/2008/04/social-work-research-for-practitioners.html
