Matthew Nock, Ph.D., is one of the leading experts on n-of-1 experiments and single-case experimental designs. He became a MacArthur Fellow in 2011, receiving the MacArthur “Genius” award. He studied at Yale and is now a professor at Harvard, where he runs the Nock Lab. In addition to his research, Dr. Nock has advised the World Health Organization’s World Mental Health Survey Initiative, the National Institutes of Health, the American Psychological Association, and other prestigious health organizations.
1) Assuming I have captured the basic methods of single-case experimental design (SCED):
- Identification of specific target behavior
- Continuous and valid measurements
- A baseline period (data is gathered before the intervention is applied)
- Stability of the specific target behavior (target behavior changes only when the intervention is applied)
- Systematic application of intervention
What are the considerations, risks and advantages for someone partaking in self-experimentation — someone who wants to use these methods to help determine the efficacy of a new habit or practice (e.g. determining the effect of meditation on mood)?
These are the basic methods, but it is important to note there are some variations in how you would apply different types of single-case experiments. Once the intervention is applied, something else happens next, right? For instance, there is the “AB-AB design,” also known as the “withdrawal design.” In this application, you apply the intervention, then remove it and examine whether the behavior/condition reverts to the baseline level, and then you reapply the intervention. The A phase is always the baseline and the B phase is always the intervention, so you do AB, then AB again, and measure the change.
For instance, say you wanted to see whether a reward program for not smoking cigarettes works for you. You start with cigarette smoking as your baseline. Let’s say you smoke two packs a day. Now you apply the reward (intervention). With the reward in place, you now smoke half a pack a day. You then remove the reward (intervention), going back to baseline (smoking without a reward for not smoking), and you see if you go back to two packs a day. You then reapply the intervention (in this case the reward) in an attempt to determine that it is when, and only when, the intervention is applied that your behavior changes. This method helps you rule out alternative explanations. For instance, in this hypothetical example you rule out that you stopped smoking because of some historical event, or because your wife happened to tell you she would leave you if you didn’t stop smoking at the exact time you started the intervention.
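To make the AB-AB logic concrete, here is a minimal sketch of how you might summarize the four phases of a withdrawal design once the data are logged. The phase lengths and daily cigarette counts below are invented for illustration; they are not from any real study.

```python
# Minimal sketch of summarizing an AB-AB (withdrawal) design.
# Phases: A1 = baseline, B1 = intervention, A2 = withdrawal, B2 = reintervention.
# The daily cigarette counts are made up for illustration.

from statistics import mean

phases = {
    "A1 (baseline)":       [40, 38, 41, 39, 40, 42, 40],
    "B1 (reward applied)": [22, 15, 12, 10, 11, 9, 10],
    "A2 (reward removed)": [18, 25, 30, 34, 36, 38, 39],
    "B2 (reward applied)": [20, 14, 11, 10, 9, 10, 8],
}

for label, counts in phases.items():
    print(f"{label}: mean = {mean(counts):.1f} cigarettes/day")

# Evidence for the intervention looks like: B1 and B2 means well below A1,
# with A2 drifting back toward the baseline once the reward is withdrawn.
```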
What you are trying to establish is that the result of the experiment comes from the intervention and nothing else. You can do an AB-AB design as described, or, if you have access to other participants, you can do a multiple-baseline design. In this design, the first person has a one-week baseline and then you apply the intervention; the second person has a two-week baseline, then you apply the intervention; the third person has a three-week baseline, then you apply the intervention. Again, if you can show that when, and only when, you apply the intervention something changes, you have evidence that your intervention causes change in people.
A single person can also use a multiple-baseline approach across behaviors. For instance, say I am trying to change my smoking, my drinking and my eating. I could apply the intervention to my smoking, then apply it to my eating, and then apply it to my drinking. If I see that when, and only when, I apply the intervention my target behavior changes, that provides evidence that my intervention is effective. You can apply the multiple-baseline approach across people or across behaviors.
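Here is a similarly minimal sketch of the scheduling logic behind a multiple-baseline design across behaviors. The behaviors, baseline lengths, and start date are hypothetical placeholders.

```python
# Minimal sketch of a multiple-baseline schedule across behaviors.
# Each behavior gets a progressively longer baseline before the same
# intervention is introduced; the weeks and date below are made up.

from datetime import date, timedelta

study_start = date(2024, 1, 1)          # hypothetical start of observation
baseline_weeks = {"smoking": 1, "eating": 2, "drinking": 3}

for behavior, weeks in baseline_weeks.items():
    intervention_start = study_start + timedelta(weeks=weeks)
    print(f"{behavior}: record baseline from {study_start}, "
          f"apply intervention starting {intervention_start}")

# If each behavior changes only after its own intervention start date
# (and not before), that staggering is the evidence for a causal effect.
```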
If someone is self-experimenting, they will want to do their best to collect their own data objectively. Using these methods on yourself, you run the risk of tricking yourself into seeing something that is not there or failing to see something that is there. When it is a clinician or a researcher observing you, they are going to be, with their own objective eyes, carefully measuring some behavior of interest. If you are not carefully and objectively measuring what it is you want to change, again, you might see change that is not there or fail to see change that is there. It is important to do your best to measure objectively.
The benefit of this approach is you are the one following the data. You have a real-world answer to whether or not your intervention is working. It can be just a little bit of extra work to do something like this, to quantitatively, objectively measure your own behavior. However, in my opinion, that is also a benefit: knowing what’s effective; knowing what can change your behavior at a fairly minimal cost.
2) For many, “lifestyle design” is about optimization. For example, using meditation as the hypothetical again, it appears that many find benefit from only minimal exposure (Creswell, Pacilio, Lindsay, & Brown, 2014), but one could posit the effective duration is unique to the individual. Since interventions generally come with an opportunity cost, reducing this cost has a benefit. What are some good strategies for expediting the determination of the minimum effective dose (MED) of any given intervention?
In my mind, there are two philosophies about this. One is to start small, measure carefully the effects of the small dose/intervention, and then increase, increase, increase, until you see maximum benefit, at which point you know how much is needed. The other is the opposite: start with the maximum dose and then work down from there. Each has pros and cons, right? It certainly depends on what it is you’re using as an intervention. If there is any toxicity associated with the intervention (drugs are an obvious example), you would want to start very small and work up to see what dose is needed to cause change. The benefit here is you are not exposing the subject to toxicity; the downside is it could take longer to see an effect, and the person could be engaging in their harmful behavior, or suffering from disease, for longer than if you had given them more from the onset. On the flip side, if you start with the maximum dose, you generally will know right away whether it has an effect, and then you can work down from the initial amount. The downside is you are now exposing the subject to any toxic side effects from a potential overdose. If you are certain the intervention has no toxicity and limited risk, I think the best thing to do is start with the maximum amount and then work down from there to see how much is needed to maintain the effect.
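As an illustration of the “start small and work up” philosophy for a low-risk intervention, here is a minimal sketch of a dose-finding loop. The doses (minutes of meditation per day), the mood ratings, and the improvement threshold are all made up for the example.

```python
# Minimal sketch of a "start small, work up" dose-finding schedule for a
# low-risk intervention (e.g., minutes of meditation per day). The doses,
# mood ratings, and improvement threshold are hypothetical.

from statistics import mean

baseline_mood = [5.0, 5.2, 4.8, 5.1, 5.0]   # daily mood ratings before intervening
threshold = 0.5                              # smallest improvement you care about

# Mood ratings observed while trying each dose for a stretch of days (made-up data).
observed = {
    5:  [5.1, 5.0, 5.2, 5.1, 5.0],
    10: [5.3, 5.4, 5.2, 5.5, 5.4],
    20: [5.8, 6.0, 5.9, 6.1, 5.8],
}

for dose in sorted(observed):
    improvement = mean(observed[dose]) - mean(baseline_mood)
    print(f"{dose} min/day: improvement = {improvement:+.2f}")
    if improvement >= threshold:
        print(f"Candidate minimum effective dose: {dose} min/day")
        break
```

The “start with the maximum and work down” philosophy would simply reverse the order, stepping the dose down until the benefit begins to disappear.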
3) Technology is making the recording and analysis of self-experimentation more accessible. There is an abundance of consumer and condition-specific wearables for collecting data, ecological momentary assessment (EMA) protocols are accessible to anyone with a smartphone, and the statistical package R is free to use, giving anyone willing to take on the learning curve the ability to crunch their own numbers. What technology and innovation excites you in this area? And is there anything that is currently helping democratize one’s ability to run these types of experiments?
There are a lot of tools at the ready now with smartphones and other wearable devices, so people can collect and analyze their own data quite easily. The big gap is that people often are not going to want to learn something like an open-source statistical program. Learning a statistical program like R, even though it is free, is not a minor endeavor. People want ready-made solutions to problems, so they want an app that is turnkey and ready to go: technology that is going to monitor their behavior, apply the intervention, whatever it is. To the extent that we can create easy-to-use applications that bridge that gap for people, people will likely use them.
So yes, there is some great open-source stuff out there, but getting someone to figure out how to collect their own data effectively, then create and apply their own intervention, learn statistics (even if it is free to do), analyze their data; wow, this basically requires an intervention in and of itself to get someone to do that.
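To make the data-collection side of that workflow a little more concrete, here is a minimal sketch of an EMA-style logger that appends a timestamped mood rating to a CSV file each time it is run. The filename, the 1-10 scale, and the prompt are hypothetical choices, not part of any particular app; the resulting file can later be opened in R, Python, or a spreadsheet for the kind of phase comparisons described earlier.

```python
# Minimal sketch of an EMA-style self-report logger: each run appends one
# timestamped mood rating to a CSV file for later analysis. The filename
# and the 1-10 scale are arbitrary choices made for this example.

import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("mood_log.csv")  # hypothetical log location

def log_rating() -> None:
    rating = input("Mood right now (1-10): ").strip()
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "mood"])  # header on first write
        writer.writerow([datetime.now().isoformat(timespec="seconds"), rating])

if __name__ == "__main__":
    log_rating()
```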
The thing that excites me most right now is using wearable devices and smartphones to collect data about people and apply interventions that are beyond their own awareness. There are apps available now that allow us to collect data from people’s smartphones passively. We can monitor their GPS, we can monitor their sleep, we can monitor their activity level, who they’re calling, who they’re texting, who’s calling them, who’s texting them, and we may pick up information that can predict future behavior that people are not aware of themselves.
For instance, if a person’s activity level is decreasing, their outgoing calls and texts are not getting returned, and their sleep becomes more irregular, we might predict this person is becoming more depressed. So for a condition a person may not even realize they have, we can use information from their phone to help identify potential problems and deploy an intervention remotely before the condition causes any negative effects. We now have e-interventions, smartphone interventions, where people can engage with a quick, game-like app to try and change their behavior. The old model, where I go to a doctor, the doctor does an assessment and tells me I have a problem, and then gives me some kind of treatment, is changing. We can now go out and find people who are in need of help before they know they need it, and send them interventions that they can use and apply themselves. We can deploy this on demand, 24 hours a day, 7 days a week, whenever it works for the individual.
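To make the idea concrete, here is a toy, rule-based sketch of how passive smartphone signals might be combined into a simple risk flag. The signal names, thresholds, and decision rule are invented for illustration only; they are not a clinical model and bear no relation to any deployed system.

```python
# Toy rule-based sketch of flagging risk from passive smartphone signals.
# The field names, thresholds, and decision rule are invented for
# illustration; a real system would be clinically validated.

from dataclasses import dataclass

@dataclass
class WeeklySignals:
    activity_change: float      # % change in step count vs. prior week
    unreturned_ratio: float     # fraction of outgoing calls/texts with no reply
    sleep_irregularity: float   # std. dev. of sleep onset time, in hours

def risk_flag(s: WeeklySignals) -> bool:
    """Flag when several signals move in a concerning direction at once."""
    concerns = [
        s.activity_change < -0.30,     # activity down more than 30%
        s.unreturned_ratio > 0.60,     # most outreach going unanswered
        s.sleep_irregularity > 2.0,    # sleep timing varying by > 2 hours
    ]
    return sum(concerns) >= 2          # require at least two concerning signals

print(risk_flag(WeeklySignals(activity_change=-0.4, unreturned_ratio=0.7, sleep_irregularity=1.2)))
```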
4) You are a Harvard psychologist. You are also one of the leading experts on destructive behavior. There seems to be a resurgence of William James’ ideas lately, specifically that if we master our free will and make ourselves 100 percent accountable for our actions, this process will increase our chance of positive outcomes. Do you believe in the validity of this assertion? And, given your expertise working with people for whom this process might pose difficulties, what are some strategies to help someone increase their ability to be accountable in this area?
My department resides in a building called William James Hall, so the spirit of William James is still present. The idea of holding ourselves 100 percent accountable, as I am interpreting your question, comes down to the rewards and the costs of a behavior. If we want to change our own behavior, we need to accurately understand to what extent the behavior in which we are engaging is rewarding or beneficial. We also want to accurately understand what the costs are. We have to seriously evaluate both the rewards and the costs. For instance, if I am smoking cigarettes, I probably feel good after I smoke. In this case, what are the rewards and costs of smoking? Holding myself accountable means realizing there are benefits, but also significant costs, to engaging in the behavior. I need to weigh both, and to do so I need to accurately consider the present and future elements of the behavior.
So for me, holding ourselves accountable means realistically recognizing the costs and benefits of our behavior and weighing those carefully. If the costs are going to ultimately outweigh the benefits, then I think we have a chance of decreasing risky behavior. If the benefits are perceived as outweighing the costs, it is much tougher to change someone’s behavior. For instance, take a self-destructive behavior like cutting oneself or burning oneself: why would someone do that? It turns out that cutting yourself or burning yourself, for many people, removes aversive thoughts and feelings. This behavior has a benefit for them. For these people, the reward of removing these thoughts appears to outweigh the cost of the resulting tissue damage, and so they engage in the behavior. Getting people to stop engaging in this behavior is largely about figuring out other ways to get the same benefit from alternative behaviors that do not carry such a heavy cost.
I think the same is true with smoking, drinking and overeating, as well as other problematic behaviors. These behaviors have associated rewards, but they also can come with significant costs. To make good choices, we need people to understand and appropriately weigh the costs and the benefits. An important part of the process of behavior change is to figure out ways to have people find similar benefits that do not carry the same costs as the behavior one hopes to change. The challenge is how to get yourself to feel good and/or distract yourself from aversive psychological states without doing harm to your mind and/or body. If the spirit of your question is, “How do we increase our chance of positive outcomes?” then you can look at it as benefit minus cost equals outcome. To do this, you need accurate information about the behavior’s costs so you are not discounting and/or ignoring them. Then look at the behavior’s benefits and find suitable alternatives that offer comparable benefits without the associated costs of the behavior you are trying to change.
5) A young student has walked into your office and proclaimed they want to become the leading expert on self-experimentation. What are three rabbit holes you suggest they explore (i.e. ideas, concepts, models)?
Three rabbit holes they should explore …
1) Read up on the decades of research that people have done on single-case experiments and n-of-1 designs. There are a lot of well-worked-out methods and approaches for measuring behavior, for carefully and systematically applying an intervention to change behavior, and for observing the effect of the intervention. When you really understand these validated methods, you know when you are truly doing experimentation. We have existing study designs with which one can carefully observe the outcome of self-experimentation in an empirical manner. As opposed to reinventing the wheel, there are decades of existing work that one can build on, so mastering the currently available literature in this area is a big one.
2) Mastering new technology. As we discussed earlier, there have been significant, recent advances in technology available to people interested in experimentation in the form of smartphones, wearable devices, the Internet and free access to educational information. We have easy access to data at our fingertips now. Through technology we can easily measure our real-world behaviors. Mastering new technology will allow a person to tap into a huge new source of objective data on our behavior.
3) Once you master experimental design and you master the latest technology, the last rabbit hole I’d suggest is how to engage with, and measure, your experiments. You need to figure out how you can use advances in technology to develop new interventions based on what we already know works. Questions like, “Are we effectively using carrots and/or sticks? Are there ways we can use computers, the Internet, smartphones and wearable devices to apply new interventions?” The new frontier in behavior change is mastering the way we try to modify people’s behavior (or our own). With the right creativity, coupled with mastery of the first and second rabbit holes, there is a lot that can be done using the new tools we have at our disposal. We now have the ability to apply personalized behavior-change interventions, in real time, at scale.
There is a downside to this third rabbit hole, too, especially if you are building tools that help others self-experiment. There are now thousands upon thousands of apps out there that purport to improve health and well-being. However, by my reading, there is very little data to support that most of these apps are actually effective in any meaningful way. Moreover, there is little evidence to suggest that most of these apps will actually change anyone’s behavior. Worse, there is a financial incentive to create apps and to market them to people as, “This app will make you healthier and happier.” In my opinion, there is not a good public understanding of how to evaluate scientific evidence, which makes it difficult for most people to evaluate claims about effective treatments and/or interventions. It’s the Wild, Wild West out there.
Before scientific medicine, people just created their own methods. They could sell snake oil. They could put anything in a bag or box and sell it to us as effective. Some were and some weren’t, and many times the ones that were effective weren’t effective for the reasons people thought. Luckily, we now have a much better infrastructure: if you are going to sell some kind of FDA-approved medication, you have to know what is in it and show that it is effective in randomized clinical trials. It’s on you; you’ve got to have experimental data. I think of the app world as similar to the Wild, Wild West. People are now deploying things that they say are treatments, and there is not a good, systematic infrastructure in place to know which ones are experimentally sound and which ones are not. Similar to the thoughts expressed in my answer to the previous question, there needs to be a clear benefit to making experimentally sound apps. This benefit could be a special designation, like FDA approval or an FDA-approval equivalent: something that ensures the app has been tested, with evidence showing that it works. If the app does not have that, there should be some kind of repercussion for the makers. Until we have that system in place, I think you will continue to see a market full of snake oil.