Todd A. Ward, PhD, BCBA-D
Ours is a world enmeshed in a web of communication networks that send behavioral products around the globe in an instant…for better or for worse. Unfortunately, during Friday’s massacre in Christchurch, New Zealand, it was for the worse.
According to Wired.com, the shooter, now identified as Brenton Tarrant, sought “to exploit human behavior and technology’s inability to keep up with it to cement his awful legacy.” His live-streamed killing of 49 people on Facebook was quickly copied and reposted around the world on social media sites such as Reddit and YouTube, and broadcast on major news outlets.
Wired again noted “by the time Silicon Valley executives woke up Friday morning, tech giants’ algorithms and international content moderating armies were already scrambling to contain the damage—and not very successfully.”
Behavior analysts who study communication networks and behavioral systems, such as Ingunn Sandakar, have noted “the science of networks can help us understand how dissemination of large-scale behavior is facilitated.” One feature of networks that she highlighted was a node, which is a hub of many connections to other elements of the system.
In the age of social media, each person functions as a node, with a greater or lesser number of connections to other people in the world. Friday’s mass shooting in New Zealand illustrated how one person can leverage their position as a node to magnify the impact of horrific actions, circulating them around the world in an instant.
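To make the node idea concrete, here is a minimal sketch in Python. The toy network below is entirely hypothetical: each person is a node, and a post’s one-share “reach” grows with how connected its origin is. A breadth-first search counts how many people content can reach within a given number of shares.

```python
from collections import deque

# Hypothetical toy network, purely for illustration: each key is a
# person (node), each value the set of people they connect to directly.
network = {
    "hub": {"a", "b", "c", "d"},  # a highly connected node
    "a": {"hub", "e"},
    "b": {"hub"},
    "c": {"hub"},
    "d": {"hub"},
    "e": {"a"},
}

def reach(start, hops):
    """Count the distinct people a post from `start` reaches within `hops` shares."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # don't expand beyond the allowed number of shares
        for neighbor in network[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return len(seen) - 1  # exclude the original poster
```

In this toy example, a single share from the hub reaches four people, while the same share from a peripheral node like “b” reaches only one, which is the asymmetry that lets one well-placed node flood a network.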
And technology can’t keep up.
Tech companies deploy a host of artificial intelligence algorithms to automatically remove troubling content. Examples include PhotoDNA, which matches uploads against hashes of known child exploitation imagery, and the shared hash database of the Global Internet Forum to Counter Terrorism, which flags known extremist content.
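The core mechanism of such systems can be sketched in a few lines. Note the simplifications: real tools like PhotoDNA use robust perceptual hashes that survive resizing and recompression, whereas the plain SHA-256 stand-in below only catches byte-identical copies, and the blocklist here is a made-up placeholder.

```python
import hashlib

# Hypothetical blocklist of hashes of known prohibited files.
# In production this would be a large, shared database of perceptual hashes.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-bad-content").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Return True if an upload matches a known prohibited item."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED_HASHES
```

The design choice matters: hash matching is fast and scales to billions of uploads, but it can only block content that has been seen and cataloged before, which is exactly why brand-new live footage slips through.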
But live content streamed online or broadcast on the news is different. Not only does the live nature of the video create problems, but balancing removal against newsworthiness creates further complications. Kent Walker, the General Counsel for Google, stated “a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.”
Thus, the same graphic content can serve multiple functions for society depending on how it is used.
Behaviorally, the situation is further compounded by the nature of the events themselves. We aren’t talking about recurring events from the same person. We are usually talking about a one-time event from a person who has already accepted that they will face dire consequences, either life in prison or death, for their actions.
So what are we to do? It might help to provide a framework for the issue.
First, we can look at the propagation of social media content, or “viral” content, as a macrocontingency. We have a cumulative effect produced by many independently acting people around the world sharing content. The widespread sharing is what we call macrobehavior and, when paired with its cumulative viral effect, constitutes a macrocontingency: an if/then relation between mass behavior and an outcome.
However, the cumulative effect also interacts with, and is mediated by, the Interlocking Behavioral Contingencies of organizations such as Facebook, Google, and others. Individuals and algorithms in these organizations filter content that does not meet guidelines. Most of the time they do a good job, but other times they don’t, as we have seen above. No system is perfect. Nevertheless, we have a general framework for the problem: a decentralized macrocontingency interacting with organizational systems designed to counteract its cumulative effects. This framework points to broad areas in which to focus prevention efforts:
Performance-based incentives for optimal filtering strategies. Pay for performance has a well-documented history of effectiveness in organizations. Oftentimes, such strategies generate discretionary effort, in which employees go the extra mile to do good work. Moreover, a little incentive can go a long way toward boosting a company’s bottom line through saved work hours.
Refinement of the antecedent conditions that evoke filtering. Optimal filtering strategies work because they know what to look for in the first place. This is what we call antecedents. If a filtering strategy needs improvement, the antecedent stimuli it detects need further refinement and definition. The basic behavioral concept of stimulus discrimination is relevant here. In layman’s terms, stimulus discrimination can be thought of as a simple if/then statement: “If a stimulus has x qualities, then y response is appropriate.” Such refinement can be facilitated with the performance-based incentives mentioned above.
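That if/then statement translates directly into code. The sketch below is hypothetical: the stimulus “qualities” (a hash match, a violence score from some upstream classifier) and the thresholds are invented stand-ins, but the structure, mapping detected stimulus qualities to an appropriate response, is exactly what stimulus discrimination describes.

```python
def discriminate(post: dict) -> str:
    """Map the detected qualities of a stimulus (post) to a response.

    The fields and the 0.9 threshold are illustrative assumptions,
    not the rules of any real platform.
    """
    if post.get("matches_known_hash"):
        return "remove"                # x qualities -> y response
    if post.get("violence_score", 0.0) > 0.9:
        return "escalate_to_human"     # ambiguous stimulus -> human review
    return "allow"                     # no discriminative stimulus detected
```

Refining the antecedents means refining these conditions: sharper features and better-tuned thresholds make the discrimination more reliable.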
Delay the availability of live stream recordings. While nearly anyone can record anything they see on their computer, a significant delay between a live broadcast and its availability as a shareable recording could help reduce the degree to which graphic recordings are spread. While the recording is delayed, it could be run through content filters to pick up anything suspicious. Behaviorally, the delay places time between an act (live streaming) and a potentially reinforcing consequence (greater ability to share the content and receive attention).
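A minimal sketch of such a delayed-release buffer, under stated assumptions: the ten-minute hold, the `violence_score` field, and the `looks_suspicious` check are all hypothetical placeholders for whatever delay and filters a platform would actually choose.

```python
REVIEW_DELAY_SECONDS = 600  # assumed 10-minute hold before sharing is possible

def looks_suspicious(recording: dict) -> bool:
    """Placeholder content filter run during the review window."""
    return recording.get("violence_score", 0.0) > 0.9

def release_decision(recording: dict, now: float) -> str:
    """Decide whether a finished live-stream recording may be shared yet."""
    if now - recording["ended_at"] < REVIEW_DELAY_SECONDS:
        return "held"       # still inside the review window
    if looks_suspicious(recording):
        return "blocked"    # filter fired during the delay
    return "shareable"      # delay elapsed, nothing flagged
```

Behaviorally, the point of the code is the “held” branch: it inserts time between the act and its potentially reinforcing consequence, and gives the filters a chance to run before sharing becomes possible.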
Incentivizing an online culture of prevention. A classic behavior change strategy involves taking a reinforcer and applying it to an alternative, more adaptive act. In the case of sharing provocative content, the chief reinforcer is attention. But if you could take that same outcome and link it to an alternative behavior, you may see similar effects. Incentivizing a culture of prevention entails two things: making it as easy as possible for regular social media users to have a role in screening and preventing content, and incentivizing their behavior of doing so. Remember, it is one thing to have the ability to do something, and another thing to actually have the motivation to do it. How cool would it be to have the ability to share a badge you earned on social media that said “I helped prevent the spread of hate”?
The main purpose here was to build a general framework and suggest broad areas to guide future preventative work. At the heart of the problem is behavior and the situations in which it occurs. In this case, it can be addressed from multiple angles. How do you think these issues should be addressed? Let us know in the comments below, and be sure to subscribe to bSci21 via email to receive the latest articles directly to your inbox!
Todd A. Ward, PhD, BCBA-D is a science writer, social philosopher, behavioral systems analyst, and the President and Founder of bSci21Media, LLC, which aims to connect behavioral science to the world in an engaging, non-academic way. Dr. Ward received his PhD in behavior analysis from the University of Nevada, Reno under Dr. Ramona Houmanfar. He has served as a Guest Associate Editor of the Journal of Organizational Behavior Management, and as an Editorial Board member of Behavior and Social Issues. His publications follow a theme of behavioral systems analysis, organizational performance, theory & philosophy, and language & cognition. He has also provided ABA services to children and adults with various developmental disabilities in day centers, in-home, residential, and school settings, and previously served as Faculty Director of Behavior Analysis Online at the University of North Texas. Dr. Ward can be reached at email@example.com