Fahrenheit 451?: Book Burning in the Information Age

Harla B. Frank, M.S., BCBA

bSci21 Contributing Writer

How easy it is for so many of us today to be undoubtedly full of information yet fully deprived of accurate information. —Criss Jami

First published in 1953, Ray Bradbury’s Fahrenheit 451 pictured a world in which informed decision-making and critical thought were all but outlawed.  Books were burned, and interactive television and something akin to tabloid publications served as entertainment.  Book burning was justified on the grounds that contradictory opinions expressed in books were offensive to readers, so authors eventually changed their style in order to feed readers what they wanted to ingest.  As a result, the information presented in books was homogeneous and offered no counterarguments to the “facts” contained in textual media.  An underground force grew to counter the destruction of classic literature by memorizing texts.  Each individual memorized a work of literature but, in so doing, lost himself or herself: each became the opinions espoused in the book.  There was nothing to counter the opinions memorized and, thus, nothing with which to measure the accuracy of the information, nothing to make one search for truth.  This attack on informed decision-making and critical thinking was overt; the evil was easy to see, and the results were easy to predict.  Today, in the information age, the same thing is happening, but the method is much more insidious.

“Massive digital misinformation is becoming pervasive in online social media to the extent that it has been listed by the World Economic Forum (WEF) as one of the main threats to our society” (Del Vicario et al., 2016, p. 554).  The control of information is nothing new.  People in power have controlled the dissemination of information since before the printing press (Burkhardt, 2017).  Writers in the early days of the printing press produced “news” that benefited “their employers” (Burkhardt, 2017, p. 6).  While “fake news” was being peddled in the early days of printed material, there were also those who were concerned about the practice.  Jonathan Swift, author of Gulliver’s Travels, wrote an essay in 1710 that pointed out the danger of false information, even when that information is later proven to be false (Burkhardt, 2017).  It seems that once information is “out there,” the damage is done.  Some false information is circulated due to human error, while other false information is circulated by human design.  The information in early written materials could sway public opinion in a powerful way, but not nearly as powerfully as information does today in the digital age.  The massive scope of the Internet, and the rapid dissemination of information across it, provides a prolific breeding ground for misinformation.

But, you might protest, the Internet is a repository of scholarly research and classic works.  How can this amazing resource proliferate misinformation?  This is where the story gets interesting!  In the very early days of the Internet, few people actually understood the code necessary to add information to the World Wide Web (WWW), which made it difficult for the average person to add his or her “two cents’ worth” to the Web.  However, as computer manufacturers made using a computer much more user friendly, more people began sharing the “benefit of their knowledge and experience” (Burkhardt, 2017).  Those who wanted to sell a product or sway public opinion flocked to this incredible medium with so much potential to reach a massive audience.  The result is the huge surge we now see in Internet users and in those who want to push a product or agenda.

Of course, just having information that is searchable on the Web doesn’t mean that we are going to fall for every line that is published.  We are, after all, reasoning individuals who look for verification of the statements we read.  Just because those statements are in scrolling black and white does not mean we are taken in by the assertions that are made.  Quite true!  But, that was before a layer of deception was added to our information sources in the form of “bots.”  The use of bots (short for “robots”) began innocently enough.  Advertisers and other interested patrons wanted to invest their money in the most efficient way possible.  That meant they needed information regarding what the friendly users of the Internet bought, what they liked, where they made their purchases, and much more (Burkhardt, 2017).  Makes sense.  Sellers needed data to help them target their potential buyers, but bots’ work did not end there.  Just as the Internet’s reach is ever increasing, so too is the ability of bots to route information to us that we will accept (Burkhardt, 2017).  In fact, these innocent little bots can even mask themselves as humans in social media outlets.

Social media sites offer an easy way to stay in touch with friends and family and to share everyday events, pictures, and opinions.  They are a wonderful tool in our global society, in which people relocate frequently and sometimes across great distances.  Of course we want to stay in touch with loved ones and maintain friendships.  So, we go about our day . . . texting.  We text those we know and, thanks to “friend suggestions,” we befriend strangers.  Accepted “friend suggestions” quickly go from strangers to trusted confidants (Burkhardt, 2017).  But, guess how those friend suggestions are developed?  Right, through information obtained via bots.  Bots mine your clicks.  They learn what you like by keeping tabs on what you click and which stories, videos, and comments you send “out there.”  Some of those “friend suggestions” aren’t people at all, but rather bots masquerading as people (Burkhardt, 2017).  As bots feed you information that is “sure to please” because they know what you like and what you do not like, you are quite apt to share that information with all your other friends.  The proliferation of opinion, confirmed by all the stories that support your opinions, is now “trending” (Burkhardt, 2017).  And, because everything you receive on social media confirms your belief in your own opinions, you really have no need to fact check those opinions.  So, rather than burning books in order to sway public opinion or maintain the status quo, bots provide information that allows you to hold onto unverified information.  Pretty scary!
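For readers curious about the mechanics, the feedback loop described above can be sketched in a few lines of code.  This is a deliberately simplified illustration, not any platform’s actual algorithm: every click raises the weight of a topic, and the “feed” then ranks new stories by those accumulated weights, so the reader is shown ever more of what he or she already agrees with.

```python
from collections import Counter

def record_click(prefs: Counter, topic: str) -> None:
    """Each click on a story raises the weight of that story's topic."""
    prefs[topic] += 1

def rank_feed(prefs: Counter, stories: list) -> list:
    """Order candidate (title, topic) stories by the reader's topic weights."""
    return [title for title, topic in
            sorted(stories, key=lambda s: prefs[s[1]], reverse=True)]

# A reader who repeatedly clicks stories from one viewpoint...
prefs = Counter()
for _ in range(5):
    record_click(prefs, "miracle-cure")
record_click(prefs, "evidence-based")

stories = [("New miracle therapy!", "miracle-cure"),
           ("Controlled trial results", "evidence-based"),
           ("Another success story", "miracle-cure")]

# ...is shown the confirming stories first.
print(rank_feed(prefs, stories))
```

Nothing in this toy ranker checks whether a story is true; it optimizes only for what the reader has clicked before, which is exactly why the loop tends to entrench unverified beliefs.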

If you think this sounds like a conspiracy theory, you’re right!  But, these are facts, and facts that have many people very concerned.  As behavior analysts and scientists, we always seek verification on important matters, but there are those who do not.  It is for those who may be taken in by the tendency to believe and “click” rather than read, verify, and send (or not send) that I write this article.  Specifically, I’m concerned about the vast amount of information available about treatments for those diagnosed with Autism.  We’ve all been there.  We arrive at a new client’s home to conduct an interview with the parents.  It isn’t long before we hear that they are trying this or that therapy that they have read about on a social media site for parents of children diagnosed with Autism.  Or, they tell us about something they have recently tried that made an amazing difference.  There are therapies that are touted and “sold” to parents with no evidence supporting their effectiveness.  Often, word of these therapies is spread by well-intentioned people who truly believe that they work, but who perhaps haven’t considered extraneous variables that may have been at work when they began the therapy, such as changes in setting, physiological development, or changes in demands.  Naturally, when a parent tries a new therapy touted by another parent and sees a change, he or she credits the new therapy with that change.

There are also charlatans out there who prey on loving parents searching for anything that can help their children.  The bots are at work here as well, routing the charlatans’ advertising straight to those who are searching for answers.  At best, the harm done is precious time wasted that could have been spent on evidence-based therapies.  At worst, these unproven therapies can result in physical harm and even death (National Council Against Health Fraud, 2002).

As practitioners, we try to compassionately warn parents about unproven therapies, but we are fighting more than bots that support currently held beliefs.  We are also battling the tendency for individuals to regard information found in online discussion forums as credible (Bickart & Schindler, 2001; Johnson & Kaye, 2004).  In fact, Johnson and Kaye’s 2004 study of perceptions of credibility found that web blogs were considered credible by 73.6% of subjects (p. 629).  With such a high degree of trust in discussion-based forums, our job as purveyors of fact may be a bit challenging.

In addition to providing our client families with information regarding evidence supporting or refuting the effectiveness of various therapies, we can provide our families with a list of “fact check” websites and tools.  More and more of these sites are emerging to help Internet users sift through the maze of existing information.  The following is just a short list of tools that can be accessed:

  1. Botometer: Checks Twitter accounts for the likelihood of a “tweeter” being a bot.
  2. Hoaxy beta: Visualizes the spread of claims and related fact checks online.
  3. FactCheck.org: Primarily political issues.
  4. Snopes.com: General fact check site.
  5. TruthOrFiction.com: General fact check site.
  6. Hoax-Slayer.com: General fact check site.

Pushing fact in an age of opinion will not be easy.  With so much trust in blogs and social media, it may also behoove us to write our own blogs.  However, there are more of them than us and the bots will be busy pushing “desired information” to those we hope to reach.  But, the goal to educate our client families, present and future, is worth the battle!

In a time when society is drowning in tsunamis of misinformation, it is possible to change the world for the better if we repeat the truth often and loud enough.  – Alberto Cairo

References

Bickart, B., & Schindler, R. M.  (2001). Internet forums as influential sources of consumer information.  Journal of Interactive Marketing, 15, 31-40.

Burkhardt, J. M.  (2017). Combating fake news in the digital age.  Library Technology Reports, 53, 4-36.

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W.  (2016). The spreading of misinformation online.  Proceedings of the National Academy of Sciences of the United States of America [PNAS], 113, 554-559.

Johnson, T. J., & Kaye, B. K.  (2004). Wag the blog: How reliance on traditional media and the internet influence credibility perceptions of weblogs among blog users.  Journalism and Mass Communication Quarterly, 81, 622-642.

National Council Against Health Fraud.  (2002). Policy statement on chelation therapy.  Retrieved from https://www.ncahf.org/policy/chelationpol.pdf


Harla Frank, M.S., BCBA earned her Master’s degree in Psychology, with an emphasis in Applied Behavior Analysis, from Florida State University.  Since receiving her certification as a Board Certified Behavior Analyst (BCBA) in 2007, she has worked primarily with children and young adults on the Autism Spectrum, but has also worked with adults with a variety of diagnoses and needs. She has served as an expert witness for Applied Behavior Analysis (ABA) in the Colorado court system and has had the privilege of providing “ABA approaches” training to foster care staff and families.

Since 2010, Harla has taught ABA course sequences, as well as general psychology courses, for Kaplan University.  Currently, she also contracts with a pediatric home healthcare company in Denver to provide ABA therapy to children with a variety of diagnoses. You can contact her at hfrank@kaplan.edu.
