Data Collection in Schools: Validity, Reliability, Accuracy . . . and Creativity?


By Shelley McLean, M.Ed., BCBA

bSci21 Contributing Writer

Sometimes I wonder if it’s just me . . .

I try to keep up with emerging literature in the field of Applied Behavior Analysis as much as I can, particularly in relation to learners with Autism Spectrum Disorder (ASD) and those who engage in challenging behaviors.  And when I read the latest research studies, one thing that always jumps out at me is the beautiful graphs the studies usually include.  Being a bit of a data geek, I am always so impressed by the consistency with which data were collected, the clearly visible changes in level and trend from baseline to intervention, the convincing demonstrations of experimental control.  It’s a thing of beauty . . . then I inevitably find myself wondering, “Am I the only one who finds it challenging to collect data, or to support school teams to collect data, on the acquisition of skills and reduction of challenging behaviors for learners in real-world situations?”

I wish I could say that the data I’ve collected, or the data collected by the teams I’ve supported, always resembled the data presented in the impressive graphs I see in the research, but sadly that’s not the case.  Many times I would go into a school to consult, collect some data myself, and leave the school team with a data-collection plan to carry out.  There’s not a doubt in my mind that the teams always did their level best to follow the plan as it was intended, but many times the data I received bore little resemblance to the data I anticipated.  It became clear to me that something I was doing was not working as effectively as I had hoped.  So, in an attempt to find a solution to my data-collection dilemma, and to assuage my guilt at being a less-than-effective data collection coach, I began to search the literature for some direction on how to do a better job.

A statement in one of the first articles I read supported my concern and increased the urgency of my mission.  Vollmer, Sloman, and St. Peter Pipkin emphasize the importance of making sure that data are reliable and that interventions are being carried out precisely as intended.  A statement in their 2008 article, “Practical Implications of Data Reliability and Treatment Integrity Monitoring,” jumped out at me like a flashing neon sign: “A failure to collect data reliability and treatment integrity measures is potentially dangerous because life-changing decisions are made based on the assumption that the data reported are reasonably accurate and based on the assumption that the prescribed procedures were conducted as specified” (Vollmer, Sloman, & St. Peter Pipkin, 2008, p. 4).  I think this statement resonated with me because I often wonder if we, as educators, consider seriously enough how much what we do, and what we don’t do, has the power to change what our learners can do for the rest of their lives.  So this article sent me deeper into the data-collection literature with renewed energy.
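For anyone who, like me, wants to put a number on “carried out as intended,” treatment integrity is often summarized as the percentage of protocol steps implemented as written during an observed session.  Here is a minimal sketch in Python; the percentage formula is common practice in the field, but the helper function and the step names are just my illustrations, not anything prescribed by Vollmer and colleagues.

```python
# Treatment integrity summarized as the percentage of protocol steps
# implemented as written during one observed session. The step names
# here are hypothetical; the percentage formula is common practice.

def treatment_integrity(steps_observed: dict) -> float:
    """Percentage of protocol steps carried out as specified."""
    return sum(steps_observed.values()) / len(steps_observed) * 100

# Example observation of a hypothetical three-step intervention plan:
session = {
    "delivered prompt within 5 seconds": True,
    "provided reinforcer immediately": True,
    "withheld attention for problem behavior": False,
}
print(f"{treatment_integrity(session):.0f}% of steps as written")  # -> 67%
```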

Fortunately, I found some research that helped with my professional practice and my guilty conscience!  Dr. Linda LeBlanc and her colleagues developed a clinical decision-making model to guide behavior analysts who are collecting data on problem behavior in a variety of practical contexts, providing some valuable direction and helping with one part of my data-collection dilemma.  Their article, “A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior,” also reassured me that I’m probably not the only behavior analyst struggling with this challenge, as they point out that “one concern is that behavior analysts who encounter barriers to complicated or optimal data collection systems might fail to collect data altogether if they do not have a system for selecting the most useful procedures given their constraints” (LeBlanc, Raetz, Sellers, & Carr, 2016, p. 82).

The decision-making model outlined by LeBlanc and colleagues guides behavior analysts through questions such as:  Is the problem behavior observable?  Is the problem behavior discrete and countable?  Are there sufficient resources available to continuously monitor the behavior?  Can the problem behavior occur at any time?  And so on.  Based on the behavior analyst’s response to each question, the authors recommend a specific method of data collection.  For example, event recording may be preferable when staff can constantly monitor the learner and count every instance of the behavior, but given the reality of limited resources, momentary time sampling may often be the more practical option.  Following the flowchart may prove extremely valuable in helping teams decide what type of data to collect, on which dimension(s) of the problem behavior, and on what schedule, depending on the resources and constraints in each unique context.
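To picture how this kind of flowchart plays out, here is a minimal sketch in Python.  The questions, their order, and the recommended procedures are simplified assumptions on my part, inspired by the kinds of branches in the model rather than reproducing LeBlanc and colleagues’ actual flowchart.

```python
# A rough, illustrative decision helper. The questions, their order, and
# the recommended procedures are simplified assumptions inspired by the
# kinds of branches in LeBlanc et al.'s model, not a reproduction of it.

def suggest_measurement(observable: bool,
                        discrete_and_countable: bool,
                        continuous_monitoring_possible: bool) -> str:
    """Return an illustrative measurement procedure for a problem behavior."""
    if not observable:
        # Behavior can't be watched directly; look at what it leaves behind.
        return "permanent product recording"
    if discrete_and_countable and continuous_monitoring_possible:
        # Staff can watch constantly, so count every instance.
        return "event (frequency) recording"
    if discrete_and_countable:
        # Countable, but staffing is limited: sample at scheduled moments.
        return "momentary time sampling"
    # Not discrete (e.g., long episodes): measure how long it lasts.
    return "duration recording"

# Example: a countable behavior in a classroom with limited staff.
print(suggest_measurement(observable=True,
                          discrete_and_countable=True,
                          continuous_monitoring_possible=False))
# -> momentary time sampling
```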

The other thing I discovered in my search, through research and discussions with colleagues, is that sometimes getting accurate and reliable data means being creative in your approach.  There are some great electronic data-collection tools and apps out there if finances and resources are available, and good old paper-and-pencil data collection can be a great option as well.  But in the fast-paced and sometimes chaotic world that is the school environment, strategies like putting a small dot on the back of your hand with a marker each time a behavior happens, putting 20 paperclips in your pocket and moving one to the other pocket for each instance of behavior, or using a golf stroke counter can be easier for staff to manage.

It is also important to train at least two or three staff on the intervention and data collection procedures.  That doesn’t mean that they all have to be collecting data all the time, but it does allow for overlapping observations at specific times to check for reliability and to make sure that the intervention continues to be carried out as it was intended.  It may also mean taking a look at the “team” with fresh eyes to see who might be able to play a role in collecting data.  Can a member of the administrative team do a reliability check once a week?  Can the guidance counselor or special education resource teacher pop in for a few minutes?  Who are all of the potential resources on your team?
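When two staff members do overlap their observations, one of the simplest reliability checks is total count interobserver agreement (IOA): divide the smaller observer’s count by the larger and multiply by 100.  A minimal sketch, with made-up example counts:

```python
# Total count IOA: the smaller of two observers' counts divided by the
# larger, times 100. The formula is standard in behavior analysis; the
# counts below are made-up example values.

def total_count_ioa(count_a: int, count_b: int) -> float:
    """Percentage agreement between two observers' event counts."""
    if count_a == 0 and count_b == 0:
        return 100.0  # both observers agree the behavior never occurred
    return min(count_a, count_b) / max(count_a, count_b) * 100

# Example: the teacher tallied 18 instances; the reliability observer, 20.
print(f"{total_count_ioa(18, 20):.1f}% agreement")  # -> 90.0% agreement
```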

My quest to become a more successful data collection coach taught me that some of the most effective strategies, whether for the “technology” (high tech or low tech) we would use, or for the personnel who would do the data collection, came from brainstorming with each school team what was “doable” for them in their unique context.  I also learned that collecting accurate and reliable data, and monitoring that the intervention is being carried out as intended, sometimes requires thinking outside the box to find a creative method for getting the best information possible in each situation.

References:

LeBlanc, L. A., Raetz, P. B., Sellers, T. P., & Carr, J. E. (2016).  A proposed model for selecting measurement procedures for the assessment and treatment of problem behavior.  Behavior Analysis in Practice, 9(1), 77-83.

Vollmer, T. R., Sloman, K. N., & St. Peter Pipkin, C. (2008).  Practical implications of data reliability and treatment integrity monitoring.  Behavior Analysis in Practice, 1(2), 4-11.

Shelley McLean, M.Ed., BCBA is passionate about empowering educators with an understanding of behavioural principles to give them the tools and the confidence to ignite the potential in all of their learners.  She is the Coordinator of the interprovincial Autism in Education (AIE) Partnership for the Atlantic Provinces Special Education Authority (APSEA) in Halifax, Nova Scotia, Canada.  Shelley has worked as a classroom teacher, guidance counselor, high school administrator, itinerant ASD consultant, and provincial Learning Specialist for ASD and Complex Cases during her career in education, which spans more than 17 years.  She has also served as a part-time instructor for ABA courses at the University of New Brunswick and Western University in Ontario.  Shelley holds Bachelor’s degrees in Arts and Education, and a Master of Education degree in Counseling Psychology.  She completed a Graduate Academic Certificate in Applied Behavior Analysis from the University of North Texas and has been a Board Certified Behavior Analyst (BCBA) since 2010.  You can contact her at [email protected]

2 Comments on "Data Collection in Schools: Validity, Reliability, Accuracy . . . and Creativity?"

  1. Gabrielle Carmody | December 12, 2016 at 9:29 pm

    This article resonated with me. Although I work in a private school with relatively well-trained staff, data collection continues to be a concern. Getting staff to understand and appreciate the “what” of the data I am looking for, and hopefully the “why” as well, in order to make them better front-line staff is an ongoing effort. I am looking forward to reading the related articles. Thank you again for tackling the topic of real-life intervention and the challenges it poses for accurate and reliable data collection.

  2. Very informative article that resonates with my challenges and daily practice. I am a school psychologist and recently developed an app called Behavior Observation Made Easy to support school teams with data collection accuracy and reliability.

    It is a simple app that helps teams collect concrete data to make solid, data-based decisions. Formats include momentary time sample, frequency counter, duration, and simple interval timer. A random peer comparison can be included. Results can be generated as a chart to place directly into reports or as a CSV file (Excel) for further analysis. Templates can be saved for future use or shared across multiple devices so observations can be completed by a multidisciplinary team.

    If you are interested, check out this introduction video: https://www.youtube.com/watch?v=gt0NwxY93VM
    Email me if you have any questions – [email protected]
