Shades of Grey in Compliance Assessment

By Becky Ferrell

 

You announce an impending audit to your team.  What’s one of the first questions asked, other than “when”?  If your organization is like many of those I’ve worked in, it’s WHO…followed by smiles OR groans and mumbling when the name is announced.  If you’ve been curious enough to question the responses, you may hear: too hard…easy to work with…wants too much data…spends too much time on the floor or going through records…gives conflicting answers compared to the previous calibrator…and a litany of others.


At a recent assignment, an outside audit was announced during a staff meeting.  The immediate question, even before “when,” was “who.”  And then: “Oh, good, we like him.”  Now, the immediate thought running through my mind was: do you like him because he’s a great person, full of knowledge and help…OR…do you like him because he’s easy?  Having worked with that organization for several weeks, the answer was obvious to me.

 

Calibrators vs. Valid Calibration

 

For those of us who perform the calibrations and assessments, these reactions to the “who” SHOULD be troubling; they indicate a problem that needs to be addressed: calibrators should be “the same,” but they aren’t.  The fact that they aren’t should lead you to question the validity of the calibration results.  Is the organization getting a good assessment because they truly are compliant with the standards and have documentation to support their ongoing compliance?  Or are they getting the benefit of the doubt because they have an “easy” calibrator?

 

Before you answer with a shrug, think about the ramifications: what does it do to the intrinsic value of the calibration or assessment if the organization gets a pass even though they don’t meet the standard?  And, for the assessment itself or the assessing organization, what does it say about your standards if your population of calibrators is not consistent in their evaluations?

 

Compliant? Yes, sort of…

 

Let’s start with the basic question:  What is the purpose of a calibration or assessment?  Simply put, it’s to ascertain performance or action against a standard; does the organization comply with designated standard(s)?

 

For the most part, standards are clearly written.  If they are not, that is a separate issue that needs to be resolved.  The best standards are written to be answered “yes” or “no,” which promotes objectivity in the response.  The problem comes when the organization answers with a “yes or no, BUT…” and the calibrator listens, then allows the “BUT” to influence the rating.  (Sliding-scale assessments exist, with ratings of Red, Yellow, and Green; however, I view these as mainly useful for visualizing an organization’s progress from non-compliant to compliant.)

 

Let me give you an example.  A calibration statement reads: “The organization uses EDI to transmit schedules to its suppliers.”  Clearly this question is designed to be answered yes or no.  However, I’ve been on calibrations where, in the self-assessment, the organization answered “yes” even though they don’t use EDI; instead, they send Excel schedules electronically, citing their size and the cost of implementing EDI.  When challenged, the organization states that the previous calibrator accepted their answer based on that reasoning.  Frankly, from my standpoint, this is not a problem with the organization, but rather a problem with the calibrators.

 

Standards are not grey, sir.

 

As a Lean Calibrator for a major manufacturing company, it was my job to review the processes used by the plants and assess them against a published standard.  The standard consisted of questions, the vast majority of which were OBJECTIVELY written to be answered “yes” or “no.”  As I assessed one of the plants, it became increasingly clear the previous calibrator had allowed SUBJECTIVITY in accepting responses.  Statements that were obviously NO had been allowed to stand as YES because the plant had a rationale for why, despite not being compliant, they should be allowed to answer YES.  With a new, disappointing assessment in hand, the plant challenged me to support my findings in several follow-up meetings, and I spent a good deal of time explaining the purpose of the standard, the “yes/no” nature of the questions, and the objective evidence expected to demonstrate compliance.

 

I ran into the Plant Manager a couple of days later, and he stopped to discuss the status of the calibration.  With a smile on his face, he mentioned that several of his managers were somewhat upset with what they deemed my “black and white” approach…and had suggested he talk to me about softening my stance and learning to work in shades of grey.  We reviewed a couple of the statements, and I told him I would be happy to calibrate on a continuum from black to white IF the assessment format were ever changed to such a subjective standard.  I left him with the thought that his managers might instead want to consider stepping up to the plate on compliance.

 

Bottom line: Organizations should expect consistent messages from all calibrators.  It should NOT make any difference which calibrator shows up for the calibration.  That consistency will drive compliance improvements and reduce friction between the organization and the calibrators.

 

So…how do you ensure these situations do not occur?  You need to “calibrate the calibrators.”  This can be accomplished through calibrator training and collaboration, which will be discussed in a future blog post.  Or give us a call to discuss compliance objectives in your plant!


Did you enjoy this blog? Search our blog library for other topics of interest: https://highvaluemanufacturingconsulting.com/blog/