Too little (or too much) too late?

As details emerge that the world’s largest meat processing company (JBS) paid an $11 million ransom to the Russian-speaking ransomware gang “REvil”, there are questions left unanswered. Reports indicate that JBS had been able to restore its operations prior to paying the ransom. So, what exactly was the reason for paying a ransom if the company was already back up and running?

One speculation is that, despite returning to operations, JBS was responding to an additional threat: the release of the hijacked data itself. If that turns out to be the case, there is a huge flaw in the logic that paying the ransom will make JBS safe from the potential misuse of that information. Unlike a kidnapping, where you receive the only copy of the person back in the exchange, you get no such guarantees where data is concerned. If the stolen data was not properly encrypted, there is no way to ensure that the thief will not take the ransom and then still auction off a copy to the highest bidder. If the data was encrypted using the strongest means available, then there was a very low chance the thieves could break the encryption in the first place. In either case, paying the ransom seems to do nothing more than reward the illegal activity and mark your company as a target for future attacks.

IT Chickens Coming Home To Roost?

Starting around 2001, many companies took an approach to IT that centered on reducing overhead costs. These reductions manifested in two ways. The first was to downsize the IT team, particularly by trimming out the senior (and most expensive) technical professionals. The second was to transform infrastructure from traditional servers to virtualized machines, and eventually to machines hosted by others. Virtualization itself was a sound idea, but the implementation often fell short as resiliency was sacrificed for budget.

Recent cyber attacks and ransomware events have shined a spotlight on the risks that were accepted when those cost-saving initiatives were adopted. As infrastructure budgets were trimmed, resiliency and multi-tiering of vetted data were dismissed as excessive measures that only added overhead to IT expenses. Similarly, as complicated data protection solutions were eliminated, there was no longer a perceived need for the experienced IT professionals who could articulate the risks being accepted. Seasoned IT architects and engineers have been replaced by coders who deliver a constrained set of deliverables scoped to the current sprint, with no visibility into the strategic plan.

In light of the increased risk of attack and its potential costs, the question is how companies (particularly those handling critical infrastructure) will respond. In a world of cloud services such as AWS and Google Cloud Platform, the costs of hosting can still be managed effectively. But it may be time to rethink the role of strategic IT planners, to ensure that companies have the talent available to review their data plans. Zero data loss, even under the most vicious ransomware attack, is entirely possible with the correct data plan in place. The open question is whether companies will recognize this need on their own, or whether it will take government mandates setting a minimum level of data integrity and recoverability for any company providing services deemed critical infrastructure.