Is Kaspersky Still a Thing?

On Friday, March 25th, the FCC added Kaspersky to its list of restricted companies, alongside two Chinese companies. In a statement regarding the move, Chair Jessica Rosenworcel commented that this “will help secure our networks from threats posed by Chinese and Russian state backed entities seeking to engage in espionage and otherwise harm America’s interests.”

Kaspersky’s response accused the Federal Communications Commission of playing politics, stating that the action was purely a reaction to the invasion of Ukraine and had no basis in technology.

This wasn’t the first time that Kaspersky Lab was singled out by the United States government. In December 2017, President Trump signed a law banning its antivirus software from federal agencies amid suspicions that the popular software was being used in a cyber-espionage collaboration with the Russian government.

In both instances, Kaspersky claimed innocence and counter-accused the US of exerting political bias against the company. It has labelled the latest action as retaliation against a Russia-based business over the actions of Putin and the Kremlin in Ukraine.

Whether this was politically motivated or a true response to a real and present danger, what this geek finds amusing is that Kaspersky is somehow still in the marketplace (I didn’t realize they were).

What would permanent Daylight Saving Time mean to the tech world?

This week, the U.S. Senate unanimously passed a bill to make Daylight Saving Time permanent for most of the United States. While the bill must still make it through the House and President Biden, it is very likely to clear those steps. There may still be some social debate about very late sunrise times, especially for parents whose children will be waiting for school buses in the dark during the winter months. The bigger impact, however, may be the one the tech industry needs to plan for.

Many of us remember the scurry and panic over the anticipated Y2K bug. For those who do not, it was the unknown effect on computer systems and programs as we transitioned from 1999 to 2000, caused by the widespread practice of storing years as two digits instead of four. Because many computer systems rely on a local time zone configuration to translate UTC timestamps into local time, tech professionals may need to ensure that their systems are ready to adapt to this change, if it is adopted.
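To make that dependency concrete, here is a minimal sketch (assuming Python 3.9+ and its standard zoneinfo module) showing that the local time a server reports is driven entirely by the time zone rules installed on the host; a permanent-DST law would ultimately arrive as an update to those rules, not to application code.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # reads the tz database installed on the host (Python 3.9+)

eastern = ZoneInfo("America/New_York")

# The same code, two UTC instants on either side of the 2022 spring transition.
# The offset applied comes from the tz rules, not from anything in the program.
before = datetime(2022, 3, 13, 6, 0, tzinfo=timezone.utc)
after = datetime(2022, 3, 13, 8, 0, tzinfo=timezone.utc)

print(before.astimezone(eastern))  # 2022-03-13 01:00:00-05:00 (EST)
print(after.astimezone(eastern))   # 2022-03-13 04:00:00-04:00 (EDT)
```

Under permanent DST, an updated rule set would simply stop producing that offset change; systems that track the tz database would follow automatically, while anything with hard-coded offsets would need to be found and fixed.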

Why do we even have DST in the first place? The practice began during World War I and was originally implemented by Germany to conserve energy in the factories producing war materials. The United States, which had only introduced standard time zones in the 1880s, adopted the practice soon after. After the war, the management of DST was left to the states until 1966, when it was again standardized federally in response to transportation safety concerns.

In the world of computers, changing the clocks has become an automated practice, managed at the server level based on a time zone setting in the operating system. The question will be how much time and effort is needed to review all systems and ensure that such a significant change can be safely applied to the critical systems running environments such as healthcare, finance and military defense. On the bright side, should a law establishing permanent Daylight Saving Time be passed, there would be a window of opportunity for everyone responsible for data center operations or software maintenance to verify that the change can be made safely. And since the clock change only occurs twice a year, at a single instant, updated time zone rules could be rolled out to servers at any time before that moment arrives.

Should passage become inevitable, we as computer operations professionals should begin creating test plans to ensure that our systems can be updated efficiently and that all of our time-sensitive applications receive sufficient regression testing, so that we do not experience any critical failures at the moment when the clocks would normally have changed.
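As one hypothetical starting point for such a test plan, a regression test can pin the clock to the instant when the change would historically have occurred and assert that schedules still resolve correctly. The sketch below assumes Python with the standard zoneinfo module; next_run is an illustrative stand-in for whatever scheduling logic your application actually uses.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

EASTERN = ZoneInfo("America/New_York")

def next_run(now_utc: datetime, local_hour: int) -> datetime:
    """Hypothetical scheduler helper: next occurrence of local_hour (Eastern), returned in UTC."""
    local_now = now_utc.astimezone(EASTERN)
    candidate = local_now.replace(hour=local_hour, minute=0, second=0, microsecond=0)
    if candidate <= local_now:
        candidate += timedelta(days=1)
    return candidate.astimezone(timezone.utc)

def test_schedule_across_former_transition():
    # Just before the instant when clocks would historically have sprung forward
    # (2:00 AM Eastern on the second Sunday of March).
    now = datetime(2023, 3, 12, 6, 59, tzinfo=timezone.utc)
    run = next_run(now, local_hour=9)
    # The 9:00 AM Eastern job must still land at 9:00 local under whichever
    # rule set (current rules or permanent DST) is installed on the host.
    assert run > now
    assert run.astimezone(EASTERN).hour == 9
```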

Concurrently, we need to advise our lawmakers of the amount of effort needed for implementation, so that the effective date they choose is not an arbitrary one that leaves no time for that testing.

Fundamentals of IT Architecture

The key thing to remember when thinking about IT architecture is that it is a tool-independent philosophy. The goal of architecture isn’t to tell people how to employ the technology, but to give direction on how to guide that process. Meaning… it doesn’t operate at the level of project management frameworks such as waterfall or Agile, which define timelines, delivery dates and the specifics of the deliverables. IT architecture focuses more on establishing a common vernacular and standards so that disparate teams working on larger projects have a clear understanding of the goals, expectations, assignees, exit criteria, et al.

The goal of IT architecture is to help define clear communications between various development/delivery teams, their key stakeholders and business executives to ensure a successful implementation of any project or component.

What is IT Architecture?

As I’ve worked with several organizations over the years, I’ve noticed a wide range of ideas about what IT architecture is. More than once, when I’ve asked for an architecture, I’ve been handed a simple block diagram representing an application or data flow. Other times, somebody will give me a network connection diagram and call that their architecture. Those answers aren’t wrong; they’re just incomplete.

An IT architecture is actually a collection of diagrams and documents which together describe the current state of your environment as a starting point. To that they add a description of how your technology will evolve to strategically lead you to your business goals. They also serve as a historical record of the evolution you have already made and of the decisions taken to get there. The latter is extremely important, as future choices can draw on past research: you can save the effort of re-investigating options that were already proven unfeasible, or re-evaluate whether a desired path has become feasible thanks to technological improvements.
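One lightweight way to keep that decision history is a decision record. The sketch below is purely illustrative (the field names and example entry are hypothetical, not taken from any standard or real project), but it shows the kind of information worth capturing so the research behind each choice is not lost.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DecisionRecord:
    """Minimal, hypothetical architecture decision record."""
    title: str
    decided_on: date
    context: str                    # why a decision was needed
    options_considered: list[str]   # including the ones proven unfeasible
    decision: str                   # the option chosen
    consequences: str               # trade-offs accepted, follow-up work
    superseded_by: Optional[str] = None  # filled in if the decision is later revisited

# Example (fictional) entry in the architecture's historical record.
registry = [
    DecisionRecord(
        title="Adopt a managed database for the orders service",
        decided_on=date(2021, 6, 1),
        context="Self-hosted database required on-call coverage we no longer staff.",
        options_considered=["Self-hosted PostgreSQL", "Managed PostgreSQL", "Document store"],
        decision="Managed PostgreSQL",
        consequences="Higher hosting cost; custom extensions unavailable; frees two engineers.",
    )
]
```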

As you read this, you might be wondering why I’ve gone with such an elementary definition. Simple… in the coming weeks, I plan to dive deeper into the role of architecture in a world that is moving further towards agile, DevOps, continuous improvement and customer-driven requirements. To best describe that role, we first must set a common definition of the component we are talking about. There are many who argue that IT architecture does not fit in the new paradigm, or that it stands in clear contrast with an agile environment. We’ll be looking deeper into that philosophy, and I will demonstrate how the two may actually support each other in a rapidly changing environment.

Agile VS Architecture?

This is actually a trick question: contrary to common belief, the two do not exist at the same level of the IT framework. There is a common fallacy that enterprise architecture interferes with the fluidity of the continuous delivery/continuous improvement model. In a high-performing IT environment, architects and delivery teams have defined the delineation between planning and delivery in such a way that the architecture teams provide a strategic roadmap for products. As delivery teams receive requirements, those can be matched against the roadmap to ensure that designs are aligned with the overall strategic direction of the IT organization.

When properly implemented, an environment which combines an Agile methodology with a functional enterprise architecture can increase effectiveness, as the architectural framework reduces the amount of design that needs to be done within the sprint cycle. Concurrently, by following an architectural framework, development teams will have a library of reusable assets to draw upon as they start their development cycle. On the IT service management side of this equation, operational and support teams also benefit, as an increase in the amount of reused assets simplifies the training and knowledge needed to support the product portfolios.

Without an enterprise architecture, product teams are often left to their own devices. As the organization grows, this often leads to disjointed IT solutions across product teams… which in turn increases operational complexity and creates confusion for customers and executives whenever anyone attempts to describe the portfolios that are provided.

Too little (or too much) too late?

As details emerge that the world’s largest meat processing company (JBS) paid an $11 million ransom to the Russian-speaking ransomware gang “REvil”, there are questions left unanswered. Reports indicate that JBS had been able to restore their operations prior to paying the ransom. So, what exactly was the reason for paying a ransom if they were already back up and running?

Some speculate that, despite returning to operations, JBS was responding to an additional threat: the release of the contents of the hijacked data. If that turns out to be the case, there is a huge flaw in the logic that paying the ransom will keep JBS safe from the potential misuse of that information. Unlike a kidnapping, where you receive the only copy of the person back in the exchange, you get no guarantees where data is concerned. If the stolen data was not properly encrypted, there is no way to ensure that the thief will not take the ransom and then still auction off a copy of it to the highest bidder. If the data was encrypted using the strongest means available, then there is very little chance the thieves would be able to break the encryption in the first place. In either case, it seems that paying the ransom does nothing more than reward the illegal activity and expose your company to future attacks.
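As a minimal sketch of that second scenario (assuming Python with the third-party cryptography package; this is an illustration, not a claim about how JBS protected its data), authenticated encryption at rest means an exfiltrated copy is worthless without the key:

```python
from cryptography.fernet import Fernet, InvalidToken

# The key lives in a key management system, never alongside the data it protects.
key = Fernet.generate_key()
vault = Fernet(key)

record = b"supplier=ACME;price=4.17;contract=confidential"  # hypothetical record
stored_blob = vault.encrypt(record)  # this is what lands on disk and in backups

# An attacker who exfiltrates stored_blob but not the key gets only ciphertext.
try:
    Fernet(Fernet.generate_key()).decrypt(stored_blob)
except InvalidToken:
    print("Stolen copy is unreadable without the original key.")

# The legitimate owner can still read it.
assert vault.decrypt(stored_blob) == record
```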

IT Chickens Coming Home To Roost?

Starting around 2001, many companies took an approach to IT that centered on reducing overhead costs. These reductions manifested in two ways. The first was to downsize the IT team, particularly by trimming out the senior (and most expensive) technical professionals. The second was to transform infrastructure from traditional servers to virtualized machines, and eventually to machines hosted by others. The virtualization approach itself was thought to be a brilliant idea, but the implementation often fell short as resiliency was sacrificed for budget.

Recent cyber attacks and ransomware events have shined a spotlight on the risks that were accepted when those cost-saving initiatives were adopted. As infrastructure budgets were trimmed, resiliency and multi-tiering of vetted data were deemed excessive measures that only added overhead to IT expenses. Similarly, as complicated data protection solutions were eliminated, there was no longer a need for the experienced IT professionals who could summarize the risks being accepted. Seasoned IT professionals, architects and engineers, have been replaced by coders who deliver a constrained set of deliverables scoped to the current sprint, with no vision of the strategic plans.

In light of the increased risk of attack and the potential costs, the question is now how companies (particularly those handling critical infrastructure) will respond. In an infrastructure world built on cloud services such as AWS and Google Cloud Platform, hosting costs can still be managed effectively. But it may be time to re-think the role of strategic planning IT professionals, to ensure that companies have the talent available to review their data plans. Zero data loss, even under the most vicious ransomware attack, is completely possible with the correct data plan in place. The question is whether companies will now recognize this need, or whether it might even reach the point of the government requiring a minimum level of integrity and recoverability from any company providing services deemed to be critical infrastructure.
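As one hedged illustration of what such a data plan includes, the sketch below (in Python, with entirely hypothetical “offline” and “offsite” tier paths; a real plan would put at least one copy behind storage the production credentials cannot modify) shows the copy-and-verify step that turns a backup from a hope into a guarantee:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to prove a copy is bit-identical to the original."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up_and_verify(source: Path, offline_tier: Path, offsite_tier: Path) -> bool:
    """Copy one backup artifact to two independent tiers and verify each copy.

    The tier paths are hypothetical mount points; in practice at least one
    should be an offline or immutable target outside the blast radius of a
    compromised production environment.
    """
    expected = sha256(source)
    copies = []
    for tier in (offline_tier, offsite_tier):
        tier.mkdir(parents=True, exist_ok=True)
        copy = tier / source.name
        shutil.copy2(source, copy)
        copies.append(copy)
    # A backup that has never been verified by checksum or restore is only a hope.
    return all(sha256(copy) == expected for copy in copies)
```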