The key thing to remember about IT architecture is that it is a tool-independent philosophy. The goal of architecture isn’t to tell people how to employ a technology, but to give direction to that process. Meaning… it doesn’t operate at the level of project-management methodologies such as waterfall or Agile, which define timelines, delivery dates, and the specifics of deliverables. IT architecture focuses on setting a common vernacular and standards so that the disparate teams working on larger projects have a clear understanding of the goals, expectations, assignees, and exit criteria.
The goal of IT architecture is to establish clear communication among the various development and delivery teams, their key stakeholders, and business executives to ensure a successful implementation of any project or component.
As I’ve worked with several organizations over the years, I’ve noticed a wide range of ideas about what IT architecture is. More than once, when I’ve asked for an architecture, I’ve been handed a simple block diagram representing an application or data flow. Other times, somebody will give me a network connection diagram and call that their architecture. Their answers aren’t wrong; they’re only incomplete.
An IT architecture is actually a collection of diagrams and documents that together describe three things: the current state of your environment as a starting point, the planned evolution of your technology that will strategically lead you to your business goals, and a historical record of the evolution already made and the decisions behind it. The latter is extremely important: future choices can draw on past research, saving you the effort of re-investigating options already proven unfeasible, or prompting you to re-evaluate whether a desired path has become feasible thanks to technological improvements.
As you read this, you might be wondering why I’ve gone to such an elementary definition. Simple… in the coming weeks, I plan to dive deeper into the role of architecture in a world that is moving toward Agile, DevOps, continuous improvement, and customer-driven requirements. To best describe that role, we first must set a common definition of the component we are talking about. Many argue that IT architecture does not fit the new paradigm, or that it stands in clear contrast to an agile environment. We’ll look deeper into that philosophy, and I will demonstrate how the two can actually support each other in a rapidly changing environment.
For an infrastructure person… which phrase makes you cringe more?
1. MVP (minimum viable product)
2. Smoke test
As we push to do more work on more economical hosting, bare-metal migrations have given way to containers and serverless (lambda) functions exposed as simple published API endpoints. How far has your organization gone, and which investments have shown the greatest returns?
This is actually a trick question: contrary to common belief, the two do not exist at the same level of the IT framework. There is a common fallacy that enterprise architecture interferes with the fluidity of the continuous delivery/continuous improvement model. In a high-performing IT environment, architects and delivery teams have drawn the line between planning and delivery so that the architecture teams provide a strategic roadmap for products. As delivery teams receive requirements, those can be matched against the roadmap to ensure that designs align with the overall strategic direction of the IT organization.
When properly implemented, an environment that pairs an Agile methodology with a functional enterprise architecture can increase effectiveness, because the architectural framework reduces the amount of design that must be done within the sprint cycle. Development teams that follow the framework also gain a library of reusable assets to draw upon as they start their development cycle. On the IT service management side of the equation, operational and support teams benefit as well: an increase in reused assets simplifies the training and knowledge needed to support the product portfolios.
Without an enterprise architecture, product teams are often left to their own devices. As the organization grows, this leads to disjointed IT solutions across product teams… which in turn increases operational complexity and sows confusion among customers and executives whenever anyone attempts to describe the portfolios being provided.
As details emerge that the world’s largest meat processing company, JBS, paid an $11 million ransom to the Russian-speaking ransomware gang “REvil”, some questions are left unanswered. Reports indicate that JBS had been able to restore its operations before paying the ransom. So what, exactly, was the reason for paying a ransom if they were already back up and running?
One speculation is that, despite returning to operations, JBS was responding to an additional threat: the public release of the hijacked data. If that turns out to be the case, there is a huge flaw in the logic that paying the ransom makes JBS safe from misuse of that information. Unlike a kidnapping, where you receive the only copy of the person back in the exchange, you get no such guarantee where data is concerned. If the stolen data was not properly encrypted, there is no way to ensure the thief will not take the ransom and still auction off a copy to the highest bidder. If the data was encrypted using the strongest means available, then there is very little chance the thieves could break the encryption in the first place. In either case, paying the ransom seems to do nothing more than reward the illegal activity and expose your company to future attacks.
Starting around 2001, many companies took an approach to IT centered on reducing overhead costs. These reductions manifested in two ways. The first was to downsize the IT team, particularly by trimming out senior (and most expensive) technical professionals. The second was to transform infrastructure from traditional servers to virtualized machines, and eventually to machines hosted by others. Virtualization itself was a sound idea, but the implementation often fell short as resiliency was sacrificed for budget.
Recent cyber attacks and ransomware events have shone a spotlight on the risks that were accepted when those cost-savings initiatives were adopted. As infrastructure budgets were trimmed, resiliency and multi-tiering of vetted data were deemed excessive measures that only added overhead to IT expenses. Similarly, once complicated data-protection solutions were eliminated, there was no longer a perceived need for the experienced IT professionals who could summarize the risks being accepted. Seasoned IT professionals, architects and engineers alike, have been replaced by coders who deliver a constrained set of deliverables scoped to the current Scrum sprint, with no visibility into the strategic plan.
In light of the increased risk of attack and its potential costs, the question is how companies (particularly those handling critical infrastructure) will respond. In a new infrastructure world of cloud services such as AWS and Google Cloud Platform, hosting costs can still be managed effectively. But it may be time to re-think the role of strategic-planning IT professionals, to ensure that companies have the talent available to review their data plans. Zero data loss, even under the most vicious ransomware attack, is entirely achievable with the correct data plan in place. The question is whether companies will now recognize this need, or whether it will take the government mandating a minimum level of integrity and recovery for any company that provides services deemed critical infrastructure.
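One practical element of the “correct data plan” described above is verifying that backups have not been silently encrypted or altered before you need them. The sketch below is a minimal illustration of that idea, assuming SHA-256 digests of known-good backups are stored out-of-band (on offline or immutable media) so an attacker cannot tamper with both copies; the file names are hypothetical.

```python
# Illustrative sketch only, not a production backup tool: detect tampering in
# a backup set by comparing it to an offline-stored manifest of SHA-256 digests.
import hashlib
import tempfile
from pathlib import Path

def build_manifest(backup_dir: Path) -> dict:
    """Record a SHA-256 digest for every file in the backup set."""
    return {
        str(p.relative_to(backup_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(backup_dir.rglob("*")) if p.is_file()
    }

def verify(backup_dir: Path, manifest: dict) -> list:
    """Return the files whose current contents no longer match the manifest."""
    current = build_manifest(backup_dir)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]

# Demo with a throwaway directory standing in for a backup volume.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "orders.db").write_bytes(b"pre-attack data")
    manifest = build_manifest(root)     # taken while the data is known good
    clean = verify(root, manifest)      # [] -> backup matches the manifest
    (root / "orders.db").write_bytes(b"encrypted-by-attacker")
    tampered = verify(root, manifest)   # ["orders.db"] -> tampering detected
```

A manifest checked this way gives you a trustworthy restore point to roll back to; the design choice that matters is keeping the manifest somewhere the ransomware cannot reach.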
Do you have a formal risk assessment tool you use, or are you still managing risk using the SWAG method?
On May 7, 2021, the Colonial Pipeline, which supplies almost half of the East Coast’s fuel, was shut down by a ransomware attack. Service returned five days later, but only after Colonial paid almost $5 million in ransom for the decryption keys.
Could this have been avoided?
The simple answer is YES. Legacy monolithic infrastructure architectures leave systems highly exposed to these types of attacks. The fact that a vital component of U.S. infrastructure was so exposed is both scary and mind-numbing, and it points to gross incompetence in resiliency planning. Worst of all, it’s unimaginable that such a vital service lacked the ability to rebuild its infrastructure and roll its data back to a pre-attack state.
In a forward-thinking technology culture, resiliency planning would have looked toward an IaaS or PaaS design in order to respond swiftly to any event impacting production operations. The technology already exists for near real-time failover to alternate resources.
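The failover decision described above can be sketched very simply. This is a toy illustration under stated assumptions, not a production system (real designs lean on managed load balancers or DNS health checks); the endpoint names and probe threshold are hypothetical.

```python
# Toy sketch of a failover decision: route to the first preferred endpoint
# whose recent health probes all passed. Names and thresholds are made up.

def select_endpoint(health_history: dict, preferred: list, window: int = 3) -> str:
    """Pick the first preferred endpoint whose last `window` probes all passed."""
    for name in preferred:
        recent = health_history.get(name, [])[-window:]
        if len(recent) == window and all(recent):
            return name
    raise RuntimeError("no healthy endpoint available")

# The primary site has failed its recent probes, so traffic shifts to the
# cloud standby without waiting for a manual rebuild.
history = {
    "primary-dc": [True, True, False, False, False],
    "cloud-standby": [True, True, True, True, True],
}
active = select_endpoint(history, ["primary-dc", "cloud-standby"])  # "cloud-standby"
```

The point of the sketch is the recovery-time difference: a watchdog making this decision promotes a warm standby in seconds, where rebuilding a monolithic environment takes days.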
The Colonial Pipeline incident should serve as a message to all who oversee critical production environments to review their plans and ensure they are leveraging the best options to mitigate or avoid the evolving risks in the digital age.
What’s your favorite web stack… WordPress, PHP, HTML/CSS, ASP, or something else?