This is actually a trick question as, contrary to common belief, the two do not exist at the same level of the IT framework. There is a common fallacy that enterprise architecture interferes with the fluidity of the continuous delivery/continuous improvement model. In a high-performing IT environment, architects and delivery teams have drawn the line between planning and delivery in such a way that the architecture teams provide a strategic roadmap for products. As delivery teams receive requirements, those can be matched against the roadmap to ensure that designs are aligned with the overall strategic direction of the IT organization.
When properly implemented, an environment which incorporates an Agile methodology with a functional enterprise architecture can increase effectiveness, as the architectural framework reduces the amount of design that needs to be done within the sprint cycle. Concurrently, by following an architectural framework, development teams will have a library of reusable assets to draw upon as they start their development cycle. On the IT service management side of the equation, operational and support teams also benefit, as an increase in the amount of reused assets simplifies the training and knowledge needed to support the product portfolios.
Without an enterprise architecture, product teams are often left to their own devices. As the organization grows, this often leads to disjointed IT solutions across product teams… which in turn increases operational complexity, and confusion for the customers and executives when attempting to describe the portfolios that are provided.
What parts of IT need to be architected?
I would say all of the above. But what do we mean by architect? The concept of architecture in IT follows that of civil engineering. It’s really quite simple: let’s refine our model on paper (metaphorically speaking) before we spend money on hardware.
If I had to highlight the biggest weakness I’ve encountered over the years, it was in process. Too many folks in IT cave to executive pressure to deliver fast. The problem with moving fast is that you don’t always have time to ask the necessary questions on requirements. And, I’ve found from experience that it takes 10 times longer to fix an issue than it would have taken to design for the requirement up front. The true goal of architecture isn’t to add red tape, it is to simplify delivery. By clarifying the goals up front, we can reduce the time and effort to deliver a functional design.
I intentionally put Data as #2 on the list, as I have found that a lack of understanding of the data model is a common culprit in over-engineering the infrastructure. IT is, in general, the mechanism for collecting, storing and delivering data by the most efficient means possible. A good data architect is vital to understanding the model and, from that understanding, to providing the best infrastructure to meet the business goals.
Next on the list is applications. This is often the black hole of IT, as most application teams have a difficult time providing a true architecture. So many times I’ve been given port requirements or TCP diagrams in response to requests for architecture. The reality is that an architecture needs to define producers and consumers, along with the methods by which data is communicated, in order to produce a true picture of the work that needs to be supported.
The last 4 – storage, network, platforms and servers – are the composite subgroups that make up IT infrastructure. In general, this is a collaboration of several teams, typically directed by a CTO or an enterprise architect. The goal of this group is to evaluate business forecasts to ensure that capacity planning activities provide reasonable resources in advance of the business needs. If capacity is underforecast, projects suffer delays as they become subject to availability and supplier timelines. If overbuilt, valuable budget dollars end up in a store room or sitting idle in the data center.
The overall purpose of incorporating architectural methodologies into your environment is to ensure that your organization can produce repeatable and efficient results in each project that comes along. A critical output of architecture is asset harvesting: during the retrospective, the architect will look at any new processes, tools or artifacts that were generated and add them to the library for future reference.
It’s that time of year where folks make a lot of promises that usually falter before Groundhog Day. This year, I’ve started a notebook with the plan to make a weekly post. As you may have gathered, I have a vested interest in process and methodologies… for good reason, since my 35-year IT career has been built more on how I do it than on the tech that evolves every 18 months. However… emerging tech is just as important in planning an IT roadmap, so I’d love to hear what others see as important to their strategic directions.
With the start of 2023, Joe Geek will strive to make a weekly post of interest. I plan to follow the IT architecture thread to dive deeper. But I’ll also take some time to focus on the emerging tech and trends for the coming year.
While both require a solid grasp of the technology, there is a functional difference between architecture and engineering.
Engineering asks: how do I deploy my tools to satisfy the business requirements?
Architecture asks: what does the business need to accomplish, and what tools can we ensure the engineers have to meet that?
On Friday, March 25th, the FCC added Kaspersky to its list of restricted companies alongside two Chinese companies. In a statement regarding the move, Chair Jessica Rosenworcel commented that this “will help secure our networks from threats posed by Chinese and Russian state backed entities seeking to engage in espionage and otherwise harm America’s interests.”
The Kaspersky response accused the Federal Communications Commission of playing politics, stating that the action was purely a response to the invasion of Ukraine and not based on any technical evidence.
This wasn’t the first time Kaspersky Labs was singled out by the United States government. In December 2017, President Trump signed a law banning its antivirus software from federal agencies under suspicion that the popular software was being used in a cyber-espionage collaboration with the Russian government.
In both instances, Kaspersky claimed innocence and counter-accused the US of exerting political bias against the company. The latest action has been labelled retaliation against the Russia-based company over the actions of Putin and the Kremlin in Ukraine.
Whether this was politically motivated or a true response to a real and present danger, what this geek finds amusing is that Kaspersky is somehow still in the marketplace (I didn’t realize they were).
This week, the U.S. Senate unanimously passed a bill to make Daylight Saving Time permanent for most of the United States. While it must still make it through the House and President Biden, it is very likely to clear those steps. There may still be some social debate about very late sunrise times, especially for those with children who will be waiting for school buses in the dark during the winter months. However, the bigger impact may need to be planned for by the tech industry.
Many of us remember the scurry and panic over the anticipated Y2K bug. For those who do not, it was the unknown effect on computer systems and programs as we transitioned from 1999 to 2000, because many systems stored years as two digits instead of four. Since many computer systems use a localized configuration setting to translate UTC timestamps into local time, tech professionals may need to ensure that their systems are set to adapt to this change, if adopted.
Why do we even have DST in the first place? The practice began during World War I, originally implemented by Germany to conserve energy in the factories producing materials for the war. The United States, which had only just implemented standard time zones in the 1880s, adopted the practice soon after. After the war, the management of DST was turned over to the states until 1966, when it was again put into practice federally in response to transportation safety issues.
In the world of computers, changing the clocks has become an automated practice managed at the server level, based on a time zone setting in the operating system. The question is how much time and effort will be needed to review all systems to ensure that such a significant change can be safely absorbed by the critical systems that run environments such as healthcare, finance and military defense. On the bright side, should a law establishing permanent Daylight Saving Time be passed, there would be a window of opportunity for everyone responsible for data center operations or software maintenance to verify that the change can be safely made. And since the time zone change only occurs twice a year, at an instant, the update could be applied to servers at any time prior to the actual moment it takes effect.
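To make the mechanics concrete: most servers keep their clocks in UTC and derive local wall-clock time from the IANA tz database, so a permanent-DST law would largely mean distributing an updated tzdata rule set rather than patching every application. Here is a minimal Python sketch of how that translation works today (the zone name and timestamps are purely illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A UTC timestamp is unambiguous; the local wall-clock reading depends
# entirely on the tz database rules attached to the zone name.
summer = datetime(2023, 7, 1, 16, 0, tzinfo=timezone.utc)
eastern = summer.astimezone(ZoneInfo("America/New_York"))
print(eastern.strftime("%Y-%m-%d %H:%M %Z"))  # 2023-07-01 12:00 EDT (UTC-4)

# The same conversion in January lands on standard time instead.
winter = datetime(2023, 1, 1, 16, 0, tzinfo=timezone.utc)
print(winter.astimezone(ZoneInfo("America/New_York")).strftime("%Z"))  # EST (UTC-5)
```

If the rules for a zone changed under a permanent-DST law, only the tz database entry would change; code that stores UTC and converts at display time would pick up the new offset automatically. Applications that hard-code offsets, or that were never regression-tested across a rule change, are the ones the review effort described above would need to find.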
Should passage become inevitable, we as computer operations professionals should begin creating test plans so that our systems can be updated efficiently and all of our time-sensitive applications receive sufficient regression testing, ensuring we do not experience any critical failures at the time the clocks would normally have changed.
Concurrently, we need to advise our lawmakers of the amount of effort implementation requires, to ensure that an arbitrary date is not chosen that leaves insufficient time for that testing.
The key thing to remember when thinking about IT architecture is that it is a tool-independent philosophy. The goal of architecture isn’t to tell people how to employ the technology, but to give direction on how to guide that process. Meaning… it doesn’t operate at the level of project management methodologies such as waterfall or Agile, which define timelines, delivery dates and the specifics of the deliverables. IT architecture focuses more on setting a common vernacular and standards so that disparate teams working on larger projects have a clear understanding of the goals, expectations, assignees, exit criteria, et al.
The goal of IT architecture is to help define clear communications between various development/delivery teams, their key stakeholders and business executives to ensure a successful implementation of any project or component.
As I’ve worked with several organizations over the years, I’ve noticed a wide range of ideas about what IT architecture is. More than once when I’ve asked for an architecture, I’ve been provided a simple block diagram that represented an application or data flow. Other times, somebody will give me a network connection diagram and call that their architecture. Actually, their answer isn’t wrong; it’s only incomplete.
An IT architecture is actually a collection of diagrams and documents that combine to describe the current state of your environment as a starting point. They then describe the evolution of your technology that will strategically lead you to your business goals. They also serve as a historical record of the evolution you have already made and the decisions that got you there. The latter is extremely important, as future choices can draw on past research: you may save the effort of re-investigating options that were already proven unfeasible, or re-evaluate whether a desired path has become feasible thanks to technological improvements.
As you read this, you might be wondering why I’ve gone to such an elementary definition. Simple… in the coming weeks, I plan to dive deeper into the role of architecture in a world that is moving ever more toward agile, DevOps, continuous improvement and customer-driven requirements. To best describe that role, we first must set a common definition of the component we are talking about. There are many who argue that IT architecture does not fit the new paradigm, or that it stands in clear contrast to an agile environment. We’ll look deeper into that philosophy, and I will demonstrate how the two may actually support each other in a rapidly changing environment.
For an infrastructure person… which phrase makes you cringe more?
1. MVP (minimum viable product)
2. Smoke test
As we push to do more work on more economical hosting, workloads once migrated between bare-metal servers have evolved into containers and lambda (serverless) services exposed as simple published API endpoints. How far has your organization gone, and which investments have shown the greatest returns?