Enterprise networks were not designed for what we’re asking of them now. AI infrastructure, geopolitical fragmentation, automation, massive data flows between facilities: these things are piling up faster than most organizations can deal with them. We found someone who knows where the bodies are buried.
Dmytro Muzychko – a network architect with one of the rarest certification profiles in the world. Today he works at the intersection of network engineering and business strategy, turning complex infrastructure questions into decisions that organizations can act on.

2Digital: There are reportedly only 600 people in the world with your combination of network certifications. Why go through all of that and not choose something easier?
Dmytro: I never set out to collect certifications as such. The goal was to close knowledge gaps in areas relevant to my work. Over time, I built a methodology: theory from the study guide, then hands-on labs. By the time I was done with that, taking the exam was just putting the full stop at the end of the sentence – a personal preference, not a necessity.
Expert-level certifications are a different story. Apart from the knowledge gained during preparation, their value is mostly reputational – useful if you’re job-hunting or switching companies. If you’re building a long career within one organization, honestly, that path isn’t really necessary, especially if your knowledge and skills are visibly growing through your professional duties.
2Digital: You once said: “there’s no stop in learning. It is actually the opposite. The pace of learning accelerates with tech advancements.”
Now, in 2026, I have the impression that our physical and mental capabilities as human beings are already lagging behind the speed of innovation. That we are the bottleneck.
Dmytro: Innovations do go ahead of us – yes. Keeping up with everything requires a huge mental effort. But I stopped falling into that rabbit hole of chasing every new technology. I only learn what’s relevant to what I do, and I’ve shifted my perspective more toward the business side of things.
I’m still curious – there are ideas and approaches that genuinely excite me. But I look at them from a distance now. I don’t need the depth I used to go to for those expert-level certifications.
And honestly, technology as a whole is just too much for any single human to absorb. That’s why we’ll likely see more narrow specialists and fewer people with genuinely broad expertise across everything.
2Digital: Large enterprises constantly face the choice: fix old technical debt or fund new innovation. Is there a framework for deciding which to prioritize?
Dmytro: There’s no single line of reasoning I follow – it’s always a complex process. Most enterprises treat network infrastructure as a passive asset, like electricity. Nobody wants to budget for modernizing something that doesn’t directly generate revenue. So it’s always a battle to find the right reason to make changes.

For me, it starts with identifying tangible value. Not every innovation delivers that. The easiest win is when innovation drives cost savings – that’s a straightforward case. If it’s about enabling the business to do more or improve things, that’s harder, because it requires investment evaluated against strategy; it will be a tougher fight.
And even when the value is clear, a big-bang replacement is financially challenging. The smarter path is to align modernization with the equipment’s natural amortization cycle. Routers, switches, access points – they all have a depreciation lifecycle. For example, a company buys network equipment and depreciates it over, say, five years. After five years, when a device hits zero book value, that’s your window to bring in the new capability, and it flows naturally with the company’s financial rhythm.
If you have that luxury, use it. Usually, though, new technologies don’t wait for your balance sheet.
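As a back-of-the-envelope illustration of that timing logic, here is a minimal sketch assuming straight-line depreciation over a hypothetical five-year period; the price and years in service are illustrative figures, not numbers from the interview:

```python
# Minimal sketch: straight-line depreciation over an assumed five-year
# amortization period. Values are illustrative only.
AMORTIZATION_YEARS = 5

def book_value(purchase_price: float, years_in_service: float) -> float:
    """Remaining book value under straight-line depreciation."""
    remaining = purchase_price * (1 - years_in_service / AMORTIZATION_YEARS)
    return max(remaining, 0.0)

def refresh_window_open(years_in_service: float) -> bool:
    """The modernization window opens once a device hits zero book value."""
    return years_in_service >= AMORTIZATION_YEARS

# Example: a $20,000 switch, three years vs. five years into service.
print(book_value(20_000, 3), refresh_window_open(3))   # 8000.0 False
print(book_value(20_000, 5), refresh_window_open(5))   # 0.0 True
```

Replacing the device at year three means writing off value that is still on the books; at year five the refresh aligns with the financial cycle.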
2Digital: Large enterprises are investing heavily in localized research hubs and high-tech facilities. How does network design for those environments today differ from what was being built five years ago?
Dmytro: The dominant driver is data volume – especially when we’re talking about AI data centers.
Five or ten years ago, data center network infrastructure was quite flat. The primary goal was high-speed east-west traffic – server-to-server communication within the same data center. That was the baseline.
Now, AI has introduced a new design pattern. Modern data centers have two distinct tiers: a front-end, which functions like a traditional data center, and a back-end – a dedicated layer built around GPU interconnects. That back-end has its own demanding requirements, because AI workloads are sensitive to packet loss, jitter, and latency. On top of that, they’re extremely bursty – they can spike network utilization to peak levels in a very short window. The infrastructure has to be built for that.
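To make that burstiness concrete, here is a back-of-the-envelope sketch; the pod size and per-GPU interface speed are assumptions for illustration, not figures from the interview:

```python
# Rough arithmetic for why the GPU back-end gets its own fabric.
# Pod size and NIC speed are illustrative assumptions, not measurements.
gpus_per_pod = 256
nic_speed_gbps = 400   # assumed per-GPU network interface speed

# During collective operations between training steps, most GPUs transmit
# near line rate in the same short window - the "bursty" behavior above.
peak_burst_tbps = gpus_per_pod * nic_speed_gbps / 1000
print(f"Aggregate burst for one pod: ~{peak_burst_tbps:.0f} Tbps")
# ~102 Tbps - far beyond what a shared front-end fabric is sized for,
# and the job stalls if that burst hits packet loss or jitter.
```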
From a wireless perspective, the change is about density. Innovative facilities connect far more devices per square meter than before, so wireless coverage planning has to account for that increased device density.
The rest of the architectural principles are broadly similar – e.g. protecting against single points of failure, or reusing existing hardware for new solutions to maximize ROI. But the general trend is toward more capacity, because the volume of data being moved has grown significantly.
2Digital: Data volumes are growing fast, and now you have multiple providers that need to be connected. How do you actually move large datasets between them?
Dmytro: The bottleneck here is wide area networking – WAN. That’s the domain that breaks down first. Standard WAN solutions work fine for regular user traffic, but they’re not built for the volumes generated by AI processing, lab research, and large-scale data transfers between facilities.
To handle that kind of throughput, you need dedicated pipes – point-to-point connections over whatever fiber infrastructure is available in the region. That’s the preferred path, because those connections can deliver speeds of 100+ gigabits per second. This type of connectivity is practical mainly between data storage facilities, i.e. data centers, colocation hubs, or toward the cloud. Trying to do the same for a standard branch connectivity setup would be overkill in terms of cost and architecture.
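For a sense of scale, here is a rough transfer-time calculation; the dataset size, link speeds, and efficiency factor are assumptions chosen purely for illustration:

```python
# Rough transfer-time arithmetic for moving a dataset between facilities.
# The dataset size, link speeds, and efficiency factor are assumptions.
def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to move a dataset, given a usable fraction of line rate."""
    bits = dataset_tb * 8e12               # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency
    return bits / usable_bps / 3600

dataset_tb = 500                            # e.g. a large research dataset
for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps: ~{transfer_hours(dataset_tb, gbps):.0f} hours")
# 1 Gbps: ~1389 hours, 10 Gbps: ~139 hours, 100 Gbps: ~14 hours
```

On a standard WAN link the transfer takes weeks; on a dedicated 100 Gbps pipe it fits in a working day.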
2Digital: More providers, more connection points – how do you make data accessible enough to share freely, while making it difficult to steal? How do you control what gets out?
Dmytro: Strictly speaking, this is a security question more than a network question, though the two are tightly connected.
Data itself is classified based on its level of confidentiality. Each level has its own set of restrictions and architectural requirements. There are many security technologies built to protect data against different vectors, e.g. data leakage prevention systems, next-generation firewalls, identity and access management platforms, and so on.

At the network layer, the first principle is segmentation. You cannot have an open network. Firewalls and Network Access Control are the baseline – they filter out everyone who has no business touching the data in the first place. Beyond that, you allow only specific, approved connections between a client and the data source. Additionally, the transfer of sensitive data over the network needs to be encrypted, especially if it crosses uncontrolled segments.
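A minimal sketch of that deny-by-default, allow-listed-flows principle; the zone names and services are hypothetical examples, not a real policy:

```python
# Sketch of the segmentation principle: deny by default, allow only
# specific, approved client-to-data-source flows. Zones are hypothetical.
ALLOWED_FLOWS = {
    ("research_workstations", "lab_data_store", "https"),
    ("analytics_cluster", "lab_data_store", "https"),
    ("backup_service", "lab_data_store", "tls_backup"),
}

def is_permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    """Everything not explicitly allowed is dropped at the segment boundary."""
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS

print(is_permitted("research_workstations", "lab_data_store", "https"))  # True
print(is_permitted("guest_wifi", "lab_data_store", "https"))             # False
```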
Securing the network is only half the battle. Real-world breaches occur when attackers exploit application flaws, compromised credentials, phishing, or other vectors. Even with a perfectly hardened network, you may still lose everything at the application layer or through human actions. To minimize the risk, we need to implement robust controls at all layers.
2Digital: The idea of globalization has largely failed. The world is more fragmented, supply chains are disrupted. Is it even possible to design a network that’s resilient to geopolitical pressure?
Dmytro: Geopolitics is a tricky part, because it’s really hard to be resilient to that. When a political crisis hits a region, nobody usually comes for the network itself. What is valuable is the data that flows through it. Take the Great Firewall of China. A government or any party that physically controls the infrastructure can simply unplug a cable, insert their device, and plug it back in. They’re the man in the middle, listening to everything.
Encryption is the obvious response, to ensure that data in transit can’t be easily read. But it doesn’t give you a hundred percent guarantee either. We’re entering a post-quantum era where actors can record encrypted traffic today and decrypt it later once sufficiently powerful computers exist.
If we’re talking about protecting against a complete cutoff, there are two design principles I’d recommend.
First: build resilience not just through redundancy of the same type of connectivity, but through diversity of connectivity types. If you have a site in China and you want a resilient setup, two internet links won’t save you – they’re both internet. You want one internet link and one MPLS connection from a VPN provider. Or a satellite connection. The more diverse the transport media, the more resilient you are across different geopolitical scenarios.
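A small sketch of the diversity check implied here: what matters is the number of distinct transport media, not the number of links. The site data is hypothetical:

```python
# Sketch of the diversity principle: redundancy only counts if the backup
# uses a different transport medium. Site names and links are hypothetical.
SITES = {
    "site_redundant_only": ["internet", "internet"],          # two links, one medium
    "site_diverse": ["internet", "mpls_vpn", "satellite"],
}

def transport_diversity(uplinks: list[str]) -> int:
    """Number of distinct transport media, which is what resilience hinges on."""
    return len(set(uplinks))

for site, uplinks in SITES.items():
    diverse = transport_diversity(uplinks) > 1
    print(f"{site}: {len(uplinks)} links, diverse={diverse}")
```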

Second: inside a geopolitical region, intra-region connectivity between sites usually holds up fine. What breaks is cross-region traffic. The available design pattern there is to designate your largest site in that region as a transit gateway – a hardened exit point with multiple diverse transport types. All other sites in the region route through that gateway when traffic needs to leave. It concentrates your resilience investment where it matters most.
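A simplified sketch of that routing pattern, with hypothetical regions and site names: intra-region traffic goes direct, while cross-region traffic exits through the regional transit gateway:

```python
# Sketch of the transit-gateway pattern: intra-region traffic stays direct,
# anything leaving the region is steered through the hardened regional hub.
# Regions and site names are hypothetical.
REGION_OF = {"siteA": "apac", "siteB": "apac", "hub_apac": "apac", "hq": "emea"}
REGION_HUB = {"apac": "hub_apac", "emea": "hq"}

def next_hop(src_site: str, dst_site: str) -> str:
    """Direct within a region; via the regional transit gateway across regions."""
    if REGION_OF[src_site] == REGION_OF[dst_site]:
        return dst_site
    return REGION_HUB[REGION_OF[src_site]]

print(next_hop("siteA", "siteB"))   # siteB: intra-region, direct
print(next_hop("siteA", "hq"))      # hub_apac: cross-region, exits via the hub
```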
2Digital: Fail fast is a celebrated strategy in tech. But some networks simply can’t afford to fail because people’s lives depend on them. How do you bring automation and speed to infrastructure where mistakes aren’t an option?
Dmytro: It’s a hard question. The first principle is segmentation. Every part of the network that cannot afford an outage needs to be isolated and secured. For example, OT manufacturing environments: any disruption there can cause serious operational and reputational damage to a company. There are multiple levels of segmentation – physical, macro, and micro – and that is the recommended way of protecting network infrastructure from outages.
Now, how do you bring automation even into the sensitive parts safely? There are multiple levels of automation maturity here. A simple script that connects directly to a router and makes changes is the lowest level. In this case, one error can cause production failure with no safety net.
The more mature approach is infrastructure as code. Your entire network configuration is written as code, with CI/CD pipelines managing deployment. The difference is that nothing goes directly to production. The code is analyzed, verified, and tested in a lab or staging environment. Pre- and post-checks are run there. Only when there’s solid proof that the change works does it get pushed to the real infrastructure.
For those highly critical segments, I’d recommend this kind of infrastructure-as-code approach rather than simple ad-hoc scripts written for each activity. It’s better to bring a mature automation tool stack and framework into those secure environments.
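A minimal sketch of the gated flow described above, with placeholder checks standing in for whatever a real CI/CD pipeline and network test harness would run:

```python
# Sketch of a gated deployment: a change is linted, exercised in a lab
# environment, and only pushed to production when pre- and post-checks pass.
# Check logic and change data are illustrative placeholders.

def config_lint_clean(change: dict) -> bool:
    """Placeholder pre-check: static validation of the rendered config."""
    return "interface" in change["config"]

def lab_checks_pass(change: dict) -> bool:
    """Placeholder post-check: e.g. sessions up and traffic tests green in the lab."""
    return change.get("lab_result") == "pass"

def deploy(change: dict) -> str:
    if not config_lint_clean(change):
        return "rejected: failed pre-checks, nothing touched"
    # Apply to the lab / staging fabric first, then verify.
    if not lab_checks_pass(change):
        return "rejected: failed in the lab, production untouched"
    # Only reached with proof the change works.
    return "pushed to production"

change = {"config": "interface Ethernet1\n  description uplink", "lab_result": "pass"}
print(deploy(change))   # pushed to production
```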
2Digital: Automation is eliminating thousands of hours of manual work. What do you actually expect engineers to do with that freed-up time?
Dmytro: A lot of engineers are afraid of automation. That’s true. But I have a different perspective on it.
Here’s the thing: there are so many activities that engineers simply don’t do today because they don’t have time. In a busy network engineering environment, people are constantly firefighting – troubleshooting problems, delivering projects, keeping things running.
Automate the routine and suddenly there’s space for actual intellectual work. Not another small project task, but stepping back and looking at the infrastructure from the outside. Thinking about how to optimize it. How to build better integration between different domains and elements of the network.

That’s the work a senior engineer should be doing – and almost never does. It’s rare in enterprise environments to have someone evaluating the infrastructure holistically. Most engineering departments just keep making incremental changes and take a narrow view of the whole thing.
2Digital: There’s already data on deskilling among clinicians. For example, endoscopists using AI assistance to find polyps in the colon started deskilling within three months. Wouldn’t you expect the same from engineers?
Dmytro: I’d push them toward more intellectual work – and that actually requires more effort, not less. But yes, it depends on the person. Younger engineers are generally more driven, more energetic. Senior ones can be more resistant to change.
Could some get lazy? It can happen. That’s where leadership has to step in – not just with motivation, which is temporary, but with a clear vision: this work is now handled by automation, and this is what you’re expected to do instead. There has to be enough meaningful work to fill that space. Otherwise, yes, the person might get optimized out.
2Digital: We can’t fully remove humans yet, partly because you can’t transfer accountability to a system. But humans are the bottleneck. How do you manage that?
Dmytro: The problem is that AI doesn’t know when it’s hallucinating. It predicts word after word without real understanding. For any decision that affects human lives or critical business outcomes, that level of maturity simply isn’t there yet. Human oversight is mandatory.
In the process of overall automation, if you want closed-loop automation, then humans become the slowest part. Nevertheless, a human in the loop with AI is still faster than a human without it. So I wouldn’t frame it as a problem so much as a transitional state.
In five to ten years, that calculus may shift. There’s already research showing AI outperforming humans on specific decision-making tasks – let’s say 93% accuracy versus 92% on certain pattern recognition tests. If that gap widens, the human in the loop becomes less about accuracy and more about one thing AI fundamentally cannot provide: ownership. Accountability has to sit with a person. AI cannot own a decision.

