I'm an IT leader based in Delhi. About 17 years in, across infrastructure, cloud, messaging, and operations, at companies that run global workforces.
This site is where I write about the parts of the job that nobody trains you for. Budgets. Vendor work. Running a team that's spread across time zones and still expected to act like one. Why teams stall. Why change initiatives die at 20% adoption. The small calls that make your next year either easier or harder.
It's not one person missing a deadline. It's a cross-time-zone team where nobody's sure who owns the outcome. A cloud migration where three engineers assumed someone else was watching the spend. An incident that ran four hours longer than it should because nobody knew who to escalate to. Micromanaging doesn't fix any of that. It makes it worse.
The budget was approved. The timeline was realistic. And it still went nowhere, because nobody dealt with the wall. Not the technical wall. The human one. Every team builds it, and the bricks are always the same. Here's what I've learned about the part of change management that nobody writes decks about.
They celebrate the deployment and forget to measure the outcome. Number of tools deployed. Hours of training done. Features activated on the SaaS stack. None of that tells you whether anything got better. The real questions are harder, and most leaders haven't asked them yet.
Solid headcount. Balanced skills. Clear KPIs. And still, incidents take longer than they should, projects slip, everyone stays busy, and outcomes don't reflect the effort. It's rarely a capability problem. It's usually structural, and most of the patterns repeat across every large environment I've worked in.
Most teams I've walked into had one person keeping things running through willpower. That's not resilience, it's a risk. My first job is usually to make that person's work boring. Once the system carries the load, the humans can think again.
"The team is responsible" means nobody is. Someone's name goes next to every critical outcome. People don't need a 40-page RACI. They need clarity: this is yours, you own it, this is what happens if it slips.
You can tell me what your company values, or you can show me your IT budget and I'll figure it out. I read budgets line by line. I renegotiate when I need to. I treat every line as a choice somebody made that can be revisited.
The strongest engineers I've worked with aren't always the most certified. They're the ones who investigate an odd latency spike at 11pm because it doesn't sit right, or teach themselves a better tool over a weekend. Tools can be taught. That instinct usually can't.
Too many status meetings exist for visibility, not effectiveness. Real impact comes from clearing the path: eliminating unnecessary forums, accelerating decisions, shielding the team from avoidable noise. If I can delete a meeting and replace it with a written update, I will.
I'll tell you what I think, and I'll tell you when I think you're wrong. I expect the same back. Most hard decisions in IT get easier when people stop dancing around what they actually mean.
I've spent about 17 years in enterprise IT. Most of that has been in hybrid infrastructure: GCP, Azure, VMware, on-prem, and the networking and identity work that stitches it together.
My current role is running IT infrastructure engineering for a global SaaS company, where I look after a team of engineers and an estate with users spread across multiple regions. Before that, I did messaging and directory work at a large hardware company and at a global IT services firm, and I started out leading a small Microsoft infrastructure team at a BPO more years ago than I want to count.
I'm based in Delhi.
If you'd like the formal version of any of this, I'm happy to share a CV on request.
At small scale, accountability problems look like one person missing one deadline. You can see it. You can fix it over coffee.
At enterprise scale, it's almost never that clean. It's a cross-time-zone team where nobody's sure who owns the outcome. A cloud migration where three engineers assumed someone else was watching the spend. An incident that runs four hours longer than it should because the escalation path wasn't written down, and half the people on the bridge are guessing.
And the instinct most leaders have is to tighten the grip. More status meetings. More dashboards. More oversight.
It doesn't work. It makes the problem worse, because the problem was never that people weren't being watched. The problem was that nobody was clear on what they owned in the first place.
Control is a poor substitute for clarity. If you find yourself reaching for control, you've already skipped the step that would have prevented the problem.
A few things I've seen work at this level:
Ownership has to be named, not implied. In hybrid environments with distributed teams, "the team is responsible" means nobody is. Someone's name goes next to every critical outcome. It feels uncomfortable at first, especially in cultures that prefer group responsibility. You do it anyway.
Accountability gaps show up in incidents first. If your post-mortems keep surfacing "we didn't know who to escalate to," that's not a people problem. It's a structural problem, and it will keep costing you hours on every major incident until you fix it.
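One habit that helps: write the escalation path down as something a machine (and a new joiner at 2am) can read, not a paragraph buried in a runbook. A rough sketch of what I mean, with made-up services, names, and thresholds:

```python
# Rough sketch of a written-down escalation path.
# Services, people, and thresholds are invented -- the point is that
# ownership is named, not implied.

ESCALATION = {
    "identity-sync": {
        "owner": "priya",                              # one named owner, not "the team"
        "chain": ["priya", "arjun", "it-infra-lead"],  # who gets paged, in order
        "escalate_after_minutes": 30,                  # nobody waits around guessing
    },
    "mail-routing": {
        "owner": "arjun",
        "chain": ["arjun", "priya", "it-infra-lead"],
        "escalate_after_minutes": 45,
    },
}

def who_to_page(service: str, minutes_open: int) -> str:
    """Return the next person to page for an open incident on this service."""
    entry = ESCALATION[service]
    step = minutes_open // entry["escalate_after_minutes"]
    chain = entry["chain"]
    return chain[min(step, len(chain) - 1)]

if __name__ == "__main__":
    print(who_to_page("identity-sync", 10))   # priya
    print(who_to_page("identity-sync", 70))   # it-infra-lead
```

The specifics don't matter. What matters is that "who gets paged next" is a lookup, not a debate on the bridge.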
Budget ownership is where accountability gets real. When IT leaders own the number, not just the project, decisions change. Engineers start asking "should we?" instead of just "can we?" That shift is worth more than any amount of Jira discipline.
The most accountable IT organisations I've been part of weren't the most monitored ones. They were the ones where every engineer knew exactly what they owned and what happened if it slipped.
Fix the clarity. The rest follows.
The budget was approved. The timeline was realistic. The tech was the right choice.
And it still went nowhere. Because nobody dealt with the wall.
Not the technical wall. The human one. Every team builds it, and the bricks are always the same. "We've always done it this way." The legacy system that technically still works. The vendor relationship nobody wants to touch. Plain old fear of looking stupid in front of peers.
None of that is irrational. It's self-preservation. And if you try to bulldoze through it, you'll spend the next 18 months wondering why adoption is stuck at 20%.
Some things I've learned the hard way:
Name the fear before it names you. When people sense something is being hidden, they fill the gap with worst-case scenarios. Saying "here's what I don't know yet" out loud does more than any change management deck I've ever seen.
Invite the resistance in. Your loudest critic is often the person who cares the most. Resistance is data, not defiance. The moment they feel heard, something shifts. Not always fully. But enough to matter.
People don't resist change, they resist loss. Loss of identity, status, certainty. In every on-prem-to-cloud migration I've been part of, the pushback was never really about the technology. It was about someone's 15 years of expertise suddenly feeling irrelevant. Show them what they're gaining. That matters more than any roadmap slide.
Go first. Say "I don't have all the answers either" and actually mean it. Nothing moves people faster than a leader who's visibly uncomfortable too.
The organisations that make it through disruption aren't the ones with the boldest vision decks. They're the ones where someone decided to sit with the discomfort instead of talking past it.
The lock opens from the inside.
After watching dozens of companies rush to "implement AI," I've noticed a painful pattern. They celebrate the deployment and forget to measure the outcome.
Most AI initiatives don't fail because the technology didn't work. They fail because the business case was never clearly defined in the first place.
Here's what companies typically measure: number of AI tools deployed. Hours of training completed. Features activated on the SaaS stack. None of that tells you whether anything got better.
What they should be measuring: revenue influenced per AI-assisted touchpoint. Cost per decision before and after. Output quality, not just speed. Whether customers can actually tell the difference.
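To make "cost per decision before and after" concrete, here's the kind of back-of-the-envelope arithmetic I mean. Every number below is invented; the point is the comparison, not the figures:

```python
# Back-of-the-envelope "cost per decision" comparison.
# All figures are hypothetical -- the discipline is the point, not the numbers.

def cost_per_decision(monthly_cost: float, decisions_per_month: int) -> float:
    """Total monthly cost of the process divided by the decisions it produces."""
    return monthly_cost / decisions_per_month

# Before: three analysts triaging tickets by hand.
before = cost_per_decision(monthly_cost=18_000, decisions_per_month=2_400)

# After: the same team plus an AI-assisted triage tool (licences + people time).
after = cost_per_decision(monthly_cost=21_000, decisions_per_month=4_800)

print(f"before: {before:.2f} per decision")   # 7.50
print(f"after:  {after:.2f} per decision")    # 4.38
print(f"change: {(1 - after / before):.0%}")  # ~42% cheaper per decision
```

If you can't fill in numbers like these for your own initiative, that's the finding.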
Three questions I think every leader should ask before signing an AI contract:
What specific problem are we solving? "We want to be an AI-first company" is a slogan, not a business case. If you can't name the problem in one sentence, the tool isn't going to find it for you.
What does success look like in 90 days? If you can't define it, you can't measure it. If you can't measure it, you can't improve it. And you definitely can't defend the spend at the next budget review.
What's the cost of not acting? Sometimes the ROI conversation is really a competitive risk conversation in disguise. That's a legitimate reason to invest. It's also a reason to be honest about what you're actually buying.
AI is not a strategy. It's a capability. And like any capability, its value shows up only when it's pointed at the right problem.
The companies winning with AI right now aren't the ones with the most tools. They're the ones who picked one broken process, applied AI with precision, measured the impact rigorously, and then scaled what worked.
That's it. No magic. No hype. Just disciplined execution.
Stop chasing AI. Start defining outcomes.
Solid headcount. Balanced skills. Clearly defined KPIs.
And still, incidents take longer than they should. Projects slip. Everyone stays busy, and outcomes don't reflect the effort.
This is rarely a capability or intent issue. More often, it comes down to how teams are structured and supported. Across large infrastructure environments, particularly in hybrid cloud operations, I keep seeing the same patterns:
Ownership solves more than process ever will. Breaking large operations teams into smaller, domain-aligned groups (cloud, network, endpoints) shifts behaviour in a way that no amount of process documentation can. Teams start treating their area as theirs. Response times improve. Finger-pointing drops. The 40-page RACI that nobody reads turns out to have been the wrong tool all along.
Curiosity compounds more than credentials. The strongest engineers I've worked with weren't always the most certified. They were the ones who'd investigate an unusual latency spike at 11pm because it didn't sit right, or teach themselves a better tool over a weekend. Tools can be taught. That instinct to explore and improve usually can't.
The best leaders remove friction, not add layers. Too many status meetings exist for visibility, not effectiveness. Real impact comes from clearing the path. Eliminating unnecessary forums. Accelerating decisions. Shielding the team from avoidable noise. If I can replace a meeting with a written update that actually gets read, I will every time.
Culture reveals itself under pressure. At 2am, culture isn't what's written in a values deck. It's what actually happens. Is the fix documented or just closed? Are risks raised early or delayed? Do issues trigger blame or learning? Watch a few incidents and you'll know your real culture, not the one on the website.
Scaling a tech organisation is rarely about adding more people. Usually it's about giving the people already in place the clarity, autonomy, and air cover to perform at their best.
If your team looks strong on paper and still feels slow, start with structure. Start with ownership. Start with what gets in their way. The rest is usually easier than it looks.