Workshop

Thinking About "It Depends"

Published on: 2025-04-15

By: Ian McCutcheon

Thinking About "It Depends" – Can We Get Better Answers?

This whole topic – figuring out how all our complex tech systems really connect – is something I've been personally thinking about for, well, decades. No kidding. It probably goes back to the late 90s, sitting on my first big incident call where changing 'System A' unexpectedly broke 'System Z', and nobody quite knew why beforehand. Ever since then, the challenge of truly grasping these dependencies has been this puzzle rattling around in my head. So, what follows isn't tied to any specific project; it's more me exploring an idea about a problem that's fascinated (and occasionally frustrated!) me for a very long time.

You know that moment? Someone asks, "Can we patch this server tonight?" or "What happens if that database gets sluggish?" And the answer that often comes back, maybe after a thoughtful pause, is... "Well, it depends."

I've heard that phrase a lot, and I bet you have too. It feels like the default response when we're navigating the tech jungles we've built – all those interconnected servers, cloud services, applications, APIs, and network bits. And while it's often the honest answer, "it depends" always leaves me feeling a bit uneasy. It signals hidden connections, potential surprises down the road, and maybe a gap in our collective understanding of how things really work together. In a world where tech drives so much, that uncertainty feels like something worth thinking about.

We're Trying, But It's Still Hard

It's not like we haven't been trying to solve this! We've got monitoring tools, discovery platforms, maybe even detailed CMDB inventories. These are often great tools, giving us heaps of data about what hardware and software we own and how it's ticking along. Some even try to map out the connections automatically based on network chatter or how processes talk to each other.

But even with all that data, the "it depends" moment still pops up frequently. Why is that? Maybe it's because trying to map everything automatically can create these incredibly complex diagrams that are hard to decipher or go stale almost immediately. Sometimes, I wonder if they capture the technical 'what' but miss the 'why' – the human design choices, the reason things were set up in a particular way. It feels like we can get lost in the weeds, struggling to see the overall landscape or know which connections truly matter most.

Just an Idea: What If We Flipped the Starting Point?

Lately, I've been wondering about a different angle. Instead of diving headfirst into the massive sea of technical data, what if we began by sketching out the big picture first, using our own team's knowledge?

Imagine just drawing simple circles on a virtual whiteboard: "This is our main Website," "Here's the Billing System," "These servers handle our core DNS." These aren't just technical lists; they represent chunks of business function. We already know, roughly, what belongs in these buckets.
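Just to make that concrete, here's one way those circles could be captured as plain data instead of a drawing. It's a quick Python sketch, and to be clear, the group names and components are all made up for illustration; it's the shape of the idea, not a real inventory.

    # Hypothetical "circles on a whiteboard" captured as data.
    # Group names and component names are invented for illustration.
    sketch = {
        "Website": {"web01", "web02", "cdn-edge"},
        "Billing System": {"billing-app", "billing-db"},
        "Core DNS": {"dns01", "dns02"},
    }

    def group_of(component):
        """Return the business-function circle a component was placed in, if any."""
        for group, members in sketch.items():
            if component in members:
                return group
        return None

    print(group_of("web01"))   # -> 'Website'
    print(group_of("db_XYZ"))  # -> None: nobody has placed it in a circle yet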

And we also know about things like redundancy. We know taking down one server in a pair is usually fine, but taking down both is a big no-no. That kind of built-in resilience thinking is something humans understand well, but it might get lost in purely automated maps.
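That kind of knowledge is also pretty easy to write down once we decide to capture it. Another hypothetical Python sketch: each redundant group records how many members must stay up, and a tiny check says whether a planned outage is safe. Again, the names and numbers are invented.

    # Hypothetical redundancy rules: how many members of each group must stay up.
    redundancy = {
        "Core DNS": {"members": {"dns01", "dns02"}, "min_up": 1},
    }

    def outage_is_safe(group, going_down):
        """True if taking these members down still leaves enough of them running."""
        rule = redundancy[group]
        remaining = rule["members"] - set(going_down)
        return len(remaining) >= rule["min_up"]

    print(outage_is_safe("Core DNS", {"dns01"}))           # True: one of the pair is fine
    print(outage_is_safe("Core DNS", {"dns01", "dns02"}))  # False: both down is the big no-no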

Could starting with this kind of rough, human-drawn sketch give us a better framework? A map that starts with meaning and can then be filled in with details, rather than starting with overwhelming detail and trying to find the meaning later?

Could AI Be a Helpful Navigator Here?

Now, this doesn't mean going back to physical whiteboards and markers for good; a static drawing would never keep up. But maybe this is where AI could lend a hand, acting less like an automatic map-maker and more like a helpful navigator or research assistant.

What if AI agents could plug into all those data sources we have – the inventories, monitoring feeds, logs, cloud consoles? Their job wouldn't be to just draw lines everywhere. Instead, they could do something like this (there's a rough code sketch of the loop just after the list):

  1. Look at the technical details: See that server A is talking to database B, or App C is calling API D.
  2. Compare with our sketch: Look at the circles we drew ("Website," "Billing") and try to figure out where these technical interactions fit best, based on clues like server names or components we already placed.
  3. Offer suggestions: Instead of changing the map directly, maybe the AI could just raise its hand: "Hey, I noticed server web01 (which you put in 'Website') talks a lot to this db_XYZ database I found. Does that database belong in the 'Website' picture too?" Or, "That link you drew between System X and Old System Y? I haven't seen any traffic there for months. Maybe worth a look?"
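Here's that rough code sketch. It's only meant to show the shape of the loop: the data structures, the names, and the "no traffic lately" check are all assumptions on my part, and a real agent would be pulling its observations from monitoring feeds and inventories rather than a hard-coded list. The important bit is that it returns suggestions for a person and never edits the map itself.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        """One technical fact an agent pulled from monitoring or logs (illustrative)."""
        source: str
        target: str
        traffic_seen: bool  # was any traffic observed in the lookback window?

    def suggest(observations, sketch, drawn_links):
        """Turn raw observations into questions for a human; never change the sketch."""
        placed = {c: g for g, members in sketch.items() for c in members}
        suggestions = []

        # Steps 1 and 2: look at what actually talks to what, and compare with the circles.
        for obs in observations:
            src_group = placed.get(obs.source)
            if obs.traffic_seen and src_group and obs.target not in placed:
                suggestions.append(
                    f"{obs.source} (in '{src_group}') talks a lot to {obs.target}, "
                    f"which isn't in any circle yet. Does it belong in '{src_group}'?"
                )

        # Step 3: links humans drew that no observed traffic backs up any more.
        observed = {(o.source, o.target) for o in observations if o.traffic_seen}
        for src, tgt in drawn_links:
            if (src, tgt) not in observed:
                suggestions.append(
                    f"The link you drew from {src} to {tgt} has shown no traffic lately. Worth a look?"
                )
        return suggestions

    # Example run with the hypothetical names from above:
    sketch = {"Website": {"web01"}, "Old System Y": {"legacy01"}}
    obs = [Observation("web01", "db_XYZ", traffic_seen=True)]
    links = {("System X", "Old System Y")}
    for s in suggest(obs, sketch, links):
        print(s)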

This way, the AI could handle the tedious task of sifting through data, while the team uses their knowledge to confirm or correct the suggestions. It feels like it could keep the map grounded in reality without letting it become an unmanageable mess, with humans staying in the driver's seat.
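And that "driver's seat" part could stay deliberately boring: suggestions queue up, a person accepts or dismisses each one, and only an explicit accept ever touches the sketch. One more tiny, hypothetical illustration:

    # Hypothetical human-in-the-loop step: only an explicit "accept" changes the map.
    sketch = {"Website": {"web01"}}

    def accept_suggestion(sketch, group, component):
        """Apply a suggestion a person has confirmed; a dismissal simply does nothing."""
        sketch.setdefault(group, set()).add(component)

    # Accepting the earlier 'does db_XYZ belong in Website?' suggestion:
    accept_suggestion(sketch, "Website", "db_XYZ")
    print(sketch)  # {'Website': {'web01', 'db_XYZ'}} (set order may vary)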

Just Wondering... What's the Goal?

Thinking about AI doing this reliably is still a journey, for sure. The tech is moving fast, but it's complex stuff. But maybe the specific tech isn't the starting point. Perhaps the real question is, what kind of understanding are we aiming for?

Wouldn't it be great to move past "it depends"? To reach a point where questions like "Can we patch this server tonight?" or "What happens if that database gets sluggish?" get a confident answer, because we actually know which connections matter and why?

I'm not sure if this "human-first sketch, AI-assisted detailing" idea is the perfect answer, or even fully practical yet. It's just a thought I've been exploring, born from many years of watching this challenge play out. But it feels like aiming for a meaningful, curated understanding of our systems, rather than just drowning in data, might be a worthwhile direction to consider.