In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Mike Huls.
Mike is a tech lead who works at the intersection of data engineering, AI, and architecture, helping organizations turn complex data landscapes into reliable, usable systems. With a strong full-stack background, he designs end-to-end solutions that balance technical depth with business value. Alongside client work, he builds and shares practical tools and insights on data platforms, AI systems, and scalable architectures.
Do you see yourself as a full-stack developer? How does your experience across the whole stack (from frontend to database) change the way you view the data scientist role?
I do, but not in the sense of personally building every layer. For me, full-stack means understanding how architectural decisions at one layer shape system behavior, risk, and cost over time. That perspective is essential when designing systems that have to survive change.
This perspective also influences how I view the data scientist role. Models created in notebooks are only the beginning. Real value emerges when those models are embedded in production systems with proper data pipelines, APIs, governance, and user-facing interfaces. Data science becomes impactful when it’s treated as a core part of a larger system, not as an isolated activity.
You cover a wide range of topics. How do you decide what to focus on next, and how do you know when a new topic is worth exploring?
I tend to follow recurring friction. When I see multiple teams struggle with the same problems, whether technical or organizational, I take that as a signal that the issue is structural rather than individual, and worth addressing at the architectural or process level.
I also deliberately experiment with new technologies, not for novelty, but to understand their trade-offs. A topic becomes worth writing about when it either solves a real problem I’m currently facing or reveals risks that aren’t yet widely understood. Finally, I write about topics I personally find interesting and worth exploring, because sustained curiosity is what allows me to go deep.
You’ve written about LangGraph, MCP, and self-hosted agents. What’s the biggest misconception you think people have about AI agents today?
Agents are genuinely powerful and open up new possibilities. The misconception is that they’re simple. It’s easy today to assemble cloud infrastructure, connect an agent framework, and produce something that appears to work. That accessibility is valuable, but it masks a lot of complexity.
Once agents move beyond demos, the real challenges surface. State management, permissions, cost control, observability, and failure handling are often underestimated. Without clear boundaries and ownership, agents become unpredictable, expensive, and risky to operate. They are not just prompts with tools; they are long-lived software systems, and they must be engineered and operated accordingly.
In your article on Layered Architecture, you mention that adding features can often feel like “open-heart surgery.” For a beginner or a small data team looking to avoid this, what’s your key advice on setting up an architecture?
“The only constant is change” is a cliché for good reason, so optimize for change rather than for initial delivery speed. Even a minimal form of layered thinking helps: separating domain logic, application flow, and infrastructure concerns.
The goal isn’t architectural perfection on day one or perfect categorization. It’s about creating clear boundaries that allow the system to evolve without constant rewrites. A little upfront discipline pays off significantly as systems grow.
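The three-way split described above can be sketched in a few lines of Python. The module split, the `price_with_discount` rule, and the in-memory repository are illustrative assumptions, not examples from the article:

```python
# A minimal sketch of layered separation: domain rules know nothing about
# I/O, infrastructure is swappable, and the application layer wires them up.

# --- domain layer: pure business rules, no I/O ---
def price_with_discount(price: float, loyalty_years: int) -> float:
    """Hypothetical rule: 5% off per loyalty year, capped at 20%."""
    discount = min(0.05 * loyalty_years, 0.20)
    return round(price * (1 - discount), 2)


# --- infrastructure layer: persistence behind a small interface ---
class InMemoryOrderRepo:
    """Stand-in for a database; could be replaced without touching the domain."""

    def __init__(self):
        self._orders = []

    def save(self, order: dict) -> None:
        self._orders.append(order)

    def all(self) -> list[dict]:
        return list(self._orders)


# --- application layer: orchestrates domain + infrastructure ---
def place_order(repo, price: float, loyalty_years: int) -> dict:
    order = {"total": price_with_discount(price, loyalty_years)}
    repo.save(order)
    return order


repo = InMemoryOrderRepo()
print(place_order(repo, 100.0, loyalty_years=3)["total"])  # prints 85.0
```

Swapping `InMemoryOrderRepo` for a real database client changes one layer, not the whole system, which is exactly the “evolve without constant rewrites” property.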
You’ve benchmarked PostgreSQL insert methods and noted that “faster is not always better.” In a production ML pipeline, what’s a scenario where you’d deliberately choose a slower, safer insertion method?
When correctness, traceability, and recoverability matter more than raw throughput. In many pipelines, reducing runtime by a few seconds provides little benefit compared to the risk introduced by weaker guarantees.
For example, pipelines that feed regulatory reporting, financial decision-making, or long-lived training datasets benefit from transactional safety and explicit validation. Silent data corruption is far more costly than accepting modest performance trade-offs, especially when data becomes a long-term asset others will build on.
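The “slower but safer” pattern, explicit validation plus all-or-nothing transactional inserts, can be sketched as follows. This uses sqlite3 so the example is self-contained; the same shape applies to PostgreSQL with a driver like psycopg. The `payments` schema and validation rule are illustrative assumptions:

```python
# Sketch of a validated, transactional batch insert: the whole batch is
# rejected if any row fails validation, and the transaction rolls back on
# error. Shown with sqlite3 for portability; the pattern, not the driver,
# is the point.
import sqlite3


def validated_insert(conn: sqlite3.Connection, rows: list[tuple]) -> int:
    """Insert rows atomically; reject the entire batch on any invalid row."""
    for ts, amount in rows:
        if amount < 0:
            raise ValueError(f"negative amount at {ts}: {amount}")
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO payments (ts, amount) VALUES (?, ?)", rows
        )
    return len(rows)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (ts TEXT, amount REAL)")
n = validated_insert(conn, [("2026-01-01", 10.0), ("2026-01-02", 5.5)])
print(n)  # prints 2
```

This is measurably slower than a bulk `COPY`, but a bad batch leaves the table untouched instead of half-written, which is the guarantee that matters for regulatory or financial data.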
In your Personal, Agentic Assistants article, you built a 100% private, self-hosted platform. Why was avoiding “token costs” and “privacy leaks” more important to you than using a more powerful, cloud-based LLM?
In my daily work I’ve experienced that trust in a system is fundamental to its adoption. Token costs, opaque data flows, and external dependencies subtly influence how systems are used and perceived.
I also made a conscious choice not to route my personal or sensitive data through external cloud providers, since there are limited guarantees on how that data is handled over time. By keeping the system self-hosted, I could design an assistant that is predictable, auditable, and aligned with European privacy expectations. Users have full control over what the assistant has access to, and this lowers the barrier to using it.
Finally, not every use case requires the largest or most expensive model. By decoupling the system from a single provider, users can choose the model that best fits their requirements, balancing capability, cost, and risk.
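One way to picture that decoupling: callers depend on a tiny interface, and concrete backends are chosen by policy. The class names, stub responses, and routing rule below are illustrative assumptions, not details of Mike's platform:

```python
# A minimal sketch of provider decoupling via structural typing: callers
# only see the `ChatModel` interface; a routing function decides whether a
# task may leave the machine. Backends here are stubs, not real clients.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class LocalModel:
    """Stand-in for a self-hosted model served on local hardware."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt[:40]}"


class CloudModel:
    """Stand-in for a hosted frontier model, used only when warranted."""

    def complete(self, prompt: str) -> str:
        return f"[cloud] {prompt[:40]}"


def pick_model(sensitive: bool, needs_frontier: bool) -> ChatModel:
    # Policy: sensitive data never leaves the machine; otherwise weigh
    # capability against cost and risk.
    if sensitive or not needs_frontier:
        return LocalModel()
    return CloudModel()


model = pick_model(sensitive=True, needs_frontier=True)
print(model.complete("summarize my notes"))  # prints "[local] summarize my notes"
```

Because the rest of the assistant only ever sees `ChatModel`, swapping providers, or adding a new one, is a configuration change rather than a rewrite.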
How do you see the day-to-day work of a data professional changing in 2026?
Despite common stereotypes, data and software engineering are highly social professions. I strongly believe that the most critical part of the work happens before writing code: aligning with stakeholders, understanding the problem domain, and designing solutions that fit existing systems and teams.
This upfront work becomes even more important as agent-assisted development accelerates implementation. Without clear goals, context, and constraints, agents amplify confusion rather than productivity.
In 2026, data professionals will spend more time shaping systems, defining boundaries, validating assumptions, and ensuring responsible behavior in production environments.
Looking ahead at the rest of 2026, what big topics will define the year for data professionals, in your opinion? Why?
Generative AI and agent-based systems will continue to grow, but the bigger shift is their maturation into first-class production systems rather than experiments.
That transition depends on trustworthy, high-quality, accessible data and solid engineering practices. Consequently, full-stack thinking and system-level design will become increasingly important for organizations that want to apply AI responsibly and at scale.
To learn more about Mike’s work and stay up to date with his latest articles, you can follow him on TDS or LinkedIn.



