Building AI Into Buildings — Without Sending Data to San Francisco
31 March 2026
At Senbee, we build intelligent systems for building operations. Connect, Reporting, Scheduling — these products sit at the intersection of IoT sensors, automation, and increasingly, AI. The AI part is where things get complicated. Not technically. Politically.
The Easy Path
The easiest way to add AI to a product in 2026 is to call an API. OpenAI, Anthropic, Google — pick your favourite. You send data up, you get intelligence back. It works. It works really well.
But "up" means the United States. And the data we're working with isn't blog posts or chat messages. It's sensor readings from real buildings. Energy consumption patterns. Occupancy data. HVAC schedules. For some customers, this is classified as operational infrastructure data. For others, it falls under GDPR by virtue of tracking how people move through spaces.
Sending that to a server in Virginia isn't just a bad look. For some of our customers, it's a non-starter.
What "EU-Hosted" Actually Means
A lot of AI providers now offer "EU regions." OpenAI has European endpoints. Azure has data centres in the Netherlands and Ireland. Problem solved?
Not really.
An EU data centre operated by a US company is still subject to US law. The CLOUD Act — the Clarifying Lawful Overseas Use of Data Act — gives American authorities the right to compel US companies to hand over data regardless of where it's physically stored. Your data can be in Frankfurt and still be legally accessible from Washington.
This isn't paranoia. It's the law.
The Practical Middle Ground
So what do you actually do? Here's where we've landed — not a manifesto, just the pragmatic reality:
1. Run what you can locally
For classification tasks, anomaly detection, and scheduling optimisation, we run models on our own infrastructure. Smaller, specialised models that don't need a 100-billion-parameter brain. They're faster, cheaper, and the data never leaves our network.
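As a flavour of what "run it locally" can mean, here is a minimal sketch of the kind of anomaly check that needs no external service at all: a plain z-score test over a window of sensor readings. The function name, threshold, and sample data are illustrative, not our production code.

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    """Return indices of readings more than `threshold` standard
    deviations from the mean -- a simple z-score test that runs
    entirely on our own infrastructure."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Steady energy consumption with one spike at index 4
readings = [21.0, 20.5, 21.2, 20.8, 95.0, 21.1, 20.9, 21.0]
print(detect_anomalies(readings))  # → [4]
```

Real deployments use more robust statistics and trained models, but the point stands: the data never crosses the network boundary.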
2. Anonymise before you externalise
When we do use external AI services — and we do, for certain language tasks and complex reasoning — we strip identifying information first. Building IDs become hashes. Addresses become regions. The model doesn't need to know it's looking at Senbee customer data.
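A sketch of that stripping step, assuming a record with a building ID and a city field (the key, field names, and region table are hypothetical). A keyed hash gives a stable pseudonym that can't be reversed without the secret, which stays on our side:

```python
import hashlib
import hmac

# Hypothetical secret held only on our infrastructure; a keyed hash
# stops anyone without the key from brute-forcing building IDs back out.
PSEUDONYM_KEY = b"local-secret-never-sent-externally"

# Illustrative address-to-region lookup; the real mapping is richer.
REGION_BY_CITY = {"Aarhus": "Midtjylland", "Copenhagen": "Hovedstaden"}

def anonymise(record: dict) -> dict:
    """Return a copy that is safe to send to an external model:
    a stable pseudonym replaces the building ID, a region replaces the city."""
    out = dict(record)
    out["building_id"] = hmac.new(
        PSEUDONYM_KEY, record["building_id"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    out["city"] = REGION_BY_CITY.get(record["city"], "unknown")
    return out

reading = {"building_id": "DK-AAR-0042", "city": "Aarhus", "kwh": 412.7}
safe = anonymise(reading)
```

Because the pseudonym is deterministic, the external model can still correlate readings from the same building across requests without ever learning which building it is.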
3. Keep the option to switch
We've built our AI integrations behind abstraction layers. Today it might be OpenAI. Tomorrow it might be Mistral, or a fine-tuned open model running on European GPUs. The interface stays the same. The provider is a detail.
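In Python terms, the shape of such an abstraction layer might look like the sketch below. The names (`CompletionProvider`, `summarise_schedule`) and the stubbed providers are illustrative, not our actual interfaces:

```python
from typing import Protocol

class CompletionProvider(Protocol):
    """The only surface the rest of the codebase is allowed to see."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    """Would wrap a hosted API client; stubbed for illustration."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("calls the external API in production")

class LocalProvider:
    """Would wrap a model running on our own GPUs; stubbed here."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarise_schedule(provider: CompletionProvider, schedule: str) -> str:
    # Business logic depends on the interface, never on the vendor.
    return provider.complete(f"Summarise this HVAC schedule: {schedule}")

print(summarise_schedule(LocalProvider(), "weekdays 07-18"))
```

Swapping providers then means adding one class that satisfies the protocol; no call site changes.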
4. Be honest about the trade-offs
Local models are less capable for certain tasks. Anonymisation adds complexity. Abstraction layers take time to build. None of this is free. But the alternative — telling a Danish municipality that their building data is processed under US jurisdiction — isn't free either.
The Conversation Is Changing
Two years ago, nobody asked us where our AI ran. Now it comes up in almost every enterprise conversation. Facility managers might not use the word "sovereignty," but they ask: "Is our data leaving Denmark?" "Who can access it?" "What happens if that American company changes its terms?"
These are reasonable questions. And having real answers — not marketing answers — is becoming a competitive advantage.
AI on the Inside
It's not just our products that run on AI. Internally, the adoption has been enormous. For our internal systems — tooling, automation, dashboards — AI-written code now accounts for roughly 90–95% of what we ship. It's changed how fast we move.
Customer-facing systems are a different story. When security and control matter — and they always matter when you're handling building infrastructure data — testing after the fact isn't always enough. You need to understand what the code does, why it does it, and what happens when it fails. For those systems, we're at about 75–80% AI-written code. Still a lot. But the remaining 20–25% is where the hard thinking lives.
We Don't Have All the Answers
I'd love to end this with a neat conclusion. But honestly, we don't have it all figured out. The frontier models change every few weeks. The regulatory landscape shifts. New providers appear, old ones change their terms. The tools we rely on today might not be the tools we use a month from now.
What we have is a set of principles — keep data close, stay flexible, be honest — and a willingness to revisit every decision as the ground moves. That's the best anyone can do right now.