Data Management in the Water Industry: Cloud vs On-Prem in 2025

By Jay Shah, Technical Lead, Software Team, Sunnyvale, California

In the water industry, data isn’t just numbers. It’s trust. It’s compliance. It’s human safety. From municipal utilities testing for lead and turbidity to industrial parks monitoring effluent discharge, we’ve entered a world where the quality and accessibility of water data determine not just operational efficiency, but public health outcomes.

As a team building the systems that analyze water quality parameters – from TDS and pH to microbial load – we often get asked a deceptively simple question: “Should we be running this on the cloud or on-prem?”

There’s no easy answer. And frankly, there shouldn’t be. Because the choice between cloud infrastructure and on-premises deployment is deeply contextual – shaped by the realities of lab certifications, SCADA integration, regulatory oversight, internet reliability, and sometimes even the climate of the region.

Let’s break this down, not from a vendor’s playbook, but from the trenches.

Why Cloud Looks Like the Future (But Isn’t the Whole Picture)

Cloud-first thinking has reshaped how teams build and ship products across industries. In the water sector, too, we’re seeing cloud architectures unlock new possibilities.

For instance, when we began pushing real-time water quality data to our customers via dashboards, cloud compute made life easier. We could spin up scalable environments, stream sensor data, and apply analytics – all without worrying about local server health. Our field partners could log in from anywhere, whether a municipal officer in Fresno or a centralized operations team managing multiple plants from a network operations center, enabling predictive maintenance regardless of location.
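To make that concrete, here is a minimal sketch of the device-to-dashboard hop: one sensor reading posted to a hosted ingestion endpoint over HTTPS. The URL, payload fields, and bearer-token scheme are illustrative assumptions, not our actual API.

```python
import time
import requests  # third-party HTTP client: pip install requests

# Hypothetical ingestion endpoint and payload shape, for illustration only.
INGEST_URL = "https://example.com/api/v1/readings"

def push_reading(device_id: str, ph: float, tds_ppm: float, token: str) -> None:
    """Stream one sensor reading to a cloud dashboard backend."""
    payload = {
        "device_id": device_id,
        "timestamp": time.time(),  # epoch seconds
        "ph": ph,
        "tds_ppm": tds_ppm,
    }
    resp = requests.post(
        INGEST_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()  # surface 4xx/5xx errors instead of failing silently
```

Once readings land in a store like this, the cloud side can scale dashboards and analytics independently of any one plant’s hardware.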

As more utilities consider cloud adoption, one consistent concern is cybersecurity — especially in light of increasing attacks on public infrastructure. Many IT teams ask how cloud-hosted environments handle encryption, authentication, and access controls compared to their on-prem setups.

But what started as a technical win quickly invited deeper questions.

When we work with ISO-accredited labs, many of them still require raw sample data to be stored locally, and often permanently. Some refuse to let sensitive customer data leave the premises at all, due to audit-trail and chain-of-custody rules. Others simply don’t trust “the cloud,” especially when they’ve experienced outages or billing shocks on previous SaaS deployments.

And let’s not forget compliance. In some regulatory frameworks (especially international ones), you need to prove not just how secure your data is – but where it physically lives.

On-Prem Isn’t Legacy – It’s Contextual

There’s a tendency in tech circles to treat on-prem like it’s outdated. But in the water world, it’s often the opposite – on-prem infrastructure is carefully engineered for precision, reliability, and compliance.

We’ve worked with treatment plants that operate in remote or semi-urban locations where network connectivity is patchy. In those cases, cloud-based systems are simply not viable. You can’t upload sensor data to AWS if your line drops every 45 minutes. Local servers – even a compact edge appliance sitting in a dusty control room – become mission-critical.

Also, when you’re dealing with systems like SCADA or legacy PLC controllers, local integration is smoother. These systems are real-time, and often unforgiving of latency. The data collected – flow rate, pressure, chlorine residual – needs to be acted on quickly. Delays aren’t just inefficient; they can be dangerous.
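To illustrate why that latency matters, here is a toy example of a safety check that runs entirely on the local controller, with no network round trip. The chlorine bounds are illustrative placeholders, not regulatory limits.

```python
# Illustrative on-device check: the decision is made locally, so a dropped
# uplink or a slow cloud round trip never delays the response.
CHLORINE_MIN_MG_L = 0.2  # hypothetical lower bound for residual chlorine
CHLORINE_MAX_MG_L = 4.0  # hypothetical upper bound

def check_chlorine(residual_mg_l: float) -> str:
    """Classify a chlorine residual reading immediately, on-device."""
    if residual_mg_l < CHLORINE_MIN_MG_L:
        return "ALARM_LOW"   # e.g. interlock the dosing pump locally
    if residual_mg_l > CHLORINE_MAX_MG_L:
        return "ALARM_HIGH"  # e.g. trip a local relief path
    return "OK"
```

The controller acts on the result and logs it for later sync; the cloud only needs to see the record eventually, not instantly.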

Remote access to these systems is possible, but typically relies on a layered security model – VPN tunnels, IP whitelisting, and in many cases, platforms like VTScada Anywhere Client that allow remote viewing and limited control. These are useful but require deliberate access control, especially when working with critical infrastructure.
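As a rough sketch of what “deliberate access control” can mean in code, the gate below allows read-only viewing only from an allowlisted (e.g. VPN) subnet. The subnet and role names are hypothetical; real deployments layer this behind the VPN itself and the SCADA platform’s own authentication.

```python
import ipaddress

# Hypothetical VPN subnet and read-only roles, for illustration only.
ALLOWED_NETWORKS = [ipaddress.ip_network("10.8.0.0/24")]
READ_ONLY_ROLES = {"viewer", "auditor"}

def may_view(source_ip: str, role: str) -> bool:
    """Permit read-only SCADA viewing only from allowlisted networks."""
    ip = ipaddress.ip_address(source_ip)
    on_allowlist = any(ip in net for net in ALLOWED_NETWORKS)
    return on_allowlist and role in READ_ONLY_ROLES
```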

We’ve built setups where data is logged locally and only synced to the cloud every few hours – after encryption, compression, and validation. And it works.
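Here is a minimal sketch of that sync step, assuming a simple required-fields validation rule and symmetric encryption with a pre-shared key; the real pipeline’s validation and transport are more involved.

```python
import gzip
import json
from cryptography.fernet import Fernet  # pip install cryptography

def prepare_batch(readings: list[dict], key: bytes) -> bytes:
    """Validate, compress, then encrypt a batch of locally logged readings."""
    # 1. Validation: drop records missing required fields (simplified rule).
    valid = [r for r in readings if "timestamp" in r and "value" in r]

    # 2. Compression: sensor batches are repetitive, so gzip shrinks them well.
    compressed = gzip.compress(json.dumps(valid).encode("utf-8"))

    # 3. Encryption: authenticated symmetric encryption; `key` comes from
    #    Fernet.generate_key() and is provisioned to the device ahead of time.
    return Fernet(key).encrypt(compressed)
```

The encrypted blob sits in a local queue on disk and goes out on the next sync window, so a dropped connection delays delivery but never loses data.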

So, What Are We Doing Today? (Spoiler: It’s Hybrid-ish, with Edge at the Core)

When people ask us if we’re “cloud or on-prem,” the honest answer is: neither fully, and both partially. Our systems are built around device-side intelligence, meaning most operational data is first processed and stored on the device itself. That is a departure from the 100% cloud processing model we historically advocated. As edge computing matures, particularly in global contexts with strict data residency laws or intermittent connectivity, hybrid architectures are becoming increasingly relevant, and we’re re-evaluating our earlier emphasis on centralized cloud processing in regions where edge-first design unlocks deployment flexibility. Processing locally ensures continuity even in low-connectivity environments and gives our platform a level of resilience that pure cloud setups can’t always guarantee.

For third-party integrations, we publish REST APIs that allow clients to programmatically retrieve data and store it within their own on-premises systems, such as SCADA, LIMS, or compliance platforms. This design not only respects data sovereignty and regulatory needs but also makes it easier for clients to maintain their existing internal workflows without disruption.
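For a feel of the integration pattern (not the documented API itself; the endpoint path, parameters, and response shape below are placeholders), a client-side pull might look like this:

```python
import requests  # pip install requests

BASE_URL = "https://api.example-waterdata.com/v1"  # placeholder host

def fetch_results(api_key: str, site_id: str, since_iso: str) -> list[dict]:
    """Retrieve recent results for one site, for loading into LIMS or SCADA."""
    resp = requests.get(
        f"{BASE_URL}/sites/{site_id}/results",
        params={"since": since_iso},  # e.g. "2025-01-01T00:00:00Z"
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]
```

Each record can then be written into the utility’s own historian or compliance database, so raw data stays inside the client’s network.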

We still leverage the cloud for analytics, dashboards, alert configuration, and long-term insights — but always in a way that complements the edge-first design. It’s not your traditional “hybrid” model. It’s a more distributed, modular approach to infrastructure — one that fits the real-world constraints and responsibilities of the water industry.

And ultimately, it allows us to deliver both flexibility for innovation and stability for compliance — without forcing customers to choose between the two.

Looking Ahead: Governance, Not Hype

We’re now seeing conversations shift from “Where do we store the data?” to “How do we govern it responsibly?” As climate change increases volatility in water systems, and as new pollutants (like PFAS) enter public discourse, data transparency and data provenance are becoming core concerns.

We’re also beginning to think more about edge computing – processing closer to the source. Whether that’s a nitrate sensor in a river or a mobile testing lab near a refugee camp, we need systems that adapt to constrained environments without compromising on security or accuracy.

And that’s the real challenge for infrastructure teams in water: to build systems that don’t just scale but endure. Systems that aren’t fragile in the face of weather, outages, or unexpected regulation updates. Systems that respect the complexity of our industry, without making things more complicated than they already are.

Final Thoughts: Tech Choices That Respect the Water Itself

Ultimately, data management in the water industry isn’t about the cloud or on-prem. It’s about choosing infrastructure that respects what the data represents – clean water, safe communities, and environmental stewardship.

We’re not building for clicks, ads, or dopamine loops. We’re building for clarity, compliance, and public health.

So if you’re weighing your infrastructure strategy, don’t ask “What’s trending?” Ask:

  • Where does my data live?

  • Who needs access to it, and when?

  • How can I take action when something happens?

And most importantly: does my current setup help us do the work that matters – better, faster, and more responsibly?

That’s how we move forward. One dataset at a time.

And hopefully, one drop at a time, too.
