From Quarterly Guesswork to Daily Foresight: How Meridian Connect Networks Cut Capex Over-Provisioning by 31% with DataSense Insight+
Average Reading Time: 13 minutes
Executive Summary
Meridian Connect Networks, a six-circle Indian mobile broadband operator, was running its network capacity and expansion planning on quarterly Excel cycles built off a Hadoop lake that almost nobody outside the data team could navigate. Forecasts missed reality by nearly 40 percent. New cell-sites and rollout phases went live in clusters that did not pay back. Churn flagging happened weeks after the customer had already left.
In nine months with DataSense Insight+, three numbers moved decisively.
| Hero Metric | Before | After |
|---|---|---|
| Forecast accuracy (MAPE on cell-site and zone-level demand) | 38% error | 9% error |
| Capex deferred or redirected via right-sized rollout | — | ₹38.4 Cr (~$4M) over 18 months |
| Planning cycle, ideation to circle-head sign-off | 8 weeks | 11 days |
This is the story of how the change happened, what we built, and why the same diagnostic is repeatable for any mobile broadband or fixed-wireless operator sitting on rich data and slow decisions.
Customer Snapshot
Company: Meridian Connect Networks Pvt. Ltd. (anonymized)
Footprint: Six service circles across Tier-2 and semi-urban catchments, with active expansion into two new circles under evaluation
Subscriber base: 0.8 million consumer subscribers (mobile broadband and a growing FTTH base) and 8,400 enterprise accounts
Annual revenue: ~₹1,200 Cr (FY25)
Workforce: 4,100 employees, of whom roughly 280 sit inside network planning, BI, and revenue operations
Existing stack at the time of engagement:
- OSS/BSS layer: legacy Netcracker-style platform, partially customized in-house
- Data lake: Hadoop (HDFS + Hive), maintained by a 14-person data engineering team
- Reporting: custom dashboards for finance, custom Excel models for network planning
- Streaming: Kafka, in production but underused outside fraud detection
- ML maturity: pilot models for fraud and collections, no production ML in network planning or customer intelligence
A capable data foundation, in other words, but one whose insights were trapped two or three handoffs away from the people who needed them.
The Challenge: Where the Pain Showed Up in Numbers
Meridian's network planning team ran on a rhythm that had not really changed in a decade. Every quarter, planners pulled CDRs, OSS performance metrics, and tower utilization reports into spreadsheets, layered on regional sales projections, and produced a capex recommendation that the circle CTOs would then debate for another three weeks.
The cost of that rhythm, measured in 2024:
- Capex over-provisioning of approximately 31 percent in the prior fiscal, driven by conservative buffers added at every handoff
- Forecast MAPE of 38 percent at the cell-site level, with worse numbers in seasonal markets like Nashik and Aurangabad
- Decision lag of 14 to 21 days between a usage anomaly being detected and a planning response
- Churn detection running 23 days after the fact on average, by which point retention offers were either too late or too generous
- ₹6.2 Cr spent annually on consultant-led capacity and feasibility studies that arrived after the fiscal window they were meant to inform
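The forecast-error figures quoted here and throughout are MAPE, mean absolute percentage error. For reference, a minimal computation (the example numbers are illustrative, not Meridian's actual data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error over paired observations.
    Points with zero actual demand are skipped to avoid division by zero."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

# Hypothetical cell-site demand vs. a forecast running ~38% hot on average
actual   = [100, 200, 150, 120]
forecast = [138, 276, 207, 166]
print(round(mape(actual, forecast), 1))  # → 38.1
```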
The deeper problem was not any single metric. It was that the data team was producing answers, the planning team was producing decisions, and the two artefacts almost never lined up in the same week.
What They Needed, and Why Now
Three things converged in early 2025.
First, competitive pressure from 5G. National operators were rolling out 5G aggressively in adjacent markets, and Meridian's board had committed to a counter-strategy that combined wireless densification in existing circles with selective FTTH rollouts in two flagship circles by FY27. The existing planning approach simply could not be trusted to allocate that capex sensibly. A 30 percent over-provision on the current wireless footprint was painful. The same error rate on a fiber rollout would have been balance-sheet damage.
Second, a specific failure. A wireless broadband expansion phase in the S-1 zone went live in mid-2024 against a forecast of 18,000 active subscribers within six months. The actual figure at month seven was 6,400. The post-mortem traced the miss to a single regional sales optimism factor that had quietly compounded across three handoffs.
Third, compliance tightening. New compliance windows for QoS reporting meant the BI team was now spending nearly 40 percent of its time on regulatory dashboards rather than commercial intelligence.
The mandate from the office of the CTO was direct. Bring planning cycles below two weeks, get forecast accuracy above 90 percent, give the team a defensible way to choose which circles to expand into next, and do all of it without adding another offshore consulting line item.
Evaluation Criteria
Meridian's procurement and architecture review group set six non-negotiables before they would even take a demo:
- Data sovereignty. All subscriber-level data had to remain on Indian soil and within Meridian's own VPC. No data could leave the perimeter for inference, training, or telemetry.
- No black-box models. Whatever forecasts the system produced had to be explainable to a circle CTO who had spent twenty years in network operations and was not interested in abstract model outputs.
- Time to first signal under 90 days. Anything longer and the rollout would slip past the FY26 planning window.
- Predictable INR-denominated TCO. Per-seat or per-API-call pricing was a non-starter, because usage was expected to grow tenfold in the first year.
- Integration with legacy OSS/BSS. Replacement was off the table for at least three years. Any new platform had to consume from the existing stack without forcing a migration.
- Per-circle, per-cluster granularity. A national average was useless. The Pune circle behaves nothing like the Marathwada cluster, and the platform had to respect that.
Why DataSense Insight+
DataSense Insight+ was selected after a four-vendor shortlist and a six-week proof-of-value run on the S-1 circle's data. The proof-of-value cleared all six non-negotiables above, and it delivered its first usable forecast inside that six-week window, comfortably under the 90-day time-to-first-signal threshold.
What We Built
The architecture had to do three jobs at once. Pull the right data without disrupting the existing lake. Run forecasts that planners could actually understand and override. Surface results inside a workspace that a circle head could read on a Monday morning before her ten o'clock review.
Data Layer
Kafka topics were already in place for fraud telemetry, so we extended them. CDR streams, OSS performance counters, billing events, NPS responses, geo-tagged tower telemetry, customer care interaction logs, and Census and consumption-class signals for expansion modelling were all routed into a unified ingestion pipeline. Apache Flink handled the streaming side for anything that needed sub-hour responsiveness, while Spark batch jobs ran the heavier historical reconciliations every night.
A feature store was built on PostgreSQL for warm features and Redis for hot ones, with Feast as the orchestration layer. This single decision saved months downstream, because every model, dashboard, and alerting rule could now reference the same canonical feature definitions instead of recomputing them in five different SQL dialects.
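The value of a single feature store is that every consumer reads the same definition. A minimal sketch of the pattern, with plain dicts standing in for the Redis hot tier and PostgreSQL warm tier; the feature names are illustrative, not Meridian's actual schema:

```python
# Canonical feature definitions: one place, referenced by models,
# dashboards, and alerting rules alike (names are hypothetical).
FEATURES = {
    "avg_daily_gb_7d": lambda cdr: sum(cdr[-7:]) / 7,
    "peak_day_ratio":  lambda cdr: max(cdr[-7:]) / (sum(cdr[-7:]) / 7),
}

hot_cache = {}   # stands in for Redis (hot tier)
warm_store = {}  # stands in for PostgreSQL (warm tier)

def get_feature(subscriber_id, name, cdr_history):
    """Serve from the hot tier if present; otherwise compute once from
    the canonical definition and populate both tiers."""
    key = (subscriber_id, name)
    if key in hot_cache:
        return hot_cache[key]
    value = FEATURES[name](cdr_history)
    hot_cache[key] = value
    warm_store[key] = value
    return value
```

Because every consumer calls `get_feature` against the same registry, there is no drift between the SQL a dashboard runs and the features a model trains on.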
Modelling Layer
Three model families did most of the heavy lifting, each chosen for a specific reason rather than fashion.
Prophet for cell-site and zone-level usage seasonality. Mobile broadband demand has strong weekly and yearly seasonality, festival spikes, and the occasional one-off shift like a new SEZ opening or a campus going live. Prophet handles all three transparently, and its decomposition output is something a circle CTO can read without translation. We ran Prophet on a six-week horizon for capacity and rollout planning and on a two-week horizon for short-term re-balancing.
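The transparency claim is concrete: a Prophet forecast is the sum of separately inspectable components (trend, seasonality, holiday effects). A toy additive decomposition, not the Prophet library itself, illustrates why each component can be read and overridden in isolation (all coefficients are made up):

```python
import math

def trend(day):
    """Slow subscriber-driven growth (illustrative slope)."""
    return 100.0 + 0.2 * day

def weekly(day):
    """Weekly rhythm, e.g. weekend peaks (illustrative amplitude)."""
    return 15.0 * math.sin(2 * math.pi * day / 7)

def festival(day, festival_days, lift=40.0):
    """One-off demand spikes on known festival dates."""
    return lift if day % 365 in festival_days else 0.0

def forecast(day, festival_days):
    # Components are additive, so a planner can inspect or override any
    # one of them without touching the others.
    return trend(day) + weekly(day) + festival(day, festival_days)
```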
LSTM networks for hourly traffic pattern prediction across a 168-hour rolling window. Where Prophet captures the rhythm, LSTM captures the texture, and the combination of the two outputs gave planners both the steady-state forecast and the burst prediction they needed for QoS guarantees and backhaul sizing.
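A 168-hour rolling window simply means the model sees the previous full week of hourly samples for every prediction. The windowing step, sketched without the LSTM itself:

```python
def make_windows(hourly_traffic, window=168, horizon=1):
    """Turn a flat hourly series into (input-week, target) training pairs.
    `horizon` is how many hours ahead the target sits past the window."""
    samples = []
    for t in range(len(hourly_traffic) - window - horizon + 1):
        x = hourly_traffic[t : t + window]          # one week of history
        y = hourly_traffic[t + window + horizon - 1]  # the hour to predict
        samples.append((x, y))
    return samples
```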
XGBoost for churn propensity scoring, ARPU bucket transitions, and circle-level expansion attractiveness scoring. Tree-based models do well on the kind of mixed tabular data telcos sit on, and feature importance plots gave both the retention team and the expansion team something concrete to work with. The churn model alone moved Meridian from catching 28 percent of at-risk subscribers to catching 73 percent, with a 21-day lead time instead of a 23-day lag. The expansion attractiveness model, layered on top of demographic, competitive, and infrastructure-cost features, became the spine of the new circle-finding workflow.
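The 28-to-73 percent improvement is recall on at-risk subscribers: of those who actually churned, what fraction the model had flagged at least 21 days earlier. A minimal way to compute that metric (record shape is hypothetical):

```python
def churn_recall(subscribers, lead_days=21):
    """Fraction of actual churners flagged at least `lead_days` before
    their churn date. Each record: (flagged_day or None, churn_day or None),
    both measured in days from an arbitrary epoch."""
    churners = [(f, c) for f, c in subscribers if c is not None]
    if not churners:
        return 0.0
    caught = sum(1 for f, c in churners
                 if f is not None and c - f >= lead_days)
    return caught / len(churners)
```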
A lighter Bayesian uplift model was added in month seven for retention campaign targeting, but that is really a Phase 2 story.
Serving and UX Layer
Inference ran on an on-prem Kubernetes cluster co-located with Meridian's NOC in Pune, with a smaller edge inference pod in the Hyderabad data center for latency-sensitive scoring. Airflow handled orchestration. Grafana drove the operational dashboards for the data and SRE teams.
The executive and planning views were built as a custom React frontend, which mattered more than it sounds. The team did not want another Tableau tab. They wanted a workspace where a planner could pull up the Aurangabad cluster, see the 14-day forecast next to the actual rollout plan, drag the rollout date forward by two weeks, and see the projected utilization curve update in real time. The expansion team had a parallel view that ranked candidate circles by projected payback and surfaced the assumptions driving the rank, so a sales head could challenge the model without leaving the screen. That interactivity changed how planning meetings ran.
Integration
DataSense Insight+ pulled from the existing Hadoop lake without writing back to it, which kept the data team's existing pipelines untouched. Outputs went into a separate read-optimized warehouse that the BI and finance teams could query through their existing licences if they preferred. No team was forced to abandon a tool they liked.
The Results
After nine months in production across all six circles, the impact was measurable across three dimensions that mattered to the board.
Forecast accuracy. Demand MAPE at the cell-site level dropped from 38 percent to 9 percent on the six-week horizon, and to 4 percent on the two-week horizon. Take-rate forecasts in newly addressed catchments tightened from a ±22 percent band to ±6 percent, which mattered enormously because every percentage point of take-rate error compounds across sites-planned, sites-lit, and downstream backhaul and OLT capacity decisions. The two seasonally volatile circles, Aurangabad and Nashik, came in at 11 percent on six weeks, roughly where most planning teams would be thrilled to land on stable urban footprints. Peak-hour throughput forecasts at the cell-site and OLT level, which had not really been modelled before, started landing within 7 percent of actual.
Capex efficiency. Of the ₹212 Cr capex envelope earmarked for FY26 expansion, ₹38.4 Cr was deferred or reallocated to higher-yield zones within the first 18 months. The S-1-style overbuild that had triggered the engagement, where a rollout phase had gone in against an optimistic take-rate that never materialised, did not repeat. Two planned cell-site augmentations and one OLT augmentation were rescheduled by two quarters based on revised demand curves. One last-mile FTTH rollout was redirected from a low-density township to an enterprise-heavy cluster where the projected payback period was less than half as long. Backhaul augmentation, which had historically been ordered on a buy-ahead basis to play it safe, moved to a just-in-time model that freed up roughly ₹4.6 Cr in working capital. The new-circle expansion shortlist, which used to rely on regional sales optimism layered on top of demographic averages, was now built off model-ranked candidates with explicit payback and confidence intervals attached.
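A model-ranked shortlist with payback and confidence intervals boils down to computing a payback period per candidate and sorting with its uncertainty band attached. A simplified sketch, with hypothetical inputs and the simplifying assumption that margin uncertainty is smaller than the margin itself:

```python
def rank_candidates(candidates):
    """Rank expansion candidates by projected payback.
    Each candidate: (name, capex_cr, annual_margin_cr, margin_sigma_cr),
    assuming margin_sigma_cr < annual_margin_cr."""
    ranked = []
    for name, capex, margin, sigma in candidates:
        payback = capex / margin
        low = capex / (margin + sigma)   # optimistic margin -> faster payback
        high = capex / (margin - sigma)  # pessimistic margin -> slower payback
        ranked.append((name, payback, (low, high)))
    return sorted(ranked, key=lambda r: r[1])  # fastest payback first
```

Surfacing the `(low, high)` band alongside the point estimate is what lets a sales head challenge the assumptions rather than just the rank.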
Decision velocity. Planning cycle time fell from 8 weeks to 11 days. The quarterly review rhythm gave way to a fortnightly one. The consultant line item for feasibility and demand studies dropped from ₹6.2 Cr a year to ₹1.4 Cr, and what remained was scoped to specialized RoW and regulatory work rather than basic demand modelling.
A few second-order effects also showed up that nobody had forecast at the start. The retention team's offer-to-conversion ratio improved by 19 percent, because they were now reaching subscribers showing early service-experience degradation before the churn intent hardened. Two enterprise leased-line accounts that had been close to switching were saved through an early-warning workflow that the original scope did not even include. And the field operations team began using the same forecast outputs to plan splice technician deployments, tower maintenance windows, and CPE inventory positioning, which trimmed roughly 14 percent from rollout labour costs without anyone in the original scoping conversation having anticipated that use case.
What's Next
Phase 2, currently in design, extends DataSense Insight+ across three new fronts.
The first is micro-cluster level demand modelling for the two flagship circles, which involves a different forecasting profile because demand at the neighborhood and building level is enterprise- and household-driven rather than aggregate-zone-driven. This sets up the FTTH rollout team with the same level of confidence the wireless team now has at the cell-site level.
The second is a B2B account intelligence module that maps enterprise account health to network experience metrics, giving the enterprise sales team an early-warning signal on at-risk accounts.
The third is a customer journey orchestration layer that takes the existing churn and uplift models and wires them into Meridian's outbound campaign system for closed-loop retention.
Phase 3, scoped for late FY27, will explore autonomous capacity planning, where the platform proposes rollout schedules and expansion shortlists for human approval rather than waiting for human-led queries.
Why This Matters for Other Operators
Meridian's situation is not unusual. Most regional and Tier-2 mobile broadband, wireless ISP, and hybrid operators across India and South and Southeast Asia are sitting on similar data foundations, similar legacy stacks, and similar planning rhythms. The bottleneck is rarely the data. It is the gap between the data and the decision.
If the symptoms below sound familiar, the same diagnostic that helped Meridian will surface where the gap lives in your organization.
- Capex over-provisioning that nobody can quite quantify but everyone suspects is real
- Forecasts that arrive after the window they were meant to inform
- Churn detection that runs on lag rather than lead
- BI teams spending more time on regulatory reports than commercial intelligence
- Expansion shortlists built on regional optimism rather than defensible payback models
- A growing capex commitment, whether 5G densification, FTTH rollout, or both, that the current planning approach was never designed to handle
Sound Familiar? Run the Same Diagnostic
Book a 30-minute DataSense Audit with our team. We will walk through your current planning rhythm, the data sources you already have in place, and the specific gaps where forecast accuracy, expansion targeting, and decision velocity could move the most. No deck, no pitch. Just a structured diagnostic that tells you where the leverage is.
[Schedule your 30-minute DataSense Audit → https://calendar.app.google/aXktZEMrNnV3n8yC7 ]
Reach out at [email protected] or visit mindwebs.org for a closer look at the platform.