Why 6G Spectrum Matters: The Invisible Anchor of the Next Wireless Revolution

As I reflect on the trajectory of mobile communications, I find myself at a fascinating inflection point. We stand at the threshold of another major leap forward, and the promise of 6G extends far beyond incremental improvements in speed or latency. What truly excites me is how 6G represents a fundamental reimagining of how intelligence, presence, and connectivity converge in our networks and devices. At the core of this transformation lies an often overlooked but absolutely critical element: spectrum.

I’d like to explore why spectrum will once again shape not just our networks, but our societies and the very fabric of our digital existence.

Each generation of wireless technology has been defined by the spectrum it unlocked. 3G introduced us to mobile internet, fundamentally changing how we access information. 4G gave birth to the mobile economy, enabling video streaming, social media, and real-time applications that have transformed business models and social interactions. 5G pushed into millimeter wave frequencies, delivering industrial-grade responsiveness for critical applications.

But 6G represents something more profound. The leap isn’t merely technological—it’s philosophical. Connectivity is evolving to become contextual and cognitive. Our networks won’t just react to demands; they’ll anticipate needs. Devices are transforming from communication tools into intelligent sensors and agents that understand and interact with their environment. To enable this vision, 6G will require access to new spectral frontiers, particularly the sub-terahertz and terahertz (THz) ranges that have remained largely untapped for communications.

The relationship between spectrum and 6G innovation is multifaceted and critical. First, we face the fundamental challenge of data hunger meeting bandwidth bottlenecks. Applications like immersive extended reality, holographic communication, and digital twins demand terabit-per-second capacities that can only be provided by the vast, underutilized frequency bands far above today’s cellular allocations.

Second, moving into terahertz bands introduces entirely new physics to our communication systems. This isn’t just about higher speeds; it means fundamentally different signal behaviours, novel hardware challenges, and revolutionary ways of sensing the environment. The properties of these frequencies will enable capabilities we’ve barely begun to imagine.

Third, spectrum is increasingly becoming a strategic national resource. The countries and companies that shape the 6G spectrum narrative will effectively shape the rules of digital engagement for the next decade and beyond. This geopolitical dimension adds another layer of complexity to spectrum allocation and standardization.

As we develop these new frequency bands, we’ll need new ways to describe and categorize them. Just as 5G required a new “language” to describe its frequency bands (such as n78 or FR2), 6G will demand new spectrum notations to handle wider bandwidths (tens or hundreds of gigahertz), account for dynamic spectrum sharing and AI-managed allocation, describe multi-layered integration across space, air, and terrestrial networks, and reflect new use-case mappings for sensing, localization, and environmental feedback.

Without clear and intelligent spectrum notations, we risk fragmenting the global 6G conversation—both technically and geopolitically at precisely the moment when unified approaches are most needed.

We often discuss spectrum in abstract terms as an invisible field of energy we harness for communication. But the spectrum has language. It has a notation. And as we transition from 5G into the far more complex realm of 6G, that language is evolving in significant ways.

To understand the future of wireless, we must first understand how we describe it. At the most basic level, frequency measurements tell us about radio wave oscillation: 1 Hz represents one cycle per second, 1 MHz is one million cycles per second, 1 GHz is one billion cycles per second, and 1 THz is one trillion cycles per second. Higher frequencies oscillate faster, enabling more data to be carried per unit time but also introducing greater signal loss, narrower coverage, and new technical challenges.
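These relationships can be made concrete with a short calculation. The sketch below (frequencies chosen purely for illustration) derives the wavelength and the Friis free-space path loss at each band, showing why higher frequencies carry more data but reach less far:

```python
import math

C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency."""
    return C / freq_hz

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

# Illustrative bands spanning the 5G-to-6G range discussed above
for label, f in [("3.5 GHz (5G mid-band)", 3.5e9),
                 ("28 GHz (5G mmWave)", 28e9),
                 ("140 GHz (6G sub-THz)", 140e9),
                 ("1 THz (6G THz)", 1e12)]:
    print(f"{label}: wavelength = {wavelength_m(f)*1000:.2f} mm, "
          f"FSPL @ 100 m = {fspl_db(f, 100):.1f} dB")
```

Each tenfold increase in frequency adds 20 dB of free-space loss at the same distance, which is why sub-THz and THz links demand dense beamforming and short ranges.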

The evolution of mobile communications has consistently moved toward higher frequencies: 2G operated in hundreds of MHz, 4G and early 5G exploited sub-6 GHz bands, 5G NR expanded into millimeter wave (24–52 GHz, since extended to 71 GHz), and 6G will push from 100 GHz to potentially 10 THz. This progression reflects our growing appetite for bandwidth and the technological innovations that make higher frequencies viable for communication.

In 5G, standardized notations were introduced to simplify discussions about specific frequency bands. Designations like n78 (3300–3800 MHz, a widely deployed mid-band 5G range) and broader categories like FR1 (sub-6 GHz frequencies) and FR2 (24–52 GHz millimeter wave) have streamlined regulatory, engineering, and operational conversations. However, as we move into sub-THz and THz frequencies, these notation schemes begin to show their limitations.

As we begin to propose bands like 140 GHz, 275 GHz, and even 1 THz for 6G, new spectrum notation systems will be required to unify wider frequency ranges under flexible identifiers, account for hybrid use cases where a single band supports sensing, communication, and computing simultaneously, and enable AI interpretation through machine-readable notations for real-time spectrum management.

We might see notations like fTH1 (Fixed THz Band 1: 275–325 GHz), dTHx (Dynamic Terahertz experimental block), or sT1 (Sensing THz Band 1, dedicated for RF-based environment detection). While these are speculative examples, they illustrate the fundamental need: our notation must evolve alongside our use cases and technology.
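As a thought experiment, such notations could even be made machine-readable for AI-driven spectrum management. The sketch below models the speculative identifiers above as a small data structure; every field name and value here is hypothetical, not a proposed standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BandDescriptor:
    """Hypothetical machine-readable band notation (illustrative only)."""
    notation: str              # e.g. "fTH1"
    low_ghz: float             # lower band edge, GHz
    high_ghz: float            # upper band edge, GHz
    uses: tuple                # supported functions, e.g. ("communication",)
    sharing: str               # "fixed" or "dynamic" allocation

    @property
    def bandwidth_ghz(self) -> float:
        return self.high_ghz - self.low_ghz

# Speculative examples mirroring the notations discussed above
FTH1 = BandDescriptor("fTH1", 275.0, 325.0, ("communication",), "fixed")
ST1  = BandDescriptor("sT1", 140.0, 148.0, ("sensing",), "fixed")

print(f"{FTH1.notation}: {FTH1.bandwidth_ghz:.0f} GHz wide")  # fTH1: 50 GHz wide
```

A shared schema of this kind is what would allow regulators, vendors, and AI spectrum managers to exchange band definitions without ambiguity.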

The importance of well-defined spectrum notation extends across multiple stakeholder groups. For engineers, poorly defined notation creates confusion in hardware design, simulation, and deployment. For regulators, a lack of harmonized notation leads to regional incompatibility and inefficiencies in global rollout. For innovators, a shared, evolving language opens doors to collaborative research, efficient prototyping, and even machine-to-machine spectrum negotiation.

It’s worth noting that notation isn’t neutral; it embodies power. Whoever defines the language often shapes the outcome. As we collectively create 6G, spectrum notation represents a strategic touchpoint—a bridge between science, policy, and geopolitics that will influence the development trajectory of next-generation wireless technology.

The future of 6G is being written not just in laboratories or boardrooms but in the electromagnetic spectrum itself. If 5G reached into the millimeter-wave frontier, 6G is preparing for a quantum leap into the sub-terahertz and terahertz bands. These frequency ranges, once considered the domain of theoretical physics or space science, are now firmly in the telecom spotlight.

Before exploring specific frequencies, it’s important to understand that 6G isn’t simply “5G, but faster.” It aims to support terabit-per-second data rates for holographic and immersive applications, microsecond-level latency for real-time control and tactile internet, native AI and sensing capabilities embedded directly in the spectrum layer, and multi-dimensional connectivity spanning terrestrial, airborne, and satellite networks. To support these capabilities, we need wider bandwidths than ever before—and that’s only possible at higher frequencies.
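A back-of-envelope Shannon-capacity estimate shows why terabit rates force us upward in frequency. Assuming a few illustrative SNR operating points (the values are assumptions, not measurements), the bandwidth needed for 1 Tb/s lands in the 100–300 GHz range, and contiguous blocks that wide only exist above today’s cellular bands:

```python
import math

# Shannon capacity: C = B * log2(1 + SNR).
# Solve for the bandwidth B needed to hit a target data rate.

def required_bandwidth_ghz(target_bps: float, snr_db: float) -> float:
    """Bandwidth (GHz) needed to reach target_bps at a given SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return target_bps / math.log2(1 + snr_linear) / 1e9

# Illustrative SNR operating points for a 1 Tb/s link
for snr_db in (10, 20, 30):
    bw = required_bandwidth_ghz(1e12, snr_db)
    print(f"SNR {snr_db} dB -> about {bw:.0f} GHz of spectrum needed")
```

Even at a generous 30 dB SNR, roughly 100 GHz of contiguous bandwidth is required, more spectrum than all sub-6 GHz cellular allocations combined.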

Several spectrum ranges are emerging as candidates for 6G deployment. Upper Mid-Bands (7–24 GHz), sometimes called FR3, offer a potential balance between coverage and capacity for early 6G deployments. Candidate bands in this range include 7–15 GHz, with particular interest in the 10–14.5 GHz range being explored by ITU. These frequencies could support urban macro deployments with extended coverage and decent capacity, though existing satellite usage presents challenges that will require robust coexistence frameworks.

Sub-Terahertz bands (100–300 GHz) represent the range where true 6G performance begins to shine. Particular interest has focused on 100–140 GHz (under exploration in Europe, Korea, and Japan) and 275–325 GHz (proposed as a new THz communication block). These frequencies could enable indoor ultra-high-speed access, device-to-device communications, and real-time augmented, virtual, and extended reality applications. However, they face challenges including severe path loss, line-of-sight requirements, and hardware immaturity.

Terahertz Bands (0.3–10 THz) push beyond traditional RF into new physical domains. These bands, currently under early-stage scientific study, could support wireless cognition, high-speed backhaul, and environmental sensing. The challenges here are substantial: limited current RF integrated circuits, lack of global regulatory frameworks, and energy efficiency concerns.

Low-Band Spectrum (Sub-1 GHz) remains essential even in the 6G era. While not new, these frequencies provide critical coverage for massive IoT, rural areas, and emergency communications. The primary challenge is that this spectrum is already heavily saturated with legacy systems.

International harmonization efforts are underway across multiple organizations. ITU-R (WP 5D) is actively evaluating candidate frequencies for IMT-2030 (the official designation for 6G). The FCC in the United States has opened experimental licensing windows above 95 GHz. Europe’s Hexa-X project advocates for coordinated research into 100+ GHz spectrum. China, Korea, and Japan are conducting field trials at 140 GHz and above. Global harmonization will be crucial—not just to avoid interference, but to enable cross-border 6G roaming, manufacturing scale, and effective spectrum diplomacy.

Rather than depending on any single band, 6G will likely employ a layered spectrum approach: low bands for resilient, wide-area coverage; mid bands for urban macro deployment and balanced rollout; sub-THz for immersive services and fixed wireless; and THz for sensing, cognition, and backhaul. All of these layers will be dynamically orchestrated, likely through AI and real-time feedback systems, to create a seamless connectivity experience across diverse environments and use cases.

Author: Obeidallah Ali, R&D Director at DIGIS Squared

Obeidallah Ali leads the Research & Development efforts at DIGIS Squared, driving innovation in AI-powered telecom solutions. With deep expertise in 5G Network Design, Optimization, and Automation, he focuses on developing tools like INOS™ and Katana™ that help operators diagnose, troubleshoot, and enhance network performance worldwide.

For inquiries, please contact:
Email: info@digis2.com

Semantic Communications: Rethinking How Networks Understand Meaning

Traditional communication models, like Shannon’s theory, have always focused primarily on the accuracy of bit transmission from sender to receiver. But in today’s world, dominated by AI, IoT, and immersive experiences, this approach is becoming increasingly insufficient. The challenge isn’t just about transmitting data anymore; it’s about transmitting the right data, with the right context, at precisely the right moment.

At its core, semantic communication represents a model that prioritizes understanding over mere accuracy. Rather than sending every bit of information, semantic systems intelligently transmit only what’s necessary for the receiver to reconstruct the intended meaning. This represents a profound shift in how we conceptualize network communication.

Consider this practical example: a device needs to send the message “I need a glass of water.” In classical communication, this entire sentence would be encoded, transmitted, and decoded bit by bit, regardless of context. But in a semantic communication system, if the context already indicates the user is thirsty, simply transmitting the word “glass” might be sufficient to trigger complete understanding. This approach is inherently context-aware, knowledge-driven, and enhanced by artificial intelligence.
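The idea can be sketched in a few lines of Python. This toy filter is purely illustrative (a real semantic codec would use learned representations, not token sets): it drops every token the shared context already implies, so only the informative word is transmitted:

```python
# Toy sketch of context-aware semantic filtering (illustrative only):
# the sender transmits only the tokens the shared context does not
# already imply; the receiver reconstructs meaning from both.

def semantic_encode(message: str, shared_context: set) -> list:
    """Keep only the tokens that add information beyond the shared context."""
    return [tok for tok in message.lower().split() if tok not in shared_context]

# Context already establishes that the user is thirsty and requesting a drink
context = {"i", "need", "a", "of", "water"}
payload = semantic_encode("I need a glass of water", context)
print(payload)  # ['glass'] -- one token transmitted instead of six
```

The bandwidth saving scales with how much context sender and receiver share, which is exactly why AI-maintained shared knowledge is the enabler of this paradigm.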

The necessity for semantic communication becomes increasingly apparent when we consider its practical benefits. It substantially reduces redundant data transmission, which conserves both bandwidth and energy, critical resources in our increasingly connected world. For latency-sensitive applications like critical IoT systems, autonomous vehicles, and holographic communication, this efficiency translates to meaningful performance improvements. Furthermore, it enhances machine-to-machine understanding, enabling more intelligent edge networks, while aligning communication more closely with human-like reasoning patterns, making our interactions with technology more natural and efficient.

When we examine these advantages collectively, it becomes evident that semantic communication isn’t merely a beneficial addition to our technological toolkit; it represents a fundamental paradigm shift in communications technology.

The enabler of this transformation is undoubtedly artificial intelligence, particularly in domains such as natural language understanding, knowledge graphs, semantic representations, and the ability to learn shared context between sender and receiver. When integrated with Digital Twins and Cognitive Networks, semantic communication becomes even more powerful, allowing systems to predict, understand, and take proactive action rather than simply reacting to inputs.

At Digis Squared, we view Semantic Communication as a cornerstone of future AI-native networks. I believe it will fundamentally reshape how we design, operate, and optimize telecom systems, not only by increasing efficiency but by making networks truly intelligent.

As Head of Product, I find myself increasingly asking a question that challenges conventional thinking: What if our networks could understand why we communicate, not just what we communicate? This perspective shifts our focus from merely building faster networks to creating smarter, more meaningful ones that truly understand human intent.

Author: Mohamed Sayyed, Head of Product at DIGIS Squared

Diagnosing and Resolving “FAILURE_MSG4_CT_TIMER_EXPIRED” in 5G Standalone Networks

In the deployment and optimization of 5G Standalone (SA) networks, ensuring the robustness of the Random Access Channel (RACH) procedure is critical. DIGIS Squared identified and resolved a recurring RACH failure – “FAILURE_MSG4_CT_TIMER_EXPIRED” – during performance testing using our proprietary tools: INOS™ and Katana™. This white paper outlines the nature of the problem, diagnostic process, root cause analysis, and optimization strategies applied to restore optimal network performance.


Background

The 5G NR contention-based RACH procedure is essential for initial access, handover, and beam recovery. It involves a four-message handshake:

  1. Msg1: RACH preamble from UE
  2. Msg2: Random Access Response (RAR) from gNB
  3. Msg3: MAC CE or RRC message from UE
  4. Msg4: Contention resolution from gNB

The failure occurs when the UE sends Msg3 but does not receive Msg4 within the contention resolution timer window, resulting in an aborted RACH attempt.
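The timing dependency can be sketched as follows; the timer is counted in subframes (1 ms each), so sf64 corresponds to a 64 ms window. The scenario values are illustrative, not measurements from this case:

```python
# Minimal sketch of the UE-side contention-resolution wait (illustrative).
# One subframe = 1 ms, so a timer of sf64 opens a 64 ms window after Msg3.

def await_msg4(timer_sf, msg4_arrival_ms=None):
    """Return the RACH outcome for a given contention-resolution timer."""
    window_ms = timer_sf  # 1 subframe = 1 ms
    if msg4_arrival_ms is not None and msg4_arrival_ms <= window_ms:
        return "CONTENTION_RESOLVED"
    return "FAILURE_MSG4_CT_TIMER_EXPIRED"

# Under load, Msg4 scheduling can slip to ~80 ms: sf64 expires, sf128 succeeds
print(await_msg4(64, 80))    # FAILURE_MSG4_CT_TIMER_EXPIRED
print(await_msg4(128, 80))   # CONTENTION_RESOLVED
```

This is the mechanism behind the timer adjustment described later in this paper: widening the window tolerates scheduling delay at the gNB without altering the handshake itself.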


Problem Identification

Failure Type: FAILURE_MSG4_CT_TIMER_EXPIRED
Detection Tool: INOS™ (field testing)
Confirmation Tool: Katana™ (OSS KPI analysis)

During extensive drive tests in both urban and suburban environments, INOS flagged multiple instances of RACH failure where Msg1, Msg2, and Msg3 were correctly transmitted, but Msg4 was not received. This was corroborated through Katana’s analysis of OSS counters, revealing high contention timer expiries in cells with:

  • Low SS-RSRP values (< -110 dBm)
  • High load and scheduling delays
  • Specific PRACH configurations
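A simplified version of this kind of cell shortlisting can be sketched as follows. The field names are hypothetical, not Katana’s actual schema, and the 5% expiry-rate threshold is an assumption for illustration (the -110 dBm SS-RSRP cut comes from the findings above):

```python
# Illustrative filter for shortlisting suspect cells from OSS counters.
# Field names and the expiry-rate threshold are hypothetical examples.

def suspect_cells(cells):
    """Return cell IDs with weak coverage and high Msg4 timer expiry."""
    return [c["cell_id"] for c in cells
            if c["ss_rsrp_dbm"] < -110           # weak SS-RSRP (from findings)
            and c["msg4_timer_expiry_rate"] > 0.05]  # assumed 5% threshold

cells = [
    {"cell_id": "A1", "ss_rsrp_dbm": -95,  "msg4_timer_expiry_rate": 0.01},
    {"cell_id": "B7", "ss_rsrp_dbm": -113, "msg4_timer_expiry_rate": 0.12},
]
print(suspect_cells(cells))   # ['B7']
```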

Root Cause Analysis

The following contributing factors were identified:

  • Msg3 Decoding Failures: UL signal degradation or beam misalignment prevented gNB from decoding Msg3.
  • Delayed Msg4 Scheduling: Resource contention at gNB delayed the contention resolution message.
  • Timer Misconfiguration: The default timer (sf64) was too short for specific TDD configurations.

Standards Reference:

  • 3GPP TS 38.331: RRC Protocol for NR
  • 3GPP TS 38.321: MAC Protocol for NR

Optimization Actions

Network-Side Adjustments

  • Increased ra-ContentionResolutionTimer from sf64 to sf128.
  • Reviewed and optimized PRACH Configuration Index and ZeroCorrelationZone settings.
  • Prioritized Msg4 scheduling at MAC layer in high-load scenarios.

Coverage Optimization

  • Fine-tuned beamforming and UL power control.
  • Extended PRACH monitoring duration in gNB firmware.


Post-Optimization Results

Metric                          Before Optimization    After Optimization
RACH Success Rate               89%                    97%
Msg4 Timer Expiry Rate          11.8%                  <1.2%
Initial Access Latency (avg)    440 ms                 260 ms
RRC Setup Drop Rate             Moderate               Near Zero

Verification was conducted using both INOS (field KPIs) and Katana (OSS trends), confirming significant improvement across all measured metrics.


Conclusion

This case study highlights the necessity of cross-layer observability in managing 5G SA network performance. By leveraging both real-time field data from INOS and OSS intelligence from Katana, DIGIS Squared successfully diagnosed and mitigated a complex RACH failure. The resolution not only improved RACH success rates but also enhanced user experience and access reliability.


Author: Obeidallah Ali, R&D Director at DIGIS Squared


Diagnosing the Invisible: How We Enhanced CDN Caching Visibility to Prevent 404 Failures

Milliseconds matter in today’s hyper-connected digital world, and content delivery must be seamless, reliable, and globally scalable. At DIGIS Squared, we’re committed to going beyond surface-level metrics to detect and resolve the subtle issues that impact end-user experience at scale.

One such challenge we’ve recently tackled involved intermittent 404 errors and browsing failures caused by CDN (Content Delivery Network) caching problems. What appeared to be random access issues turned out to be symptoms of deeper inefficiencies in how content was cached—and more importantly, how that caching was monitored.


The Hidden Problem: When the Cache Misses

CDNs are the unsung heroes of modern web performance. By distributing content across global edge servers, they reduce latency, offload origin traffic, and enable resilient access for users worldwide. But when caching fails, whether due to misconfigured TTLs, cache-busting headers, or regional edge-node discrepancies, the impact can be significant:

  • End-users encounter 404 errors or content that fails to load
  • The origin server receives unnecessary load, reducing scalability
  • Diagnostics become harder due to lack of cache-level transparency

We noticed these exact patterns in our browsing analytics: certain requests, particularly through Akamai and Cloudflare, were returning failures that didn’t align with backend health or application logic. This pointed to a cache-layer issue, not an application bug.


The Solution: A New Dashboard to Measure CDN Caching Effectiveness

To combat this, we built and deployed a new internal dashboard that focuses on one core KPI: CDN Caching Hit Success Rate.

Here’s what it includes:

CDN Hit/Miss Analytics:

We track whether content is being successfully served from the cache or fetched from the origin, giving us clear indicators of performance degradation.

Provider-Specific Breakdown:

The dashboard separately monitors:

  • Akamai
  • Cloudflare

…two of the world’s most widely used CDN providers, with distributed edge networks and high cache sensitivity.

Unified KPI:

To give a macro-level view, we also calculate a global hit ratio that consolidates data across all CDN providers we observe in browsing sessions, helping us detect broader trends or cross-provider anomalies.
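Conceptually, the per-provider and global hit ratios reduce to a simple aggregation over browsing records. The sketch below uses hypothetical record fields, not our dashboard’s actual schema:

```python
from collections import defaultdict

# Sketch of the caching KPI computation (field names are illustrative).
def cache_kpis(records):
    """records: iterable of dicts with 'cdn', 'cache_status', 'status_code'.
    Returns per-CDN hit ratios plus a consolidated 'global' ratio."""
    per_cdn = defaultdict(lambda: {"hits": 0, "total": 0})
    for r in records:
        stats = per_cdn[r["cdn"]]
        stats["total"] += 1
        if r["cache_status"] == "HIT":
            stats["hits"] += 1
    ratios = {cdn: s["hits"] / s["total"] for cdn, s in per_cdn.items()}
    total = sum(s["total"] for s in per_cdn.values())
    hits = sum(s["hits"] for s in per_cdn.values())
    ratios["global"] = hits / total
    return ratios

sample = [
    {"cdn": "akamai", "cache_status": "HIT", "status_code": 200},
    {"cdn": "akamai", "cache_status": "MISS", "status_code": 404},
    {"cdn": "cloudflare", "cache_status": "HIT", "status_code": 200},
    {"cdn": "cloudflare", "cache_status": "HIT", "status_code": 200},
]
print(cache_kpis(sample))  # akamai 0.5, cloudflare 1.0, global 0.75
```

Correlating the MISS records with their status codes (note the 404 above) is what lets us attribute browsing failures to the cache layer rather than the application.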

Root Cause Visibility:

Combined with error codes like 404, we can now correlate browsing failures directly to cache misses. This has already enabled us to:

  • Identify content types with poor caching behavior
  • Advise clients on improving their CDN TTL, cache-control headers, and edge rule configurations
  • Proactively alert when hit ratios drop below optimal thresholds


Why This Matters to Telecom & Digital Experience Teams

For operators, OTT providers, and enterprises relying on global content delivery, cache efficiency is no longer a back-end concern; it’s a frontline performance metric. Here’s why this matters:

  • A single percentage-point drop in cache hit ratio can significantly increase origin load, affecting cost and latency
  • In telecom, real-time browsing quality KPIs are vital to SLA monitoring and customer retention
  • Cache failures often go unnoticed because traditional monitoring tools don’t surface them unless there’s a full outage

By adding this caching intelligence to our performance analytics suite, we’re enabling smarter diagnostics, better QoE benchmarking, and deeper insights across the full delivery chain, from device to content edge.

The Evolution of Self-Organizing Networks: From SON to Cognitive SON to LTMs

As we approach 2030, the telecommunications industry is at a point where traditional network automation methods are merging with advanced AI technologies. Based on my experience over the past decade with network optimization solutions, I would like to share some insights on potential future developments.

Two Perspectives on SON Evolution

When discussing the future of Self-Organizing Networks (SON), it’s crucial to distinguish between two perspectives:

SON as a Conceptual Framework

The fundamental principles of self-configuration, self-optimization, and self-healing will remain essential to network operations. These core concepts represent the industry’s north star – autonomous networks that can deploy, optimize, and repair themselves with minimal human intervention.

These principles aren’t going away. Rather, they’re being enhanced and reimagined through more sophisticated AI approaches.

Vendor-Specific SON Implementations

The feature-based SON solutions we’ve grown familiar with – ANR (Automatic Neighbour Relations), CCO (Coverage & Capacity Optimization), MLB (Mobility Load Balancing), and others – are likely to undergo significant transformation or potential replacement.

These siloed, rule-based features operate with limited contextual awareness and struggle to optimize for multiple objectives simultaneously. They represent the first generation of network automation that’s ripe for disruption.

Enter Large Telecom Models (LTMs)

The emergence of Large Telecom Models (LTMs) – specialized AI models trained specifically on telecom network data – represents a paradigm shift in how we approach network intelligence.

Just as Large Language Models revolutionized natural language processing, LTMs are poised to transform network operations by:

  1. Providing holistic, cross-domain optimization instead of siloed feature-specific approaches
  2. Enabling truly autonomous decision-making based on comprehensive network understanding
  3. Adapting dynamically to changing conditions without explicit programming
  4. Learning continuously from network performance data

The Path Forward: Integration, or Replacement?

The relationship between traditional SON, Cognitive SON, and emerging LTMs is best seen as evolutionary rather than revolutionary.

  • Near-term (1-2 years): LTMs will complement existing SON features, enhancing their capabilities while learning from operational patterns
  • Mid-term (3-4 years): We’ll see the emergence of agentic AI systems that can orchestrate multiple network functions autonomously
  • Long-term (5+ years): Many vendor-specific SON implementations will likely be replaced by more sophisticated LTM-driven systems

The most successful operators will be those who embrace this transition strategically – leveraging the proven reliability of existing SON for critical functions while gradually adopting LTM capabilities for more complex, multi-domain challenges.

Real-World Progress

We’re already seeing this evolution in action. SoftBank recently developed a foundational LTM that automatically reconfigures networks during mass events.

These early implementations hint at the tremendous potential ahead as we move toward truly intelligent, autonomous networks.

Prepared By: Abdelrahman Fady | CTO | Digis Squared

NWDAF: How 5G is AI Native by Essence

The evolution of telecommunications networks has always been characterized by increasing complexity and intelligence. As we’ve moved through successive generations of wireless technology, I’ve observed a consistent trend toward more adaptive, responsive systems. With 5G, this evolution has reached a critical inflection point by introducing the Network Data Analytics Function (NWDAF), a component that fundamentally transforms how networks operate and adapt.

NWDAF, introduced in the 5G Core architecture starting from Release 15 and continuing to evolve toward 6G, represents a pivotal element in the Service-Based Architecture (SBA). More than just another network component, it embodies a philosophical shift toward data-driven, intelligent network operations that anticipate the needs of both users and applications.

At its core, NWDAF serves as a standardized network function that provides analytics services to other network functions, applications, and external consumers. Its functionality spans the entire analytics lifecycle: collecting data from various network functions (including AMF, SMF, PCF, and NEF), processing and analyzing that data, generating actionable insights and predictions, and feeding decisions back into the network for optimization and policy enforcement.

I often describe NWDAF as the “central intelligence of the network”—a system that transforms raw operational data into practical insights that drive network behavior. This transformation is not merely incremental; it represents a fundamental reimagining of how networks function.

The necessity for NWDAF becomes apparent when we consider the demands placed on modern networks. Autonomous networks require closed-loop automation for self-healing and self-optimization—capabilities that depend on the analytical insights NWDAF provides. Quality of Service assurance increasingly relies on the ability to predict congestion, session drops, or mobility issues before they impact user experience. Network slicing, a cornerstone of 5G architecture, depends on real-time monitoring and optimization of slice performance. Security analytics benefit from NWDAF’s ability to detect anomalies or attacks through traffic behavior pattern analysis. Furthermore, NWDAF’s flexible deployment model allows it to operate in either central cloud environments or Multi-access Edge Computing (MEC) nodes, enabling localized decision-making where appropriate.

The integration of NWDAF with other network functions occurs through well-defined interfaces. The Np interface facilitates data collection from various network functions. The Na interface enables NWDAF to provide analytics to consumers. The Nnef interface supports interaction with the Network Exposure Function, while the Naf interface enables communication with Application Functions. This comprehensive integration ensures that NWDAF can both gather the data it needs and distribute its insights effectively throughout the network.

The analytical capabilities of NWDAF span multiple dimensions. Descriptive analytics provide visibility into current network conditions, including load metrics, session statistics, and mobility patterns. Predictive analytics enable the network to anticipate issues before they occur, such as congestion prediction, user experience degradation forecasts, and mobility failure prediction. Looking forward, prescriptive analytics will eventually allow NWDAF to suggest automated actions, such as traffic rerouting or slice reconfiguration, further enhancing network autonomy.
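To illustrate the flavour of the descriptive-to-predictive step, the sketch below implements a deliberately naive load forecast (a moving average over recent load samples); it is an illustration only, not a 3GPP-specified NWDAF algorithm:

```python
# Illustrative sketch of a predictive analytic of the kind NWDAF exposes:
# a naive moving-average forecast of cell load (not a 3GPP algorithm).

def forecast_load(samples, window=3):
    """Predict next-interval load as the mean of the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

cell_load = [0.42, 0.47, 0.55, 0.61, 0.66]   # fraction of resource utilisation
predicted = forecast_load(cell_load)
print(f"predicted load: {predicted:.2f}")

# A consumer (e.g. a policy function) could act on the prediction,
# for instance triggering load balancing above an assumed 60% threshold.
congestion_risk = predicted > 0.6
```

In a real deployment the model would be trained on historical counters and the prediction delivered to consumers such as the PCF or SMF through NWDAF’s analytics services.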

As we look toward 6G, NWDAF is poised to evolve into an even more sophisticated component of network architecture. I anticipate the development of an AI/ML-native architecture where NWDAF evolves into a Distributed Intelligence Function. Federated learning approaches will enable cross-domain learning without requiring central data sharing, addressing privacy and efficiency concerns. Integration with digital twin technology will allow simulated networks to feed NWDAF with predictive insights, enhancing planning and optimization. Perhaps most significantly, NWDAF will increasingly support intent-based networking, where user intentions are translated directly into network behavior without requiring detailed technical specifications.

The journey toward truly intelligent networks is just beginning, and NWDAF represents a crucial step in that evolution. By embedding analytics and intelligence directly into the network architecture, 5G has laid the groundwork for networks that don’t just connect—they understand, anticipate, and adapt. This foundation will prove essential as we continue to build toward the even more demanding requirements of 6G and beyond.

Prepared By: Amr Ashraf | Head of Solution Architect and R&D | Digis Squared


ACES NH & DIGIS Squared Partnership Milestone

We are proud to announce the successful delivery and deployment of DIGIS Squared’s advanced cloud-native testing and assurance solution, INOS, to ACES NH, the leading telecom infrastructure provider and neutral host in the Kingdom of Saudi Arabia.

As part of this strategic partnership, DIGIS Squared has delivered:

  • INOS Lite Kits for 5G Standalone (5G SA) and in-building solution (IBS) testing.
  • INOS Watcher Kits for field and service assurance.
  • Full deployment of the INOS Platform on ACES NH’s cloud hosted inside the Kingdom, ensuring data localization and privacy compliance.

The ACES NH team is now leveraging INOS across all testing and assurance operations, with:

  • Comprehensive, detailed telecom network field KPIs and service KPIs.
  • Automated root-cause analysis (RCA) for field-detected issues.
  • Full automation of testing and reporting workflows, enabling higher testing volumes in shorter timeframes.
  • AI-powered modules for virtual testing and predictive assurance.
  • A flexible licensing model that supports all technologies.

This partnership highlights both companies’ shared vision of strengthening local capabilities and equipping ACES NH with deeper network performance insights—supporting their mission to provide top-tier services, in line with Saudi Arabia’s Vision 2030.

We look forward to continued collaboration and delivering greater value to the Kingdom’s digital infrastructure.

About ACES NH:

ACES NH is a digital infrastructure neutral host licensed by CST in Saudi Arabia and by DoT in India. ACES NH provides In-Building Solutions, Wi-Fi, DAS, Fiber Optics, Data Centers, and Managed Services, designing, building, and managing infrastructure that enables telecom operators, airports, metros, railways, smart and safe cities, and mega projects. With an operations footprint spanning Asia, Europe, APAC, the GCC, and North Africa, a diverse project portfolio, and a focus on futuristic ICT technologies such as small cells, Open RAN, and cloud computing, ACES NH serves nearly 2 billion annual users worldwide.

Mobile Private Network

Private networks are dedicated communication networks built for a specific organization or use case

Benefits

  • Enhanced security and data privacy
  • Improved network performance and reliability
  • Customized coverage and capacity
  • Integration with existing systems and infrastructure

A private (mobile) network is where network infrastructure is used exclusively by devices authorized by the end-user organization.

Typically, this infrastructure is deployed in one or more specific locations which are owned or occupied by the end-user organization.

Devices that are registered on public mobile networks will not work on the private network except where specifically authorized.

Formally, these are known as ‘non-public networks’ (NPNs); however, the term private network is more commonly used across vertical industries.

Drivers for Deploying a 5G Private Network

Network Performance: with eMBB, URLLC, and mMTC, 5G is highly capable in terms of network performance

5G Security: The fifth generation of networks is more secure than the 4G LTE network because it introduces stronger identity management, privacy protection, and security assurance

New Spectrum in 5G: availability of shared and dedicated 5G spectrum in several bands

Network Coverage: with a 5G private network, you control where to deploy your gNBs

Private Networks Deployment Models

SNPN, Standalone Non-Public Network

The NPN is deployed as an independent, standalone network

The private company has exclusive responsibility for operating the NPN and for all service attributes

Optionally, a single communication path between the NPN and the public network can be provided via a firewall

Under this deployment model, all network functions are located within the facility where the network operates, including the radio access network (RAN) and control plane elements. Standalone, isolated private networks typically use dedicated spectrum (licensed or unlicensed) obtained through a mobile network operator (MNO) or, in some cases, directly from government agencies.

PNI-NPN: Public Network Integrated – Non Public Network

  • NPN deployed with MNO support: hosted completely or partially on public network infrastructure
  • e.g. using Network Slicing
  • PNI-NPN has several variants; some of them are explained in the following sections

PNI-NPN: Deployment with shared RAN

Shared RAN with dedicated Core

NPN and the public network share part of the radio access network, while other network functions remain separated.

This scenario involves an NPN sharing a radio-access network (RAN) with the service provider. Under this scenario, control plane elements and other network functions physically reside at the NPN site.

This type of deployment enables local routing of network traffic within the NPN’s physical premises, while data bound for outside premises is routed to the service provider’s network. 3GPP has specifications that cover network sharing. (A variation of this deployment scenario involves the NPN sharing both the RAN and control plane functions, but with the NPN traffic remaining on the site where the NPN is located and not flowing out to the public network.)
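As a rough illustration of this local-routing idea, the sketch below decides whether traffic stays on premises or exits to the service provider’s network. The subnet values and function name are hypothetical; in a real deployment this decision is made by the user-plane function, not application code.

```python
import ipaddress

# Hypothetical on-premises subnets served by the local user plane.
LOCAL_SUBNETS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("192.168.50.0/24"),
]

def route_packet(dst_ip: str) -> str:
    """Decide whether traffic is routed locally within the NPN premises
    or out to the service provider's network."""
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in LOCAL_SUBNETS):
        return "local-upf"        # stays inside the NPN premises
    return "public-network"       # routed out to the MNO core

print(route_packet("10.10.4.7"))  # → local-upf
print(route_packet("8.8.8.8"))    # → public-network
```

The key point is that only traffic bound for destinations outside the premises ever leaves the site, which is what makes this deployment attractive for latency- and privacy-sensitive workloads.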

PNI-NPN: Deployment with shared RAN and Control Plane

Shared RAN and core control plane.

Both the RAN and the core control plane are shared, with these elements managed by the public 5G network.

The NPN only handles user-plane connectivity.

In this scenario, the NPN shares both the RAN and control-plane functions with the service provider, while the NPN’s user-plane traffic remains on the site where the NPN is located and does not flow out to the public network.

PNI-NPN: NPN Deployment in public network

5G Public-Private Network Slice

NPN hosted by the public network

Complete outsourcing of the network, where devices on the private network utilize the Public 5G network RAN.

This scenario can be implemented by means of network slicing

The third primary type of NPN deployment is where the NPN is hosted directly on a public network. In this type of deployment, both the public network and private network traffic are located off-site.

Through virtualization of network functions, in a technique known as network slicing, the public-network operator partitions resources between the public network and the NPN, keeping the two completely separate.
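A minimal sketch of this separation, using 3GPP-style S-NSSAI identifiers (an SST, slice/service type, plus an SD, slice differentiator). The slice names, identifier values, and subscriber table below are illustrative assumptions, not a real operator configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Slice:
    name: str
    sst: int   # Slice/Service Type (3GPP S-NSSAI)
    sd: str    # Slice Differentiator

# Illustrative slices: one for the public network, one reserved for the NPN.
PUBLIC_SLICE = Slice("public-embb", sst=1, sd="000001")
NPN_SLICE = Slice("enterprise-npn", sst=1, sd="0000A1")

# Hypothetical subscription table: which device may attach to which slice.
ALLOWED = {
    "imsi-001010000000001": NPN_SLICE,
    "imsi-001010000000002": PUBLIC_SLICE,
}

def select_slice(imsi: str) -> Slice:
    """Admit a device only to the slice its subscription authorizes,
    keeping NPN and public traffic logically separate."""
    try:
        return ALLOWED[imsi]
    except KeyError:
        raise PermissionError(f"{imsi} is not authorized on this network")

assert select_slice("imsi-001010000000001") is NPN_SLICE
```

The operator enforces this partitioning end to end, so devices on the enterprise slice never share resources with public-network subscribers even though both run on the same physical infrastructure.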

Challenges of Private Network

Spectrum and Regulations

Limited Spectrum Options: Securing suitable spectrum can be challenging, especially in densely populated or highly regulated regions where spectrum allocation is scarce.

Regulatory Hurdles: Navigating complex regulatory environments to acquire spectrum licenses can be time-consuming and costly, often requiring compliance with specific national or regional regulations.

High Initial Cost

Infrastructure Investment: Setting up a private network requires substantial upfront investment in infrastructure such as base stations, antennas, and network equipment.

Operational Expenses: Beyond initial setup, ongoing operational costs include maintenance, upgrades, and personnel training, contributing to the overall cost burden.

Knowledge acquisition or outsourcing

Technical Expertise: Establishing and maintaining a private network demands specialized knowledge in network design, integration, security, and optimization.

Outsourcing Challenges: Depending on internal resources versus outsourcing, finding capable vendors or partners with expertise in private network implementation can be challenging, affecting project timelines and quality.

Availability and Scope

Geographical Coverage: Ensuring adequate coverage across the desired operational area without compromising signal strength or reliability can be complex, particularly in challenging terrains or remote locations.

Scalability: Designing networks that can scale effectively as operational needs grow, without sacrificing performance or security, requires careful planning and sometimes iterative adjustments.

Integration with Existing IT/OT Systems

Legacy Systems: Many enterprises operate legacy operational technology (OT) systems that aren’t designed to interface with IP-based private networks.

Interoperability Issues: Ensuring seamless integration between IT/OT systems, existing network infrastructure, and the new private network requires careful system design and often bespoke solutions.

Data Flow & Security Consistency: Synchronizing real-time data and maintaining consistent security policies across heterogeneous systems can be complex.

Return on Investment (ROI) and Business Justification

Unclear Business Models: Enterprises often struggle to quantify the ROI of private networks, especially when benefits like reliability and security are intangible.

Cost vs. Benefit Uncertainty: Without clear use cases (e.g., predictive maintenance, robotics, digital twin), the business case can remain weak, delaying decision-making.

Our Private Networks SI Capabilities

Digis Squared provides vendor management and control, an operator mindset, a helicopter view, program governance, broad experience, and best-in-class, efficient network solutions and design.

We at Digis Squared provide an end-to-end (E2E) private network SI and managed services journey, which can be described as follows.

This blog post was written by Obeidallah Ali, R&D Director at Digis Squared.

Revolutionizing Indoor Network Testing with INOS: A Deep Dive into the Enhanced Indoor Kit

Introduction

As mobile networks continue to evolve with 5G, ensuring optimal indoor connectivity is more critical than ever. INOS (Indoor Network Optimization Solution) is redefining how operators and engineers approach indoor testing with its advanced tools, robust features, and a newly upgraded Indoor Kit. Designed to tackle the unique challenges of indoor environments, the INOS Indoor Kit offers significant improvements in software, hardware, and overall functionality to deliver superior usability, reliability, and results.


The Importance of Indoor Testing

Indoor spaces like malls, airports, and office buildings pose unique challenges for network optimization due to:

  • Architectural complexity: Thick walls and multiple floors impede signal propagation.
  • User density: Crowded environments generate high network demand.
  • Interference: Co-channel interference can degrade signal quality.

These challenges make precise and efficient indoor network testing crucial for delivering seamless connectivity.


Enhancements in the INOS Indoor Kit

Software Improvements

  1. Revamped User Interface (UI):
    The new UI offers an intuitive design for enhanced accessibility, streamlining control, and monitoring processes for users.
  2. Enhanced Connectivity Options:
    Supporting Internet, WLAN, and Bluetooth connections, the kit provides robust and flexible inter-device connectivity.
  3. Comprehensive Control Capabilities:
    The tablet serves as a central hub, allowing users to control every connected device and monitor KPIs directly.
  4. Centralized Alarm Notifications:
    Alarm notifications from all connected devices are displayed on the tablet in real-time, enabling prompt troubleshooting.

Hardware Upgrades

  1. Ergonomic and Lightweight Design:
    A portable, lighter design ensures ease of use in various indoor scenarios.
  2. Extended Battery Life:
    Powering up to 12 devices for 8 hours of continuous operation, the kit supports long-duration tasks without frequent recharging.
  3. Smart Cooling System:
    An intelligent cooling mechanism activates based on system temperature, ensuring consistent performance without overheating.

Key Features and Differentiators

The INOS Indoor Kit offers several standout features that set it apart from competitors:

  1. 5G Support Across All Devices:
    Fully optimized for 5G testing, supporting all devices within the kit to handle the latest network demands.
  2. Tablet as a Centralized Display:
    Displays real-time radio KPIs, with intuitive visualizations and insights for quick decision-making.
  3. Advanced Device Management via Tablet:
    • Control multiple phones directly.
    • Color-coded indicators highlight synced devices, poor KPIs, and ongoing logfile recordings, allowing users to focus on critical areas.
  4. Support for Large Layout Images:
    Unlike competitors, INOS excels at handling and displaying large indoor layouts, ensuring no testing area is overlooked.
  5. Automated Processes:
    • Logfile Uploading and Collection: Eliminates manual intervention, saving time and effort.
    • Post-Processing Automation: Simplifies report generation and routine tasks that traditionally require manual copy-paste workflows.
  6. Comprehensive Support Model:
    INOS provides end-to-end support for all product aspects, ensuring users have the help they need at every stage.
  7. Expandable Kit Design:
    Offers the flexibility to add more devices, making it adaptable to different indoor testing scales.
  8. Enhanced Connectivity:
    INOS leverages Internet, WLAN, and Bluetooth for device control, overcoming the limitations of competitors who rely solely on Bluetooth (limited to 8 devices and prone to connectivity issues).

Why INOS Stands Out in Indoor Testing

INOS combines cutting-edge technology with user-centric design to deliver a superior indoor testing experience. With its latest enhancements, it ensures that telecom operators and network engineers have the tools they need to achieve:

  • Unmatched Accuracy: Collect and analyze data with precision.
  • Greater Efficiency: Streamlined workflows and automation save time and effort.
  • Enhanced Portability: Lightweight design and extended battery life make it perfect for demanding indoor environments.

Conclusion

The INOS Indoor Kit, with its latest software and hardware upgrades, is a game-changer for indoor network optimization. By focusing on usability, functionality, and reliability, it empowers operators to tackle even the most challenging scenarios with confidence.

Ready to elevate your indoor testing? Discover how the enhanced INOS Indoor Kit can revolutionize your network optimization strategy.

This blog post was written by Amr Ashraf, Product Architect and Support Director at Digis Squared. With extensive experience in telecom solutions and AI-driven technologies, Amr plays a key role in developing and optimizing our innovative products to enhance network performance and operational efficiency.

AI-Driven RAN: Transforming Network Operations for the Future

Challenges Facing Mobile Network Operators (MNOs)

As mobile networks evolve to support increasing data demand, Mobile Network Operators (MNOs) face several critical challenges:

1. Rising CAPEX Due to Network Expansions

With the rollout of 5G and upcoming 6G advancements, MNOs must invest heavily in network expansion, including:

  • Deploying new sites to enhance coverage and capacity.
  • Upgrading existing infrastructure to support new technologies.
  • Investing in advanced hardware, software, and spectrum licenses.

2. Growing Network Complexity

As networks integrate multiple generations of technology (2G, 3G, 4G, 5G, and soon 6G), managing this complexity becomes a major challenge. Key concerns include:

  • Optimizing the placement of new sites to maximize coverage and efficiency.
  • Choosing the right hardware, licenses, and features to balance performance and cost.
  • Ensuring seamless interworking between legacy and new network elements.

3. Increasing OPEX Due to Operations and Maintenance

Operational expenditures continue to rise due to:

  • The increasing number of managed services personnel and field engineers.
  • The complexity of maintaining multi-layer, multi-vendor networks.
  • The need for continuous network optimization to ensure service quality.
  • Rising Energy Costs: Powering an expanding network infrastructure requires substantial energy consumption, and increasing energy prices put further pressure on operational budgets. AI-driven solutions can optimize power usage, reduce waste, and shift energy consumption to off-peak times where feasible.
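As a toy sketch of the off-peak idea above: a threshold rule that lets lightly loaded capacity cells enter a low-power mode while coverage cells stay on. The cell names, loads, and threshold are invented for illustration; a real AI-driven system would learn such policies from traffic history rather than use a fixed cutoff.

```python
# Illustrative threshold: fraction of peak load below which a
# capacity cell may sleep. Real systems would learn this value.
SLEEP_THRESHOLD = 0.15

def cells_to_sleep(load_by_cell: dict[str, float],
                   coverage_cells: set[str]) -> list[str]:
    """Pick capacity cells that can sleep off-peak.
    Coverage cells always stay on so the area keeps basic service."""
    return [cell for cell, load in load_by_cell.items()
            if cell not in coverage_cells and load < SLEEP_THRESHOLD]

# Hypothetical night-time loads (fraction of peak).
night_load = {"macro-1": 0.40, "small-2": 0.05, "small-3": 0.12}
print(cells_to_sleep(night_load, coverage_cells={"macro-1"}))
# → ['small-2', 'small-3']
```

Even this crude rule captures the essential trade-off: energy savings come from capacity layers, while the coverage layer guarantees continuity of service.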

4. Competitive Pressures in Customer Experience & Network Quality

MNOs are not only competing on price and service offerings but also on:

  • Network Quality: Coverage, speed, and reliability.
  • Customer Experience: Personalized and high-quality connectivity.
  • Operational Efficiency: Cost-effective operations that enhance profitability.

The Concept of AI in RAN

To address these challenges, AI-driven Radio Access Networks (AI-RAN) emerge as a key enabler. AI-RAN leverages artificial intelligence and machine learning to:

  • Optimize network planning and resource allocation.
  • Automate operations, reducing manual interventions.
  • Enhance predictive maintenance to prevent failures before they occur.
  • Improve energy efficiency by dynamically adjusting power consumption based on traffic demand.

Different AI-RAN Methodologies

  1. AI and RAN
    • AI and RAN (also referred to as AI with RAN): using a common shared infrastructure to run both AI and RAN workloads, with the goal of maximizing utilization, lowering Total Cost of Ownership (TCO), and generating new AI-driven revenue opportunities.
    • AI is used as an external tool for decision-making and analytics without direct integration into the RAN architecture.
    • Example: AI-driven network planning tools that assist in site selection and spectrum allocation.
  2. AI on RAN
    • AI on RAN: enabling AI services on RAN at the network edge to increase operational efficiency and offer new services to mobile users. This turns the RAN from a cost centre to a revenue source.
    • AI is embedded within the RAN system to enhance real-time decision-making.
    • Example: AI-powered self-optimizing networks (SON) that adjust parameters dynamically to improve network performance.
  3. AI for RAN
    • AI for RAN: advancing RAN capabilities through embedding AI/ML models, algorithms and neural networks into the radio signal processing layer to improve spectral efficiency, radio coverage, capacity and performance.
    • AI is leveraged to redesign RAN architecture for autonomous and intelligent network operations.
    • Example: AI-native Open RAN solutions that enable dynamic reconfiguration of network functions.

Source: NVIDIA, “AI-RAN: Artificial Intelligence – Radio Access Networks.”

Organizations and Standardization Bodies Focusing on AI-RAN

Several industry bodies and alliances are driving AI adoption in RAN, including:

  • O-RAN Alliance: Developing AI-native Open RAN architectures.
  • 3GPP: Standardizing AI/ML applications in RAN.
  • ETSI (European Telecommunications Standards Institute): Working on AI-powered network automation.
  • ITU (International Telecommunication Union): Running the AI for Good initiative to promote AI use cases.
  • GSMA: Promoting AI-driven innovations for future networks.
  • Global Telco AI Alliance: A collaboration among leading telecom operators to advance AI integration in network operations and RAN management.

AI-RAN Use Cases

  1. Intelligent Network Planning
    • AI-driven tools analyze coverage gaps and predict optimal site locations for new deployments.
    • Uses geospatial and traffic data to optimize CAPEX investments.
    • Improves network rollout efficiency by identifying areas with the highest potential return on investment.
  2. Automated Network Optimization
    • AI-powered SON dynamically adjusts network parameters.
    • Enhances performance by minimizing congestion and interference.
    • Predicts and mitigates traffic spikes in real-time, improving service stability.
  3. Predictive Maintenance
    • AI detects anomalies in hardware and predicts failures before they happen.
    • Uses machine learning models to analyze historical data and identify patterns leading to failures.
    • Reduces downtime and minimizes maintenance costs by enabling proactive issue resolution.
  4. Energy Efficiency Optimization
    • AI adjusts power consumption based on real-time traffic patterns.
    • Identifies opportunities for network elements to enter low-power modes during off-peak hours.
    • Leads to significant OPEX savings and a reduced carbon footprint by optimizing renewable energy integration.
  5. Enhanced Customer Experience Management
    • AI-driven analytics personalize network performance based on user behavior.
    • Predicts and prioritizes network resources for latency-sensitive applications like gaming and video streaming.
    • Uses AI-driven call quality analysis to detect and rectify issues before customers notice degradation.
  6. AI-Driven Interference Management
    • AI models analyze interference patterns and dynamically adjust power levels and beamforming strategies.
    • Reduces interference between cells and enhances spectral efficiency, especially in dense urban areas.
  7. Supply Chain and Inventory Optimization
    • AI helps predict hardware and component needs based on network demand forecasts.
    • Reduces overstocking and minimizes delays by ensuring the right components are available when needed.
  8. AI-Driven Beamforming Management
    • AI optimizes beamforming parameters to improve signal strength and reduce interference.
    • Dynamically adjusts beam directions based on real-time user movement and network conditions.
    • Enhances network coverage and capacity, particularly in urban and high-density environments.
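To make the predictive-maintenance use case above concrete, here is a deliberately simple sketch: a trailing-window z-score detector that flags KPI samples deviating sharply from recent history. The KPI values, window size, and threshold are synthetic assumptions; production AI-RAN systems would use trained ML models over far richer telemetry.

```python
import statistics

def anomalies(history: list[float], window: int = 20, k: float = 3.0) -> list[int]:
    """Flag indices whose value deviates more than k standard deviations
    from the trailing window's mean — a simple stand-in for the ML models
    an AI-RAN system would train on hardware KPIs."""
    flagged = []
    for i in range(window, len(history)):
        recent = history[i - window:i]
        mu = statistics.fmean(recent)
        sigma = statistics.pstdev(recent)
        if sigma and abs(history[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Synthetic example: a stable KPI (e.g. amplifier temperature) with one spike.
kpi = [40.0 + 0.1 * (i % 5) for i in range(30)]
kpi[25] = 55.0   # injected fault precursor
print(anomalies(kpi))   # → [25]
```

Flagging such outliers before they escalate into outages is what turns maintenance from reactive to proactive, which is where the downtime and cost savings come from.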

Conclusion

AI is revolutionizing RAN by enhancing efficiency, reducing costs, and improving network performance. As AI adoption in RAN continues to grow, MNOs can expect increased automation, better customer experiences, and more cost-effective network operations. The journey toward AI-driven RAN is not just an evolution—it is a necessity for the future of mobile networks.

To further illustrate these advancements, graphs highlighting AI’s impact on OPEX reduction, predictive maintenance efficiency, and energy savings can help visualize the benefits AI brings to RAN operations.

Prepared By: Abdelrahman Fady | CTO | Digis Squared