Semantic Communications: Use Cases, Challenges, and the Path Forward

Today, I want to delve deeper into the practical applications of semantic communications, examine the challenges we face in implementation, and outline what I believe is the most effective path forward.

Let’s begin by exploring the transformative potential of semantic communication across various domains.

In the realm of 6G and beyond, semantic communication will enable significantly leaner, context-aware data exchange for ultra-reliable low-latency communications (URLLC). This isn’t merely an incremental improvement; it represents a fundamental shift in how we approach network efficiency and reliability.

For Machine-to-Machine (M2M) and IoT applications, the implications are particularly profound. Devices will be able to understand intent without requiring verbose data transmission, resulting in substantial savings in both spectrum usage and energy consumption. In a world moving toward billions of connected devices, this efficiency gain becomes not just beneficial but necessary.

Autonomous systems present another compelling use case. When vehicles and robots can communicate purpose rather than raw data, we see marked improvements in decision-making speed and safety. This shift from data-centric to meaning-centric communication could be the difference between an autonomous vehicle stopping in time or not.

The future of immersive experiences, including extended reality, holographic communication, and digital humans, will increasingly rely on shared context and compressed meaning. These applications demand not just bandwidth but intelligent use of that bandwidth, making semantic communication an ideal approach.

Finally, Digital Twins and Cognitive Networks will benefit tremendously from real-time mirroring and network self-awareness based on semantics rather than full datasets. This allows for more sophisticated modelling and prediction with less overhead.

Despite these promising applications, several significant challenges stand in our way.

Perhaps the most fundamental is what I call “semantic noise”: errors in understanding, not just in transmission. This represents an entirely new category of “noise” in the communication channel, one that our traditional models aren’t equipped to address.

Context synchronization presents another hurdle. How do we ensure that sender and receiver share enough background knowledge to interpret messages correctly? Without this shared foundation, semantic communication breaks down.

From a theoretical perspective, modelling meaning mathematically remains a complex challenge. We need to move beyond bits to quantify and encode “meaning” in ways that are both efficient and reliable.

The dependence on advanced AI also presents practical challenges. Semantic communication requires deep integration with natural language processing, reasoning models, and adaptive learning technologies that are still evolving rapidly.

Finally, standardization poses a significant obstacle. Our current network protocols simply weren’t built for semantic intent exchange, requiring substantial rethinking of our fundamental approaches.

So how do we move forward? I see the path unfolding in three phases. In the first phase, Awareness & Modelling, we need to define semantic entropy, capacity, and metrics while developing proof-of-concept systems in research settings. This foundational work should include embedding semantic layers into AI-enhanced protocols, establishing the technical groundwork for what follows.
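To give a flavour of what that metric work might look like, here is one possible formalization of semantic entropy, sketched by analogy with Shannon entropy. This is an illustrative candidate definition, not an agreed standard, and the conditioning on a shared context C is my own simplification.

```latex
% Shannon entropy: uncertainty over transmitted symbols x
H(X) = -\sum_{x \in \mathcal{X}} p(x)\,\log_2 p(x)

% A candidate semantic analogue: uncertainty over meanings w,
% conditioned on the context C that sender and receiver share
H_s(W \mid C) = -\sum_{w \in \mathcal{W}} p(w \mid C)\,\log_2 p(w \mid C)
```

Semantic compression pays off precisely when H_s(W | C) is much smaller than H(X): shared context makes most of the raw bitstream redundant, so far fewer transmitted bits are needed to pin down the intended meaning.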

The second phase, Prototyping in 6G Environments, involves integrating semantic communication with URLLC and mMTC (massive Machine Type Communications). We should test these integrations with Digital Twin networks and edge AI, while simultaneously establishing pre-standardization working groups to ensure alignment across the industry.

The final phase, Ecosystem Integration & Commercialization, will require embedding semantic modules into chipsets and network functions, deploying them in smart cities, Industry 4.0 environments, and immersive media applications. Standardization through bodies like 3GPP and ITU will be crucial during this phase to ensure global interoperability.

This journey toward semantic communication isn’t just a technical evolution; it’s a reimagining of how networks understand and transmit meaning. The challenges are substantial, but the potential rewards in efficiency, intelligence, and new capabilities make this one of the most exciting frontiers in telecommunications.

This blog post was written by Amr Ashraf, Product Architect and Support Director at Digis Squared.

Why 6G Spectrum Matters: The Invisible Anchor of the Next Wireless Revolution

As I reflect on the trajectory of mobile communications, I find myself at a fascinating inflection point. We stand at the threshold of another major leap forward, and the promise of 6G extends far beyond incremental improvements in speed or latency. What truly excites me is how 6G represents a fundamental reimagining of how intelligence, presence, and connectivity converge in our networks and devices. At the core of this transformation lies an often overlooked but absolutely critical element: spectrum.

I’d like to explore why spectrum will once again shape not just our networks, but our societies and the very fabric of our digital existence.

Each generation of wireless technology has been defined by the spectrum it unlocked. 3G introduced us to mobile internet, fundamentally changing how we access information. 4G gave birth to the mobile economy, enabling video streaming, social media, and real-time applications that have transformed business models and social interactions. 5G pushed into millimeter wave frequencies, delivering industrial-grade responsiveness for critical applications.

But 6G represents something more profound. The leap isn’t merely technological—it’s philosophical. Connectivity is evolving to become contextual and cognitive. Our networks won’t just react to demands; they’ll anticipate needs. Devices are transforming from communication tools into intelligent sensors and agents that understand and interact with their environment. To enable this vision, 6G will require access to new spectral frontiers, particularly the sub-terahertz and terahertz (THz) ranges that have remained largely untapped for communications.

The relationship between spectrum and 6G innovation is multifaceted and critical. First, we face the fundamental challenge of data hunger meeting bandwidth bottlenecks. Applications like immersive extended reality, holographic communication, and digital twins demand terabit-per-second scale bandwidth capacities that can only be provided through the vast, underutilized frequency bands far above today’s cellular allocations.

Second, moving into terahertz bands introduces entirely new physics to our communication systems. This isn’t just about higher speeds; it means fundamentally different signal behaviours, novel hardware challenges, and revolutionary ways of sensing the environment. The properties of these frequencies will enable capabilities we’ve barely begun to imagine.

Third, spectrum is increasingly becoming a strategic national resource. The countries and companies that shape the 6G spectrum narrative will effectively shape the rules of digital engagement for the next decade and beyond. This geopolitical dimension adds another layer of complexity to spectrum allocation and standardization.

As we develop these new frequency bands, we’ll need new ways to describe and categorize them. Just as 5G required a new “language” to describe its frequency bands (such as n78 or FR2), 6G will demand new spectrum notations to handle wider bandwidths (tens or hundreds of gigahertz), account for dynamic spectrum sharing and AI-managed allocation, describe multi-layered integration across space, air, and terrestrial networks, and reflect new use-case mappings for sensing, localization, and environmental feedback.

Without clear and intelligent spectrum notations, we risk fragmenting the global 6G conversation, both technically and geopolitically, at precisely the moment when unified approaches are most needed.

We often discuss spectrum in abstract terms as an invisible field of energy we harness for communication. But the spectrum has language. It has a notation. And as we transition from 5G into the far more complex realm of 6G, that language is evolving in significant ways.

To understand the future of wireless, we must first understand how we describe it. At the most basic level, frequency measurements tell us about radio wave oscillation: 1 Hz represents one cycle per second, 1 MHz is one million cycles per second, 1 GHz is one billion cycles per second, and 1 THz is one trillion cycles per second. Higher frequencies oscillate faster, enabling more data to be carried per unit time but also introducing greater signal loss, narrower coverage, and new technical challenges.

The evolution of mobile communications has consistently moved toward higher frequencies: 2G operated in hundreds of MHz, 4G and early 5G exploited sub-6 GHz bands, 5G NR expanded into millimeter wave (24–52 GHz), and 6G will push from 100 GHz to potentially 10 THz. This progression reflects our growing appetite for bandwidth and the technological innovations that make higher frequencies viable for communication.

In 5G, standardized notations were introduced to simplify discussions about specific frequency bands. Designations like n78 (3300–3800 MHz, a widely deployed mid-band 5G range) and broader categories like FR1 (sub-6 GHz frequencies) and FR2 (24–52 GHz millimeter wave) have streamlined regulatory, engineering, and operational conversations. However, as we move into sub-THz and THz frequencies, these notation schemes begin to show their limitations.

As we begin to propose bands like 140 GHz, 275 GHz, and even 1 THz for 6G, new spectrum notation systems will be required to unify wider frequency ranges under flexible identifiers, account for hybrid use cases where a single band supports sensing, communication, and computing simultaneously, and enable AI interpretation through machine-readable notations for real-time spectrum management.

We might see notations like fTH1 (Fixed THz Band 1: 275–325 GHz), dTHx (Dynamic Terahertz experimental block), or sT1 (Sensing THz Band 1, dedicated for RF-based environment detection). While these are speculative examples, they illustrate the fundamental need: our notation must evolve alongside our use cases and technology.
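To illustrate what “machine-readable notation” could mean in practice, here is a minimal sketch that expresses the speculative band identifiers above as structured data an AI spectrum manager could parse and query. The fields, values, and band definitions are illustrative, not any standardized scheme.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpectrumBand:
    """A machine-readable band descriptor (illustrative, non-standard)."""
    notation: str        # e.g. "fTH1" -- speculative identifier from the text
    f_low_ghz: float     # lower band edge in GHz
    f_high_ghz: float    # upper band edge in GHz
    uses: tuple          # hybrid use cases the band supports
    sharing: str         # "fixed", "dynamic", or "ai-managed"

    @property
    def bandwidth_ghz(self) -> float:
        return self.f_high_ghz - self.f_low_ghz

# The speculative examples from the text, expressed as data:
REGISTRY = [
    SpectrumBand("fTH1", 275.0, 325.0, ("communication",), "fixed"),
    SpectrumBand("dTHx", 140.0, 220.0, ("communication", "computing"), "dynamic"),
    SpectrumBand("sT1",  300.0, 320.0, ("sensing",), "ai-managed"),
]

# Example query: find dynamically shareable bands wider than 40 GHz.
for band in (b for b in REGISTRY
             if b.sharing != "fixed" and b.bandwidth_ghz > 40):
    print(f"{band.notation}: {band.f_low_ghz}-{band.f_high_ghz} GHz "
          f"({band.bandwidth_ghz:.0f} GHz wide), uses={band.uses}")
```

The point is less the data structure itself than the principle: once notation is machine-readable, real-time spectrum management becomes a query over structured descriptors rather than a reading of regulatory prose.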

The importance of well-defined spectrum notation extends across multiple stakeholder groups. For engineers, poorly defined notation creates confusion in hardware design, simulation, and deployment. For regulators, a lack of harmonized notation leads to regional incompatibility and inefficiencies in global rollout. For innovators, a shared, evolving language opens doors to collaborative research, efficient prototyping, and even machine-to-machine spectrum negotiation.

It’s worth noting that notation isn’t neutral; it embodies power. Whoever defines the language often shapes the outcome. As we collectively create 6G, spectrum notation represents a strategic touchpoint—a bridge between science, policy, and geopolitics that will influence the development trajectory of next-generation wireless technology.

The future of 6G is being written not just in laboratories or boardrooms but in the electromagnetic spectrum itself. If 5G reached into the millimeter-wave frontier, 6G is preparing for a quantum leap into the sub-terahertz and terahertz bands. These frequency ranges, once considered the domain of theoretical physics or space science, are now firmly in the telecom spotlight.

Before exploring specific frequencies, it’s important to understand that 6G isn’t simply “5G, but faster.” It aims to support terabit-per-second data rates for holographic and immersive applications, microsecond-level latency for real-time control and tactile internet, native AI and sensing capabilities embedded directly in the spectrum layer, and multi-dimensional connectivity spanning terrestrial, airborne, and satellite networks. To support these capabilities, we need wider bandwidths than ever before—and that’s only possible at higher frequencies.

Several spectrum ranges are emerging as candidates for 6G deployment. Upper Mid-Bands (7–24 GHz), sometimes called FR3, offer a potential balance between coverage and capacity for early 6G deployments. Candidate bands in this range include 7–15 GHz, with particular interest in the 10–14.5 GHz range being explored by ITU. These frequencies could support urban macro deployments with extended coverage and decent capacity, though existing satellite usage presents challenges that will require robust coexistence frameworks.

Sub-Terahertz bands (100–300 GHz) represent the range where true 6G performance begins to shine. Particular interest has focused on 100–140 GHz (under exploration in Europe, Korea, and Japan) and 275–325 GHz (proposed as a new THz communication block). These frequencies could enable indoor ultra-high-speed access, device-to-device communications, and real-time augmented, virtual, and extended reality applications. However, they face challenges including severe path loss, line-of-sight requirements, and hardware immaturity.

Terahertz Bands (0.3–10 THz) push beyond traditional RF into new physical domains. These bands, currently under early-stage scientific study, could support wireless cognition, high-speed backhaul, and environmental sensing. The challenges here are substantial: limited current RF integrated circuits, lack of global regulatory frameworks, and energy efficiency concerns.

Low-Band Spectrum (Sub-1 GHz) remains essential even in the 6G era. While not new, these frequencies provide critical coverage for massive IoT, rural areas, and emergency communications. The primary challenge is that this spectrum is already heavily saturated with legacy systems.

International harmonization efforts are underway across multiple organizations. ITU-R (WP 5D) is actively evaluating candidate frequencies for IMT-2030 (the official designation for 6G). The FCC in the United States has opened exploratory windows at 95–275 GHz. Europe’s Hexa-X project advocates for coordinated research into 100+ GHz spectrum. China, Korea, and Japan are conducting field trials at 140 GHz and above. Global harmonization will be crucial—not just to avoid interference, but to enable cross-border 6G roaming, manufacturing scale, and effective spectrum diplomacy.

Rather than depending on any single band, 6G will likely employ a layered spectrum approach: low bands for resilient, wide-area coverage; mid bands for urban macro deployment and balanced rollout; sub-THz for immersive services and fixed wireless; and THz for sensing, cognition, and backhaul. All of these layers will be dynamically orchestrated, likely through AI and real-time feedback systems, to create a seamless connectivity experience across diverse environments and use cases.
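As a toy sketch of what such layered orchestration might look like as a policy, the function below maps a service request to one of the four layers described above. The thresholds and layer mappings are invented for illustration and are far simpler than what an AI orchestrator would actually learn.

```python
# Illustrative only: maps a service request to a spectrum layer following
# the layered 6G model above. Thresholds are hypothetical.
def select_spectrum_layer(use_case: str, range_km: float) -> str:
    """Pick a spectrum layer for a request, per the layered 6G approach."""
    if use_case in ("sensing", "backhaul"):
        return "THz (0.3-10 THz)"          # sensing, cognition, backhaul
    if use_case in ("xr", "fixed_wireless") and range_km < 0.2:
        return "sub-THz (100-300 GHz)"     # immersive services, short range
    if range_km > 10:
        return "low band (<1 GHz)"         # resilient wide-area coverage
    return "upper mid-band (7-24 GHz)"     # urban macro, balanced rollout

print(select_spectrum_layer("xr", 0.05))    # -> sub-THz (100-300 GHz)
print(select_spectrum_layer("iot", 25.0))   # -> low band (<1 GHz)
print(select_spectrum_layer("mobile", 2.0)) # -> upper mid-band (7-24 GHz)
```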

Author: Obeidallah Ali, R&D Director at DIGIS Squared

Obeidallah Ali leads the Research & Development efforts at DIGIS Squared, driving innovation in AI-powered telecom solutions. With deep expertise in 5G Network Design, Optimization, and Automation, he focuses on developing tools like INOS™ and Katana™ that help operators diagnose, troubleshoot, and enhance network performance worldwide.

For inquiries, please contact:
Email: info@digis2.com

Semantic Communications: Rethinking How Networks Understand Meaning

Traditional communication models, like Shannon’s theory, have always focused primarily on the accuracy of bit transmission from sender to receiver. But in today’s world, dominated by AI, IoT, and immersive experiences, this approach is becoming increasingly insufficient. The challenge isn’t just about transmitting data anymore; it’s about transmitting the right data, with the right context, at precisely the right moment.

At its core, semantic communication represents a model that prioritizes understanding over mere accuracy. Rather than sending every bit of information, semantic systems intelligently transmit only what’s necessary for the receiver to reconstruct the intended meaning. This represents a profound shift in how we conceptualize network communication.

Consider this practical example: a device needs to send the message “I need a glass of water.” In classical communication, this entire sentence would be encoded, transmitted, and decoded bit by bit, regardless of context. But in a semantic communication system, if the context already indicates the user is thirsty, simply transmitting the word “glass” might be sufficient to trigger complete understanding. This approach is inherently context-aware, knowledge-driven, and enhanced by artificial intelligence.
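Here is the “glass of water” example as a minimal code sketch: a toy semantic transmitter that sends only the tokens the receiver cannot already infer from shared context. The context model, vocabulary, and inference rules are invented for illustration; a real system would use learned models on both ends.

```python
# Toy semantic codec: transmit only tokens not inferable from shared context.
# Everything here is illustrative; real systems would use trained AI models.
SHARED_CONTEXT = {"user_state": "thirsty"}  # knowledge both ends already hold

# What the receiver can infer on its own, per context (hypothetical rules):
INFERABLE = {"thirsty": {"I", "need", "a", "of", "water"}}

def semantic_encode(message: str, context: dict) -> list:
    inferable = INFERABLE.get(context["user_state"], set())
    # Keep only the tokens carrying meaning the receiver lacks.
    return [tok for tok in message.split() if tok not in inferable]

def semantic_decode(tokens: list, context: dict) -> str:
    if context["user_state"] == "thirsty" and tokens == ["glass"]:
        return "I need a glass of water"  # meaning rebuilt from context
    return " ".join(tokens)               # fall back to literal decoding

payload = semantic_encode("I need a glass of water", SHARED_CONTEXT)
print(payload)                                   # ['glass'] -- 1 token, not 6
print(semantic_decode(payload, SHARED_CONTEXT))  # 'I need a glass of water'
```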

The necessity for semantic communication becomes increasingly apparent when we consider its practical benefits. It substantially reduces redundant data transmission, which conserves both bandwidth and energy, critical resources in our increasingly connected world. For latency-sensitive applications like critical IoT systems, autonomous vehicles, and holographic communication, this efficiency translates to meaningful performance improvements. Furthermore, it enhances machine-to-machine understanding, enabling more intelligent edge networks, while aligning communication more closely with human-like reasoning patterns, making our interactions with technology more natural and efficient.

When we examine these advantages collectively, it becomes evident that semantic communication isn’t merely a beneficial addition to our technological toolkit; it represents a fundamental paradigm shift in communications technology.

The enabler of this transformation is undoubtedly artificial intelligence, particularly in domains such as natural language understanding, knowledge graphs, semantic representations, and the ability to learn shared context between sender and receiver. When integrated with Digital Twins and Cognitive Networks, semantic communication becomes even more powerful, allowing systems to predict, understand, and take proactive action rather than simply reacting to inputs.

At Digis Squared, we view Semantic Communication as a cornerstone of future AI-native networks. I believe it will fundamentally reshape how we design, operate, and optimize telecom systems, not only by increasing efficiency but by making networks truly intelligent.

As Head of Product, I find myself increasingly asking a question that challenges conventional thinking: What if our networks could understand why we communicate, not just what we communicate? This perspective shifts our focus from merely building faster networks to creating smarter, more meaningful ones that truly understand human intent.

Author: Mohamed Sayyed, Head of Product at DIGIS Squared

The Evolution of Self-Organizing Networks: From SON to Cognitive SON to LTMs

As we approach 2030, the telecommunications industry is at a point where traditional network automation methods are merging with advanced AI technologies. Based on my experience over the past decade with network optimization solutions, I would like to share some insights on potential future developments.

Two Perspectives on SON Evolution

When discussing the future of Self-Organizing Networks (SON), it’s crucial to distinguish between two perspectives:

SON as a Conceptual Framework

The fundamental principles of self-configuration, self-optimization, and self-healing will remain essential to network operations. These core concepts represent the industry’s north star – autonomous networks that can deploy, optimize, and repair themselves with minimal human intervention.

These principles aren’t going away. Rather, they’re being enhanced and reimagined through more sophisticated AI approaches.

Vendor-Specific SON Implementations

The feature-based SON solutions we’ve grown familiar with – ANR (Automatic Neighbour Relations), CCO (Coverage & Capacity Optimization), MLB (Mobility Load Balancing), and others – are likely to undergo significant transformation or potential replacement.

These siloed, rule-based features operate with limited contextual awareness and struggle to optimize for multiple objectives simultaneously. They represent the first generation of network automation that’s ripe for disruption.

Enter Large Telecom Models (LTMs)

The emergence of Large Telecom Models (LTMs) – specialized AI models trained specifically on telecom network data – represents a paradigm shift in how we approach network intelligence.

Just as Large Language Models revolutionized natural language processing, LTMs are poised to transform network operations by:

  1. Providing holistic, cross-domain optimization instead of siloed feature-specific approaches
  2. Enabling truly autonomous decision-making based on comprehensive network understanding
  3. Adapting dynamically to changing conditions without explicit programming
  4. Learning continuously from network performance data

The Path Forward: Integration, or Replacement?

The relationship between traditional SON, Cognitive SON, and emerging LTMs is best seen as evolutionary rather than revolutionary.

  • Near-term (1-2 years): LTMs will complement existing SON features, enhancing their capabilities while learning from operational patterns
  • Mid-term (3-4 years): We’ll see the emergence of agentic AI systems that can orchestrate multiple network functions autonomously
  • Long-term (5+ years): Many vendor-specific SON implementations will likely be replaced by more sophisticated LTM-driven systems

The most successful operators will be those who embrace this transition strategically – leveraging the proven reliability of existing SON for critical functions while gradually adopting LTM capabilities for more complex, multi-domain challenges.

Real-World Progress

We’re already seeing this evolution in action. SoftBank recently developed a foundational LTM that automatically reconfigures networks during mass events.

These early implementations hint at the tremendous potential ahead as we move toward truly intelligent, autonomous networks.

Prepared By: Abdelrahman Fady | CTO | Digis Squared

NWDAF: How 5G is AI-Native in Essence

The evolution of telecommunications networks has always been characterized by increasing complexity and intelligence. As we’ve moved through successive generations of wireless technology, I’ve observed a consistent trend toward more adaptive, responsive systems. With 5G, this evolution has reached a critical inflection point by introducing the Network Data Analytics Function (NWDAF), a component that fundamentally transforms how networks operate and adapt.

NWDAF, introduced in the 5G Core architecture starting from Release 15 and continuing to evolve toward 6G, represents a pivotal element in the Service-Based Architecture (SBA). More than just another network component, it embodies a philosophical shift toward data-driven, intelligent network operations that anticipate the needs of both users and applications.

At its core, NWDAF serves as a standardized network function that provides analytics services to other network functions, applications, and external consumers. Its functionality spans the entire analytics lifecycle: collecting data from various network functions (including AMF, SMF, PCF, and NEF), processing and analyzing that data, generating actionable insights and predictions, and feeding decisions back into the network for optimization and policy enforcement.

I often describe NWDAF as the “central intelligence of the network”—a system that transforms raw operational data into practical insights that drive network behavior. This transformation is not merely incremental; it represents a fundamental reimagining of how networks function.

The necessity for NWDAF becomes apparent when we consider the demands placed on modern networks. Autonomous networks require closed-loop automation for self-healing and self-optimization—capabilities that depend on the analytical insights NWDAF provides. Quality of Service assurance increasingly relies on the ability to predict congestion, session drops, or mobility issues before they impact user experience. Network slicing, a cornerstone of 5G architecture, depends on real-time monitoring and optimization of slice performance. Security analytics benefit from NWDAF’s ability to detect anomalies or attacks through traffic behavior pattern analysis. Furthermore, NWDAF’s flexible deployment model allows it to operate in either central cloud environments or Multi-access Edge Computing (MEC) nodes, enabling localized decision-making where appropriate.

The integration of NWDAF with other network functions occurs through well-defined interfaces. The Np interface facilitates data collection from various network functions. The Na interface enables NWDAF to provide analytics to consumers. The Nnef interface supports interaction with the Network Exposure Function, while the Naf interface enables communication with Application Functions. This comprehensive integration ensures that NWDAF can both gather the data it needs and distribute its insights effectively throughout the network.

The analytical capabilities of NWDAF span multiple dimensions. Descriptive analytics provide visibility into current network conditions, including load metrics, session statistics, and mobility patterns. Predictive analytics enable the network to anticipate issues before they occur, such as congestion prediction, user experience degradation forecasts, and mobility failure prediction. Looking forward, prescriptive analytics will eventually allow NWDAF to suggest automated actions, such as traffic rerouting or slice reconfiguration, further enhancing network autonomy.
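As an illustration of how a consumer NF or application might request such analytics, here is a hedged sketch of a load-analytics query. It follows the spirit of the Nnwdaf_AnalyticsInfo service (specified in 3GPP TS 29.520), but the host, exact URI shape, parameters, and response fields below are simplified assumptions for illustration, not the normative API.

```python
import json
import urllib.request

# Illustrative consumer-side request for NWDAF analytics.
# URI and parameters are simplified; see 3GPP TS 29.520 for the real API.
NWDAF_BASE = "http://nwdaf.example.com/nnwdaf-analyticsinfo/v1"  # hypothetical host

def get_load_analytics(slice_id: str) -> dict:
    """Ask NWDAF for load-level analytics for a network slice."""
    url = (f"{NWDAF_BASE}/analytics"
           f"?event-id=LOAD_LEVEL_INFORMATION&snssai={slice_id}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# A consumer (e.g., a PCF or operator application) could then close the loop:
# analytics = get_load_analytics("01-ABCDEF")
# if analytics.get("loadLevel", 0) > 80:      # field name assumed
#     trigger_slice_reconfiguration()          # hypothetical action
```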

As we look toward 6G, NWDAF is poised to evolve into an even more sophisticated component of network architecture. I anticipate the development of an AI/ML-native architecture where NWDAF evolves into a Distributed Intelligence Function. Federated learning approaches will enable cross-domain learning without requiring central data sharing, addressing privacy and efficiency concerns. Integration with digital twin technology will allow simulated networks to feed NWDAF with predictive insights, enhancing planning and optimization. Perhaps most significantly, NWDAF will increasingly support intent-based networking, where user intentions are translated directly into network behavior without requiring detailed technical specifications.

The journey toward truly intelligent networks is just beginning, and NWDAF represents a crucial step in that evolution. By embedding analytics and intelligence directly into the network architecture, 5G has laid the groundwork for networks that don’t just connect—they understand, anticipate, and adapt. This foundation will prove essential as we continue to build toward the even more demanding requirements of 6G and beyond.

Prepared By: Amr Ashraf | Head of Solution Architect and R&D | Digis Squared


AI-Driven RAN: Transforming Network Operations for the Future

Challenges Facing Mobile Network Operators (MNOs)

As mobile networks evolve to support increasing data demand, Mobile Network Operators (MNOs) face several critical challenges:

1. Rising CAPEX Due to Network Expansions

With the rollout of 5G and upcoming 6G advancements, MNOs must invest heavily in network expansion, including:

  • Deploying new sites to enhance coverage and capacity.
  • Upgrading existing infrastructure to support new technologies.
  • Investing in advanced hardware, software, and spectrum licenses.

2. Growing Network Complexity

As networks integrate multiple generations of technology (2G, 3G, 4G, 5G, and soon 6G), managing this complexity becomes a major challenge. Key concerns include:

  • Optimizing the placement of new sites to maximize coverage and efficiency.
  • Choosing the right hardware, licenses, and features to balance performance and cost.
  • Ensuring seamless interworking between legacy and new network elements.

3. Increasing OPEX Due to Operations and Maintenance

Operational expenditures continue to rise due to:

  • The increasing number of managed services personnel and field engineers.
  • The complexity of maintaining multi-layer, multi-vendor networks.
  • The need for continuous network optimization to ensure service quality.
  • Rising Energy Costs: Powering an expanding network infrastructure requires substantial energy consumption, and increasing energy prices put further pressure on operational budgets. AI-driven solutions can optimize power usage, reduce waste, and shift energy consumption to off-peak times where feasible.

4. Competitive Pressures in Customer Experience & Network Quality

MNOs are not only competing on price and service offerings but also on:

  • Network Quality: Coverage, speed, and reliability.
  • Customer Experience: Personalized and high-quality connectivity.
  • Operational Efficiency: Cost-effective operations that enhance profitability.

The Concept of AI in RAN

To address these challenges, AI-driven Radio Access Networks (AI-RAN) emerge as a key enabler. AI-RAN leverages artificial intelligence and machine learning to:

  • Optimize network planning and resource allocation.
  • Automate operations, reducing manual interventions.
  • Enhance predictive maintenance to prevent failures before they occur.
  • Improve energy efficiency by dynamically adjusting power consumption based on traffic demand.

Different AI-RAN Methodologies

  1. AI and RAN
    • AI and RAN (also referred to as AI with RAN): using a common shared infrastructure to run both AI and RAN workloads, with the goal of maximizing utilization, lowering Total Cost of Ownership (TCO), and generating new AI-driven revenue opportunities.
    • AI is used as an external tool for decision-making and analytics without direct integration into the RAN architecture.
    • Example: AI-driven network planning tools that assist in site selection and spectrum allocation.
  2. AI on RAN
    • AI on RAN: enabling AI services on RAN at the network edge to increase operational efficiency and offer new services to mobile users. This turns the RAN from a cost centre to a revenue source.
    • AI is embedded within the RAN system to enhance real-time decision-making.
    • Example: AI-powered self-optimizing networks (SON) that adjust parameters dynamically to improve network performance.
  3. AI for RAN
    • AI for RAN: advancing RAN capabilities through embedding AI/ML models, algorithms and neural networks into the radio signal processing layer to improve spectral efficiency, radio coverage, capacity and performance.
    • AI is leveraged to redesign RAN architecture for autonomous and intelligent network operations.
    • Example: AI-native Open RAN solutions that enable dynamic reconfiguration of network functions.

Source: NVIDIA, AI-RAN: Artificial Intelligence – Radio Access Networks document.

Organizations and Standardization Bodies Focusing on AI-RAN

Several industry bodies and alliances are driving AI adoption in RAN, including:

  • O-RAN Alliance: Developing AI-native Open RAN architectures.
  • 3GPP: Standardizing AI/ML applications in RAN.
  • ETSI (European Telecommunications Standards Institute): Working on AI-powered network automation.
  • ITU (International Telecommunication Union): Promoting AI use cases through its AI for Good initiative.
  • GSMA: Promoting AI-driven innovations for future networks.
  • Global Telco AI Alliance: A collaboration among leading telecom operators to advance AI integration in network operations and RAN management.

AI-RAN Use Cases

  1. Intelligent Network Planning
    • AI-driven tools analyse coverage gaps and predict optimal site locations for new deployments.
    • Uses geospatial and traffic data to optimize CAPEX investments.
    • Improves network rollout efficiency by identifying areas with the highest potential return on investment.
  2. Automated Network Optimization
    • AI-powered SON dynamically adjusts network parameters.
    • Enhances performance by minimizing congestion and interference.
    • Predicts and mitigates traffic spikes in real-time, improving service stability.
  3. Predictive Maintenance (a minimal sketch follows this list)
    • AI detects anomalies in hardware and predicts failures before they happen.
    • Uses machine learning models to analyze historical data and identify patterns leading to failures.
    • Reduces downtime and minimizes maintenance costs by enabling proactive issue resolution.
  4. Energy Efficiency Optimization
    • AI adjusts power consumption based on real-time traffic patterns.
    • Identifies opportunities for network elements to enter low-power modes during off-peak hours.
    • Leads to significant OPEX savings and a reduced carbon footprint by optimizing renewable energy integration.
  5. Enhanced Customer Experience Management
    • AI-driven analytics personalize network performance based on user behavior.
    • Predicts and prioritizes network resources for latency-sensitive applications like gaming and video streaming.
    • Uses AI-driven call quality analysis to detect and rectify issues before customers notice degradation.
  6. AI-Driven Interference Management
    • AI models analyze interference patterns and dynamically adjust power levels and beamforming strategies.
    • Reduces interference between cells and enhances spectral efficiency, especially in dense urban areas.
  7. Supply Chain and Inventory Optimization
    • AI helps predict hardware and component needs based on network demand forecasts.
    • Reduces overstocking and minimizes delays by ensuring the right components are available when needed.
  8. AI-Driven Beamforming Management
    • AI optimizes beamforming parameters to improve signal strength and reduce interference.
    • Dynamically adjusts beam directions based on real-time user movement and network conditions.
    • Enhances network coverage and capacity, particularly in urban and high-density environments.
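To make the predictive-maintenance use case concrete, below is a minimal sketch of the kind of rule it starts from: flagging telemetry that drifts far from its recent history. The readings and the three-sigma threshold are hypothetical stand-ins for the trained models a production system would use.

```python
from statistics import mean, stdev

# Hypothetical daily temperature readings (deg C) from a radio unit.
history = [41.0, 42.5, 40.8, 41.9, 42.1, 41.4, 42.0, 41.7]
latest = 49.3

# Flag readings more than 3 standard deviations from the recent mean --
# a simple stand-in for learned failure-precursor models.
mu, sigma = mean(history), stdev(history)
z_score = (latest - mu) / sigma
if abs(z_score) > 3:
    print(f"Anomaly: {latest} deg C is {z_score:.1f} sigma above baseline; "
          "raise a proactive maintenance ticket.")
```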

Conclusion

AI is revolutionizing RAN by enhancing efficiency, reducing costs, and improving network performance. As AI adoption in RAN continues to grow, MNOs can expect increased automation, better customer experiences, and more cost-effective network operations. The journey toward AI-driven RAN is not just an evolution—it is a necessity for the future of mobile networks.


Prepared By: Abdelrahman Fady | CTO | Digis Squared

Optimizing LTE 450MHz Networks with INOS 

Introduction 

The demand for reliable, high-coverage wireless communication is increasing, particularly for mission-critical applications, rural connectivity, and industrial deployments. LTE 450MHz (Band 31) is an excellent solution due to its superior propagation characteristics, providing extensive coverage with fewer base stations. However, the availability of compatible commercial handsets remains limited, creating challenges for operators and network engineers in testing and optimizing LTE 450MHz deployments. 

To overcome these challenges, DIGIS Squared is leveraging its advanced network testing tool, INOS, integrated with ruggedized testing devices such as the RugGear RG760. This article explores how INOS enables efficient testing, optimization, and deployment of LTE 450MHz networks without relying on traditional consumer handsets. 

The Challenge of LTE 450MHz Testing 

LTE 450MHz is an essential frequency band for sectors such as utilities, public safety, and IoT applications. The band’s key advantages include: 

  • Longer range: Due to its low frequency, LTE 450MHz signals propagate further, covering large geographical areas with minimal infrastructure. 
  • Better penetration: It ensures superior indoor and underground coverage, crucial for industrial sites and emergency services. 
  • Low network congestion: Given its niche application, LTE 450MHz networks often experience less congestion than conventional LTE bands. 

However, network operators and service providers face significant hurdles in testing and optimizing LTE 450MHz due to the lack of commercially available handsets supporting Band 31. Traditional methods of network optimization rely on consumer devices, which are not widely available for this band. 

Introducing INOS: A Comprehensive Drive Test Solution 

INOS is a state-of-the-art, vendor-agnostic network testing and optimization tool developed by DIGIS Squared. It allows operators to: 

  • Conduct extensive drive tests and walk tests with real-time data collection. 
  • Analyze Key Performance Indicators (KPIs) such as RSRP, RSRQ, SINR, throughput, and latency. 
  • Evaluate handover performance, coverage gaps, and network interference. 
  • Benchmark networks across multiple operators. 
  • Generate comprehensive reports with actionable insights for optimization. 

INOS eliminates the dependency on consumer devices, making it an ideal solution for LTE 450MHz testing. 

How INOS Enhances LTE 450MHz Testing 

1. Seamless Data Collection 

INOS allows seamless data collection for LTE 450MHz performance analysis. Engineers can conduct extensive tests using professional-grade testing devices like the RugGear RG760. 

2. Comprehensive Performance Monitoring 

INOS enables engineers to monitor key LTE 450MHz performance metrics (a simple grading sketch follows the list below), including: 

  • Signal strength and quality (RSRP, RSRQ, SINR). 
  • Throughput measurements for downlink and uplink speeds. 
  • Handover success rates and network transitions. 
  • Coverage mapping with real-time GPS tracking. 
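As promised above, here is a simple grading sketch that buckets RSRP/SINR samples against rule-of-thumb thresholds. The cut-offs are common industry rules of thumb and are shown here as assumptions, not INOS internals or any operator’s policy.

```python
# Rule-of-thumb LTE signal grading; thresholds are illustrative assumptions.
def grade_sample(rsrp_dbm: float, sinr_db: float) -> str:
    if rsrp_dbm >= -90 and sinr_db >= 13:
        return "good"
    if rsrp_dbm >= -105 and sinr_db >= 3:
        return "fair"
    return "poor"   # candidate coverage gap / interference zone

# Grading a few hypothetical drive-test samples and flagging weak spots:
samples = [(-84.0, 18.5), (-101.5, 6.2), (-116.0, -1.4)]
for rsrp, sinr in samples:
    print(f"RSRP={rsrp} dBm, SINR={sinr} dB -> {grade_sample(rsrp, sinr)}")
```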

3. Efficient Deployment & Troubleshooting 

Using INOS streamlines the LTE 450MHz deployment process by: 

  • Identifying weak coverage areas before commercial rollout. 
  • Troubleshooting network performance issues in real-time. 
  • Validating base station configurations and antenna alignments. 

4. Cost-Effective & Scalable Testing 

By using INOS instead of expensive proprietary testing hardware, operators can achieve a cost-effective and scalable testing framework. 

Real-World Applications 

1. Private LTE Networks 

Organizations deploying private LTE networks in critical industries (e.g., mining, utilities, emergency services) can use INOS to ensure optimal network performance and coverage. 

2. Smart Grids & Utilities 

With LTE 450MHz playing a key role in smart grids and utilities, INOS facilitates efficient network optimization, ensuring stable communication between smart meters and control centers. 

3. Public Safety & Emergency Response 

For first responders relying on LTE 450MHz for mission-critical communications, INOS ensures that networks meet the required service quality and reliability standards. 

4. Rural & Remote Connectivity 

Operators extending connectivity to underserved areas can leverage INOS to validate coverage, optimize handovers, and enhance user experience. 

Conclusion 

Testing and optimizing LTE 450MHz networks have historically been challenging due to the limited availability of compatible handsets. By leveraging the powerful capabilities of INOS, DIGIS Squared provides a cutting-edge solution for network operators to efficiently deploy and maintain LTE 450MHz networks. 

With INOS, operators can conduct extensive drive tests, analyze network KPIs, and troubleshoot issues in real-time, ensuring seamless connectivity for industries relying on LTE 450MHz. As the demand for private LTE networks grows, INOS represents a game-changer in network testing and optimization. 

For more information on how INOS can enhance your LTE 450MHz deployment, contact DIGIS Squared today! 


This blog post was written by Amr Ashraf, Product Architect and Support Director at Digis Squared. With extensive experience in telecom solutions and AI-driven technologies, Amr plays a key role in developing and optimizing our innovative products to enhance network performance and operational efficiency.

NFV deployment validation using INOS

Network Function Virtualization (NFV) is becoming increasingly important as mobile networks are asked to handle an ever-growing number of connected devices and new use cases. In this article, Amr Ashraf, RAN and Software Solution Architect and Trainer, describes the benefits, capabilities, and deployment considerations of NFV. Plus, we take a quick look at how Digis Squared’s powerful AI-tool, INOS, can help in the deployment validation of NFV.

Network Function Virtualization

Mobile virtualization – also known as network function virtualization (NFV) – is a powerful technology that has the capability to transform the way mobile networks are designed, deployed, and operated.

  • The creation of virtualized mobile networks, with different types of traffic isolated on the same physical network infrastructure.
  • The creation of different virtual networks for different types of services or different user groups.
  • The sharing of a common infrastructure by multiple independent network operators.
  • Improved network security through function isolation.


The future of mobile network functions is virtual

Mobile virtualization is becoming increasingly important as mobile networks are being asked to handle an ever-growing number of connected devices and new use cases.

NFV & Infrastructure Sharing. One of the main benefits of mobile virtualization is that it allows for multiple independent network operators to share a common infrastructure. This can help to reduce the costs and complexity of building and maintaining mobile networks, and can also help to improve coverage and capacity in areas where it would otherwise be difficult or expensive to deploy new infrastructure.

NFV & Security. Mobile virtualization also helps to improve the security of the network by isolating different functions and providing a secure environment for each virtual network. This makes it an ideal solution for enterprise customers who need to maintain high levels of security for their sensitive data.

Deployment flexibility. Mobile virtualization is supported by software-based virtualized network functions (VNFs), which can be run on standard servers and storage systems, rather than specialized hardware. This makes it easy to scale and adapt the network to changing requirements. Additionally, it also makes it possible to deploy mobile virtualization solutions in a variety of different environments, including on-premises, in the cloud, or at the edge of the network.

NFV & 5G customisations. It’s worth noting that mobile virtualization is a key technology in building the 5G network. 5G network standards are designed to support network slicing, which can create multiple isolated virtual networks on top of a common physical infrastructure. This makes it possible to create customized solutions for different types of users and use cases, such as providing high-bandwidth services for multimedia applications, or low-latency services for industrial automation and control.

NFV is the future, and the future is now. Mobile virtualization is a rapidly evolving technology with considerable potential to transform the way mobile networks are designed, deployed, and operated. In the coming years, we expect to see more and more operators turning to mobile virtualization to meet the growing demands on their networks and stay competitive in the fast-changing mobile landscape.

Orchestration

Implementing mobile virtualization can present a number of technical challenges, including the management and orchestration of virtualized network functions (VNFs) and ensuring network security. Management and orchestration of VNFs is a complex task, involving provisioning and configuring VNFs as well as ensuring their availability and performance. This is complicated by the fact that VNFs are software-based and can be deployed on a variety of hardware and virtualization platforms.

Security

As VNFs are software-based, they can be targeted by cyber-attacks just like any other type of software. Therefore, ensuring network security is vital when implementing mobile virtualization.

Additionally, virtualized networks may be vulnerable to new types of attacks that exploit the virtualization itself.

NFVO. One of the key solutions to these challenges is the use of NFV management and orchestration (MANO) systems, in which the NFV Orchestrator (NFVO) automates the provisioning, configuration, and management of VNFs and helps to ensure that they are highly available and perform well. The NFVO also plays an important role in orchestration, coordinating the actions of multiple VNFs to achieve a desired outcome.

Strong defences. Another key solution is the use of security solutions such as firewall, intrusion detection and prevention systems, secure VPN, and secure containers to protect the virtualized network, secure communication between virtualized functions, and protect virtualized infrastructure from unauthorized access.

Anomaly detection. Solutions based on artificial intelligence and machine learning can also be used to monitor and detect anomalies in the network, identify potential security threats, and take appropriate action to mitigate them.

Digis Squared recommends deploying the INOS Probe to undertake anomaly detection 24/7 and send these alerts to the CSP. Read more – Anomaly detection: using AI to identify, prioritise and resolve network issues.

Security strategy. In addition to these technical solutions, it’s also important to have a comprehensive security strategy in place to address any potential vulnerabilities and threats that may arise when implementing mobile virtualization. This can include implementing best practices for network design, conducting regular security assessments, and keeping systems and software up to date with the latest security patches and updates.

Skills & expertise. An often overlooked, yet important security consideration, is the need for skilled personnel who are well-versed in the technologies and best practices associated with mobile virtualization. As mobile virtualization is a complex technology that requires a deep understanding of network functions, security, and software development, it’s crucial to have a team of experts who can design, deploy, and maintain secure mobile virtualization solutions.

INOS & NFV

Drive testing can be used to validate the performance of virtualized network functions and ensure that they are providing the desired level of service. This can help to identify and troubleshoot any issues that may arise, such as poor performance or dropped connections. Drive testing can also be used to compare the performance of virtualized network functions with that of traditional, hardware-based network functions, in order to ensure that the virtualized functions are providing an equivalent or better level of service.
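A minimal sketch of that validation step: comparing a KPI collected against the virtualized function with the same KPI from the legacy hardware baseline. The sample values and the acceptance margin are hypothetical; a real campaign would use full drive-test datasets and a statistically sound comparison.

```python
from statistics import mean

# Hypothetical drive-test latency samples (ms) before/after virtualization.
hw_latency_ms  = [21.4, 20.8, 22.1, 21.0, 21.7]   # hardware-based baseline
vnf_latency_ms = [22.0, 21.6, 23.1, 21.9, 22.4]   # same route, VNF deployed

MARGIN = 1.10  # accept up to 10% regression -- an assumed acceptance policy
baseline, candidate = mean(hw_latency_ms), mean(vnf_latency_ms)
verdict = "PASS" if candidate <= baseline * MARGIN else "FAIL"
print(f"baseline={baseline:.1f} ms, VNF={candidate:.1f} ms -> {verdict}")
```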

Digis Squared’s AI-solution INOS is an essential tool in the implementation and ongoing optimization of NFV. It helps to validate and troubleshoot virtualized network functions and ensure that they are providing an equivalent or better level of service compared to traditional, hardware-based network functions. Additionally, drive testing provides key information about the environment in which the network is deployed that can be used to optimize the deployment of virtualized network functions.

Conclusion

Mobile virtualization is a powerful technology that has the capability to transform the way mobile networks are designed, deployed, and operated. Key benefits it enables include:

  • The creation of virtualized mobile networks, with different types of traffic isolated on the same physical network infrastructure.
  • The creation of different virtual networks for different types of services or different user groups.
  • The sharing of a common infrastructure by multiple independent network operators.
  • Improved network security through function isolation.

However, implementing mobile virtualization can present a number of technical challenges, including the management and orchestration of virtualized network functions (VNFs) and ensuring network security.

The use of NFV management and orchestration (MANO) systems, security solutions, AI/ML-based monitoring and anomaly-detection systems, and a comprehensive security strategy can help to mitigate these challenges.

Finally, NFV is a powerful, yet complex technology – it’s essential to work with an experienced team with deep expertise who can design, deploy, and maintain mobile virtualization solutions.

In conversation with Amr Ashraf, Digis Squared’s RAN and Software Solution Architect and Trainer.

If you or your team would like to discover more about our capabilities, please get in touch: use this link or email hello@DigisSquared.com

Find out more about INOS

INOS can be implemented as a public or private cloud, or on-premise solution, and is also available as a “Radio Testing as-a-service” model. Its extensive AI analysis and remote OTA capabilities ensure speedy and accurate assessment of all aspects of network testing: SSV, in-building and drive testing, network optimization and competitor benchmarking, across all vendors, network capabilities and technologies, including 5G, private networks and OpenRAN.

INOS is built with compute resources powered by Intel® Xeon® Scalable Processors. Digis Squared is a Partner within the Intel Network Builders ecosystem program, and a member of the Intel Partner Alliance.

See INOS in action at LEAP, Riyadh & MWC Barcelona

Digis Squared will be at LEAP in Riyadh at the start of February, as part of the UK Pavilion H4.G30, undertaking cloud-based INOS demos. Plus the team will be at MWC Barcelona at the end of February, with a full suite of all the INOS solutions and form factors on a dedicated exhibition stand Hall 7 B13.

Get in touch to arrange a dedicated time to meet: hello@DigisSquared.com

Discover more

Digis Squared ◦ Enabling smarter networks.

Anomaly detection: using AI to identify, prioritise and resolve network issues

Anomaly detection: efficiently identifying and resolving issues across mobile networks is vital for the success of any CSP or MNO. With 5G network deployment ramping up, and beyond that, work towards autonomous networks, the use of AI to handle anomalies is vital. In this article, Amr Ashraf, RAN and Software Solution Architect and Trainer, describes how CSPs can more efficiently identify and resolve the real issues hidden amongst all the noise.

This article is also available as a stand-alone paper, here.

Identifying the real issues hiding in all the noise

Data volumes are only going in one direction: up. More people are connecting online with more devices, more solutions are moving to the cloud, customer behaviour continues to change as people and information move across society, and more and more daily interactions take place online. This increased volume of data also provides far greater noise for cyber threats to hide within. Whilst networks continue to perform well and meet demand, efficient use of resources will become ever more important.

New tools and approaches are needed to be able to identify and resolve issues in all the noise.

Data from Statista, image credit World Economic Forum.

Anomaly detection and resolution: the role of AI

AI and ML provide the vital tools to handle this ever-growing volume of data. Soon – if not already – there will simply be too much data to analyse everything – but we can use AI to identify the unusual issues and outliers, and dig deeper into these anomalies.


The AI Hierarchy of Needs, Monica Rogati 2017
“How do we use AI and machine learning to get better at what we do?” M. Rogati placed anomaly detection as a vital transformational step: data needs a solid foundation before an organisation can be effective with AI and ML.

Digis Squared & anomaly detection

Digis Squared’s INOS AI tool is a vendor-agnostic, multi-network-technology solution delivering automated assessment, testing and optimisation of networks, across all technologies. The data collected by INOS is analysed in the cloud-based AI engine, and it is here where anomalies are detected, assessed and actioned.

INOS: Three data collection methods

“INOS collects data in the field by one of three methods – a traditional “suitcase” format for drive testing, or a highly mobile backpack which can be used in narrow streets, or walking through shopping malls for example,” explains Amr Ashraf, RAN and Software Solution Architect and Trainer. “And this paper focusses on the third method, the static INOS active probe, and how the data it collects is analysed and actioned.”


INOS active probe

“The active probe is a static box which is typically deployed inside a building – maybe a corporate HQ, a high-profile area within an airport, or a new business facility. Perhaps the location is selected because the CSP wants to proactively support a new VIP client, gather KPI data, or improve SLAs. Once deployed, the probe continuously monitors the networks, and data is streamed to the INOS platform in the cloud, where it is analysed by the INOS AI engine.”

“In the first step of the analysis, the data collected by the INOS active probe is used to identify QoE – the quality of experience – problems experienced by mobile devices in the probe. INOS will assess the data and carry out root-cause investigations to identify the fault that led to the problem. This is accomplished by gathering various performance measurements from several layers – the network, hardware, link, and operating system – which are then aggregated and delivered to INOS cloud.”

“To put it more specifically, the probe regularly initiates a test scenario while recording network, hardware, link, and OS measurements. Given that QoE problems can come from a variety of locations along the path, measuring the performance of each layer enables not only the detection of QoE problems, but also the determination of the problem’s root cause. A database on the INOS cloud receives aggregated metrics as soon as they are made available.”

Why use active probes?

“One of the best ways to understand how the end-user perceives the performance from beginning to end is using probes. They offer real-time and historical end-to-end call tracing, KPIs that continuously track network health and customer experience, and proactive alarming (QoE).”

“The two basic methods that can improve performance and enrich end-to-end analysis (from the user terminal to the core network) are active and passive probe measurements. Because they give detailed information that enables service operators to assess service quality across various transport technologies, probes play a significant role in managing the ever-increasing complexity of modern telecom networks.”

“By defining some parameters, the INOS probe uses AI models to detect anomalies in any field KPIs. For example, parameters such as deviation percentage and time windows can be used to calculate the deviation value. The example below for RSRP in LTE takes this approach, and identifies a deviation of 20%.”

INOS report of RSRP in LTE, identifying an anomaly with 20% deviation
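A minimal sketch of that deviation rule: the window length and the 20% trigger mirror the example above, while the sample values and everything else are illustrative, not INOS internals.

```python
from statistics import mean

def deviation_pct(window: list, current: float) -> float:
    """Percentage deviation of the current KPI from its windowed mean."""
    baseline = mean(window)
    return abs(current - baseline) / abs(baseline) * 100

# Hypothetical RSRP (dBm) over the configured time window, then a new sample:
window = [-95.0, -96.2, -94.8, -95.5, -95.1]
current = -114.7

dev = deviation_pct(window, current)
if dev >= 20:   # the 20% threshold from the example above
    print(f"RSRP anomaly: {dev:.0f}% deviation from the windowed mean")
```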

“Additionally, INOS is able to assess data from other channels, including WhatsApp, Telegram, Twitter and email, and can assess behaviour of those apps for network anomalies too.”

Telegram notification sent to a customer for a detected anomaly

“To achieve the goal of end-to-end quality metrics, these probe measurements should be connected with numerous node-to-node performance data as well as customer data,” explains Amr.

“An integral view from the customer, network, service, or terminal perspective is provided by Digis Squared’s in-house developed INOS and RAI tools. Together these two AI-tools can proactively manage the network by continuously monitoring end-to-end KPIs, created from various perspectives in the network. They can immediately identify any deteriorating trends and anomalies, for example, dropped-call ratio and set-up times.”

“All Digis Squared’s tools are vendor agnostic – networks are such a complex mix of solutions, that our tools simply have to be able to work with and analyse data from all vendors. And, of course, they also handle data from all network technologies, legacy 2G platforms through to 5G, they’re designed for all of this.”

Prioritization: counter-intuitive approaches are sometimes best

The costs and impacts associated with low- and medium-severity anomalies may be far greater than the total cost of high-severity issues: smaller issues are often harder to detect and take longer to identify and fix, so their compounded cost can be higher. AI can help ensure that such counter-intuitive approaches to assessing priority are applied consistently and without bias.
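A toy illustration of why: if priority is scored by expected cumulative cost rather than headline severity, long-lived, frequently recurring minor issues can rank above a one-off major fault. All figures below are hypothetical.

```python
# Illustrative priority scoring by expected cumulative cost, not severity alone.
anomalies = [
    {"id": "A1", "severity": "high",   "cost_per_day": 500.0, "expected_days_open": 1,  "occurrences": 1},
    {"id": "A2", "severity": "medium", "cost_per_day": 60.0,  "expected_days_open": 14, "occurrences": 5},
    {"id": "A3", "severity": "low",    "cost_per_day": 15.0,  "expected_days_open": 30, "occurrences": 20},
]

def compounded_cost(a):
    # Small issues that linger and recur can out-cost one-off severe faults.
    return a["cost_per_day"] * a["expected_days_open"] * a["occurrences"]

for a in sorted(anomalies, key=compounded_cost, reverse=True):
    print(a["id"], a["severity"], compounded_cost(a))
```

With these numbers, the low-severity issue tops the list at 9000 cost units against 500 for the high-severity fault, which is exactly the counter-intuitive ordering an unbiased, cost-based model surfaces.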

“A proactive approach can save money, in addition to ensuring high levels of customer satisfaction, by reducing the number of trouble tickets and so optimizing resource allocation. The pre-defined INOS reports surface service quality in a way that makes root-cause investigation possible across all network layers.”

“Today, the Digis Squared AI tools are able to continuously receive data from active probes in the network, identify anomalies and negative trends. They are also able to identify root cause, and propose recommended solutions to fix the issue. Working with our clients, in some installations we enable those recommended fixes to be automatically implemented, ensuring that frequently occurring minor issues are identified and resolved automatically. Of course, all issues are included in reporting. This approach ensures that staff do not need to intervene in the mundane, predictable issues, and can instead focus on assessing the recommendations the system makes for more complex issues.”
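A highly simplified sketch of that triage logic, with hypothetical root causes and fix names: routine, low-risk issues are auto-remediated, and everything else is escalated to an engineer.

```python
# Hypothetical catalogue of routine issues with known, low-risk fixes.
KNOWN_FIXES = {"cell_sleep": "restart_carrier", "param_drift": "restore_baseline_config"}

def triage(anomaly):
    """Auto-apply a fix only for minor issues with a recognised root cause."""
    if anomaly["severity"] == "minor" and anomaly["root_cause"] in KNOWN_FIXES:
        return ("auto_apply", KNOWN_FIXES[anomaly["root_cause"]])
    return ("escalate_to_engineer", None)

for a in [{"id": 1, "severity": "minor", "root_cause": "cell_sleep"},
          {"id": 2, "severity": "major", "root_cause": "fibre_cut"}]:
    print(a["id"], triage(a))
```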

Anomaly handling and autonomous networks

The use of AI in anomaly detection and, critically, resolution has great value for legacy technologies, and even greater value for new technologies and transformations. It is a vital step in adopting network function virtualization (NFV), cloud-native computing (CNC) and software-defined networking (SDN) technologies, and it provides important preparation for CSPs as they ready their organizations for operations based on autonomous networks.

“The Digis Squared INOS active probes are a vital tool in providing high-quality data on background network behaviour and performance,” shared Amr. “Using this, our AI tools are able to continuously assess the streamed data and identify anomalies, assess their root cause, and then propose and implement recommended actions. AI solutions like this will soon be the only way in which CSPs can efficiently identify and resolve the real issues hidden amongst all the vast quantities of noise.”

Find out more about INOS

INOS can be implemented as a public or private cloud, or on-premises solution, and is also available as a “Radio Testing as-a-service” model. Its extensive AI analysis and remote OTA capabilities ensure speedy and accurate assessment of all aspects of network testing: SSV, in-building and drive testing, network optimization and competitor benchmarking, across all vendors, network capabilities and technologies, including 5G, private networks and OpenRAN.

INOS is built with compute resources powered by Intel® Xeon® Scalable Processors. Digis Squared is a Partner within the Intel Network Builders ecosystem program, and a member of the Intel Partner Alliance.

In conversation with Amr Ashraf, Digis Squared’s RAN and Software Solution Architect and Trainer.

If you or your team would like to discover more about our capabilities, please get in touch: use this link or email hello@DigisSquared.com

Discover more

Digis Squared ◦ Enabling smarter networks.

AI enhancement of capacity management in mobile networks

The optimisation of capacity management in mobile networks is vital: too little capacity constrains revenue opportunities and impacts customer experience, while idle capacity risks high opex and under-performing investment in assets. Capacity management has always used mathematical modelling techniques to find this sweet spot and balance opportunities against costs. In the past, such predictions were based on historical data, but AI enhancement of capacity management changes that. The deployment of network virtualization, 5G and network slicing requires cognitive planning; it is vital that capacity planning models can assess a step-change in the volume of data points in real time or near-real time.
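A toy model of that sweet spot, with entirely hypothetical coefficients: total cost combines the opex of provisioned capacity with the revenue lost when demand exceeds it, and the cheapest choice sits between under- and over-provisioning.

```python
# Toy capacity "sweet spot" model; all coefficients and demand figures are hypothetical.
def total_cost(capacity, demand, opex_per_unit=1.0, lost_revenue_per_unit=2.0):
    shortfall = max(0.0, demand - capacity)
    return capacity * opex_per_unit + shortfall * lost_revenue_per_unit

demand_scenarios = [80.0, 100.0, 120.0]   # equally likely demand outcomes

def expected_cost(capacity):
    return sum(total_cost(capacity, d) for d in demand_scenarios) / len(demand_scenarios)

candidates = range(60, 161, 10)
for c in candidates:
    print(c, round(expected_cost(c), 1))
print("cheapest capacity:", min(candidates, key=expected_cost))
```

Under these assumptions the minimum expected cost lands at a mid-range capacity: under-provisioning is penalised by lost revenue, over-provisioning by idle opex.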

RAN Automation Architect and Data Scientist at Digis Squared, Obeid Allah Ali, describes how AI, automation and advanced analytics are being deployed to gain even greater network capacity planning efficiencies.

What exactly is machine learning, and why is it important?

Machine Learning (ML) is an application of artificial intelligence (AI) that enables computer programs to learn and improve over time through their interactions with data.

It automates analytics by making predictions with algorithms that learn iteratively from the data they process.

This self-learning approach, in contrast to rule-based programming, has found widespread use in a wide variety of contexts.

So, whether it’s making life easier with navigation advice based on predicted traffic behaviour, assessing large amounts of medical data to identify new patterns and links, or warning you about market volatility so you can adjust financial decisions, AI and ML technology has permeated many aspects of our daily lives.

The power of prediction machines

In simplified terms, prediction is the process of filling in the missing information. It takes the information you have, often called ‘data,’ and uses it to generate information you don’t have. Most machine learning algorithms are mathematical models that predict outcomes.
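A prediction machine in miniature: fit a simple linear model to past observations (hypothetical hourly cell traffic figures) and use it to fill in a value we have not yet observed.

```python
import numpy as np

# Hypothetical observations: hour of day vs. traffic carried by a cell (GB).
hours   = np.array([0, 1, 2, 3, 4, 5, 6, 7])
traffic = np.array([2.1, 2.4, 2.9, 3.1, 3.6, 3.9, 4.4, 4.7])

# Fit a linear model to the data we have...
slope, intercept = np.polyfit(hours, traffic, deg=1)

# ...and generate the information we don't have: the next hour's traffic.
next_hour = 8
print(f"predicted traffic at hour {next_hour}: {slope * next_hour + intercept:.2f} GB")
```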

How will machine learning impact businesses?

There are two major ways that prediction machines will alter the way businesses operate.

  1. At a low level, a prediction machine can relieve humans of predictive tasks, resulting in cost savings and, for example, removing emotional bias.
  2. A prediction machine could become so accurate and dependable that it alters how a company operates.

How big is the growth in mobile connectivity?

Above: from GSMA “The State of Mobile Internet Connectivity Report 2021” [3], their most recent report

Some further statistics on the growth in mobile data, from the same GSMA report [3]:

  • global data per user reaching more than 6 GB per month – double the data usage for 2018
  • 94% of the world’s population covered by a mobile broadband network
  • by the end of 2020, 51% of the world’s population – just over 4 billion people – were using mobile internet, an increase of 225 million since the end of 2019

And from the GSMA Mobile Economy 2021 report [4]:

  • By the end of 2025, 5G will account for just over a fifth of total mobile connections.

Capacity and performance of mobile networks

The rapid growth of mobile traffic places enormous strain on mobile networks’ ability to provide the necessary capacity and performance.

To meet demand, communications services providers (CSPs), mobile network operators and their suppliers need a range of options, including more spectrum, new technology, small cells, and traffic offloading to alternate access networks.

To meet commercial business objectives, mobile network operators are under pressure to maximize the utilization of existing resources while avoiding capacity bottlenecks that reduce revenues and negatively influence end-user experience.

Additionally, network operators have to assess risk, contractual SLAs (especially in the context of MVNOs that utilise their network, and corporate contracts), the total cost of ownership, and the impact on customer experience, perception and brand.

Radio Access Network costs are estimated to be 20% of the opex cost of running a network [1]. And opex invested in network quality correlates strongly with increased ARPU and reduced churn: when network quality is highest, service providers benefit from a higher average ARPU (+31%) and lower average churn (-27%) [2].

Finding the perfect balance of capacity, quality, efficiency and cost – not too much, not too little – is complex and dynamic.

Capacity forecasting for mobile networks

The Digis Squared team have developed machine learning algorithms and decoders that can, based on network activity, decode how User Traffic Profiles are changing. With the deployment of 5G and network slicing techniques, modelling network usage patterns and customer behaviour, and predicting future demand, immediately becomes far more complex – the only way to model this successfully will be with AI.

Detecting a problem

We detect anomalies in cells in the existing network, as well as highly utilized cells, using machine learning algorithms driven by several reported KPIs. We use this information to distinguish what requires immediate attention from what should be monitored for proactive action. Using multivariable modelling techniques – that is, assessing multiple KPIs across each cell – gives us a highly nuanced model, optimising all available capacity.
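As a rough illustration of multivariable anomaly detection, here using scikit-learn’s IsolationForest as a stand-in for the proprietary models, on hypothetical per-cell KPI vectors:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per cell, one column per KPI: utilisation %, drop-call rate %,
# average throughput Mbps. Figures are hypothetical; the real system
# assesses many more KPIs per cell.
kpis = np.array([
    [62, 0.4, 38],
    [58, 0.5, 41],
    [65, 0.3, 36],
    [61, 0.4, 40],
    [97, 2.8,  9],   # congested cell: high utilisation, drops, low throughput
    [60, 0.5, 39],
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(kpis)        # -1 marks an outlier cell
for cell_id, label in enumerate(labels):
    if label == -1:
        print(f"cell {cell_id} flagged for attention: {kpis[cell_id]}")
```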

Forecasting

In this competitive climate, operators must be able to estimate the required traffic capacity for their mobile networks, so that they invest in extensions when they are truly needed and deploy the most cost-effective solution, while maximizing investment and maintaining good network quality.
In this phase of the development of the model, we use predictive models to discover future troublesome cells, and let those predictions guide our approach and actions.
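One simple way to surface future troublesome cells is a per-cell trend forecast. A minimal sketch with hypothetical utilisation histories and thresholds (the production models are considerably more sophisticated):

```python
import numpy as np

# Per-cell weekly utilisation history (%), hypothetical figures. Fit a
# linear trend per cell and flag any cell projected to exceed 85%
# utilisation within the forecast horizon.
history = {
    "cell_A": [55, 57, 60, 62, 66, 69],
    "cell_B": [40, 41, 40, 42, 41, 43],
}
THRESHOLD, HORIZON_WEEKS = 85.0, 8

for cell, series in history.items():
    weeks = np.arange(len(series))
    slope, intercept = np.polyfit(weeks, series, deg=1)
    forecast = slope * (len(series) - 1 + HORIZON_WEEKS) + intercept
    if forecast >= THRESHOLD:
        print(f"{cell}: projected {forecast:.0f}% utilisation -> plan expansion")
    else:
        print(f"{cell}: projected {forecast:.0f}% utilisation -> monitor")
```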

AI enhancement of capacity management: what’s next?

Today, we use an open-loop control system to apply our AI methods. However, as predictive model accuracy improves, we anticipate transitioning in the near future to a fully automated Self-Organizing Network (SON) system, enabling closed-loop network management with self-planning, self-configuration, self-optimization and self-healing.

In conversation with Obeid Allah Ali, RAN Automation Architect and Data Scientist at Digis Squared.

If you or your team would like to discover more about our capabilities, please get in touch: use this link or email sales@DigisSquared.com .

Discover more

Digis Squared, independent telecoms expertise.

References