GCP – Deutsche Telekom designs the telco of tomorrow with BigQuery
Imagine you unlocked your phone and all you saw was a blank glowing screen in a happy shade of pink, or a family photo, and nothing else.
No apps, no windows, no pop-ups. You simply tap the screen and then speak a request or type something in, and almost instantly you’ve drafted an email or text for easy review, booked a restaurant reservation, or loaded up your favorite song or game, all from the same minimal interface.
This is a future Deutsche Telekom Germany believes is right around the corner. At this year’s Mobile World Congress, we unveiled just such a device with a simple AI-powered operating system that eliminated the need for apps (at least on the surface) and delivered experiences directly to customers.
It’s part of our decades-long mission to bring a fuller, richer digital life to more people.
While the app-less phone may not be here quite yet, we’ve been laying the foundation for it, and other cutting-edge digital services, for years — through a long-term investment in the modernization of our data platform.
Over the decades that Deutsche Telekom has been in operation, we’ve gathered a vast amount of valuable data, and accrued just as much legacy infrastructure along the way. At the rate at which technology is evolving, almost anything you build today becomes legacy tomorrow.
We knew that the only way to make the most of our historic data would be to move away from our historic infrastructure. Modernization, rather than migration, was the best path forward. Not that it would be the easy path: It’s meant digitizing everything from how our users interact to how we process our data and manage our backend systems.
Modernizing infrastructure to protect — and act on — data
As a telco organization, protecting our existing data while we scale is paramount. This is especially important when it comes to our users’ personally identifiable information, or PII, which can include call records, personal texts, and location information. Security was and continues to be a top priority for us because it’s important to our customers, and we want to ensure we’re in compliance with all applicable data security standards.
With security and consolidation top of mind, we saw an opportunity to build what we call our One Data Ecosystem (ODE), an AI-integrated, interconnected data platform for our German entity, to unify our data management and processing.
It was a sprawling undertaking. Our on-prem data operations were spread across 40 to 50 different systems, which created confusion about where to find specific information and often resulted in redundant data sets, which in turn exposed new security threat vectors. These operations were also slow. It could take hours or days to sync the systems and uncover useful insights that teams could use to guide business decisions or deliver on projects that require access to data.
Simply adding new hardware on top of these existing solutions wouldn’t solve the problem; we needed to start fresh.
With a solid foundation, we could process data faster, implement future-facing AI solutions, and get the most value out of a multimodal, multi-cloud data set. We knew that the foundation would also require sovereign controls over the numerous services we rely on to manage network traffic data. These needs led us to Google Cloud.
Growing exponentially in the first step of a multi-stage deployment
We were able to collaborate with the Google Cloud team to not only outline a proof of concept for the platform but also work closely with its product and engineering teams to provide feedback on products or features that we knew we’d need to realize our vision for ODE.
We designed our new system implementation to be both fast and secure. ODE serves as a single source of truth while also allowing us to decentralize data usage patterns and abstract them from our more than thirty thousand active data users, so we can act on data without infringing on our users’ privacy. Our team decided on a hub-and-spoke model based on BigQuery, Cloud Storage, and BigLake to build a data lakehouse architecture using Apache Iceberg.
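To make the hub-and-spoke pattern concrete, here is a minimal sketch of how an Apache Iceberg table sitting in Cloud Storage can be registered as a BigLake external table and queried from BigQuery. The project, dataset, connection, and bucket names are hypothetical, and this illustrates the general technique rather than our exact production setup.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-telco-project")  # hypothetical project ID

# Register an Iceberg table stored in Cloud Storage as a BigLake external table.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS `ode_hub.network_events`
WITH CONNECTION `eu.biglake-conn`  -- hypothetical BigLake connection
OPTIONS (
  format = 'ICEBERG',
  uris = ['gs://ode-lakehouse/network_events/metadata/v1.metadata.json']
)
"""
client.query(ddl).result()

# Spoke teams can then query the hub table without copying the data.
query = """
SELECT cell_id, COUNT(*) AS events
FROM `ode_hub.network_events`
WHERE event_date = CURRENT_DATE()
GROUP BY cell_id
ORDER BY events DESC
LIMIT 10
"""
for row in client.query(query).result():
    print(row.cell_id, row.events)
```

Because the Iceberg data stays in Cloud Storage, the same files remain usable by other engines while BigQuery provides the query layer for the spokes.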
The hub-and-spoke model also has granular data governance controls, using Dataplex, to limit data access to certain teams. For example, our business teams have access to specific data sets that are different from those that AI developers or data scientists use. We were able to architect these fine-grained access controls for each Google Cloud solution we’ve implemented in record time thanks to specialized services and support from the Google team.
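As a simplified illustration of team-scoped access (the dataset IDs and group addresses below are made up, and in our environment such policies are defined and monitored centrally through Dataplex rather than scripted by hand), a sketch like this grants different groups read access to different BigQuery datasets:

```python
from google.cloud import bigquery

client = bigquery.Client()

def grant_read(dataset_id: str, group_email: str) -> None:
    """Add a read-only access entry for a Google group on a dataset."""
    dataset = client.get_dataset(dataset_id)
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",
            entity_type="groupByEmail",
            entity_id=group_email,
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])

# Hypothetical datasets and groups: business analysts and data scientists
# each see only the slice of the lakehouse relevant to their work.
grant_read("ode_business_reporting", "biz-analytics@example.com")
grant_read("ode_feature_store", "data-science@example.com")
```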
Because of local regulations and data retention policies, we keep our data ingestion capabilities on-premises, and we pseudonymize our customer data before it ever leaves our systems. That pseudonymized data is then pushed to Cloud Storage in Apache Iceberg format and made available in BigQuery and BigLake.
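A minimal sketch of what such an on-premises pseudonymization step could look like, assuming illustrative field names and a simple keyed hash rather than our actual production scheme:

```python
import hashlib
import hmac
import json
import os

# The key never leaves the on-premises environment (assumption for this sketch).
PSEUDONYM_KEY = os.environ["PSEUDONYM_KEY"].encode()

def pseudonymize(record: dict, pii_fields=("msisdn", "imsi", "customer_id")) -> dict:
    """Replace direct identifiers with keyed, irreversible tokens."""
    out = dict(record)
    for field in pii_fields:
        if out.get(field) is not None:
            token = hmac.new(PSEUDONYM_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = token.hexdigest()
    return out

raw = {"msisdn": "+49151XXXXXXX", "cell_id": "262-01-4711", "bytes_up": 1832}
safe = pseudonymize(raw)

# Only `safe` records are written to the staging files that land in Cloud Storage
# as Iceberg data and are then registered in BigQuery and BigLake.
print(json.dumps(safe, indent=2))
```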
Our data and development teams are now able to move ten times faster than before by processing the data in our lakehouse using Vertex AI. Generative AI, delivered through Gemini models running in Vertex AI pipelines, is at the core of what we’re building. As we continue to improve, we’re aiming to get 20 or 30 times faster than was possible with our old systems. Such an efficiency boost will save money and allow us to create solutions more quickly and serve customers faster.
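As one example of the kind of step that might run inside such a Vertex AI pipeline, the sketch below pulls an aggregate from a hypothetical lakehouse table with BigQuery and asks a Gemini model for a plain-language summary. The project, region, table, and prompt are all assumptions for illustration.

```python
import vertexai
from vertexai.generative_models import GenerativeModel
from google.cloud import bigquery

vertexai.init(project="my-telco-project", location="europe-west3")  # hypothetical

# Aggregate a KPI from the lakehouse (illustrative table and columns).
bq = bigquery.Client()
rows = bq.query("""
    SELECT region, AVG(latency_ms) AS avg_latency
    FROM `ode_lakehouse.network_kpis`
    WHERE snapshot_date = CURRENT_DATE()
    GROUP BY region
""").result()
kpi_text = "\n".join(f"{r.region}: {r.avg_latency:.1f} ms" for r in rows)

# Ask Gemini to turn the numbers into a readable briefing.
model = GenerativeModel("gemini-1.5-pro")
summary = model.generate_content(
    "Summarize today's network latency by region for a non-technical "
    "operations briefing:\n" + kpi_text
)
print(summary.text)
```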
In our next phase, we’ll also be integrating solutions from the Looker business intelligence suite. This will give our teams more robust insights and should speed up decision making.
A digital transformation pioneered by generative AI
Since we were focused on modernizing, we wanted to be sure we could actually handle the next big technologies to come along, in particular the fast-emerging trends around AI.
We decided to lean on the Gemini 1.5 Pro model not just as a feature in future offerings but also as a means to build those offerings: Gemini has been core to executing our migration, as well as helping us build some of our new capabilities.
Since Gemini 1.5 Pro has a substantial 1-million-token context window, we could feed our whole code base into it. We then converted the code into natural language and used that natural language to train AI models to improve their output over time. This process included virtual agents that could delegate work to specific models based on the complexity of the code, a vital feature since so much of our infrastructure was interconnected.
These agents helped detangle the wires and shift our resources to the cloud in a cleaner, easy-to-understand package. This is an ongoing process, but we’ve already eliminated excess resources and estimate we’ll be able to remove up to 40% of our old digital infrastructure.
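The sketch below illustrates the routing idea in miniature: simple legacy files go to a lighter Gemini model, while large, interconnected ones go to Gemini 1.5 Pro with its long context window. The thresholds, paths, and model choices are illustrative stand-ins, not our production agents.

```python
from pathlib import Path

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-telco-project", location="europe-west3")  # hypothetical

LIGHT = GenerativeModel("gemini-1.5-flash")  # cheaper model for simple modules
HEAVY = GenerativeModel("gemini-1.5-pro")    # long-context model for complex ones

def describe(path: Path) -> str:
    """Return a plain-language description of a legacy source file."""
    source = path.read_text(errors="ignore")
    # Crude complexity proxy: very long files or many cross-module imports
    # are routed to the larger model (thresholds are arbitrary examples).
    complex_file = len(source.splitlines()) > 2000 or source.count("import ") > 50
    model = HEAVY if complex_file else LIGHT
    prompt = (
        "Explain in plain language what this legacy module does, which systems "
        "it talks to, and what data it touches:\n\n" + source
    )
    return model.generate_content(prompt).text

# Hypothetical legacy tree; the generated descriptions feed later analysis.
for file in Path("legacy_src").rglob("*.java"):
    print(f"## {file}\n{describe(file)}\n")
```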
Modernizing our team skills for the AI era
A modernization is a great time to rethink your organization’s hard skills as much as its hardware. That’s why we’ve made it a point to support our data redesign with a focus on upskilling our workforce.
Every two weeks we organize internal “Breakfast Sessions” for 400 to 500 employees, where we delve into the latest best practices for data lifecycle management or explore the code in our One Data Ecosystem. By referring to live code, our engineers can see what high-quality code looks like in our Google Cloud environment and better understand how to most effectively use the ODE to help us grow. We’ve also formally trained more than 300 employees through Google Cloud training partners, and ongoing three-month Google Academy programs, to allow employees to upskill and take advantage of professional certifications.
Leading for the future
Our most important end result for any project is our customer experience. With our data systems consolidated, we now have real-time information for our teams to respond to customers. Beyond fielding simple FAQs, our teams can clearly see information such as device configurations and customer histories to provide a more tailored user experience.
Because the data collection, collation, and analysis steps are now tightly coupled, the time we used to invest in each of those steps separately has shrunk dramatically: we are doing things three to four times as quickly as before.
As for exciting new projects and innovations like our app-less smartphone, these are only possible because of the insights, efficiency, and capabilities the One Data Ecosystem provides.
With our unified data foundation in place, we’re now moving into our second phase of evolution and mapping out a central database of our products. With a clearer understanding of our users’ technical inventory — such as which devices they’re using, cables they’re running, or lines they’re managing — we can then roll into our third phase. This will focus on designing new products shaped by our data and our customers’ preferences.
This includes our “idea to insight” journey, which will use Pub/Sub and Dataflow to provide us insights from data in real time as it arrives in our ecosystem. We also plan to explore more of the business potential of our network traffic data by combining both real-time and historical data in BigQuery to inform strategic decisions. And as a more external project, we’re looking at ways to monetize our data. With Google Cloud, it’s easier than ever to label and tag data as it’s ingested; we can view those datasets in Dataplex and share them through Analytics Hub with potential enterprise customers.
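As a rough sketch of the streaming leg of that “idea to insight” journey, an Apache Beam pipeline running on Dataflow could read events from a Pub/Sub subscription and stream them into BigQuery, where they can be joined with historical data. The subscription, table, and schema below are assumptions for illustration only.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

# Runner, project, and region flags would be added when deploying to Dataflow.
options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-telco-project/subscriptions/ode-events-sub")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-telco-project:ode_realtime.events",
            schema="event_id:STRING,event_type:STRING,event_ts:TIMESTAMP,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

Once the events land in BigQuery, the same SQL that runs over historical tables can join them with the fresh stream to support near-real-time decisions.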
While these capabilities may seem far out, they remain core to what has always driven Deutsche Telekom Germany: delivering exceptional service and services to our customers. Having embarked on this data journey, we are now ready to leap over historic technology gaps and, in the process, set a new stage for how telecommunications companies use data to shape a truly data-centric future that gives us more insight into our organization and our customers’ preferences.