GCP – Migrating from Cassandra to Bigtable at Latin America’s largest streaming service
In the world of technology and cloud computing, the news comes fast and furious. Blink and you’ll miss it. As we wind down the final days of 2023, here’s a look at the top stories of the year that we published on the Google Cloud blog — the product launches, research findings, and initiatives that resonated most with you.
The Google Cloud community started the year in a contemplative mood, thirsty for tools to give them deeper insights from their data, and more holistic views of their environments. The top stories of the month were:
- Log Analytics in Cloud Logging is now GA
- Manage Kubernetes configuration at scale using the new GitOps observability dashboard
- Better together: Looker connector for Looker Studio now generally available
- Introducing Security Command Center’s project-level, pay-as-you-go options
- CISO Survival Guide: Vital questions to help guide transformation success
In February, readers looked ahead. We unveiled a new pricing approach that is decidedly forward-looking, made inroads on futuristic immersive technology, and planted seeds with the telecommunications community at Mobile World Congress. Readers were also really excited for the global roll-out of AlloyDB. The top stories for the month were:
- Introducing new cloud services and pricing for ultimate flexibility
- Extending reality: Immersive Stream for XR is now Generally Available
- Reimagining Radio Access Networks with Google Cloud
- Introducing Telecom Network Automation: Unlock 5G cloud-native automation with Google Cloud, and Introducing Telecom Data Fabric: Unlock the value of your data
- AlloyDB for PostgreSQL goes global with sixteen new regions
If there has been one overarching theme to Google Cloud for 2023, it’s been generative AI, which made its first real showing this month, with the launch of support for the technology in Vertex AI, alongside an avalanche of news from our first-ever Data Cloud & AI Summit.
- Google Cloud brings generative AI to developers, businesses, and governments
- New BigQuery editions: flexibility and predictability for your data cloud
- Build new generative AI powered search & conversational experiences with Gen App Builder
- Introducing Looker Modeler: a single source of truth for BI metrics
- Run AlloyDB anywhere – in your data center, your laptop, or in any cloud
We kicked the AI story up a notch in April with segment-specific news, and a glimpse into our AI-optimized infrastructure. We also made it easier for customers to interact with Google Cloud professional services, and introduced gamified training!
- A responsible path to generative AI in healthcare
- Supercharging security with generative AI
- Bringing our world-class expertise together under Google Cloud Consulting
- Google’s Cloud TPU v4 provides exaFLOPS-scale ML with industry-leading efficiency
- Boost your cloud skills — play The Arcade with Google Cloud to earn points and prizes
Google I/O is usually a consumer-focused show, but Google Cloud’s foundational role in enabling generative AI let our news take center stage, including the launch of the Duet AI brand. With that as the backdrop, it’s no surprise readers were also excited about new multicloud connectivity capabilities.
- Introducing Duet AI for Google Cloud – an AI-powered collaborator
- Google Cloud advances generative AI at I/O: new foundation models, embeddings, and tuning tools in Vertex AI
- At Google I/O, generative AI gets to work
- Announcing A3 supercomputers with NVIDIA H100 GPUs, purpose-built for AI
- Announcing Cross-Cloud Interconnect: seamless connectivity to all your clouds
Three short months after announcing support for generative AI for Vertex AI, we made good by bringing it to general availability, and expanding it to search experiences. We also helped thread the needle between generative AI and databases with vector support, and shared how generative AI is helping to evolve the threat landscape.
- Generative AI support on Vertex AI is now generally available
- Helping businesses with generative AI
- Improving search experiences with Enterprise Search on Gen App Builder
- Building AI-powered apps on Google Cloud databases using pgvector, LLMs and LangChain, and Announcing vector support in PostgreSQL services to power AI-enabled applications
- Expanding our Security AI ecosystem at Security Summit 2023
July is usually a relatively quiet month, as people head out on vacations, but not this year — the launch of even more models for our AI builder tools proved just as enticing as a day at the beach. We also shook up the MySQL community with a bold new Cloud SQL Enterprise Plus offering, and introduced a new, visual way for developers to connect their applications.
- Google Cloud expands availability of enterprise-ready generative AI
- Introducing Application Integration: Connect your applications visually, without code
- Conversational AI on Gen App Builder unlocks generative AI-powered chatbots and virtual agents
- Introducing Cloud SQL Enterprise Plus: New edition delivers up to 3x MySQL performance
If you think that generative AI news dominated Google Cloud Next this month, you’re only half right. It was certainly a thread in all our leading announcements, but there was also a lot of excitement for more traditional Google Cloud specialties around data analytics and Kubernetes.
- Announcing BigQuery Studio — a collaborative analytics workspace to accelerate data-to-AI workflows
- Vertex AI extends enterprise-ready generative AI development with new models, tooling
- Expanding Duet AI, an AI-powered collaborator, across Google Cloud
- Expanding our AI-optimized infrastructure portfolio: Introducing Cloud TPU v5e and announcing A3 GA
- Introducing the next evolution of container platforms
Enough with the generative AI news already 🙂 In September, readers remembered that it’s a big world out there, and that Google Cloud is a cloud provider with global coverage. From dashboards to databases, from subsea cables to blockchain, this month’s most popular stories showcased the breadth and depth of Google Cloud’s offerings.
- Introducing Infrastructure Manager: Provision Google Cloud resources with Terraform
- Meet Nuvem, a cable to connect Portugal, Bermuda, and the U.S.
- Enhancing Google Cloud’s blockchain data offering with 11 new chains in BigQuery
- BigQuery’s user-friendly SQL: Elevating analytics, data quality, and security
- Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Container Management
Around here, we like to talk about having a healthy disregard for the impossible. Like mitigating the largest-ever DDoS attack — again. Or rethinking Ethernet. Or halving the cost of Spanner compared to the competition. Whatevs, no big deal.
- Google mitigated the largest DDoS attack to date, peaking above 398 million rps
- Google opens Falcon, a reliable low-latency hardware transport, to the ecosystem
- Shared fate: Protecting customers with generative AI indemnification
- Cloud Spanner is now half the cost of Amazon DynamoDB, and with strong consistency and single-digit ms latency
- 2023 State of DevOps Report: Culture is everything
Security researchers never sleep, and neither do our systems engineers, uncovering significant new vulnerabilities, and beating records for the world’s largest distributed training job for large language models. Oh, and Memorystore for Redis got an upgrade that delivered 60X more throughput.
- Google researchers discover ‘Reptar,’ a new CPU vulnerability
- Google Cloud demonstrates the world’s largest distributed training job for large language models across 50,000+ TPU v5e chips
- Vertex AI Search adds new generative AI capabilities and enterprise-ready features
- Memorystore for Redis Cluster is GA and provides up to 60 times more throughput and microseconds latency
- GKE Enterprise, the next evolution of container platforms, is now generally available
Halfway through the month, we’re pretty sure we know what the top stories will have been: anything related to Gemini, Google’s latest and most capable AI model. A special shout-out, too, to Google Cloud’s Learning Content team, whose post about free generative AI trainings shot up to become the top-viewed post of the entire year — in a matter of days. It seems you are all as excited as we are about the Gemini era!
- 12 days of no-cost training to learn generative AI this December
- Imagen 2 on Vertex AI is now generally available
- Gemini, Google’s most capable model, is now available on Vertex AI
- MedLM: generative AI fine-tuned for the healthcare industry
- Announcing General Availability of Duet AI for Developers and Duet AI in Security Operations
And that’s a wrap! On behalf of the Google Cloud blog team, we wish you a peaceful and happy holiday season, and we look forward to seeing you here on these pages in 2024.
European organizations continue to embrace cloud-based solutions to support their ongoing digital transformation. But the adoption of breakthrough technologies like generative AI and data analytics to drive new innovations will only accelerate if organizations are confident that when they use cloud services their data will remain secure, private, and under their control. This is why the ability to achieve and maintain digital sovereignty is so critical, and why Google Cloud has brought to market the industry’s broadest portfolio of sovereign cloud solutions as part of our “Cloud. On Europe’s Terms” initiative.
We are excited to share some recent examples of how the range of sovereign cloud solutions from Google Cloud and our partners (like T-Systems in Germany and S3NS in France) has been a true business accelerator for numerous innovative European organizations over the course of this year. The ability to address their specific and unique needs for additional cloud controls has enabled them to move new workloads to the cloud, address regulatory mandates, gain efficiencies and scale, and boost customer confidence and trust.
Groupe AGPM is a French insurance company working with S3NS on their cloud transformation. “The cloud creates value for AGPM by accelerating our operational processes and we expect the project to drive a significant improvement in our performance,” said Christian Gros, Director of Information Systems, AGPM. “As tangible benefits of AGPM’s ongoing move-to-cloud strategy, the first use cases will be available on the Local Controls by S3NS architecture from December 2023,” added Guillaume Hédoux, head of AGPM’s Architecture and IT Transformation division.
ENIO develops and sells software for electromobility infrastructure and control and billing of services with electronic devices throughout Europe. The company used the application modernization capabilities of T-Systems Sovereign Cloud to containerize applications, improving flexibility, scalability, and cost, while helping to ensure GDPR-compliant data processing and storage.
Birdz is a leader in environmental IoT solutions working with S3NS to host their cloud-based applications. “At Birdz, we help our customers address their critical resource-management challenges through our comprehensive expertise in IoT sensors, IoT connectivity and data processing. Working with S3NS was therefore a natural choice. This partnership aligns with our strategy of supporting local digital transformation through highly secure, cloud-based services,” said Ludovic Millier, CIO, Birdz.
MindDoc offers simplified access to mental health care via an online telehealth service. Using the T-Systems Sovereign Cloud, MindDoc is able to meet German regulatory requirements regarding personal data while operating a digital health service. “With T-Systems Sovereign Cloud powered by Google Cloud, many hurdles with respect to sensitive customer data are reduced and this significantly increases the trust of our patients and doctors,” said Raphael Lucchini, CTO, MindDoc.
Club Med is a world-renowned hospitality company that operates nearly 70 premium resorts worldwide and is working closely with S3NS as they use data to improve their business. “In an age where data proliferation and rapid advancements in artificial intelligence are the norm, Club Med is dedicated to leveraging this data in real-time to elevate the experiences of both our customers and employees. We deeply understand the importance of personal information and are committed to its meticulous protection. Our partnership with S3NS, born from the collaboration between Thales and Google Cloud, positions us to manage data in a way that is not only agile and ethical but also fully compliant with GDPR standards,” said Quentin Briard, VP Digital/Marketing, ClubMed.
Oviva offers online care to individuals living with weight-related conditions. To meet Europe’s digital healthcare regulations, Oviva’s architecture was previously divided across various on-premises data centers. By using the T-Systems Sovereign Cloud, Oviva could unify its infrastructure in Google Cloud while complying with German regulations, improving not only overall efficiency but also reliability, security, and its ability to innovate. “With the move to the Sovereign Cloud, the big advantage is that we now have our entire environment in the same setup. That means we’re much more productive as a team,” said Demian Jäger, DevOps Engineer, Oviva.
Xayn is a European startup building secure, sovereign, and sustainable AI systems that leverage the power of language models at scale. “Working on cutting-edge AI technology, we need a strong, secure, and reliable partner that can help us protect our clients’ sensitive data. That’s why we decided to use the Sovereign Cloud of Google and T-Systems for our enterprise AI solution,” said Dr. Leif-Nissen Lundbæk, CEO, Xayn.
Lengoo is a European language tech company that enables enterprises to build fine-tuned language models using their proprietary data. T-Systems Sovereign Cloud powered by Google Cloud helps reduce security concerns about the safety and protection of Lengoo’s customer data: “The T-Systems Sovereign Cloud powered by Google Cloud creates more data sovereignty and thus more trust for Lengoo’s customers in our language-based AI solutions,” said Alexander Gigga, CMO, Lengoo.
For these organizations, the business benefits of adopting sovereign cloud solutions are clear. But we know that designing a digital sovereignty strategy that balances control and innovation can be challenging, and we have multiple resources to help.
We recently announced a new strategic collaboration with Accenture to help customers address their digital sovereignty needs in the European Economic Area and the United Kingdom. With a deep understanding of local requirements, Accenture will help clients build, enable and manage sovereign solutions across the region. Accenture’s Cloud First European Lead, Valerio Romano, said: “Our collaboration with Google helps our clients cut through the clutter of the nascent sovereign cloud market and unlock new sources of value in the digital realm.”
Google Cloud’s Digital Sovereignty Explorer is an online, interactive tool that takes individuals through a guided series of questions about their organizations’ digital sovereignty requirements. It seeks to simplify terms and explain important concepts. Upon completing the Explorer, you will receive a personalized report summarizing your responses with additional reference material and recommendations on a range of potentially applicable sovereign cloud solutions.
We also invite you to view proceedings from the recent Financial Times conference Tackling The Complexities Of Digital Sovereignty In Europe, which took place on December 7. This event, produced in partnership with Google Cloud, gathered leading EU policymakers as well as business and public sector leaders responsible for data governance and security to discuss how to navigate the complexity of digital sovereignty while benefiting from technology and supporting economic growth. You can access the replay at digitalsovereigntyeu.live.ft.com.
Nuclera, a UK and US-based biotechnology company, is collaborating with Google Cloud to serve the life science community, marrying Nuclera’s rapid protein access benchtop system with Google DeepMind’s pioneering protein structure prediction tool, AlphaFold2 (ref 1) served on Google Cloud’s Vertex AI machine learning platform.
With proteins representing 95% of drug targets, the demand to obtain multiple active protein variations to aid in drug discovery is constantly increasing. In particular, reliable protein structure prediction is a prerequisite for compound/biologics lead development.
The breakthrough AI tool, AlphaFold2 (released by DeepMind in 2021), has thrilled the structural biology and drug discovery communities in recent years by taking a huge leap forward in protein structure prediction accuracy (ref 2).
This coming together of technologies from Nuclera and Google presents a new integrated system for drug developers to optimize protein construct design to accelerate their drug discovery process. High quality structures in minutes to hours will soon be a reality, enabling laser-guided protein design. Additionally, reliable structures for proteins thought “impossible” to characterize experimentally will be made accessible.
Accessibility of proteins for lab-based research is fundamental to drug discovery, and is notoriously difficult and expensive to achieve, meaning time and resource limitations are imposed on research potential.
Nuclera is motivated to better human health by making proteins accessible, enabling life science researchers to obtain active proteins from DNA through its benchtop eProtein Discovery system (see Figure 1). Nuclera’s technology integrates cell-free protein synthesis and digital microfluidics on Smart Cartridges, allowing rapid progress on protein projects through an automated, high-throughput, benchtop protein access system.
Widely hailed as a breakthrough in biological research and a leap in the development of vaccines and synthetic materials, AlphaFold2 is an AI model developed by DeepMind for predicting the 3D structure of a protein based on its 1D amino acid sequence.
AlphaFold2 running on Google Cloud’s Vertex AI is set to become an integral feature in Nuclera’s cloud-based software, improving the quality and obtainability of proteins. Currently, Nuclera’s cloud software allows its customers to make informed decisions from expression and purification screen results, identifying optimal protein constructs to scale up as well as optimal conditions for scaling them up. The integration of AlphaFold2 into the eProtein Discovery Software increases the quality of constructs screened on the system by offering an additional in silico filter during the experiment design phase, which translates to a higher probability of identifying a truly optimal target protein on which to build discovery programs. Furthermore, AlphaFold2 will help eProtein Discovery users gain deep insights into possible target protein constructs, including any impacts on drug interactions, structural features, and folding.
While the immense power of the AlphaFold2 algorithm is undeniable, it is important to note that AlphaFold2 requires serving infrastructure and an operational model.
Generating a protein structure prediction is a computationally-intensive task. Running inference workflows at scale can be challenging — these challenges include optimizing inference elapsed time, optimizing hardware resource utilization, and managing experiments.
The Vertex AI solution for AlphaFold 2 is designed for inference at scale by focusing on the following optimizations:
- Optimizing the inference workflow by parallelizing independent steps.
- Optimizing hardware utilization (and, as a result, costs) by running each step on the optimal hardware platform. As part of this optimization, the solution automatically provisions and deprovisions the compute resources required for a step.
- Describing a robust and flexible experiment tracking approach that simplifies the process of running and analyzing hundreds of concurrent inference workflows.
Nuclera will use the Vertex AI platform as a foundation for a scalable and resource-efficient AlphaFold pipeline, as well as other Google Cloud services to expose the pipeline through an API and integrate it with its eProtein Discovery system.
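To give a sense of what running the pipeline looks like in practice, here is a minimal sketch of submitting an inference run with the Vertex AI SDK; the project, bucket, template path, and parameter names are placeholders rather than the solution's actual interface.

```python
from google.cloud import aiplatform

# Placeholder project, region, and bucket values for illustration only.
aiplatform.init(
    project="example-project",
    location="us-central1",
    staging_bucket="gs://example-bucket/staging",
)

job = aiplatform.PipelineJob(
    display_name="alphafold-inference",
    # Compiled pipeline spec, e.g. built from the solution's GitHub repository.
    template_path="gs://example-bucket/pipelines/alphafold_inference.json",
    parameter_values={
        # Hypothetical parameters: the real pipeline defines its own inputs.
        "sequence_gcs_path": "gs://example-bucket/inputs/target_protein.fasta",
        "max_template_date": "2023-12-01",
    },
    enable_caching=True,
)

# Submit asynchronously so many inference workflows can run concurrently.
job.submit()
```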
The first objective of the collaboration is to create a scalable API service that invokes AlphaFold2 execution in Google Cloud. Second, an analytics dashboard will be built that allows users to visually and quantitatively compare predicted 3D structures for protein variants. Third, a protein of interest (POI) recommendation feature will propose possible synthetic protein variants (isoforms, truncations, mutations, orthologs, or fusions) to customers using intelligent selection algorithms, taking into account various constraints such as computationally generated scores or conserved domains.
The 3D structural insights provided by AlphaFold2 will enable Nuclera and its customers to optimize their protein variation synthesis process and gain deeper insights into the interactions between residues and the 3D folding protein structure.
eProtein Discovery customers worldwide will benefit from the composite predictions delivered in the AlphaFold2 module of the eProtein Discovery Software, building a clearer understanding of their proteins and making faster, better-informed decisions that ultimately save time in academic research and improve the odds of drug discovery success.
Shweta Maniar, Global Director, Healthcare & Life Sciences Solutions, Google Cloud, commented: “AlphaFold2 integrated with Nuclera’s eProtein Discovery System is a really exciting demonstration of its practical use in drug discovery, enabling researchers to rapidly and efficiently design and produce proteins with the desired structure and function.”
In partnership with Google Cloud and the awesome capabilities of AlphaFold2, we’re excited to be pioneering AI/ML-assisted drug discovery tools, which we believe will bring forth next-generation therapies at a greater pace than ever before. To learn more and to try out this solution, check our GitHub repository, which contains the components and universal and monomer pipelines. The artifacts in the repository are designed so that you can customize them. In addition, you can integrate this solution into your upstream and downstream workflows for further analysis. To learn more about Vertex AI, visit the product page.
References
1. Jumper, J., Evans, R., Pritzel, A. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). https://doi.org/10.1038/s41586-021-03819-2
2. Ghadermarzi, S., Li, X., Li, M., et al. Sequence-Derived Markers of Drug Targets and Potentially Druggable Human Proteins. Front. Genet., 10, 1-18 (2019). https://www.frontiersin.org/articles/10.3389/fgene.2019.01075/full
Storage systems are a fundamental resource for technological solutions development, essential for running applications and retaining information over time.
Google Cloud and NetApp are partners in the cloud data services market, and we have collaborated on a number of innovative solutions — such as bursting EDA workloads and animation production pipelines — that help customers migrate, modernize, and manage their data in the cloud. Google Cloud storage offerings provide a variety of benefits over traditional on-premises storage solutions, including maintenance, scalability, reliability, and cost-effectiveness.
In this blog we’ll take a closer look at some of the NetApp Cloud Volumes ONTAP (NetApp CVO) features that utilize Google Cloud infrastructure to address application performance challenges while providing the best solution for file storage in cloud computing.
Google Cloud offers a variety of compute instance configuration types, each optimized for different workloads. The instance types vary by the number of virtual CPUs (vCPUs), disk types, and memory size. These configurations determine the instance’s IOPS and bandwidth (BW) limits.
NetApp CVO is a customer managed software-defined storage offering that delivers advanced data management for file and block workloads in Google Cloud.
Recently NetApp introduced two more VM types to support CVO single-node and high availability (HA) deployments: n2-standard-48 and n2-standard-64.
Choosing the right Google Cloud VM configuration for NetApp CVO deployment can affect the performance of your application in a number of ways. For example:
- Using VMs with a larger vCPU core count enables concurrent task execution and improves overall system performance for certain types of applications.
- Workloads with high file counts and deep directory structures can benefit from a CVO VM configured with a large amount of system RAM, which lets it store and process more data at once.
- The type of disk storage that your CVO VM uses affects its I/O performance. Choosing the right type (e.g., Balanced PD vs. SSD PD) dictates the disk performance limits. Persistent disk performance scales with disk size and with the number of vCPUs on your VM instance, so choosing the right instance configuration determines the VM instance's performance limits.
- The network bandwidth available to your CVO VM determines how quickly it can communicate with other resources in the cloud. Google Cloud accounts for bandwidth per VM, and a VM's machine type defines its maximum possible egress rate.
NetApp has also introduced Flash Cache for CVO that reduces latency for accessing frequently used data and improves performance for random read-intensive workloads. Flash Cache uses high-performance Local SSDs to cache frequently accessed data and augments the persistent disks used by the CVO VMs.
Flash Cache speeds access to data through real-time intelligent caching of recently read user data and NetApp metadata. When a user requests data that is cached in Local SSD, it is served directly from the NVMe storage, which is faster than reading the data from a persistent disk.
Figure 1: NetApp Cloud Volumes ONTAP Flash Cache architecture
Recently NetApp introduced a temperature sensitive storage efficiency capability (enabled by default) which allows CVO to perform inline block-level features including compression and compaction.
- Inline compression: compresses data blocks to reduce the amount of physical storage required.
- Inline data compaction: packs smaller I/O operations and files into each physical block.
Inline storage efficiencies can mitigate the impact of Google Cloud infrastructure performance limits by reducing the amount of data that needs to be written to disk, allowing CVO to handle higher application IOPS and throughput.
Following our previous 2022 blog where we discussed bursting EDA Workloads to Google Cloud with NetApp CVO FlexCache, we wanted to rerun the same tests using a synthetic EDA workload benchmark suite but this time with both high write speed and Flash Cache enabled. The benchmark was developed to simulate real application behavior and real data processing file system environments, where each workload has a set of pass/fail criteria that must be met to successfully pass the benchmark.
The results presented below show performance for the top valid/successful run using a CVO HA active/active configuration. As can be clearly seen, using high write speed mode in conjunction with Flash Cache delivers an approximately 3x improvement in performance.
Figure 2: NetApp CVO performance testing
In addition, to present potential performance benefits of the new NetApp CVO features, together with the NetApp team we used a synthetic load tool to simulate widely deployed application workloads and compare different CVO configurations loaded with unique I/O mixture and access patterns.
The following chart presents a comparison of performance between default n2-standard-32 and n2-standard-64 CVO HA (active/active) deployments, with and without high write speed and Flash Cache enabled. NetApp CVO high write speed is a feature that improves write performance by buffering data in memory before it is written to a persistent disk. Since the application receives the write acknowledgement before the data is persisted to storage, this can be a good option, but you must verify that your application can tolerate data loss and you should ensure write protection at the application level.
Figure 3: NetApp CVO performance testing comparison for various workloads
Assuming a high compression rate (>=50) and a modest Flash Cache hit ratio (25-30%), we can see the following:
- Better performance for deployments with more powerful instances and Flash Cache when running random read-intensive workloads, which benefit from more memory and higher infrastructure performance limits. HPC and batch applications can benefit from Flash Cache; the higher the cache hit ratio, the higher the overall system performance, which can be 3-4 times that of the default deployment without Flash Cache.
- Write-intensive workloads can benefit from high write speed. They are typically generated by backup and archiving, imaging, and media applications (for sequential access) and databases (for more random writes).
- Transactional workloads are characterized by numerous concurrent data reads and writes from multiple users, typically generated by transactional database applications such as SAP, Oracle, SQL Server, and MySQL. These kinds of workloads can benefit from moving to stronger instances and using Flash Cache.
- Mixed workloads present a high degree of randomness and typically require high throughput and low latency. In certain scenarios, they can benefit from running on stronger instances and Flash Cache. For example, virtual desktop infrastructure (VDI) environments typically have mixed workloads.
Try the Google Cloud and NetApp CVO Solution. You can also learn more and get started with Cloud Volumes ONTAP for Google Cloud.
A special thanks to Sean Derrington from Google Cloud, Chad Morgenstern, Jim Holl and Michele Pardini from NetApp for their contributions.
Virgin Media O2 (VMO2) is one of the leading telecom providers in the UK, with an impressive network of 46 million connections, encompassing broadband, mobile, TV, and home phone services. They are known for offering a large and reliable mobile network, coupled with a broadband infrastructure that delivers extremely fast internet speeds.
Three years ago, VMO2 set out to modernize its data platforms, moving away from legacy on-premises platforms into a unified data platform built on Google Cloud. This migration to cloud included multiple Hadoop-based systems, data warehouses, and operational data stores.
One of the key data platforms being migrated is the Netpulse system — a heavily used network data analytics and monitoring service that provides a 360-degree view of their customers’ network and performance.
Netpulse combines several source systems together, but the core of the system is combining weblogs and location feeds in near real-time, using anonymized data to help ensure compliance with GDPR and other data privacy policies. Individually, both weblogs and location feed datasets hold substantial value for both network and customer analytics. Fusing these datasets provides a unified perspective with big potential, boosting VMO2’s ability to optimize network performance, enhance customer experiences, and swiftly address operational challenges that may arise. In essence, this enrichment represents a pivotal step towards a more informed and responsive network ecosystem, driving superior outcomes for both VMO2 and its customers.
In short, our team was eager to move to a cloud-based near real-time/real-time system to improve the accuracy, scalability, resilience and reliability of the Netpulse system. But first we had to find the optimal service, based on our requirements for high performance.
VMO2’s existing on-premises Hadoop clusters were struggling to keep up with continuously increasing data volumes. While Hadoop captured both weblogs and location feeds at current volumes, real-time analytics on these feeds was impossible due to the challenges highlighted below. Further, the system lacked disaster recovery capabilities. Outages on the platform led to extended periods of data unavailability, impacting downstream analytics usage.
Overall the on-premises infrastructure had several challenges:
- Lack of scalability
- Capacity limits
- Expensive upgrades
- Software licensing costs
- Data center expansion constraints
Before selecting a solution, we needed to understand the scope of the existing system, and our performance requirements.
Weblogs reports the web-traffic activities of VMO2’s mobile customers, encompassing interactions at both Layer 4 and Layer 7. It’s a voluminous dataset that generates approximately 2.5 TB every day; during peak hours, there are 1.5 billion records read and joined with another dataset. Another application, Mobility Management Entity (MME), generates control-plane data, providing real-time insights into the geographical whereabouts of VMO2’s mobile customers. MME generates another 2.1 TB of data daily and peaks at 900 million writes per hour. For our use case, we needed to process all incoming weblogs records while joining them with MME data, and store new MME records for quick lookup.
Looking at the peak hourly numbers, we determined that our system would need to support:
- 1.5 billion records read from weblogs, or 420,000 records read per second
- 900 million records written from MME, or 250,000 records written per second
That meant that at peak, the system would need to support about 670,000 operations per second. Further, for headroom, we wanted to size up to support 750,000 operations per second!
And while Netpulse needs to be able to perform near-real-time data transformation, it also performs lookups for MME data that is up to 10 hours old (after which the MME records expire). Only a subset of each MME record needs to be stored, amounting to 100 bytes per record. It is important to note that the data going back 10 hours consists of the same unique records, occasionally updated, so the overall storage size does not need to be 10 times the peak-hour size.
In other words, we needed to store less than 100 bytes from each MME record for a total of 900 million * 100 bytes, or 90 GB at peak, if all records were unique in that hour. So we safely set the max storage requirement to be double this number, i.e., 180 GB for 10 hours.
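As a quick sanity check, the arithmetic behind these peak-hour numbers can be reproduced in a few lines of Python:

```python
# Back-of-the-envelope check of the peak-hour sizing figures quoted above.
weblog_reads_per_hour = 1_500_000_000
mme_writes_per_hour = 900_000_000
bytes_per_mme_record = 100

reads_per_sec = weblog_reads_per_hour / 3600        # ~420,000 reads/s
writes_per_sec = mme_writes_per_hour / 3600         # 250,000 writes/s
total_ops_per_sec = reads_per_sec + writes_per_sec  # ~670,000 ops/s, sized up to 750,000

peak_hour_storage_gb = mme_writes_per_hour * bytes_per_mme_record / 1e9  # 90 GB
max_storage_gb = 2 * peak_hour_storage_gb                                # 180 GB for 10 hours

print(f"{reads_per_sec:,.0f} reads/s, {writes_per_sec:,.0f} writes/s, "
      f"{total_ops_per_sec:,.0f} ops/s, {max_storage_gb:.0f} GB max")
```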
To meet these requirements, we determined that we needed a service that could function as both a key-value store and a fast lookup service. The system’s read-to-write ratio was going to be at least 2:1, with the potential of growing to 3:1. We also needed a high-throughput system that could scale and tolerate failure, as this service needed to be up and running 24/7 in production.
In short, we needed a data storage and processing solution for Netpulse that could meet the following requirements:
- Near real-time data transformation
- Ability to perform lookups for historical data
- High throughput
- Scalability
- Fault tolerance
- 24/7 availability
These stringent performance and availability requirements left us with several options for a managed database service. Ultimately, we chose Memorystore for Redis because it can easily scale to millions of operations per second with sub-millisecond latencies and provides a high SLA, and also because Memorystore for Redis:
- is a fast in-memory cache rather than disk-based storage
- has easier-to-understand storage semantics, such as sorted sets
- scales to 100,000 requests per second for small (less than 1 KB) reads and writes on a single Redis node
- was familiar to VMO2 teams
Further, once the data is enriched, it needs to be stored in an analytical store for further analysis. We chose BigQuery to power analytics on this and other datasets. Similarly, we chose Dataflow for data transformation because it can handle data in real time and scale to meet demand. Additionally, we had already standardized on Dataflow for ad-hoc data processing outside of BigQuery.
In terms of scale, storing processed data into a data warehouse like BigQuery was never a challenge; neither was data processing itself with Dataflow, even at massive scale. However, doing a fast lookup while incoming data was being processed was a huge challenge and something that we wanted to prove out before settling on an architecture.
We started with a single-node Memorystore for Redis instance and gradually added more memory, to get a sense of max writes for a given amount of memory.
We used Redis pipelining to batch requests together for both reads and writes. Pipelining was a necessity in this use case because the records to be written and read were small; sending such small amounts of data over the network individually would have meant poor per-record latency and poor overall throughput, due to the network round trip associated with each Redis command.
As the record size was less than 100 bytes, after various tests, we settled on 10,000 as pipeline size — giving us about 1 MB of data to send to Redis in one go.
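To illustrate the pattern, here is a minimal redis-py sketch of pipelined ZADD writes and ZREVRANGE lookups; the instance address, key layout, and payload format are assumptions for illustration, not VMO2's production schema.

```python
import redis

# Placeholder Memorystore for Redis instance address.
r = redis.Redis(host="10.0.0.3", port=6379)

PIPELINE_SIZE = 10_000  # ~100 bytes/record -> roughly 1 MB sent per batch


def write_mme_batch(records):
    """records: iterable of (subscriber_key, event_timestamp, location_payload)."""
    pipe = r.pipeline(transaction=False)
    pending = 0
    for key, ts, payload in records:
        # One sorted set per subscriber: score = event time, member = payload,
        # so the most recent location can be read back with ZREVRANGE.
        pipe.zadd(f"mme:{key}", {payload: ts})
        pending += 1
        if pending == PIPELINE_SIZE:
            pipe.execute()          # flush the batch in one round trip
            pipe = r.pipeline(transaction=False)
            pending = 0
    if pending:
        pipe.execute()


def latest_location(key):
    # Fetch the most recent location record for a subscriber.
    return r.zrevrange(f"mme:{key}", 0, 0, withscores=True)
```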
Number of ZADD commands handled
Where ZADD is shown in green above
Memory usage on the primary instance
Where pink is the instance memory and blue is the memory usage
Our testing revealed that 64 GB was the ideal Redis instance size because it could service 100,000-110,000 writes per second with spare capacity.
Once we had this information, we knew that to support 250,000 writes per second, we would need at least three Memorystore for Redis instances, each with a primary node (and read replica nodes to support reads).
Next, we tested the system’s performance under concurrent read and write operations. This helped us determine the number of nodes required to support 500,000 reads and 250,000 writes per second.
ZREVRANGE calls on two read replicas
Where blue lines are read ZREVRANGE commands fired on 2 read replicas.
A Memorystore for Redis instance with two read replicas was able to easily handle 200,000 read requests per second per node, for a total of 400,000 read requests per second.
After performance testing, we determined that we needed three Redis instances for production, each with one primary for writes and two read replicas for reads. Consistent hashing was used to shard writes across the three instances. This infrastructure could easily handle 300,000 writes and 1.2 million reads per second, giving us sufficient headroom for the use case. If we were to calculate the per-record latency for this batched workload, we would get a latency of under 100 microseconds.
The total number of nodes we needed for this use case was nine, across three Redis instances, each with one primary and two read replicas:
- 100,000 writes per second per primary x 3 = 300,000 writes per second
- 200,000 reads per second per replica x 6 = 1.2 million reads per second
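For illustration, the consistent-hashing layer that spreads writes across the three instances could look something like the minimal Python sketch below; the instance addresses and virtual-node count are assumptions, not VMO2's production values.

```python
import bisect
import hashlib

# Placeholder addresses for the three Memorystore for Redis primaries.
INSTANCES = ["10.0.0.3:6379", "10.0.0.4:6379", "10.0.0.5:6379"]
VNODES = 128  # virtual nodes per instance to smooth the key distribution


def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


# Build the hash ring: each instance contributes VNODES points on the ring.
ring = sorted((_hash(f"{inst}#{v}"), inst) for inst in INSTANCES for v in range(VNODES))
points = [p for p, _ in ring]


def instance_for(key: str) -> str:
    """Pick the instance whose ring position follows the key's hash."""
    idx = bisect.bisect(points, _hash(key)) % len(ring)
    return ring[idx][1]


print(instance_for("mme:447700900123"))
```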
You can find a variant of the detailed PoC that proves out this architecture on GitHub.
The Memorystore for Redis architecture discussed above allows only one primary write node per instance, with multiple read replicas, which led us to the aforementioned design using consistent hashing to achieve the required throughput.
We also tested Memorystore for Redis Cluster before its recent General Availability launch, following the methodology we used for our original design and testing the infrastructure under an hour’s worth of sustained write and read load. We tested with different shard counts and finally settled on 20 shards for the Memorystore for Redis Cluster instance, providing us 260+ GB of RAM — and significantly more performance than the sharded architecture. (In the previous design, we needed three 64 GB Redis instances, each with one primary, to support the write load — 192 GB of RAM available for writes and much more for reads.)
Further, with Memorystore Redis Cluster’s 99.99% SLA, we can easily provision multiple write nodes, called shards. This provides greater scalability and reliability, allowing the development team to focus on solving key business problems rather than managing Redis infrastructure and handling key sharding.
We proceeded to test Memorystore for Redis Cluster extensively, to ensure the cluster could handle throughput at adequate per record latencies.
Memory usage on the cluster
We see memory usage go up during the test from about 2-3% to 68%.
Read and write calls on the cluster
Where the blue line is read (ZREVRANGE) commands fired and the cyan line is write (ZADD) commands.
As represented above, we easily sustained a load of more than 250,000 writes and more than 500,000 reads for about an hour. During the tests, we could still see single command execution time of around 100 microseconds, while the memory usage was well under 70 percent. In other words, we didn’t push the cluster to its limit.
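On the client side, moving to Memorystore for Redis Cluster also removes the need for our own consistent-hashing layer, since a cluster-aware client routes each key to the owning shard. A minimal redis-py sketch is below; the discovery endpoint and key are placeholders.

```python
from redis.cluster import RedisCluster

# Placeholder Memorystore for Redis Cluster discovery endpoint.
rc = RedisCluster(host="10.0.0.10", port=6379)

# The cluster client maps each key to its shard, so no external
# consistent-hashing layer is needed.
pipe = rc.pipeline()
pipe.zadd("mme:447700900123", {"cell-1234": 1702900000})
pipe.zrevrange("mme:447700900123", 0, 0, withscores=True)
print(pipe.execute())
```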
This Memorystore for Redis Cluster design gives us several advantages:
- Horizontal scaling – Redis Cluster performance scales linearly as we add nodes to the cluster, in contrast to standalone Redis, where vertical scaling offers diminishing returns on writes.
- Fully managed scaling – Instead of manually re-sharding keys as we add primaries, Memorystore for Redis Cluster automatically re-shards keys as we increase the number of primaries.
- Improved SLA from 99.9% to 99.99%.
- Significantly reduced costs AND improved performance – By converting from 9 x 64 GB nodes to 20 x 13 GB nodes, we are able to save about 66% in total costs AND see three times more performance.
The results with Memorystore for Redis Cluster have convinced us to use it in production now that it is GA.
With the migration to Memorystore, we’ve laid the foundation for seamless integration of weblogs and MME data in real-time. This critical development allows VMO2 to take advantage of a wide range of capabilities that come with this innovative solution, such as offloading to a fully-managed service, scalability, and a durable architecture complete with an SLA. At the same time, we are diligently training our team to use Memorystore’s powerful features, which will unlock the potential for real-time analytics and lightning-fast data ingestion at an unprecedented scale and speed. We are also primed to use BigQuery along with Memorystore for several analytics use cases, such as network performance monitoring and smart metering.
To learn more about Memorystore for Redis Cluster, check out the documentation or head directly to the console to get started.
Headline-grabbing security incidents in 2023 have shown that cybersecurity continues to be challenging for organizations around the world, even those with skilled practitioners and state-of-the-art tools. To help prepare you for 2024, we are offering the final installment of this year’s Google Cloud Security Talks on Dec. 19.
Join us as our experts show you how you can meaningfully strengthen your security posture and increase your resilience against new and emerging threats with modern approaches, security best practices, generative AI and AI security frameworks, and actionable threat intelligence by your side.
This series of digital sessions — created by security practitioners for security practitioners — will kick off with a deep dive by our Mandiant experts into the two main types of threats they saw emerge this year, how they impacted organizations, and why we think they will have lasting impact into 2024.
Here’s a peek at other sessions, where you’ll walk away with a better understanding of threat actors and potential attack vectors, and get fresh ideas for detecting, investigating, and responding to threats:
- How Google mitigated the largest DDoS attack in history: Learn how Google was able to defend against the largest DDoS attack to date and mitigate the attack using our global load-balancing and DDoS mitigation infrastructure. Plus, discover how you can protect against DDoS attacks going into the new year with security best practices.
- 2024 Cybersecurity Forecast: Get an inside look at the evolving cyber threat landscape, drawn from Mandiant incident response investigations and threat intelligence analysis of high-impact attacks and remediations around the globe.
- SAIF from Day One: Securing the world’s AI with SAIF: AI is advancing rapidly, and it is important that effective risk management strategies evolve along with it. To help achieve this evolution, Google introduced our Secure AI Framework (SAIF). Join us to learn why SAIF offers a practical approach to addressing the concerns that are top of mind for security and risk professionals like you, and how to implement it.
- Unlock productivity with Duet AI in Security Operations: AI is the hot topic of the year, but is there a practical application for security teams? Or is it just smoke and mirrors? If we take a step back and look at the pervasive and fundamental security challenges teams face — the exponential growth in threats, the toil it takes for security teams to achieve desired outcomes, and the chronic shortage of security talent — and ask whether AI can effectively move the needle for security teams, the answer is a resounding yes. Join us to see how Duet AI in Security Operations can help teams simplify search, transform investigation, and accelerate response.
- Duet AI in Google Workspace: Keeping your data private in the era of gen AI: Join this session to get answers to your questions about Duet AI in Google Workspace, learn more about the built-in privacy and security controls in Duet AI, and understand how your organization can achieve digital sovereignty with Sovereign Controls.
- Meet the ghost of SecOps future: Today’s security operations center (SOC) has an increasingly difficult job protecting growing and expanding organizations. The landscape is changing and the SOC needs to change with the times or risk falling behind the evolution of business, IT, and threats. Join us as we show you a vision of what the SOC will look like in the near future and how to choose the best course of action today.
- Securing your gen AI innovations: This session will cover the essential Google Cloud security tools crucial for safeguarding your gen AI adoption approaches. The presenters will provide a comprehensive overview of the security challenges inherent in gen AI projects, from machine learning models to data processing pipelines, and offer practical insights into how to mitigate the most relevant risks.
- Real-world security for gen AI with Security Command Center: How can you protect your gen AI workloads with strong preventative guardrails, and get near real-time alerting of drift and workload violations? Get a sneak peek of new posture management capabilities coming to Security Command Center and learn how to protect AI applications and training data using opinionated, Google Cloud-recommended controls designed specifically for gen AI workloads.
This year’s final Google Cloud Security Talks is designed to give you the assurance you need heading into 2024 that detecting, investigating, and responding to threats at scale is achievable with modern security operations approaches and actionable threat intelligence. Make sure you join us, live or on-demand.
Live-streaming applications require that streaming be uninterrupted and have minimal delays in order to deliver a quality experience to end users’ devices. Delays in rendering live streams can lead to poor video quality and video buffering, negatively impacting viewership. To ensure a high live-streaming quality, you need a reliable content delivery network (CDN) infrastructure.
Media CDN is a content delivery network (CDN) platform designed for delivering streaming media with low latency across the globe. Notably, Media CDN uses YouTube’s infrastructure to bring video streams (both on-demand video and live) and large file downloads closer to users for fast and reliable delivery. As a CDN, it delivers content to users based on their location from a geographically distributed network of servers, helping to improve performance and reduce latency for users who are located far from the origin server.
In this blog, we look at how live-streaming providers can utilize Media CDN infrastructure to better serve video content, whether the live-streaming application is running within Google Cloud, on-premises, or in a different cloud provider altogether. Media CDN, when integrated with Google Cloud External Application Load Balancer as origin, can be utilized to render streams irrespective of the location of the live-streaming infrastructure. Further, it’s possible to configure Media CDN so that live streams can withstand most kinds of interruptions or outages, to ensure a quality viewing experience. Read on to learn more.
Live streaming is the process of streaming video or audio content in real time to broadcasters and playback devices. Typically, live-streaming applications involve the following components:
- Encoder: Compresses the video into multiple resolutions/bitrates and sends the stream to a packager.
- Packager and origination service: Packages the transcoded content into different media formats and stores video segments to be rendered via HTTP endpoints.
- CDN: Streams the video segments from the origination service to playback devices across the globe with minimal latency.
At a high level, Media CDN contains two important components:
Edge cache service: Provides a public endpoint and enables route configurations to route the traffic to specific origin.
Edge cache origin: Configures a Cloud Storage bucket or a Google External Application Load Balancer as an origin.
The above figure depicts an architecture where Media CDN can serve a live stream origination service running in Google Cloud, on-prem or on an external cloud infrastructure, by integrating with Application Load Balancer. Application Load Balancer enables connectivity to multiple backend services and provides advanced path- and host-based routing to connect to these backend services. This allows live stream providers to use Media CDN to cache their streams closer to the end users viewing the live channels.
The different types of backend services provided by Load Balancers to facilitate connectivity across infrastructure are:
- Internet/Hybrid NEG Backends: Connect to a live-streaming origination service running in a different cloud provider or on-premises.
- Managed Instance Groups: Connect to a live-streaming origination service running in Compute Engine across multiple regions.
- Zonal Network Endpoint Groups: Connect to a live-streaming origination service running in GKE.
Since any disruption to live stream traffic can affect viewership, it is essential to plan for disaster recovery to protect against zonal or regional failures. Media CDN provides primary/secondary origin failover to facilitate disaster recovery.
The above figure depicts Media CDN with an Application Load Balancer origin providing failover across regions. This is achieved by creating two “EdgeCacheOrigin” hosts pointing to the same Application Load Balancer with different “host header” values. Every EdgeCacheOrigin is configured to set the host header to a specific value. The Application Load Balancer performs host-based routing to route the live stream traffic requests based on the host header value.
When a playback device requests the stream from Media CDN, it invokes the Application Load Balancer by setting the host header to the primary origin value. The load balancer looks at the host header and forwards the traffic to the primary live stream origination service. When the primary live stream provider fails, the failover origin rewrites the host header to the failover origin value and sends the request to Application Load Balancer. The load balancer matches the host and routes the request to a secondary live stream origination service in a different zone or region.
The below snippet depicts the URL host-rewrite configuration in the EdgeCacheOrigin:
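(A sketch of such a configuration is shown below; the resource names, addresses, and hostnames are illustrative placeholders, and the exact field names should be verified against the Media CDN EdgeCacheOrigin reference.)

```yaml
# Illustrative EdgeCacheOrigin definitions (placeholder names and addresses).
# The failover origin rewrites the Host header so the Application Load
# Balancer routes the request to the secondary origination service.
name: primary-origin
originAddress: "203.0.113.10"        # Application Load Balancer frontend
protocol: HTTPS
failoverOrigin: projects/PROJECT_ID/locations/global/edgeCacheOrigins/failover-origin
originOverrideAction:
  urlRewrite:
    hostRewrite: "live-primary.example.internal"    # matched by the primary route on the load balancer
---
name: failover-origin
originAddress: "203.0.113.10"        # same load balancer frontend
protocol: HTTPS
originOverrideAction:
  urlRewrite:
    hostRewrite: "live-failover.example.internal"   # matched by the secondary route on the load balancer
```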
Media CDN is an important part of the live streaming ecosystem, helping to improve performance, reduce latency, and ensure quality for live streams. In this post, we looked at how live stream applications can utilize Google Media CDN across multiple environments and infrastructure. To learn more about Media CDN, please see:
- Deploy Streaming Service with Media CDN
- Media CDN Overview
- Media CDN with Application Load Balancer
Google Distributed Cloud Virtual for vSphere (GDCV vSphere) enables customers to deploy the same Kubernetes as Google Kubernetes Engine in the cloud, on their own hardware and data centers, with GKE Enterprise tooling to manage clusters at enterprise scale. Enterprises rely on GDCV (vSphere) to support their primary business applications, providing a scalable, secure, and highly available architecture. By integrating with their existing VMware vSphere infrastructure, GDCV makes it easy to deploy secure, consistent Kubernetes on-premises. GDCV (vSphere) integrates with VMware vSphere to meet high availability requirements, including HA admin and user clusters, auto-scaling, node repair, and now, VMware’s advanced storage framework.
Many VMware customers use aggregation functionalities such as datastore clusters to automate their virtual disk deployment. By combining many datastores into one object, they can let vSphere decide where to put a virtual disk, basically picking the best location for the given requirement.
Storage interactions between the Kubernetes cluster and vSphere are driven through the Container Storage Interface (CSI) driver module. VMware releases its own CSI driver, vSphere CSI. While this integration allows you to get started very quickly, it also comes with limitations, since even the VMware-delivered driver does not support datastore clusters. Instead, VMware relies on Storage Policy Based Management (SPBM) to enable administrators to declare datastore clusters for workloads and provide the placement logic that ensures automatic storage placement. Until this point, SPBM was not supported in GDCV, making storage on these clusters harder and less intuitive for VM admins used to the flexibility of SPBM for VMs.
With version 1.16, GDCV (vSphere) now supports SPBM, enabling customers to leverage a consistent way to declare datastore clusters and deploy workloads. GDCV’s implementation of SPBM provides the flexibility to maintain and manage the vSphere storage without touching GDCV or Kubernetes. In this way, GDCV lifecycle management fully leverages a modern storage integration on top of vSphere, allowing for higher resilience and a much lower planned maintenance window.
This new model of storage assignment comes through the integration of GDCV with VMware’s SPBM, enabled by making advanced use of the VMware CSI driver. Storage Policy Based Management (SPBM) is a storage framework that provides a single unified control plane across a broad range of data services and storage solutions. The framework helps align storage with the application demands of your virtual machines. Simply put, SPBM lets you create storage policies that link VMs or applications to their required storage.
By integrating with VMware’s SPBM, creating clusters with GDCV from a storage perspective is now just a matter of referencing a particular storage policy in the clusters’ install configurations.
Storage policies can be used to combine multiple single datastores and address them as if they were one, similar to datastore clusters. However, when queried, the policy currently delivers a list of all compliant datastores but does not favor one for the best placement. GDCV takes care of that for you: it analyzes all compliant datastores in a given policy and picks the optimal datastore for the disk placement, and it does so dynamically for every storage placement that uses SPBM with GDCV.
The beauty of it, from an automation point of view, is that when anything changes from a storage capacity or maintenance perspective, all changes are made on the storage end. Operations like adding more storage capacity can now be done without changing GDCV’s configuration files.
This simplifies storage management within GDCV quite a bit.
With SPBM, a VMware administrator can build different storage policies based on the capabilities of the underlying storage array. They can map one or many datastores to one or many policies — and then assign each VM the policy that best defines its storage requirements. So in practice, if we want gold-level storage (e.g., SSD only for a production environment), we first create a policy defining all gold-level storage and add all the matching datastores to that policy. We then assign that policy to the VMs. The same goes for, say, bronze-level storage: simply create a Bronze storage policy with bronze-level datastores (e.g., HDD only for a dev environment), and apply it to the relevant VMs.
To use the storage policy feature in GDCV, the VMware admin needs to set up at least one storage policy in VMware that is compatible with one or more datastores the GDCV cluster can access.
GDCV supports datastore tag-based placement. VMware admins can create specific categories and tags based on pretty much anything related to the available storage — think performance levels, cluster name, and disk types as just a few examples.
Let’s look at two key storage requirements in a cluster — the VM disk placement and the persistent volume claims (PVCs) for stateful applications — and how to use SPBM policies to manage them.
VM disk placement
Let’s go back to our Gold and Bronze examples above. We are going to define storage requirements for a GDCV User cluster. Specifically, we want to refer to two different storage policies within the User cluster configuration file:
A cluster-wide storage policy – this will be the default policy for the cluster and in our case will be our “Bronze” policy
A storage policy for specific node pools within the user cluster — our “Gold” policy
First, the “Gold” and “Bronze” tags are assigned to the different datastores available to the GDCV node VMs. In this case, “Gold” refers to SSD disks only; “Bronze” refers to HDD disks only.
To create and assign tags, follow the documentation, noting that tags can also be applied to datastore clusters, or to datastores within a cluster — read more here.
Once the tags are created, the storage policy is defined as per the official documentation.
After creating a storage policy, you can review which datastores are compatible with it in the vSphere Client.
Now, let’s apply some storage policies to our user cluster configuration files.
Define cluster-wide policy (“Bronze” policy)
In the user cluster config file snippet below, the storage policy name “Bronze” is set at the cluster level. This means all the provisioned VMs in all of the node pools will use this storage policy to find compatible datastores and dynamically select the one that has sufficient capacity.
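A minimal sketch of what that cluster-level setting might look like, assuming the 1.16 user cluster configuration schema; the cluster name and version string are placeholders, so check the GDCV documentation for the exact fields and values that apply to your environment:

```yaml
# User cluster config (abridged; illustrative values)
apiVersion: v1
kind: UserCluster
name: my-user-cluster            # hypothetical cluster name
gkeOnPremVersion: 1.16.0-gke.0   # placeholder version string
vCenter:
  # Cluster-wide default: node pools without their own policy place
  # VM disks on a datastore that is compliant with the "Bronze" policy.
  storagePolicyName: "Bronze"
  # (when a storage policy is used, the datastore field is left unset)
```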
Define node pool policy (“Gold”)
In the user cluster config file snippet below, a storage policy (“Gold”) is set at the node-pool level. This policy will be used to provision VMs in that node pool, while all other storage provisioning will use the storage policy specified in the cluster section.
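Here is a similarly abridged sketch of the node-pool override; pool name and sizing are illustrative only:

```yaml
# nodePools section of the user cluster config (abridged; illustrative values)
nodePools:
- name: gold-pool        # hypothetical node pool name
  cpus: 4
  memoryMB: 8192
  replicas: 3
  vsphere:
    # Override for this pool only: its VMs are placed on datastores
    # compliant with the "Gold" policy instead of the cluster default.
    storagePolicyName: "Gold"
```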
Using storage policies like this abstracts the storage details from the cluster admin. Also, if there is a storage problem — for example with capacity — then more datastores can be tagged so that they are available within the storage policy — typically by the VMware admin. The GDCV cluster admin does not need to do anything, and the extra capacity that is made available through the policy is seamlessly incorporated by the cluster. This lessens the administrative load on the cluster admin and automates the cluster storage management.
Persistent Volume Claims
A user cluster can have one or more StorageClass objects, one of which is designated as the default StorageClass. When you create the cluster by following the documented install guide, a default StorageClass is created for you.
Additional storage classes can be created and used instead of the default. The vSphere CSI driver allows the creation of storage classes with a direct reference to any existing storage policy within the vCenter that the GDCV user cluster runs on.
This means that volumes created by PVCs in the cluster will be distributed across the datastores that are compatible with the storage policy defined in our user clusters. These storage classes can map to VMFS, NFS and vSAN storage policies within vSphere.
The snippet below configures a StorageClass that references a policy — “cluster-sp-fast”.
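As a sketch, such a StorageClass could look like the following; the class name is illustrative, and storagepolicyname is the vSphere CSI parameter that points at the vCenter policy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-sc                       # hypothetical class name
provisioner: csi.vsphere.vmware.com   # the vSphere CSI driver
parameters:
  # Volumes from this class land on datastores compatible with the
  # "cluster-sp-fast" vSphere storage policy.
  storagepolicyname: "cluster-sp-fast"
```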
This storage class can then be referenced in a persistent volume claim, as shown below.
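For example, a PVC requesting a volume from that class might look like this (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-sc   # the StorageClass sketched above
  resources:
    requests:
      storage: 20Gi
```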
Volumes with this associated claim will be automatically placed on the optimal datastore included in the “cluster-sp-fast” vSphere storage policy.
So in this post we have discussed the integration of GDCV with VMware's SPBM framework. This integration is great news for GDCV admins, as it automates storage management by moving away from hard links to specific datastores and toward dynamic storage assignment managed from the VMware side. This means less overhead and less downtime for GDCV clusters, and more flexibility in storage management.
Learn more about Google Distributed Cloud, a product family that helps you unleash your data with the latest in AI across edge, private data center, air-gapped, and hybrid cloud deployments. Available for enterprise and the public sector, it lets you leverage Google's best-in-class AI, security, and open source with the independence and control that you need, everywhere your customers are.
Learn about visual inspection applications at the edge with Google Distributed Cloud Edge
VMware’s SPBM documentation
GDCV (vSphere) SPBM integration
GDCV (vSphere) 1.16 documentation – Storage Policy Based Management
Read More for the details.
The UAE Consensus agreement that was signed earlier this week at the conclusion of COP28 was not what many participants had hoped for, but it is more than many of us expected. While it failed to definitively call for the phase-out of fossil fuels, it did land agreement on a range of critical issues, from scaling renewables to funding for loss and damage, and more. Still, the risk of catastrophic climate change is high, and the pressure to effect meaningful change continues to fall on individuals and the private sector.
Thankfully, as we wrote over the past two weeks, there’s a lot that businesses and individuals can do. Over the course of 14 blogs, we detailed the numerous ways that Google, Google Cloud and our partners are curtailing our own carbon footprints, and outlined many ways our customers are doing the same, using a variety of techniques, tools and technologies. On the technology front, the rise of generative AI in particular is enabling new ways of surfacing climate-relevant information and driving climate-friendly actions.
To recap, here’s a summary of what we wrote about over the course of the conference:
On Day 1, Adaire Fox-Martin and I, Justin Keeble, laid out our vision for improving access to climate data (e.g., participating in the Net Zero Data Public Utility), building a climate tech ecosystem, and unlocking the power of geospatial analysis.
On Day 2, the first-ever COP28 Health Day, Googlers Phil Davis and Daniel Elkabetz highlighted how environmental data can raise awareness, help citizens make better decisions, and spur adoption of solutions for better climate health.
On Day 3, as the COP28 community pondered finance and trade, Kevin Ichhpurani and Denise Pearl discussed how partners can adopt programs in the Google Cloud Ready – Sustainability catalog to help their customers identify and deploy better solutions.
Likewise, EMEA Googlers Tara Brady and Jackie Pynadath highlighted how Google Cloud financial services customers are using generative AI to fund climate transition initiatives.
For Day 5, Energy and Industry Day, Google Cloud Consulting leader Lee Moore and Poki Chui showed how businesses are decarbonizing their supply chains and helping consumers make enlightened choices.
On Day 6, Urbanization and Transport Day, Umesh Vemuri and Jennifer Werthwein showed off examples of Google Cloud automotive customers using AI to, um, drive innovation in automobile design, supply chain, power and mobility.
Denise Pearl then took the pen with our partner mCloud to discuss how we, in addition to shifting away from carbon-based energy, are actually using less energy to begin with.
We then heard from the Google Cloud Office of the CTO, where Will Grannis and Jen Bennett relayed the lessons they’ve learned from talking to CTOs about sustainability.
In honor of Day 9, Nature Day, Chris Lindsay and Charlotte Hutchinson talked about how geospatial analytics and solutions built on top of Google Earth Engine are helping partners preserve the natural world.
Then, Yael Maguire and Eugene Yeh of our Geo Sustainability team looked at the new European Union Deforestation Regulation (EUDR), and how to comply with it using geospatial data and tools.
In honor of Day 10, Food, Agriculture and Water Day, Karan Bajwa and Leah Kaplan provided a view from Google Cloud Asia of how Google Earth Engine and AI can support sustainable agriculture.
Carrie Tharp and Prue Mackenzie from the Industries team examined how generative AI can encourage healthier food systems.
Product management leads Gabe Monroy and Cynthia Wu described how Google Cloud tools like Carbon Footprint are helping companies decarbonize their cloud footprints.
Finally, Michael Clark and Diane Chaleff shared techniques that software builders can use to bolster sustainable software design, and encourage end users to make better choices from their apps.
This is just a small sample of the many things that we are doing to address the climate crisis, both in our own operations and through technologies that our customers can use to drive their own efforts. We hope that you take the time to read about what Google Cloud customers and partners are doing to accelerate action on climate, and that you will take inspiration from them for your own business transformation. You can also learn more about Google’s sustainability efforts, and keep up-to-date on Google Cloud’s sustainability news. See you next year for COP29.
Read More for the details.
Calling all cloud developers. This year there has been a lot of focus on generative AI, with plenty of excitement about the potential it brings to the future of cloud technology. Today we’re launching new training on Duet AI in Google Cloud that can help you get the most out of the new product features so that you can build customer experiences quickly, more securely, and like never before.
Duet AI is an AI-powered collaborator that helps you build secure, scalable applications while providing expert guidance. It’s built on top of Google’s large language models (LLMs) and tuned by Google Cloud experts.
Find out how to incorporate Duet AI in Google Cloud into your day-to-day activities. The new Duet AI in Google Cloud Learning Path consists of a series of videos, quizzes, and labs (requiring credits) to test your skills. Each video and quiz can be completed at no cost. Labs cost 5 credits each in Google Cloud Skills Boost, with a total of 8 labs (40 credits total) across the learning path. These resources can help you become more productive in Google Cloud and your IDEs, and get more done, faster.
Duet AI is more than just a developer productivity tool, it’s also a learning companion for developers of all skill levels embarking on the development lifecycle. Whether you’re just starting with cloud technologies or are a seasoned expert, Duet AI is here to guide and enhance your journey to mastery in software development and Google Cloud.
It offers personalized, expert-level advice in natural language on a wide range of topics, from crafting specific queries and scripts, to mastering common Google Cloud tasks, to best practices for application deployment, and cost optimization. Integrated within both the IDE and the Google Cloud console, Duet AI promotes a learning experience that can minimize the need for context-switching.
Here are some examples on how Duet AI can help:
Real-time code creation and editing assistance. Describe a task that you have in mind as a comment or function name, and Duet AI will generate suggested code that can be reviewed and modified
Chat assistance in the IDE for troubleshooting
Overcome challenges that many of us experience day-to-day, like friction when integrating new tools and services, hashing over repetitive tasks, and spending time understanding a new code base or project
Use templates to develop a basic app. Then run, test, and deploy to Google Cloud, all with Duet AI assistance
Save time and increase code quality when writing tests
Ready to get started using Duet AI? Go to the Duet AI in Google Cloud Learning Path, or dive directly into each of the role-based courses (1 to 2.5 hours each) within the learning path:
Duet AI for Application Developers
Duet AI for Cloud Architects
Duet AI for Data Scientists and Analysts
Duet AI for Network Engineers
Duet AI for Security Engineers
Duet AI for DevOps Engineers
Duet AI for end-to-end SDLC
Looking to build your cloud skills even more? Google Cloud Innovators Plus is a great way to access the Duet AI in Google Cloud learning path, and the entire catalog of Google Cloud Skills Boost on-demand training. Your membership gets you $1,500 in developer benefits for $299 a year, including Google Cloud Credits, a Google Cloud certification voucher, 1:1 access to experts, and more. Join here.
Read More for the details.
As more and more organizations embrace multi-cloud data architectures, a top request we constantly receive from customers is how to make cross-cloud analytics super simple and cost-effective in BigQuery. To help customers on their cross-cloud analytics journey, today we are thrilled to announce the public preview of BigQuery Omni cross-cloud materialized views (aka cross-cloud MVs). Cross-cloud MVs allow customers to very easily create a summary materialized view on GCP from base data assets available on another cloud. Cross-cloud MVs are automatically and incrementally maintained as base tables change, meaning only a minimal data transfer is necessary to keep the materialized view on GCP in sync. The result is an industry-first, cost-effective and scalable capability that empowers customers to perform frictionless, efficient, and economical cross-cloud analytics.
The demand for cross-cloud MVs has been growing, driven by customers wanting to do more with their data across cloud platforms while leaving large data assets intact in separate clouds. Today, analytics on data assets across clouds is cumbersome, as it usually involves copying or replicating large datasets across cloud providers. This process is not only burdensome to manage, but also incurs substantial data transfer costs. By adopting cross-cloud MVs, customers are looking to optimize these processes, seeking both efficiency and cost-effectiveness in their data operations.
Some of the key customer use cases where cross-cloud MVs can greatly simplify workflows while reducing costs include:
Predictive analytics: Organizations are increasingly eager to harness Google Cloud’s cutting-edge AI/ML capabilities with Vertex AI integration. With the ability to effortlessly build ML models on GCP using cross-cloud MVs — leveraging Google’s large language foundational models like PaLM 2 and Gemini — customers are excited to discover new ways of interacting with their data. To leverage the power of Vertex AI and Google Cloud’s large language models (LLMs), cross-cloud MVs seamlessly ingest and combine data across a customer’s multi-cloud environments.
Cross-cloud or cross-region data summarization for compliance: There’s an emerging set of privacy use cases where raw data cannot leave the source region, as it must adhere to stringent data sovereignty regulations. However, there is a viable workaround for cross-regional or cross-cloud data sharing and collaboration: aggregating, summarizing, and rolling up the data. This processed data, which complies with privacy standards, can be replicated across regions for sharing and consumption with other teams and partner organizations, and kept up to date incrementally through cross-cloud MVs.
Marketing analytics: Organizations often find themselves combining data sources from various cloud platforms. A common scenario involves the integration of CRM, user profile, or transaction data on one cloud with campaign management or ads-related data in Google Ads Data Hub. This integration is critical to ensure a privacy-safe method of segmenting customers, managing campaigns, and meeting other marketing analytics requirements. Some of the user profile and transaction data lives in another cloud, and oftentimes only a subset or summary of this data needs to be brought over through cross-cloud MVs to join with Ads or campaign data available on the Google platform. Customers also want to ensure these integrations meet their high levels of efficiency and provide governance controls over their data.
Near real-time business analytics: Real-time insights rely on powerful business intelligence (BI) dashboards and reporting tools. These analytical applications are crucial as they aggregate and integrate data from multiple sources. To reflect the most up-to-date business information, these dashboards require regular updates at intervals aligned with business needs — whether that’s hourly, daily, or weekly. Cross-cloud MVs enable consistent updates with the latest data regardless of where data assets live, ensuring that derived insights are relevant and timely. Combining these capabilities with GCP’s powerful Looker platform and semantic models further provides value and updated insights to end users.
BigQuery Omni’s cross-cloud MV solution has a unique set of features and benefits:
Ease of use: Cross-cloud MVs simplify the process of combining and analyzing data regardless of whether the data assets live on different clouds. They minimize the complexity of running and managing analytics pipelines and avoid large-scale duplication of data, especially when dealing with frequently changing data.
Significant cost reduction: Cross-cloud MVs significantly reduce the egress costs of bringing data across clouds by only transferring incremental data when needed.
Automatic refresh: Designed for convenience, cross-cloud MVs work out of the box, automatically refreshing and incrementally updating based on user specifications.
Unified governance: BigQuery Omni provides secure and governed access to materialized views in both clouds. This feature is crucial for both local analytics and cross-cloud analytics needs.
Single pane of glass: The solution provides seamless access through the familiar BigQuery user interface for defining, querying, and managing cross-cloud MVs.
Cross-cloud MVs offer significant benefits across a variety of industries and customer scenarios as illustrated below:
Healthcare: Data scientists in one department want to bring summaries of their data at regular intervals (daily or weekly) from AWS to Google Cloud (BigQuery) for aggregate analytics and model building.
Media and entertainment: A marketing analyst wants to join, de-duplicate, and segment AdsWhiz data from AWS with listener and audience data on Google Cloud on a weekly basis to expand audience reach.
Telecom: A data analyst seeks to periodically centralize log-level data from AWS and streaming data from an ads server for revenue targeting.
Education: Data analysts need to join product instrumentation data on AWS with enterprise-level data on Google Cloud. As new products are added to their platform, they want to simplify their company’s ETL pipeline and cost challenges by using cross-cloud MVs.
Retail: A marketing analyst needs to join their user profile data from Azure with campaign data in Ads Data Hub in a privacy-safe manner. With new retail users coming into the system daily, they rely on cross-cloud MVs for regular combined analysis, ensuring up-to-date data processing.
With cross-cloud MVs, we are empowering organizations to break down cloud silos and harness the power of their rich, changing data, in near real-time in Google Cloud. This breakthrough capability is not only shaping the future of cross-cloud analytics, but also multi-cloud architectures, enabling customers to achieve new levels of flexibility, cost-effectiveness, and actionable insights. With a powerful combo of cross-cloud analytics with BigQuery Omni and agile semantics with Looker, we are able to bring rich and actionable insights faster and more easily to data consumers.
Demo: creating a cross-cloud MV in BigQuery using SQL
Demo: performing effective and cost-efficient cross-cloud analytics with cross-cloud MVs
To learn more about cross-cloud MVs and how they can transform your organization’s cross-cloud analytics capabilities, watch the demo, explore the public documentation and try the product in action now.
Read More for the details.
In today’s competitive business landscape, cost is a crucial factor for success. Spanner is Google’s fully managed database service for both relational and non-relational workloads, offering strong consistency at a global level, high performance at virtually unlimited scale, and high availability with an up to five-nines SLA. Spanner is a versatile database that caters to a vast array of workloads, powering some of the world’s most popular products. And with Spanner’s recent price performance improvements, it’s even more affordable, due to improved throughput and storage per node.
In this blog post, we will demonstrate how we benchmark a subset of these workloads, key-value workloads, using the industry-standard Yahoo! Cloud Serving Benchmark (YCSB). YCSB is an open-source benchmark suite for evaluating and comparing the performance of key-value workloads across different databases. While Spanner is often associated with relational workloads, key-value workloads are also common on Spanner among industry-leading enterprises.
The results show that our recent price-performance improvements provide at least a 50% increase in throughput per node, with no change in overall system costs. When running at the recommended CPU utilization in regional configurations, this throughput is delivered with single-digit millisecond latency.
With that out of the way, let’s dig deeper into the YCSB benchmarking setup that we used to derive the above results.
We used the YCSB benchmark on Spanner to run these benchmarks.
We want to mimic typical production workloads with our benchmark. To that end, we picked the following characteristics:
Read staleness: The read workload comparison is based on “consistent/strong reads” and not “stale/snapshot” reads.
Read/write ratio: The evaluation was done for a 100% read-only workload and an 80/20 split read/write workload.
Data distribution: We use the Zipfian distribution for our benchmarks since it more closely represents real-world scenarios.
Dataset size: A workload dataset size of 1TB was used for this benchmark. This ensures that most requests will not be served from memory.
Instance configuration: We used a three-node Spanner instance in a regional instance configuration.
CPU utilization: We present two sets of benchmarks here: the first set runs at the recommended 65% CPU utilization for latency-sensitive and failover-safe workloads; the second set runs at near 100% CPU utilization to maximize Spanner’s throughput. The performance numbers at 100% CPU utilization are also published in our public documentation.
Client VMs: We used Google Compute Engine to host the client VMs, which are co-located in the same region as the Spanner instance.
To simplify cost estimates, we introduce a normalized metric called “Dollars per kQPS-Hr.” Simply put, this is the hourly cost for sustaining 1k QPS for a given workload.
Spanner’s cost is determined by the amount of compute provisioned by the customer. For example, a single-node regional instance in us-east4 costs $0.99 per hour. For the results that we present here, we consider the cost of a Spanner node without any discounts for long-term committed usage, e.g., committed use discounts. Those discounts would only make price-performance better, due to reduced cost per node. For more details on Spanner pricing, please refer to the official pricing page.
For simplicity, we also exclude storage costs from our pricing calculations, since we are measuring the cost of each additional “thousand” QPS.
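With those simplifications, the metric reduces to a simple ratio (a back-of-the-envelope formulation for this post, not an official pricing formula):

$$\text{Dollars per kQPS-hr} = \frac{\text{nodes} \times \text{price per node-hour}}{\text{throughput in kQPS}}$$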
The results below show reduction in costs for different workloads due to the ~50% improvement in throughput per node.
Benchmark 1: 100% Reads (recommended 65% CPU utilization)
YCSB Configuration: We used the Workloadc.yaml configuration in the recommended_utilization_benchmarks folder to run this benchmark.
Deriving the metric for Spanner: A three-node Spanner instance with a preloaded 1TB of data produced a throughput of 37,500 QPS at the recommended 65% CPU utilization. This equates to 12,500 QPS per Spanner node.
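Plugging those numbers into the ratio above, and assuming the undiscounted us-east4 price of $0.99 per node-hour quoted earlier:

$$\frac{3 \times \$0.99}{37.5\ \text{kQPS}} \approx \$0.079\ \text{per kQPS-hr}$$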
It now costs just 7.9 cents to sustain 1k read QPS for one hour in Spanner.
Benchmark 2: 80% Read-20% Write (recommended 65% CPU utilization)
YCSB Configuration: We used the ReadWrite_80_20.yaml configuration in the recommended_utilization_benchmarks folder to run this benchmark.
Deriving the metric for Spanner: A three-node Spanner instance with a preloaded 1TB of data produced a throughput of 15,600 QPS at the recommended 65% CPU utilization. This equates to 5,200 QPS per Spanner node.
It now costs just 19 cents to sustain 1k 80/20 Read-Write QPS for one hour in Spanner.
Benchmark 1: 100% Reads (near 100% CPU utilization)
YCSB Configuration: We used the Workloadc.yaml configuration in the throughput_benchmarks folder to run this benchmark.
Deriving the metric for Spanner: A three-node Spanner instance with a preloaded 1TB of data produced a throughput of 67,500 QPS at near 100% CPU utilization. This equates to 22,500 QPS per Spanner node.
It now costs just 4.4 cents to sustain 1k Read QPS for one hour in Spanner at ~100% CPU utilization.
Benchmark 2: 80% Read-20% Write (near 100% CPU utilization)
YCSB Configuration: We used the ReadWrite_80_20.yaml configuration in the throughput_benchmarks folder to run this benchmark.
Deriving the metric for Spanner: A three-node Spanner instance with a preloaded 1TB of data produced a throughput of 35,400 QPS at near 100% CPU utilization. This equates to 11,800 QPS per Spanner node.
It now costs just 8.3 cents to sustain 1k 80/20 Read-Write QPS for one hour in Spanner at ~100% CPU utilization.
Please note that benchmark results may vary slightly depending on regional configuration. For instance, in the recommended utilization benchmarks, CPU utilization may deviate slightly from the recommended 65% target due to the test’s fixed QPS. However, this does not impact latency.
The 100% CPU utilization benchmarks represent the approximate theoretical limits of Spanner. In practice, throughput can vary based on a number of factors, such as background system tasks.
We recognize the importance of cost-efficiency and remain committed to performance improvements. We want customers to have access to a database that delivers strong performance, near-limitless scalability and high availability. Spanner’s recent performance improvements let customers realize cost savings through improved throughput. These improvements are available for all Spanner customers without needing to reprovision, incur downtime, or perform any user action.
Learn more about Spanner and how customers are using it today. Or try it yourself for free for 90 days, or for as little as $65 USD/month for a production-ready instance that grows with your business.
Read More for the details.
There’s more demand than ever for the digital products and services that people and businesses rely on every day. Greater digital demand in turn requires greater data center capacity, and here at Google we’re committed to finding sustainable ways to deliver that capacity.
Today, we’re sharing our new framework to more precisely evaluate the health of a local community’s watershed and establish a data-driven approach to advancing responsible water use in our data centers. Building on our climate-conscious approach to data center cooling, the framework is an important element of our commitment to water stewardship in the communities where we operate.
When we build a data center, we consider a variety of factors, including proximity to customers or users, the presence of a community that’s excited to work with us, and the availability of natural resources that align with our sustainability and climate goals. Water cooling is generally more energy-efficient than air cooling, but with every campus, we ask an important question: Is it environmentally responsible to use water to cool our data center?
To find the answer in the past, we used publicly available tools to gain high-level insights, or provide an “Earth View” of the water challenges facing large aquifers or river basins, such as the Columbia River in the Pacific Northwest, or the Rhine River in northern Europe. But when we wanted to get more of a local “street view” and dive deeper into the state of a specific water source — like the Dog River in Oregon, or the Eems Canal in Groningen, Netherlands — we struggled to find a tool that sufficiently captured the local water challenges to inform us about how to cool the data center in a climate-conscious way.
We recognize that addressing global water challenges requires local solutions, so we developed a data-driven water risk framework in collaboration with a team of industry-leading environmental scientists, hydrologists, and water sustainability experts.
Our framework provides an actionable and repeatable decision-making process for new data centers and helps us evaluate evolving water risks at existing sites, with the specificity we need to understand watershed health at a hyperlocal level. The evaluation results tell us if a watershed’s risk level is high enough that we should consider alternative solutions like reclaimed water or air-cooling technology, which uses minimal water but consumes more energy.
The framework has two main steps to assess the water risk level for a data center location:
Evaluate responsible use. We compare the current and future demand for water — from both the community and our potential data center — to the available supply, using data from the local utility and water district management plans. In this evaluation, we also consider the recent water-level history compared to levels expected of a healthy watershed, using streamflow and groundwater monitoring data, as well as whether the local water authority has rationed water use. Based on these indicators, our watershed health experts determine if a water source is considered at high risk of scarcity or depletion. If it is high risk, we will pursue alternative sources or cooling solutions at the data center campus.
Measure composite risk. We look at the feasibility of treating and delivering water to and from the data center, whether with existing infrastructure or by collaborating with a utility partner to build new solutions such as reclaimed wastewater or an industrial water solution. In addition, we assess community access to water, regulatory risk, local sentiment, and any climate trends that could affect the future water supply, such as reduced precipitation or increased drought.
We designed this framework to take a comprehensive look at the water-related risks for each potential data center location. The results provide context for locally relevant watershed challenges and how our own investments in improved or expanded infrastructure or replenishment projects can help support local watershed health. Given the dynamic nature of water resources, we will repeat these assessments across our portfolio every three to five years to identify new and ongoing risks at existing sites that may require mitigation.
We have integrated the water risk framework into our planning and development processes for all new data center locations.
Notably, the responsible-use evaluation completed during the planning stage for our recently announced data center in Mesa, Arizona, found the local water source was at high risk of depletion and scarcity. Therefore, we opted to air-cool the data center, minimizing our impact on the local watershed. To further support water security in the area, we also donated to Salt River Project’s (SRP) effort focused on watershed restoration and wildfire risk. This collaboration with SRP was a follow-up to our 2021 investment in the Colorado River Indian Tribes system conservation and canal-lining projects to improve water conservation in the Southwest.
Collective effort and transparency are necessary to keep global watersheds thriving and healthy, as we work to deliver products and services that people and businesses use every day. We want to share what we’ve learned through our work with climate-conscious cooling and provide an example for how other industrial water users can make responsible water decisions for their own operations.
Our water risk framework is one of many pieces of responsible water use in action. Implementing this framework is another step on our water stewardship journey and complements our ambitious 24/7 carbon-free energy goal. Watershed health is both complex and dynamic, and as we make progress on our framework, we will continue to refine it, sharing lessons learned with others who aspire to practice responsible water use in their own organizations and communities. Check out our water risk framework white paper for more detail.
Read More for the details.
Ride hailing and cab services have been forever transformed by smartphone apps that digitally match drivers and riders for trips. While some of the biggest names in the ride-hailing industry are digital natives, cab services that have been around for decades are working to modernize and succeed in this competitive and tech-driven industry.
eCabs, a Malta-based digital ride hailing company, started off as a dial-a-cab provider in 2010, working to disrupt the industry with a high-quality, professional, and reliable 24/7 service.
As mobile apps began to be introduced into the industry globally, the company’s leaders began digitally transforming the company to create a tech stack of its own.
“We were keeping our eyes on the development of the international ride-hailing industry from inception,” says Matthew Bezzina, Chief Executive Officer at eCabs. “It was evident that embracing and investing in technology would be critical to our growth and success. We also knew that a powerful combination would come from coupling our hands-on experience of running a mobility operation with a forward-looking technology mindset.”
eCabs built its own tech, creating a platform that could be tailored for other operators and tweaked to accommodate any jurisdiction’s particular needs. The company worked with Google Cloud and partner TIM Enterprise to achieve its vision.
eCabs initially started with an IT environment built on bare metal architecture but quickly recognized the limitations of this approach.
“Having redundancies on bare metal did not match our ambitions, and we knew that a robust international technology service required a borderless solution that did not rely on the limited connectivity of a European island nation like Malta,” says Luca Di Michele, Chief Technology Officer at eCabs. “The requirement to migrate to the cloud with all the invaluable benefits that came with it was very clear.”
eCabs’ platform was first rolled out in its home country of Malta, which has the EU’s densest road network. This made it an ideal sandbox to test and refine its products. The company’s ambitions, however, always stretched beyond its home’s shores. Redundancies were vital to maximize uptime and reliability, while scalability was necessary to manage the massive spikes in demand common in the ride hailing market.
“Migrating to Google Cloud was a natural progression for us because it offers greater flexibility, scalability, and reliability,” says Bezzina.
“This has enabled us to grow locally and meet the demands of our growing international business. It’s also improved our platform’s uptime and makes it easier to manage spikes in demand as the platform met its planned growth across different markets.”
eCabs uses Google Kubernetes Engine (GKE) to power its microservices architecture that includes unique environments for each of the other international tenants it serves.
GKE makes it easy to replicate environments, which has allowed eCabs to quickly get new customers up and running as its platform becomes more popular across Europe.
“When we have a new potential customer, we can easily build a test environment, deploy it, run the setups in accordance with the prospect’s preferences, and show them the value of the platform,” says Di Michele. “The ability to replicate environments at speed while providing redundancy and disaster recovery capabilities enables our growth and grants our tenants peace of mind.”
Google Cloud partner TIM Enterprise has helped eCabs since the outset of its migration, assisting with provisioning, setup, maintenance, and more.
This helped eCabs get through the more challenging elements of the switch from bare metal to GKE architecture, while opening the door to other data cloud solutions such as BigQuery.
With TIM Enterprise’s guidance, eCabs has effectively begun to use BigQuery for all its data needs across regions, allowing it to offer customers insights on platform use for planning and performance review purposes. Given the importance of data analytics in ride hailing businesses, this checked a critical box for eCabs while setting the stage for the introduction of machine learning and generative AI projects.
Now, TIM Enterprise and Google Cloud continue to share new opportunities and insights into eCabs’ use of cloud computing to unlock savings, improve scalability, and reduce the burden of infrastructure management.
More importantly, TIM Enterprise and Google Cloud have enabled eCabs to protect its platform against moments that would have previously presented significant challenges.
“The ease in which we can scale is crucial, because we have customers of all sizes across regions,” says Di Michele.
“If we can’t scale quickly, our customers could see downtime when their riders rely most on their services like during a major sporting event. Google Cloud and TIM Enterprise have helped to position us as an always-on option, which positively impacts our customers and our customers’ customers.”
Given that eCabs is a group that houses a tech company, a 24/7 ride-hailing business, and a full fleet, it has real-world experience in the issues legacy operators face and the pains of transitioning to a digital-first operation.
With Google Cloud tools supporting its vision, eCabs is enabling legacy cab companies and new entrants across Europe and the wider region to digitally transform and compete against the biggest names in digital ride hailing.
“Google Cloud has always been there to help,” says Bezzina. “Google Cloud and TIM Enterprise offer excellent support irrespective of the size of your business. This is a testament to the power of Google Cloud. We know they believe in us and take action to drive our success forward.”
Looking ahead, eCabs is continuing its strategy of international expansion as data plays an ever-greater role in the market.
“Eventually, ride hailing platforms like ours will be primarily used as data aggregators,” says Bezzina. “The data we gather and analyze now will eventually be used to inform decisions related to autonomous vehicles, like where fleets should be placed. Google Cloud and TIM Enterprise enable us to support current market opportunities and be prepared for tomorrow.”
Check out more stories on the Google Cloud blog to learn how other companies are using Google Cloud and partners like TIM Enterprise to power their operations.
Read More for the details.