Azure – Update type on your application insights troubleshooting guides by 31 March 2024
Troubleshooting guides within Azure Monitor application insights will be retired on 31 March 2024 – update the type to workbook.
Read More for the details.
Starting today, AWS Billing Conductor (ABC) customers can view proforma costs in AWS Cost Explorer. This release allows ABC customers’ account owners to analyze and save reports of their proforma costs. For example, organizations can use the feature to grant cross-account billing visibility for their business units. Partners can use the feature to give their customers a cost reporting experience in AWS Cost Explorer that matches the customer’s specific pricing agreement.
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex and M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in Europe (Spain, Stockholm) regions. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
Read More for the details.
We are pleased to announce that, starting today, you can preserve client IP addresses for traffic routed to Network Load Balancers (NLBs) through AWS Global Accelerator. With this feature, you can meet security and compliance requirements around client IP addresses, apply client-specific logic such as IP address or location-based filtering, or gather connection statistics. You can also use client IP address preservation to serve personalized content in your applications.
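As a rough illustration, here is a minimal boto3 sketch of enabling client IP preservation on an existing NLB endpoint; the endpoint group and load balancer ARNs are placeholders.

```python
# Hedged sketch: enable client IP preservation for an NLB endpoint behind
# AWS Global Accelerator. The ARNs below are placeholders.
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/example/listener/example/endpoint-group/example",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/my-nlb/0123456789abcdef",
            "Weight": 128,
            "ClientIPPreservationEnabled": True,  # pass the original client IP through to the NLB
        }
    ],
)
```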
Read More for the details.
Starting today, you can hibernate Amazon EC2 M7i and M7i-flex instances. Hibernation provides the convenience of pausing your instances and resuming them later from a saved state, so your applications pick up right where they left off. With hibernation, you can maintain pre-warmed instances that reach a productive state faster without modifying existing applications.
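A minimal boto3 sketch of the flow, assuming a hibernation-compatible AMI with an encrypted root volume; the AMI ID is a placeholder.

```python
# Hedged sketch: launch an M7i instance with hibernation enabled, then
# hibernate it instead of performing a plain stop. The AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # hibernation-compatible AMI
    InstanceType="m7i.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},   # hibernation must be enabled at launch
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later: pause the instance, saving RAM contents to the encrypted EBS root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```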
Read More for the details.
AWS Outposts rack is now supported in the AWS Middle East (UAE) and the AWS Israel (Tel Aviv) Regions. AWS Outposts rack is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or co-location space for a truly consistent hybrid experience.
Read More for the details.
Amazon GuardDuty announces a new capability that helps customers streamline and simplify how they set up and administer protection plan coverage across all member accounts in an organization. Delegated Administrators (DAs) can now enable one or more GuardDuty features for all existing and newly added members of an organization within the same region, helping ensure consistent security coverage across the organization.
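For illustration, a hedged boto3 sketch of what this might look like from the delegated administrator account; the feature names shown are examples only, so check the GuardDuty API reference for the identifiers you need.

```python
# Hedged sketch: from the GuardDuty delegated administrator account, auto-enable
# selected protection plans for all existing and newly added member accounts in
# the current region. Feature names here are examples only.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnableOrganizationMembers="ALL",   # cover existing and newly added members
    Features=[
        {"Name": "S3_DATA_EVENTS", "AutoEnable": "ALL"},
        {"Name": "EBS_MALWARE_PROTECTION", "AutoEnable": "ALL"},
    ],
)
```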
Read More for the details.
Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes in Amazon SageMaker Studio. SageMaker Data Wrangler enables you to access data from a wide variety of popular sources including Amazon S3, Amazon Athena, Amazon Redshift, Amazon EMR, Snowflake, and over 50 other third-party sources. Starting today, you can use role-based access control with AWS Lake Formation in EMR Hive and Presto connections to create datasets for ML in SageMaker Data Wrangler.
Read More for the details.
Amazon ElastiCache for Memcached now makes it simpler and faster for you to get started with setting up an ElastiCache for Memcached cluster. The new console experience offers streamlined navigation and requires only minimal settings to configure a cluster in just a few clicks.
Read More for the details.
Starting today, Amazon Virtual Private Cloud (VPC) customers can use Reachability Analyzer and Network Access Analyzer in Asia Pacific (Osaka) region.
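As a rough sketch, running a Reachability Analyzer check in the newly supported Osaka region with boto3 might look like this; the source and destination instance IDs are placeholders.

```python
# Hedged sketch: run a Reachability Analyzer check in Asia Pacific (Osaka).
# The source and destination instance IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-3")

path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",         # e.g. a web server instance
    Destination="i-0fedcba9876543210",    # e.g. a database instance
    Protocol="tcp",
    DestinationPort=5432,
)

analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"],
)
print(analysis["NetworkInsightsAnalysis"]["Status"])
```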
Read More for the details.
Amazon Monitron is an end-to-end system that uses machine learning to detect abnormal conditions in industrial equipment to enable predictive maintenance. Amazon Monitron includes wireless sensors to capture vibration and temperature data; gateways to automatically transfer data to the AWS Cloud; the Amazon Monitron service, which analyzes the data for abnormal machine patterns using ML; and a companion mobile app to set up the devices and receive reports on operating behavior and alerts to potential failures in machinery.
Read More for the details.
Cross Subscription Restore support for Azure VM is generally available.
Read More for the details.
Protect applications from abnormally high levels of traffic with rate-limit rules on Azure’s regional Web Application Firewall running on Application Gateway.
Read More for the details.
Imagine a world where you can see some of the most powerful hardware on the planet. Now imagine a place that combines that high-performance hardware with interactive experiences for you to see, feel, and touch.
When you come to Google Cloud Next, be sure to enter the Hardware-verse, our showcase area that brings experiences to life with the latest in Google Cloud hardware offerings, including Google Distributed Cloud (GDC) and Cloud Tensor Processing Units (TPUs).
The Hardware-verse showcases Google Cloud infrastructure products designed for edge, private data center, and hybrid cloud deployments. Here’s a peek at what you’ll see:
Google Distributed Cloud is a fully managed hardware and software product that delivers modern applications equipped with AI, security, and open source at the edge. Available for enterprises and the public sector, customers can leverage GDC for best-in-class AI, security, and open-source workloads with data independence and control. We’ll be using GDC to demonstrate:
Vertex AI Vision, showing you how to easily build, deploy, and manage computer vision applications with a fully managed, end-to-end application development environment that reduces time to build computer vision applications from days to minutes at one tenth the cost of current offerings. Learn more about GDC here, and Vertex AI here.
Cloud TPUs: Google’s custom hardware lets you develop large and complex deep learning models such as large language models (LLMs). Cloud TPUs optimize performance and cost for all AI workloads, from training to inference, while providing high reliability, availability, and security. Learn more here.
NetApp: Simplify how you migrate and run enterprise workloads in the cloud with innovative data services from NetApp. Realize cloud advantages faster with integrated solutions tailored to your unique needs. Learn more about NetApp and Google Cloud here.
Intel: See 4th Generation Intel® Xeon® Scalable Processors power Google Distributed Cloud Hosted and 3rd Generation Intel® Xeon® Scalable Processors power Google Distributed Cloud Edge. Learn more about Intel and Google Cloud here.
In the Hardware-verse, you’ll see and experience first-hand how Google hardware can unlock the potential of AI and your data. You’ll see the same types of hardware that Google Cloud uses to run its applications, and experience how to leverage them in a series of expert-led demonstrations for three real-world applications:
Experience real-time visual inspection at the edge. Learn how to use Google Distributed Cloud Edge to scale quality control, inventory analysis, and staff health and hygiene with best-in-class AI, security, and open ecosystem. Add to your agenda here.
Address compliance needs with an air-gapped cloud solution. See how to leverage the best-in-class AI and analytics services in a fully disconnected environment, for highly regulated industries and use cases such as emergency preparedness, response, and recovery. Add to your agenda here.
Develop generative AI models with Cloud TPUs. Watch how Google Cloud TPUs accelerate the training and inference of machine learning models with optimized performance-to-cost efficiencies at scale. See how they operate in tandem with an open and robust software ecosystem. Add to your agenda here.
“Organizations often face challenging decisions such as data residency requirements, connectivity demands, low-latency needs, strict regulatory compliance, and massive scale compute resources for AI,” says Sally Revell, Director, Modern Infrastructure Product Marketing. “The Hardware-verse experience at Google Cloud Next 2023 is designed to give customers a unique opportunity to get hands-on with the most common use cases and see what’s under the hood of Google Distributed Cloud and Google’s Tensor Processing Units.”
Join the experts from Google Cloud, customers, and partners as we come together to share challenges, solutions, and game-changing technologies in San Francisco at Next ‘23 from August 29-31!
If you’re unable to attend in-person, please get your digital pass here and view these sessions:
Running AI at the edge to deliver modern customer experiences Session ARC 101
Mind the air gap: How cloud is addressing today’s sovereignty needs Session ARC100
What’s next for architects and IT professionals Spotlight SPTL202
Read More for the details.
We are thrilled to announce general availability of a new feature that simplifies the export of tabular data from Google Earth Engine into BigQuery.
Earth Engine and BigQuery share a common goal to make large-scale data processing accessible and usable by a wide range of people and applications. Earth Engine focuses on image (raster) processing, whereas BigQuery is optimized for processing large tabular datasets and is a critical part of many Google Cloud data analytics workflows. This new connector is our first major step towards deeper interoperability between the two platforms, improving ease-of-use for workflows that use both services, and enabling new analyses that combine raster and tabular data.
Woza, a Google partner with Google Cloud Ready – Sustainability designation, is a startup that develops next-gen geospatial intelligence technology using Earth Engine and BigQuery, helping governments and corporations tackle global challenges such as climate change, geographic risk, and sustainability. Utilizing the new connector, Woza is able to integrate new and more intricate variables into their crop and supply chain sustainability solutions.
The actionable sustainability insights and solutions Woza provides are beneficial for Consumer Packaged Goods (CPG) companies looking to optimize their supply chains by identifying the most environmentally responsible suppliers and participants in their supply chain, right down to a specific farm. Access to cloud-based information, without the need for technical infrastructure, enables small and medium-sized producers to validate their processes instantly and in a highly accessible manner, opening the possibility for them to enter more stringent markets, such as the European Union and USA.
Ag-tech company Sima created an intelligent agriculture solution that allows CPGs to monitor their fields, geolocate data, analyze information, and generate spray application orders. To build this solution, they employ Woza innovations that allow them to perform sustainability validations and to support and improve on-field productivity.
In Sima’s solution, information is displayed through grids, allowing financial and operational actors within the supply chain to better identify potential areas for cultivation that meet sustainability guidelines, while also identifying and disqualifying areas that do not meet sustainability criteria. This type of aggregated geospatial analysis from Earth Engine and BigQuery allows for optimization of resources for large regions, which is not feasible with manual, on-the-ground, field-by-field analysis.
“The recent fusion of BigQuery with Google Earth Engine holds the key to unlocking new possibilities for cutting-edge geospatial architectures,” says Sebastian Priolo, CEO of Woza. “This integration paves the way for transforming satellite images into tabular data, expanding the horizons of geospatial data handling. Together, they form a dynamic duo that empowers us to create high-performing and scalable applications with unprecedented ease.”
“This connector revolutionizes the traditional workflow for geospatial data science teams, making data accessibility seamless for generating valuable insights. Previously, tedious activities like downloading data to a cloud storage and uploading to an analytical database for feature selection, transformation and extraction were a must when working with geospatial datasets. Now this connector enables us to optimize data availability and reduce delivery times by up to 5x, focusing only on real value-adding tasks.”
Check out this complete guide that walks through the process of exporting data from Earth Engine to BigQuery. The guide builds a real-world example of using Google’s geospatial tools to identify flooded roads.
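As a rough sketch of the workflow the guide covers, an export from the Earth Engine Python API might look like the following; the asset, project, dataset, and table names are hypothetical, and the exact parameter names should be verified against the current API documentation.

```python
# Hedged sketch: export a tabular FeatureCollection from Earth Engine to a
# BigQuery table. Asset and table names are placeholders; confirm the exact
# export parameters against the Earth Engine documentation.
import ee

ee.Initialize()

# Example vector data, e.g. road segments flagged as flooded.
roads = ee.FeatureCollection("projects/my-project/assets/flooded_roads")

task = ee.batch.Export.table.toBigQuery(
    collection=roads,
    description="flooded_roads_to_bq",
    table="my-project.geospatial.flooded_roads",  # project.dataset.table
)
task.start()
```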
To hear more about this feature, please join our session at Google Cloud Next ‘23: “Saving the world with geospatial data: Sustainability analytics on Google Cloud”, where we will be presenting customer use cases having a transformative impact on sustainability efforts worldwide.
Read More for the details.
Editor’s note: This post is part of a series showcasing our partners and their solutions that are Built with BigQuery.
The footprint of humanity is evolving substantially – from climate change alone, scientists estimate that more than 600 million people on the planet have already been stranded outside of habitats that best support life. Other factors such as natural disasters, conflict, a global pandemic, and economic opportunity are changing the map faster than traditional predictive platforms and systems can keep pace with. How we understand this change and anticipate its direction impacts how governments allocate social support, development institutions finance sustainable development initiatives, and commercial enterprises pursue growth in the most inclusive and low-carbon manner possible.
Atlas AI has built a geospatial artificial intelligence platform that helps every organization anticipate changing societal conditions — where people live, where wealth and poverty are concentrated, how the physical makeup of communities is evolving, and more — to determine where to invest today to prepare for the world of tomorrow.
There are a range of alternative geospatial data sources that can help companies better understand their markets. However, while these additional sources have been available to companies for years, this data has not yet resulted in greater awareness of market changes or the ability to respond to them with agility.
There are many reasons behind this, including:
The available data sources only tell one part of the story. But many different types of data — each archaic in their own way — need to be brought together into a single integrated ‘fabric’ to tell the complete story of local change.
There isn’t enough geospatial information available from Europe and the wide range of rapidly growing markets that make up the global operating footprint for most multinational companies. And while there is better information available from the US, it’s often incomplete.
The modern data and analytics stack is still not mature when it comes to developing predictive machine learning models on geospatial data. Structuring this data for the purposes of training ML models is typically a bespoke activity within data science teams. In particular, integrating external market data with internal operations data is a poorly supported step in most data science workflows.
Even the best market data available can only tell you what happened a minute, an hour, a week or a year ago. None of this information alone can help you anticipate what comes next. If you’re investing in new supply chain capacity, the demand for your product a year from now is far more relevant than the demand a year or even a month ago.
When you compound these challenges with increasing levels of migration, growth, and upheaval in the world, you can see the massive opportunity in helping companies meet the evolving demands of our society and promote inclusive commercial growth.
Commercial enterprises most frequently use Atlas AI’s platform to plan investments and operational priorities at the intersection of sustainable supply chains and consumer demand forecasting. The Atlas AI platform monitors and anticipates changing patterns in local development for every community on the planet, predicts the implications of those trends in the context of demand for new infrastructure as well as consumer products and services, and supports decision makers to act with agility to meet that demand. Companies that utilize Atlas AI’s predictive intelligence platform invest more efficiently, accelerate revenue growth, and reduce risk of being poorly positioned in their most attractive future markets.
By working in partnership with BigQuery and utilizing Google Cloud’s infrastructure, AI platform capabilities, and geospatial analytics, Atlas AI has been able to realize their vision of a truly global AI-powered platform. Companies using the Atlas AI platform have successfully monitored changing industrial supply chains in the US, forecasted consumer demand in Indonesia, optimized farm equipment utilization in Africa and assisted vulnerable communities by providing life-saving interventions in Southeast Asia.
Companies that benefit most from Atlas AI’s platform:
Are seeking to better integrate data across the business with the latest market insights for the purposes of business intelligence and business planning
Have ambitious growth plans but are struggling to optimize predictive models with the best data and AI techniques to optimize and adapt pathways to growth
Are particularly sensitive to changing human migration and development patterns, given infrastructure and supply chains that are not easily adjusted once implemented
Are focused on bringing infrastructure, products and services to traditionally underserved communities around the world that are not as well captured in traditional market research
Engie Energy Access, the global renewable energy company, is using Atlas AI’s platform in Kenya to predict the location of the optimal customers for home solar energy-powered appliances. Home solar energy adoption is a critical component to SDG7, the sustainable development goal focused on universal energy access by 2030, but it is often unclear which households have the need and the capacity to adopt solutions from commercial providers. This is all the more challenging as solar energy companies have to grow alongside a rapidly growing market, meaning a company can’t set a fixed growth strategy and execute it.
Responding to a rapidly evolving market context is essential to commercial success. In early usage of the Atlas AI platform, Engie’s commercial team was able to experience a 48% increase in sales in the sales regions where the platform was deployed.
Underpinning this commercial and sustainability opportunity requires substantial cloud and data infrastructure to power the imagery, spatial data, artificial intelligence, and enterprise analytics capabilities needed to realize the above vision. Atlas AI is only able to achieve global insight into market trends at such a local level because of extensive integration with Google Cloud solutions such as BigQuery and years of platform development at the intersection of software development, MLOps, and data engineering.
These capabilities allow Atlas AI to:
Use a range of structured and unstructured datasets within BigQuery and GCS (Google Cloud Storage) in an “AI lakehouse” architecture to model the state of wellbeing of local communities around the world.
Harness scalable geospatial ML capabilities within Google Earth Engine and Vertex AI to integrate a range of disparate data about a community to model core commercial and social sector outcomes such as consumer demand potential or nutrition vulnerability.
Integrate operational data from our customers’ CRM, ERP and other decision support systems into a dynamic geospatial feature store built on BigQuery. This then enables real-time predictive modeling, in order to forecast where the greatest future opportunity will be for the organization to advance its commercial and sustainability goals.
Deploy this information in the manner most conducive to the end users consuming it – whether via data exchange on Analytics Hub, API integration or our Aperture® web application, enabling more commercially and societally impactful decisions, optimizing supply chains, and better responding to consumer demand signals.
This end-to-end technology stack offers an innovative and powerful illustration of the breadth of Google Cloud services, from Earth Engine, to BigQuery, to Vertex AI as depicted in the following architecture diagram; and in particular the application of these platform capabilities to globally scalable sustainability solutions.
To discover the power of Atlas AI, get in touch here.
Google is helping companies like Atlas AI build innovative applications on Google’s data cloud with simplified access to technology, helpful and dedicated engineering support, and joint go-to-market programs through the Built with BigQuery initiative. Participating companies can:
Accelerate product design and architecture through access to designated experts who can provide insight into key use cases, architectural patterns, and best practices.
Amplify success with joint marketing programs to drive awareness, generate demand, and increase adoption.
BigQuery gives ISVs the advantage of a powerful, highly scalable data warehouse that’s integrated with Google Cloud’s open, secure, sustainable platform. And with a huge partner ecosystem and support for multi-cloud, open source tools and APIs, Google provides technology companies the portability and extensibility they need to avoid data lock-in.
Click here to learn more about Built with BigQuery.
Read More for the details.
You have something in your pocket right now that possesses more computing power than spacecraft fifteen billion miles away from Earth. No, not your mobile phone — the key fob for your car! Voyager 1 and Voyager 2 launched in 1977, and forty-five years later, these little rascals still work and send data back to Earth while moving through cold, empty space at over thirty thousand MPH. Think your key fob will still be going strong in 2068? It’s also amazing that the Romans created a concrete so long-lasting that two thousand years later, structures built with it are still in use. Meanwhile, the roads outside my house develop potholes if a rabbit sneezes on them.
I’m a fan of durability because it allows you to think outside the box. If I know that my foundations are safe, I’ll be more inclined to push the boundaries to innovate outside my comfort zone because the environment can handle it. Durability needs to be a cornerstone of the application modernization conversation, as each application and environment we create for our customers is the first step of their next-generation technology and business strategy. An entire organization’s goals and dreams rest on the shoulders of the technology architectures we are building, and we need to ensure that the foundation has the strength to withstand everything the future can throw at it — and then some.
When it comes to selecting the appropriate foundational technologies for modern applications, it’s crucial to consider the longevity and reliability of the technology. Will an exciting new open-source large language model be around in a year? Or that interesting new database from PayPal? How do I choose something from the ever-growing CNCF landscape diagram? This is where the strength of open-source communities and vendor backing comes into play. For instance, Kubernetes, PostgreSQL, and Java have stood the test of time, thanks to their robust feature sets, dedicated communities, and strong vendor support.
Kubernetes provides a scalable solution for managing and deploying applications, with Google’s strong backing: As of July, we’ve made over 1,000,000 contributions to the k8s project — 2.3 times more than any other contributor. PostgreSQL, meanwhile, is one of the world’s most advanced open-source databases and offers a comprehensive set of features that cater to a wide range of data processing needs, with a vibrant community constantly enhancing its capabilities. And Java, a general-purpose programming language, has been a staple in the development community for decades, providing a reliable platform for building robust applications.
Choosing an established technology not only brings the benefit of a mature feature set but also the assurance of continuity. A new database or language model may be promising, but they lack the track record of these tested technologies. The risk of adopting such new technologies is their potential discontinuation or lack of support, which could jeopardize your application’s stability and longevity.
The Voyager team’s use of aluminum foil is a great illustration of this principle. They chose a simple, reliable, and available solution to protect sensitive instruments during their mission. The choice of aluminum foil might not have been the most cutting-edge or exciting, but it was practical, reliable, and ultimately successful. Similarly, when choosing foundational technologies for your modern applications, sometimes the “boring” choice is the best one. It’s not about chasing the latest trends; it’s about choosing what works and stands the test of time.
Vendor backing is another critical consideration when choosing foundational technologies. A reliable platform provider that runs these technologies ensures a high-uptime Service Level Agreement (SLA). For example, Google Kubernetes Engine (GKE) offers a 99.95% uptime SLA, while Bigtable “just works,” and Cloud Storage doesn’t lose data thanks to a design that supports 99.999999999% annual durability.
That’s not to say we shouldn’t experiment with new technologies and encourage our customers to do the same. Everyone needs an innovation strategy. The concept of an ‘innovation spectrum’ is helpful here. This spectrum represents different degrees of technological innovation that companies can employ based on their specific needs and capabilities. On one end of the spectrum, there’s incremental innovation, which involves making small improvements or extensions to existing products, services, or processes. On the other end, there’s radical or disruptive innovation, which involves creating entirely new products or services that can potentially disrupt entire industries.
A classic example of balancing cutting-edge technology with “boring” or legacy technology is seen in many financial institutions. They might use AI and ML for fraud detection or predictive analytics while still relying on tried and true technologies for their core banking systems. This blend of newer and older technologies allows them to benefit from the latest advancements without jeopardizing the stability and reliability of their critical operations. However, as the banking industry is finding out, that can also come at a risk of stifling innovation and can cause customers to look elsewhere.
For developers, Google Cloud’s Kubernetes-based platforms present a similar balance between innovation and stability. For example, researchers can leverage cutting-edge GPU sharing in GKE to explore the origins of the universe, while the BBC uses Cloud Run serverless containers to keep up with demands of a very busy news day.
Adopting best practices like platform engineering can provide a robust foundation for rolling out new technologies. Platform engineering focuses on creating a stable, scalable, and secure platform that allows for rapid deployment of applications. GitOps is another important practice that involves using Git as a single source of truth for declarative infrastructure and applications. With Git at the center of the delivery pipelines, developers can use familiar tools to make pull requests. Changes can be rolled out or rolled back easily, making the process of adopting new technologies smoother.
When it comes to modern application development, developers need to be able to trust that the foundational technologies they choose will be reliable and durable. Without this assurance, developers may be hesitant to take risks or explore creative solutions. To give them the confidence they need, an effective platform engineering strategy can provide a strong foundation for rolling out new technologies while ensuring stability and security.
Boring can be beautiful, especially if you’re building for the long haul. Regardless of what you’re developing, from roads and rocketships to microservices or network architecture, the fundamental structure needs to withstand everything the conceivable future can throw at it. A solid, durable foundation offers developers the capabilities they need to push the boundaries, and the reliability they need so their brainchild is still humming along, 15 billion miles away.
Read More for the details.
The Google Cloud region in Berlin-Brandenburg is now ready for customer use, our second region in Germany, and our 12th in Europe. The Berlin-Brandenburg region serves Google Cloud customers with local cloud capacity to scale their workloads and satisfy important in-country disaster recovery requirements — and it is part of our plan to invest 1 billion euros in Germany’s digital infrastructure and clean energy between 2021 and 2030.
“The opening of the Berlin-Brandenburg region is great news for our joint solution with Google, as well as for Europe’s digital sovereignty. Our unique proposition brings together European values for data and the innovative potential of Google’s global network – and is growing it with a new cloud region.” – Adel Al-Saleh, Member of the Deutsche Telekom AG Board of Management and CEO T-Systems
“Berlin is a successful enterprise and technology hub with a strong reputation beyond its borders. Our capital city is at the cutting edge, especially in artificial intelligence, the Internet of Things and 3D printing. We are going to continue to promote this development. The Berlin-Brandenburg Cloud Region is a huge opportunity for our metropolitan region. The new possibilities the Google Cloud Region brings will make Berlin-Brandenburg even more attractive as a place to do business for many different sectors. The region offers ideal conditions for companies establishing new locations, for new start-ups and for the creation of new jobs and traineeships.” – Kai Wegner, Governing Mayor of Berlin.
The new Berlin-Brandenburg region becomes available at a time when German organizations of all sizes are eager to accelerate their cloud adoption, with access to additional capacity close to their customers, and to satisfy regulators’ requirements and ensure in-country business continuity for critical workloads.
We are committed to making Google Cloud the best place for European companies to digitally transform on their terms, providing low-carbon options for customers to run their applications and infrastructure. In 2021, we announced a first-of-its-kind agreement with ENGIE to purchase clean energy to power our operations in Germany, in line with our ambitious 2030 goal to operate on carbon-free energy 24/7 everywhere we operate worldwide. By working with our energy suppliers to transform how clean energy is delivered to customers, Google is supporting the broader decarbonization of the German electricity grid.
“To create a powerful and reliable ecosystem of innovative GovTech solutions while accelerating the procurement processes of the public sector — that is our goal with GovMarket. In doing so, we set new standards for digital collaboration between private enterprises and public administrations. The new Google Cloud region in Berlin-Brandenburg is an essential component of our work. The joint sovereignty offering from Google and T-Systems leverages the scalability and innovative potential of the cloud, while committing to comply with European data protection regulations. Through our collaboration, we contribute to enhancing the government’s future in the technology sector, ensuring security and sustainability, and jointly establishing new benchmarks for digital cooperation between providers and public administrations.” – Jana Janze, Managing Director, GovMarket
“With our multi-cloud strategy, we at Deutsche Börse Group are setting new standards for cloud innovation within the financial services industry. With Google Cloud as our preferred partner, the availability of the new Google Cloud region in Berlin-Brandenburg together with the existing region in Frankfurt allows us to use two fully resilient and locally accessible cloud regions with the highest German security standards for our services.” – Hinrich Völcker, Chief Security Officer, Deutsche Börse AG
“Millions of people shop every day in REWE stores and Online at REWE Group using innovative services and technologies. With our multi-cloud strategy, we at REWE Digital are setting new standards to revolutionize digital food retail. The availability of the new Google Cloud region Berlin-Brandenburg together with the existing cloud region in Frankfurt allows us to leverage two fully resilient cloud regions with the highest German security standards for our services.” – Robert Zores, Chief Digital Innovation Officer, REWE digital
“Scalability and fast performance are mission-critical for running a successful retail business. As the leading commerce platform, we aim to prioritize seamless shopping experiences during peak sales moments, as well as supporting the international ambitions of our millions of merchants while optimizing for local data hosting. The new Google Cloud region in Berlin-Brandenburg empowers us to raise the bar for performance, scale and reliability, delivering an exceptional customer experience that enables our merchants to thrive.” – Birk Angermann, Head of Revenue, EMEA, Shopify
“In our strategic partnership with Google Cloud, we develop AI-based decision-support tools for the Lufthansa Group airlines, and by this achieve a more sustainable and resilient airline operation. The new Berlin-Brandenburg cloud region underscores Google’s commitment to technology in Germany, emphasizing geo-redundancy and superior data protection. We welcome the potential this region holds for us as a partner and Germany as a business location.” – Christian Most, Senior Director Digital Operations Optimization, Lufthansa Group
The new Berlin-Brandenburg region is now part of Google Cloud’s global network of 38 regions and 115 zones that bring cloud services to over 200 countries and territories worldwide. The new region brings high-performance, low-latency services and products to customers of all sizes, from public sector organizations, to small, medium and large enterprises and startups in Germany and the European Union. Organizations in the region will also benefit from key controls that allow them to maintain high security, data residency, and compliance standards, including specific data storage requirements.
Like all Google Cloud regions, the Berlin-Brandenburg region is connected to Google’s secure network infrastructure, comprising a system of high-capacity fiber optic cables under land and sea around the world. The new region will bring high-performance, low-latency services and products to organizations across Germany.
For help migrating to Google Cloud, please contact our local partners. For additional details on Google Cloud regions, visit our locations page, where you’ll find updates on the availability of additional services and regions. You can always contact us to help you get started or access our many educational resources. We’re excited to see what you build next with Google Cloud.
Read More for the details.
Amazon Aurora now supports Global Database Failover, a fully managed experience for performing a cross-Region database failover in response to unplanned events such as a regional outage. With Global Database Failover, you can convert a secondary region into the new primary region typically within a minute while maintaining the multi-region Global Database configuration. To learn more about Global Database Failover, see this blog.
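A hedged boto3 sketch of initiating such a failover; the cluster identifiers are placeholders, and the exact parameters (including the data-loss acknowledgment used for unplanned failovers) should be checked against the RDS API reference.

```python
# Hedged sketch: promote a secondary region of an Aurora Global Database to
# primary during a regional outage. Identifiers are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # call the API from a healthy region

rds.failover_global_cluster(
    GlobalClusterIdentifier="my-global-cluster",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-2:123456789012:cluster:my-secondary-cluster",
    AllowDataLoss=True,  # unplanned failover: accept losing any un-replicated writes
)
```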
Read More for the details.
Amazon RDS for MariaDB now supports MariaDB major version 10.11, the latest long-term maintenance release from the MariaDB community. Amazon RDS for MariaDB 10.11 includes performance improvements that enable up to 40% higher transaction throughput than prior versions. You can deploy RDS for MariaDB 10.11 on RDS Optimized Read and Optimized Write enabled instance classes for additional performance gains. The MariaDB 10.11 major version also includes improvements to authentication, information schema, system versioning, and the InnoDB storage engine made by the MariaDB community.
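A minimal boto3 sketch of provisioning a 10.11 instance; the identifiers, instance class, and credentials are placeholders.

```python
# Hedged sketch: find an available RDS for MariaDB 10.11 engine version and
# create an instance on it. Identifiers and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

versions = rds.describe_db_engine_versions(Engine="mariadb")["DBEngineVersions"]
v10_11 = [v["EngineVersion"] for v in versions if v["EngineVersion"].startswith("10.11")]

rds.create_db_instance(
    DBInstanceIdentifier="mariadb-1011-demo",
    Engine="mariadb",
    EngineVersion=v10_11[0],            # e.g. the first available 10.11.x release
    DBInstanceClass="db.r6g.large",     # or an Optimized Read/Write capable class
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
)
```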
Read More for the details.