Azure – General availability: Call automation capabilities for Azure Communication Services
A new set of APIs is now generally available, helping developers build server-based calling workflows.
Read More for the details.
Explore new capabilities with Datadog – An Azure Native ISV Service, which include a scalable solution for monitoring resources across multiple subscriptions, configuration for Cloud Security Posture Management, and the ability to mute monitors during expected VM shutdowns.
Read More for the details.
Starting today, we will be rolling out a new Experimental release of cloud-based maps styling for the Maps JavaScript API to give you more control over the look and feel of your maps than ever before. You now have more options to fine-tune your geospatial use cases by targeting more of the content that matters most. More granular control over styleable elements such as label visibility, area fill color, and stroke settings will enable you to better integrate your own design decisions into the map styles you create.
Behind these improvements is a new inventory of customizable map features that supports nearly 100 individual map elements, which is twice as many features and four times as many POI categories as the current generally available version of cloud-based maps styling. This means you can build maps with our POI data covering 200+ million businesses and places, with more control to filter and customize the data you show on your maps. You now get more granular control over the look and feel of your map styles, with new cartographic details that were previously not accessible. For example, you can now style areas of reservations, crops, and different types of water surfaces in distinct ways. You can also apply different label settings across POI categories such as tourist attractions, recreational areas, emergency services, retail, and more.
Along with the expanded taxonomy, we’ll roll out some new styling capabilities for different cartographic elements, such as geometries and labels. These new customization properties are all designed to give you even greater flexibility to create custom styles that reflect your brand or application’s unique needs.
Cloud-based maps styling is included with Dynamic Maps for the Maps JavaScript API. Developers can use cloud-based maps styling features for Dynamic Maps by creating a map ID configured for JavaScript vector maps and a new map style in the Google Cloud Console.
To try these experimental styling capabilities, choose the ‘Experimental’ option as you create a new map style.
Experimental map styles require your web app to use Maps JavaScript API version 3.47 or higher for vector-based maps and 3.49 or higher for raster-based maps. To unlock most of the experimental features, consider using the Maps JavaScript API beta channel. Over the course of this Experimental phase, we’ll expand the list of customizable map features and related stylers for better usability and greater styling control across all zoom levels.
We believe that the new cloud-based map styling in Google Maps Platform will empower you to create more engaging and informative maps than ever. Try it and let us know what you think!
For more information on Google Maps Platform, visit our website.
Read More for the details.
Amazon Location Service now supports publishing tracked device position updates on Amazon EventBridge, allowing customers to leverage position updates to deliver features tailored to the physical location of tracked devices. Developers can create applications that show devices as they move on a map, or store movement data in long-term storage to be used for purposes like asset movement insights, predictive analytics, or compliance.
Read More for the details.
Microsoft Entra ID is the new name for Azure Active Directory (Azure AD). No action is required from you.
Read More for the details.
The new second generation burstable VMs provide the lowest cost of entry to the Azure VM portfolio with greater performance.
Read More for the details.
In my previous post, I talked about how you can use a parent workflow to execute child workflows in parallel for faster overall processing time and easier detection of errors. Another useful pattern is to use a Cloud Tasks queue to create Workflows executions and that’s the topic of this post.
When your application experiences a sudden surge of traffic, it’s natural to want to handle the increased load by creating a high number of concurrent workflow executions. However, Google Cloud’s Workflows enforces quotas to prevent abuse and ensure fair resource allocation. These quotas limit the maximum number of concurrent workflow executions per region, per project. For example, Workflows currently enforces a default maximum of 2,000 concurrent executions. Once this limit is reached, any new executions beyond the quota fail with an HTTP 429 error.
A Cloud Tasks queue can help. Rather than creating Workflow executions directly, you can add Workflows execution tasks to the Cloud Tasks queue and let Cloud Tasks drain the queue at a rate that you define. This allows for better utilization of your workflow quota and ensures the smooth execution of workflows.
Let’s dive into how to set this up.
We’ll start by creating a Cloud Tasks queue. The Cloud Tasks queue acts as a buffer between the parent workflow and the child workflows, allowing us to regulate the rate of executions.
Create the Cloud Tasks queue (initially with no dispatch rate limits) with the desired name and location:
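The exact command isn’t reproduced here; a minimal sketch, assuming a hypothetical queue name and the us-central1 location, might look like this:

```
gcloud tasks queues create queue-workflow-child \
    --location=us-central1
```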
Now that we have our queue in place, let’s proceed to set up the child workflow.
The child workflow performs a specific task and returns a result to the parent workflow.
Create workflow-child.yaml to define the child workflow:
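The original definition isn’t shown here; a minimal sketch of what workflow-child.yaml might contain (the step names are illustrative):

```yaml
main:
  params: [args]
  steps:
    - init:
        assign:
          - iteration: ${args.iteration}
    - wait:
        # Simulate work by sleeping for 10 seconds.
        call: sys.sleep
        args:
          seconds: 10
    - returnResult:
        return: ${"Iteration " + string(iteration) + " done"}
```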
In this example, the child workflow receives an iteration argument from the parent workflow, simulates work by waiting for 10 seconds, and returns a string as the result.
Deploy the child workflow:
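Assuming the definition above and the us-central1 region, deployment would look something like:

```
gcloud workflows deploy workflow-child \
    --source=workflow-child.yaml \
    --location=us-central1
```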
Next, create a parent workflow in workflow-parent.yaml.
The workflow assigns some constants first. Note that it refers to the name of the child workflow and the name of the queue that sits between the parent and child workflows:
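A sketch of that first step, with illustrative values for the location, queue name, and number of iterations:

```yaml
main:
  steps:
    - init:
        assign:
          - project_id: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
          - location: "us-central1"
          - workflow_child_name: "workflow-child"
          - queue_name: "queue-workflow-child"
          - iterations: 100
```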
In the next step, Workflows creates and adds a high number of tasks (whose body is an HTTP request to execute the child workflow) to the Cloud Tasks queue:
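A sketch of that step using the Cloud Tasks connector; the field layout and service account are assumptions, so check the Cloud Tasks connector reference for the exact call shape:

```yaml
    - createTasks:
        for:
          value: iteration
          range: ${[1, iterations]}
          steps:
            - buildPayload:
                assign:
                  # The Executions API expects the workflow argument as a JSON string.
                  - payload:
                      argument: '${json.encode_to_string({"iteration": iteration})}'
            - createTask:
                call: googleapis.cloudtasks.v2.projects.locations.queues.tasks.create
                args:
                  parent: ${"projects/" + project_id + "/locations/" + location + "/queues/" + queue_name}
                  body:
                    task:
                      httpRequest:
                        httpMethod: POST
                        # Each task calls the Workflows Executions API to start the child workflow.
                        url: ${"https://workflowexecutions.googleapis.com/v1/projects/" + project_id + "/locations/" + location + "/workflows/" + workflow_child_name + "/executions"}
                        headers:
                          Content-Type: application/json
                        # Cloud Tasks expects the HTTP body as base64-encoded bytes.
                        body: ${base64.encode(json.encode(payload))}
                        oauthToken:
                          # Assumed: a service account allowed to create executions of the child workflow.
                          serviceAccountEmail: SERVICE_ACCOUNT_EMAIL
```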
Note that task creation is a non-blocking call in Workflows. Cloud Tasks takes care of running those tasks to execute child workflows asynchronously.
Deploy the parent workflow:
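Again assuming the us-central1 region:

```
gcloud workflows deploy workflow-parent \
    --source=workflow-parent.yaml \
    --location=us-central1
```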
Time to execute the parent workflow:
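One way to run it from the command line:

```
gcloud workflows run workflow-parent --location=us-central1
```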
As the parent workflow is running, you can see parallel executions of the child workflow, all executed at roughly the same time:
In this case, 100 executions is well under the concurrency limit for Workflows. Quota issues may arise if you submit thousands of executions all at once. This is when a Cloud Tasks queue and its rate limits become useful.
Let’s now apply a rate limit to the Cloud Tasks queue. In this case, 1 dispatch per second:
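A sketch of that update, assuming the queue created earlier:

```
gcloud tasks queues update queue-workflow-child \
    --location=us-central1 \
    --max-dispatches-per-second=1
```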
Execute the parent workflow again:
This time, you see a smoother execution rate (1 execution request per second):
By introducing a Cloud Tasks queue before executing a workflow and experimenting with different dispatch rates and concurrency settings, you can better utilize your Workflows quota and stay below the limits without triggering unnecessary quota-related failures.
Check out the Buffer HTTP requests with Cloud Tasks codelab, if you want to get more hands-on experience with Cloud Tasks. As always, feel free to contact me on Twitter @meteatamel for any questions or feedback.
Read More for the details.
We’re kicking off the summer by welcoming the inaugural 2023 North American Google for Startups Accelerator: Cloud cohort, our new class of cloud-native startups in the United States and Canada.
This 10-week virtual accelerator, which we announced at the Startup Summit on April 25th, brings the best of Google’s programs, products, people and technology to startups doing interesting work in the cloud. We’re excited to offer these startups cloud mentorship and technical project support, along with deep dives and workshops on product design, customer acquisition, and leadership development for technology startup founders and leaders.
We heard from some of the founders in this year’s cohort — including New York City-based Harmonic Discovery, Toronto-based Oncoustics, and Vancouver-based OneCup AI — about how they are using Google Cloud data, analytics, AI, and other technologies across healthcare, agriculture and farming, and more. Read more about their aspirations for the program below:
“The team at Harmonic Discovery is excited to scale our deep learning infrastructure for drug discovery using Google Cloud. We also want to learn best practices from the Google team on training and developing machine learning models in a cost effective way.” – Rayees Rahman CEO, Harmonic Discovery
“We’re very excited to grow our presence in the healthcare space by bringing our ultrasound based ‘virtual biopsy’ solutions to clinics and serve over 2B people with liver diseases globally. Specifically in the Google for Startups Accelerator: Cloud program, we’re looking to develop and hone our ability to efficiently scale our ML environments and processes to support the development of multiple new diagnostic products in parallel. We’re also very excited about creating an edge-cloud hybrid solution with effective distribution of AI processing across GCP and Pixel 7 Pro.” – Beth Rogozinski CEO, Oncoustics
“Our primary objective is to leverage Google Cloud’s cutting-edge technologies to enhance BETSY, our computer vision AI for animal care. Our milestones include developing advanced image recognition models and achieving real-time processing speeds for large-scale datasets. The accelerator will play a vital role in helping us refine our algorithms and optimize our infrastructure on Google Cloud.” – Mokah Shmigelsky, Co-Founder & CEO, and Geoffrey Shmigelsky, Co-Founder & CTO, OneCup AI
We received so many great applications for this program and we’re excited to welcome 12 new startups to the inaugural North American Cloud cohort:
Aiden Automotive (San Ramon, Calif.): Aiden is one of the first software solutions to provide streaming two-way communication directly with the vehicle and across vehicle brands. Aiden provides simple and intuitive 100% GDPR- and CCPA-compliant consent management, enabling car owners to choose which digital services they desire.
Binarly (Santa Monica, Calif.): Binarly’s agentless, enterprise-class AI-powered firmware security platform helps protect from advanced threats below the operating system. The company’s technology solves firmware supply chain security problems by identifying vulnerabilities, malicious firmware modifications and providing firmware SBOM visibility without access to the source code. Binarly’s cloud-agnostic solutions give enterprise security teams actionable insights, and reduce the cost and time to respond to security incidents.
Duality.ai (San Mateo, Calif.): Duality AI is an augmented digital twin platform that provides end-to-end workflows for predictive simulation and high-fidelity visualization. The platform helps close data gaps for machine learning teams working on perception problems and helps robotics teams speed up design and validation of their autonomy software.
HalloAI (Provo, Utah): Hallo is an AI-powered language learning platform for speaking. Press a button and start speaking any language with an AI teacher in three seconds.
Harmonic Discovery (New York, NY): Harmonic Discovery uses machine learning to design multi-targeted kinase drugs for cancer and autoimmune diseases.
MLtwist (Santa Clara, Calif.): MLtwist helps companies bring AI to the world faster. It gives data scientists and ML engineers access to the easiest and best way to get out of the weeds of data pipelines and back to what they enjoy and do best – designing, building, and deploying AI.
Oncoustics (Toronto, ON): Oncoustics is creating advanced solutions for low-cost and non-invasive surveillance, diagnostics, and treatment monitoring of diseases with high unmet clinical need through the use of patented AI-based solutions running on ultrasound scans. Using a handheld point of care ultrasound, Oncoustics’ first solution allows clinicians to obtain a liver health assessment within 5 minutes.
OneCup AI (Vancouver, BC): OneCup uses computer vision for animal care. Our AI, BETSY, is the eyes of the rancher when the rancher is away.
Passio AI (Menlo Park, Calif.): Passio AI is a mobile AI platform that helps developers and companies build mobile applications powered by expert-level AI and computer vision.
RealKey (San Francisco, Calif.): RealKey is one of the first collaboration platforms built specifically for finance (starting with mortgages), automating documentation collection/review, tasks, and communication for all parties (not just borrowers) involved in transactions to reduce time, effort, and costs to close.
Sevco Security Inc. (Austin, TX): Sevco Security is a leading IT asset visibility and cybersecurity company that provides the industry’s first unified asset intelligence platform, designed to address the new extended attack surface and create a trusted data repository of all devices, users, and applications that an organization uses.
VESSL AI (San Jose, Calif.): VESSL is an end-to-end MLOps platform. The platform enables MLEs to run ML workloads at any scale on any cloud, such as AWS, Google Cloud, Oracle Cloud, or on-premises.
Tech advancements continue to come at lightning speed, and it’s exciting to work with these founders and startup teams to help grow and scale their businesses. Programming for the Google for Startups Accelerator: Cloud begins mid-July and we can’t wait to see how far these startups go! You can learn more and express interest in joining an upcoming Accelerator cohort here.
Read More for the details.
In an ever-changing business landscape, the challenges of fraud detection, customer service enhancement, and supply chain optimization present themselves more frequently. With the necessity of real-time analysis growing evermore critical, solutions for these challenges deliver significant value.
For instance, the ability to identify fraudulent transactions as they transpire safeguards the financial health of your business while preserving data integrity. Next, by monitoring customer interactions and identifying potential issues, it’s possible to refine customer service processes, thereby reducing customer attrition. Lastly, vigilance in tracking the flow of goods within supply chains can aid in detecting potential disruptions. This will not only optimize your business’s supply chain operations but also guarantee that products are where they need to be at the right time.
Thus, implementing real-time analysis solutions offers enhanced protection, improved customer satisfaction, and optimized operations, catering to the vital demands of our rapidly evolving world.
In this blog post, we’re excited to showcase a real-time analytics pipeline that effectively utilizes Google Cloud services and Sigma Computing to serve these diverse scenarios.
Sigma Computing, renowned for its cloud-native business intelligence and data analytics solutions, empowers business teams by making complex data modeling and sophisticated analysis accessible to everyone, without needing extensive coding skills. By converging Sigma Computing’s data democratization capabilities with Google Cloud’s robust, scalable, and secure infrastructure, we’re serving a wide array of use-case scenarios.
The collaboration of Sigma Computing and Google Cloud brings together the best of both worlds — Google Cloud’s advanced cloud computing infrastructure and Sigma’s simplified data analysis. Together, they enable businesses to make data-driven decisions swiftly, thus promoting efficiency, innovation, and growth.
Pub/Sub: Real-time signals or events are channeled into Pub/Sub, which processes them efficiently. Its value lies in its ability to integrate and manage distributed services and handle large message volumes. It ensures reliable message delivery and high scalability, and it facilitates real-time analytics, event-driven computing, and IoT communications. These capabilities make it an asset in instantly gaining business insights, enhancing infrastructure responsiveness, and ensuring secure IoT communications.
Cloud Dataflow: This serverless, fully managed Google Cloud service is tailored for real-time streaming tasks, constructing a metrics processing pipeline from Pub/Sub. It’s invaluable for quick data processing and analytics, underpinning real-time decision-making. Its capabilities include auto-scaling to meet data volume changes and ensuring consistent, reliable performance. Typical use cases encompass live dashboards, anomaly detection, and real-time personalization.
Cloud Memorystore: Cloud Memorystore is used as the Metrics Database for its effective in-memory capabilities, supporting real-time scenarios. It offers built-in functions for diverse needs, delivering high performance and low-latency data access. Ideal for use cases like caching, session management, and gaming leaderboards, it enhances speed, scalability, and application responsiveness.
BigQuery and Cloud Storage: To further incorporate historical data into real-time analysis, you can move data from Cloud Memorystore to BigQuery using the Cloud Memorystore for Redis API. This lets you fully exploit BigQuery’s robust analytical capabilities and its scalable, efficient performance. Here’s the general workflow (a command-line sketch follows the list):
Create a Cloud Storage bucket to store the exported data.
Export the data from Cloud Memorystore in Redis RDB format.
Use the Cloud Memorystore for Redis API to copy the data from Cloud Memorystore to the Cloud Storage bucket.
Use the BigQuery Import tool to import the data from the Cloud Storage bucket into BigQuery.
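A minimal sketch of the export steps, with hypothetical names for the bucket, instance, and region (note that BigQuery cannot load the RDB format directly, so you would typically convert the exported file to CSV or JSON before importing it):

```
# Create a Cloud Storage bucket to hold the export (names are placeholders).
gsutil mb -l us-central1 gs://metrics-export-bucket

# Export the Cloud Memorystore for Redis instance to the bucket in RDB format.
gcloud redis instances export gs://metrics-export-bucket/metrics.rdb metrics-instance \
    --region=us-central1
```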
Based on your specific needs, you might also consider streaming data directly from Pub/Sub to BigQuery via Dataflow, while also streaming data into Cloud Memorystore. Should you wish to explore this further, additional information is available in this referenced Google document.
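If you go that route, one option is the Google-provided Pub/Sub-to-BigQuery Dataflow template; the job name, subscription, and table below are placeholders:

```
gcloud dataflow jobs run metrics-to-bq \
    --gcs-location=gs://dataflow-templates/latest/PubSub_Subscription_to_BigQuery \
    --region=us-central1 \
    --parameters=inputSubscription=projects/PROJECT_ID/subscriptions/metrics-sub,outputTableSpec=PROJECT_ID:metrics.raw_events
```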
Sigma: Sigma is a cloud-native analytics tool that presents users with real-time insights from large-scale data sets. Its interface resembles Google Sheets, making data analysis and visualization accessible to both code-free and code-friendly users. Despite its simplicity, Sigma can handle datasets as vast as Google BigQuery tables, turning them into interactive documents. It also provides direct, live access to data without extracting or moving it, ensuring strong data governance and security. Overall, Sigma combines the ease of spreadsheets with the power of big data analytics, creating a versatile tool for businesses. For further guidance on integrating Sigma with Google Cloud services, please visit Sigma’s documentation.
Cloud IAM, Cloud Logging, Cloud Monitoring: These services play a pivotal role in our design. Specifically,
Cloud IAM ensures secure and granular access control to your Google Cloud resources, enforcing the principle of least privilege and safeguarding your data and services.
Cloud Logging enables you to store, search, analyze, and alert on log data and events, aiding in troubleshooting and gaining insight into your applications’ operational health.
Lastly, Cloud Monitoring offers real-time visibility into the performance, uptime, and overall health of your applications, proactively notifying you of any issues so you can respond promptly, ensuring service reliability and optimal user experience.
In conclusion, our demonstrated real-time analytics pipeline offers a dynamic solution to modern-day business challenges, combining the power of Google Cloud services and Sigma Computing. This architecture not only responds rapidly to evolving situations but also integrates historical data, providing a comprehensive view of business operations. With Sigma, even users without coding skills can gain actionable insights from their data. We are confident that this solution will elevate your business decision-making processes, ensuring you stay ahead in this fast-paced world. Try this solution and experience firsthand how it can revolutionize your real-time analytics capabilities. Don’t miss out on this opportunity to upgrade your business intelligence and take your data-driven decision-making to the next level.
Acknowledgements: We sincerely express our gratitude towards David Porter and Reed Rawlings from Sigma Computing for their invaluable suggestions, recommendations, and review, and the Google Cloud and Sigma Computing team members who contributed to this collaboration.
Read More for the details.
Google Cloud partners are at the forefront of digital transformations. They’re working with organizations of all sizes and across all industries to apply cloud infrastructure, data, and artificial intelligence (AI) to modernize businesses and launch entirely new digital ones.
Customers are looking to their partners for strategic counsel, expert implementation services, and deep product domain expertise, and we’re on a journey with our partners to expand their capacity and expertise in each of these areas. At the start of this year, we announced an evolution to Partner Advantage to align with these needs, including new product-specific Premier levels for partners and greater incentives for partners who attain Premier status, earn more certifications on Google Cloud products, and drive customer success and growth.
Our partners have already responded. Google Cloud partners earned over 180% more certifications in the first half of 2023 compared to the first half of 2022. Just in the last several months, our consulting and systems integrator partners have committed to train more than 150,000 additional people to bring new generative AI products from Google Cloud to customers. Overall, our partners’ net promoter scores (NPS) from customers have increased, signifying the critical role that they play in delivering customer success.
Today, the opportunity for our partners is greater than ever. Our incentives and funding for partners are growing, and new product-specific Premier levels are launching on August 1, 2023. As we move forward with our partners, these new programs and incentives work together to support customer success.
As part of our announcement earlier this year, we said that new product-family-specific tracks would be coming to Partner Advantage, including new Premier badges associated with specific Google Cloud products. We’ve been in ongoing dialog with our partners over the last six months, and we’re excited that many have already attained one or more of these new Premier badges. Beginning August 1, 2023, these new Premier levels will take effect within the Partner Advantage program, and beginning in January 2024, our partners will receive new incentives based on these new levels.
Partners can now attain and display eight new Premier badges, aligned with key Google Cloud product areas across our three Engagement Models: Sell, Service, and Build. These are:
Premier Partner for Google Cloud in the Sell Engagement Model
Premier Partner for Google Workspace in the Sell Engagement Model
Premier Partner for Chrome in the Sell Engagement Model
Premier Partner for Google Cloud in the Service Engagement Model
Premier Partner for Google Workspace in the Service Engagement Model
Premier Partner for Chrome in the Service Engagement Model
Premier Partner for Google Cloud in the Build Engagement Model
Premier Partner for Google Workspace in the Build Engagement Model
By achieving one or more of these Premier badges, partners can demonstrate to customers that they have earned the highest level of proficiency in a particular product area and have already helped multiple customers implement and see value from Google Cloud products.
These updates were designed and implemented with input and feedback from our partners and customers, and they put Google Cloud partners in the best position to support customers’ most pressing needs. We’re excited to see strong responses from our ecosystem:
“Working with Google Cloud, we are helping customers modernize their businesses in critical areas using technologies including artificial intelligence, cybersecurity, and data analytics,” said Matt Lacey, Consulting Google Alliance leader, Deloitte Global. “We share Google Cloud’s commitment to customer success, and will continue to combine our deep industry knowledge with Google certified resources to address our clients’ most pressing cloud projects.”
“The evolutions in Google Cloud’s Partner Advantage program reflect increasing customer demand for partners who bring deep product knowledge and experience to the table,” said Robbie Clews, Sr. Director of Google Cloud at Kin + Carta. “We’re fully aligned with the Partner Advantage program and are accelerating our focus on high-quality services delivery.”
“Driving digital transformations with Google Cloud represents one of the most significant and growing opportunities ahead for Quantiphi,” said Asif Hasan, Co-founder of Quantiphi. “Google Cloud’s approach to partnering, including its partner-led approach to services delivery and its focus on enabling deeply qualified partners, is aligned with customers’ needs, and we’re excited to double down on our work with Google Cloud.”
Partners can still demonstrate even further proficiency with specific products, industries, and workloads by earning Specializations and Expertises in areas like cloud migrations, application development, AI and ML, SAP on Google Cloud, and much more.
We launched the Delivery Readiness Index (DRI) in 2022 to provide partners with a consistent measure of their readiness, capacity, and offerings in services delivery. The DRI evaluates partners’ experiential knowledge and certifications and provides them with resources and support from Google Cloud to grow their experience and expertise in key areas of customer demand, including account-specific bootcamps, training, and Challenge Labs to help partners understand project requirements before engaging with customers.
With the launch of new Premier badges and deeper product and certification requirements, partners can continue to utilize the DRI to ensure their services offerings align with particular customer projects. Similarly, customers can utilize the DRI to ensure that the partners of their choice possess the proven expertise and capacity they need for any given project.
Partners are at the forefront of Google Cloud’s go-to-market and delivery initiatives across every product area, including Google Workspace, infrastructure and cloud migrations, artificial intelligence, data and analytics, and more.
To reach more customers and to receive additional support and resources, we encourage our partners to utilize the many new go-to-market programs we have launched this year, including:
Built with Google Cloud AI helps ISV partners accelerate AI and generative AI solutions built with Google Cloud.
Google Cloud Ready initiatives provide support for ISV partners building applications and integrations in key product areas, including BigQuery, AlloyDB, and sustainability.
RaMP, our program for cloud migrations, gives customers and partners everything they need to evaluate and execute large-scale migrations to Google Cloud, with a strong focus on partner-delivered services.
Google Cloud Marketplace now offers more capabilities for resellers to transact.
New incentives and funding for 2H 2023 are launching soon, including increased rebates for partners who source new and expanded deals or grow their SMB accounts, pre-approved discounts based on growth in Workspace seats, and additional funding for certain partner-delivered presales activities.
Together, these new programs, incentives, and Premier badges will ensure that customers have access to highly skilled partners with the deep product domain expertise that their transformation projects demand, while providing our partners with more resources, funding, and access to go-to-market opportunities than ever.
We’re committed to supporting our partner ecosystem, and will continue the dialog about these updates and opportunities with our partners over the coming months. I’m personally looking forward to engaging with partners at our upcoming Next ‘23 Partner Summit, and in additional meetings with partners across North America, EMEA, and JAPAC throughout the year.
For more information on these new programs and resources, reach out to your Partner Account Manager or log in to the Partner Advantage portal at partneradvantage.goog.
Read More for the details.
Azure Data Explorer has released three new types of external tables: PostgreSQL, MySQL, and Cosmos DB SQL. These new external tables allow users to query data from these sources directly within Azure Data Explorer using the Kusto Query Language (KQL).
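Once such an external table is defined, it can be queried with the standard external_table() function; the table name in this minimal example is hypothetical:

```
external_table("PostgresOrders")
| take 10
```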
Read More for the details.
General availability enhancements and updates released for Azure SQL in early July 2023.
Read More for the details.
Amazon RDS for SQL Server now allows customers to directly join their RDS for SQL Server DB instances to the domains of self-managed Microsoft Active Directory (AD). Self-managed AD can be on-premises or in the cloud. Currently, customers can only use NTLM as the authentication protocol for self-managed AD.
Read More for the details.
Amazon Omics has achieved Federal Risk and Authorization Management Program (FedRAMP) Moderate authorization for the AWS US East/West Regions. You can use Amazon Omics to store and process your data in AWS with up to the Moderate impact level.
Read More for the details.
Reduce infrastructure setup with a managed file system optimized for HPC and AI workloads.
Read More for the details.
AWS Elemental MediaLive now saves channel metrics to Amazon CloudWatch in 1-second intervals allowing you to track fast-changing activity. These metrics can be retrieved using the console or the CloudWatch API at 1-second periods for up to 3 hours after the datapoints were created.
Read More for the details.
You can now run OpenSearch version 2.7 in Amazon OpenSearch Service. With OpenSearch 2.7, we have made several improvements to observability, security analytics, index management, and geospatial capabilities in OpenSearch Service.
Read More for the details.
Amazon Location Service now supports API keys as an alternative for authenticating Maps, Places, and Routes resources, making it easier for developers to create authenticated Amazon Location resources. With API keys, developers can easily create, manage, and expire access to Amazon Location resources, making it simpler for location-based applications to interoperate with Amazon Location Service.
Read More for the details.
Azure Load Balancer support for cross-region load balancing is now generally available. You can now load balance across Azure regions enabling a globally redundant architecture via Azure Load Balancer.
Read More for the details.
Azure Modeling and Simulation Workbench is a new service that provides a fully managed work environment for engineering development, accelerates project startup, and enables secure collaboration between development teams.
Read More for the details.