Azure – Public preview: New Azure Arc capabilities in November 2021
Azure Arc is announcing new functionality, including improved machine learning capabilities and new features for Azure Arc-enabled data services.
Read More for the details.
New DCsv3 and DCdsv3-series Azure Virtual Machines transform the state-of-the-art for confidential workloads
Read More for the details.
GraphQL is increasingly seen as a more efficient, flexible, and powerful way of working with APIs, offering benefits such as faster performance, higher API call limits, and less time spent retrieving data.
Read More for the details.
Native support for WebSocket APIs is now available across all pricing plans of Azure API Management except Consumption. This will help you manage WebSocket APIs along with REST APIs in Azure API Management.
Read More for the details.
Logic Apps Standard was released at Ignite 2020, and since then we have added many features to enable better enterprise integration between mission-critical systems in the cloud, on premises, and in hybrid scenarios, with the runtime available on Azure, on Azure Arc, and locally. This post includes capabilities that are now generally available.
Read More for the details.
Amazon Time Sync Service now allows you to easily generate and compare timestamps from Amazon EC2 instances with ClockBound, an open source daemon and library. This information is valuable for determining the order and consistency of events and transactions across EC2 instances, independent of the instances’ respective geographic locations. ClockBound calculates your Amazon EC2 instance’s clock error bound to measure its clock accuracy and allows you to check whether a given timestamp is in the past or future with respect to your instance’s current clock. On every call, ClockBound simultaneously returns two pieces of information: the current time and the associated absolute error range. This means that the actual time of a ClockBound timestamp is guaranteed to fall within a known range.
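To make the idea concrete, here is an illustrative Python sketch of the reasoning the announcement describes: a timestamp paired with an absolute error bound lets you decide whether one event definitely happened before another. The type and function names are hypothetical, not the ClockBound API.

```python
# Illustrative only, not the ClockBound API: each reading is a timestamp plus an
# absolute error bound, so event A definitely precedes event B only when their
# possible time ranges do not overlap.
from dataclasses import dataclass

@dataclass
class BoundedTime:
    timestamp: float    # seconds since the epoch, as reported by the instance clock
    error_bound: float  # absolute clock error bound, in seconds

    @property
    def earliest(self) -> float:
        return self.timestamp - self.error_bound

    @property
    def latest(self) -> float:
        return self.timestamp + self.error_bound

def definitely_before(a: BoundedTime, b: BoundedTime) -> bool:
    """True only if a's latest possible true time precedes b's earliest."""
    return a.latest < b.earliest

# Two EC2 instances stamp the same pair of transactions.
a = BoundedTime(timestamp=1000.000, error_bound=0.001)
b = BoundedTime(timestamp=1000.005, error_bound=0.001)
print(definitely_before(a, b))  # True: [999.999, 1000.001] ends before [1000.004, 1000.006] begins
print(definitely_before(b, a))  # False
```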
Read More for the details.
In order to keep businesses running, organizations require uninterrupted access to business critical applications. While most of these applications can be accessed through a modern browser, others require a specific legacy browser, which can add extra work for IT teams to try and protect users in older, less secure browsers. Many enterprises use Chrome’s Legacy Browser Support, allowing IT to designate which apps require a legacy browser. With this capability, specified apps automatically open in the designated browser when needed, and users are redirected back to Chrome once finished with the app.
Since Microsoft has announced the end of life of Internet Explorer 11 for certain operating systems starting June 15, 2022, many enterprises are revisiting the solutions they have for supporting their legacy sites and moving away from IE. The good news is that Chrome’s Legacy Browser Support feature now also supports IE mode in Microsoft Edge and, thanks to our new policy, makes it easy to leverage your organization’s existing site lists.
As organizations move away from IE 11, they can configure Legacy Browser Support (LBS) so that users can use Microsoft Edge in IE mode to view legacy websites that require IE compatibility. This way, organizations can continue to support legacy sites as needed while removing IE 11 from their environment.
Learn how to implement these enhancements to Legacy Browser Support within your organization today through Group Policy or within Chrome Browser Cloud Management.
(Help Center article is located here)
Download the Chrome Browser Enterprise bundle.
Deploy the included LBS MSI to your end users’ machines using your deployment method of choice.
If you already have LBS installed, the MSI should update automatically. Just make sure that it is on version 6.0.0 or later.
Follow the steps for setting up LBS here (Google Admin Console and GPO methods).
Go to Administrative Templates > Microsoft Edge.
Automatically install the Legacy Browser Support for IE Mode in Edge extension on users’ devices:
Turn on Control which extensions are installed silently.
Under Options, enter the extension’s ID: “acallcpknnnjahhhapgkajgnkfencieh”
For details about Microsoft Edge’s ExtensionInstallForcelist policy, see Microsoft documentation.
See the extension in Microsoft Edge Add-ons.
Enable IE integration:
Turn on Configure Internet Explorer integration.
Under Options, select Internet Explorer mode if you want sites to open in Microsoft Edge using IE mode.
For details about Microsoft Edge’s InternetExplorerIntegrationLevel policy, see Microsoft documentation.
Once set up, your users will be able to seamlessly switch between legacy and modern browsers for uninterrupted access to applications and websites. The result is greater productivity and increased security, since users spend less time in less secure browsers, which is a win for IT teams and users alike.
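For admins who prefer to script the Edge-side policies above rather than click through the Group Policy editor, here is a minimal sketch for testing on a single Windows machine. It assumes the standard HKLM\SOFTWARE\Policies\Microsoft\Edge location that Group Policy ultimately writes to; the value layout shown here should be verified against Microsoft's Edge policy documentation before any broad rollout.

```python
# A hedged sketch, not an official deployment method: write the two Edge policies
# described above directly to the registry of a single test machine.
import winreg

EDGE_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Edge"
EXTENSION_ID = "acallcpknnnjahhhapgkajgnkfencieh"  # Legacy Browser Support for IE Mode in Edge

# ExtensionInstallForcelist: force-install the companion extension. Supplying only
# the ID is assumed to pull the extension from Microsoft Edge Add-ons.
forcelist = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    EDGE_POLICY_KEY + r"\ExtensionInstallForcelist",
    0,
    winreg.KEY_WRITE,
)
winreg.SetValueEx(forcelist, "1", 0, winreg.REG_SZ, EXTENSION_ID)
winreg.CloseKey(forcelist)

# InternetExplorerIntegrationLevel = 1: open designated sites in IE mode.
edge = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, EDGE_POLICY_KEY, 0, winreg.KEY_WRITE)
winreg.SetValueEx(edge, "InternetExplorerIntegrationLevel", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(edge)
```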
Read More for the details.
At the core of a zero trust approach to security is the idea that trust needs to be established via multiple mechanisms and continuously verified. Internally, Google has applied this thinking to the end-to-end process of running production systems and protecting workloads on cloud-native infrastructure, an approach we call BeyondProd. Establishing and verifying trust in such a system requires: 1) that each workload has a unique workload identity and credentials for authentication, and 2) an authorization layer that determines which components of the system can communicate with other components.
Consider a cloud-native architecture where apps are broken into microservices. In-process procedure calls and data-transfers become remote procedure calls (RPCs) over the network between microservices. In this scenario, a service mesh manages communications between microservices, and is a natural place to embed key controls that implement a zero trust approach. Securing RPCs is extremely important: each microservice needs to ensure that it receives RPCs only from authenticated and authorized senders, is sending RPCs only to intended recipients, and has guarantees that RPCs are not modified in transit. Therefore, the service mesh needs to provide service identities, peer authentication based on those service identities, encryption of communication between authenticated peer identities, and authorization of service-to-service communication based on the service identities (and possibly other attributes).
To provide managed service mesh security that meets these requirements, we are happy to announce the general availability of new security capabilities for Traffic Director which provide fully-managed workload credentials for Google Kubernetes Engine (GKE) via CA Service, and policy enforcement to govern workload communications. The fully-managed credential provides the foundation for expressing workload identities and securing connections between workloads leveraging mutual TLS (mTLS), while following zero trust principles.
As it stands today, the use of mTLS for service-to-service security involves considerable toil and overhead for developers, SREs, and deployment teams. Developers have to write code to load certificates and keys from pre-configured locations and use them in their service-to-service connections. They typically also have to perform additional framework or application-based security checks on those connections. Adding complexity, SREs and deployment teams have to deploy keys and certificates on all the nodes where they will be needed and track their expiry. The replacement or rotation of these certificates involves creating CSRs (certificate signing requests), getting them signed by the issuing CA, installing the signed certificates, and installing the appropriate root certificates at peer locations. The process of rotation is critical, as letting an identity or root certificate expire means an outage that can take services offline for an extended amount of time.
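To illustrate the kind of boilerplate this paragraph describes, here is a minimal Python sketch of hand-rolled mTLS using the standard ssl module. This is not Traffic Director or CAS code; the certificate paths are hypothetical, and in practice every service carries some variant of this plus the rotation and authorization work around it.

```python
# Illustrative only: the manual mTLS setup each service would otherwise repeat.
import ssl

# Server side: present the service's own certificate and *require* a client
# certificate signed by the mesh CA (this is what makes the TLS mutual).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="/etc/certs/server.pem", keyfile="/etc/certs/server.key")
server_ctx.load_verify_locations(cafile="/etc/certs/mesh-ca.pem")
server_ctx.verify_mode = ssl.CERT_REQUIRED

# Client side: verify the server against the same CA and present the client's
# own identity certificate.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_cert_chain(certfile="/etc/certs/client.pem", keyfile="/etc/certs/client.key")
client_ctx.load_verify_locations(cafile="/etc/certs/mesh-ca.pem")

# On top of this, application code still has to check the peer identity on every
# RPC and track certificate expiry so the files above get rotated in time.
```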
This security logic cannot be hardcoded because the routing of RPCs is orchestrated by the traffic control plane and, as microservices are scaled to span multiple deployment infrastructures, it is difficult for the application code to verify identities and perform authorization decisions based on them.
Our solution addresses these issues by creating seamless integrations between the Certificate Authorities’ infrastructure, the compute/deployment infrastructure, and the service mesh infrastructure. In our implementation, Certificate Authority Service (CAS) provides certificates for the service mesh, the GKE infrastructure integrates with CAS, and the Traffic Director control plane integrates with GKE to instruct data plane entities to use these certificates (and keys) for creating mTLS connections with their peers.
The GKE cluster’s mesh certificate component continuously talks to the CA pools to mint service identity certificates and make these certificates available to intended workloads running in GKE pods. Issuing Certificate Authorities are automatically renewed and the new roots pushed to clients before expiry. Traffic Director is the service mesh control plane which provides policy, configuration, and intelligence to data plane entities, and supplies configurations to the client and server applications. These configurations contain the necessary transport and application-level security information to enable the consuming services to create mTLS connections and apply the appropriate authorization policies to the RPCs that flow through those connections. Finally, workloads consume the security configuration to create the appropriate mTLS connections and apply the provided security policies.
To learn more, check out the Traffic Director user guide and see how to setup Traffic Director and the accompanying services in your environment to take a zero trust approach to securing your GKE workloads.
Read More for the details.
Rodan and Fields (R+F) spends a lot of time and energy working to empower its 300,000 Independent Consultants and provide them with the tools they need to run their businesses successfully. One of these tools, the R+F Solution Tool, allows R+F Consultants to provide personalized, one-on-one recommendations to customers regarding the best Rodan and Fields products for their skincare needs.
In the past few years, the company has managed several important technology initiatives, including migrating its SAP systems to Google Cloud and modernizing databases. All of these digital transformation efforts were done with the goal of providing consultants with stronger insights supported by advanced analytics.
Rodan and Fields was already benefiting from many of the data analytics tools in Google Cloud, but wanted to make this information available in real-time. Batch-oriented processes made it difficult to keep up on the peak-volume days they have each month, and the latencies were becoming disruptive. A natural component of modern, real-time systems is Apache Kafka, but the company’s IT team knew that self-managing Apache Kafka could come with risks and challenges, so they turned to Confluent, the platform for data in motion and a strategic Google Cloud partner.
Confluent built its cloud-based data platform with companies like Rodan and Fields in mind – ones that have enormous data demands but want to keep the total IT cost of ownership low. After recognizing that Confluent could provide an integrated experience with Google Cloud and SAP solutions through its partnership, Rodan and Fields looked closer at Apache Kafka.
It soon became clear that Confluent was the right choice, as its fully-managed platform aligned with the company’s needs and came with white-glove services to quickly train employees on how best to use Kafka.
“The collaborative experience we had with Confluent made all the difference in our implementation,” says Jason Mattioda. “We did several workshops and trainings, and this helped us to use Confluent alongside our Google Cloud tools with confidence. What could have easily taken two or three years was instead done in 12 months.”
Confluent helped Rodan and Fields to break its data flows into pieces that enabled stronger, more valuable definitions across its data pipeline. At the same time, Google Cloud provided assistance throughout the migration process, providing resources, direction, and thought leadership to ensure Rodan and Fields maximized the value of its investments, its use of Confluent, and more.
Given the complexity of the project, Rodan and Fields relied on the Confluent and Google Cloud teams to work together across a range of needs. The company had relied on a legacy platform to get data from its commerce system to a large SQL Server database, and from there through batch processes to the consultant portal, reporting applications, and other downstream systems. From licensing and product setups to creating new clusters and overcoming networking issues, Confluent and Google Cloud worked hand-in-hand to achieve success for Rodan and Fields.
“Google Cloud and Confluent worked together with a true spirit of partnership—that was instrumental in accelerating our timelines,” says Mattioda. “Both partners proved that they were entirely dedicated to our success.”
Ultimately, Rodan and Fields achieved its goal of delivering real-time insights while also reducing costs of data-related needs through its digital transformation with Google Cloud and Confluent.
Rodan and Fields Independent Consultants, as well as all internal stakeholders, now have faster, more accessible data analytics tools available at their fingertips thanks to Google Cloud, Confluent, and SAP solutions working in concert.
Rodan and Fields is continuing to train its Consultants and other end users in the new data insights capabilities it has deployed, fulfilling its mission to further empower all of its consultants with powerful tools for success. Alongside its use of SAP on Google Cloud and BigQuery, Rodan and Fields continues to push the limits of data-driven engagement and enablement.
“If we had chosen another cloud provider or partner, this data transformation project would have taken far more time and likely caused a lot more headaches,” says Jason. “We’re thrilled about the success we’ve had with Confluent and Google Cloud.”
Read More for the details.
Starting today, Amazon EC2 T4g instances are available in the AWS GovCloud (US-West) Region. T4g instances are powered by Arm-based AWS Graviton2 processors and deliver up to 40% better price performance over T3 instances. These instances provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. They offer a balance of compute, memory, and network resources for a broad spectrum of general purpose workloads, including large scale micro-services, caching servers, search engine indexing, e-commerce platforms, small and medium databases, virtual desktops, and business-critical applications.
Read More for the details.
Building on the recent releases of the Android 12 operating system and Google Distributed Cloud, Google is paving the way for communication service providers (CSPs) as well as enterprises to deliver enterprise applications and services over 5G network slices, for enhanced application security, reliability, and performance.
5G network slicing enables virtual mobile networks to be created dynamically, with varying properties, on shared underlying physical resources. 5G slicing is one of the key areas of innovation that will let CSPs earn a return on their investments in 5G by offering a secure and dynamic network platform to enterprises. For enterprises, meanwhile, the true value of network slicing will be realized when the ecosystems for devices, network, edge cloud, and applications work together seamlessly and automatically.
With the depth of our technology focused on 5G network slicing, enterprises can boost application performance through a dedicated network channel that’s optimized for higher bandwidth, lower latency, higher reliability, and increased security and isolation. For example, financial institutions can deliver critical and sensitive data to employees’ mobile devices, while enterprises of all types can use network slices to deploy secure WFH connectivity for their employees.
Android’s support for 5G enterprise network slicing routes application traffic on managed devices on CSPs’ 5G networks. For devices provisioned with a work profile, data from work apps will be routed over an enterprise network slice. CSPs looking for more in-depth information on Android’s 5G network slicing implementation can visit the Android SAC site.
5G slicing is just one of the exciting developments in the mobile sphere. In addition, edge computing is critical to the 5G slicing ecosystem, allowing application functionality to be processed at the edge while managed in a centralized cloud, further enhancing performance, resilience, and operational continuity. CSPs must consider three core attributes to ensure enterprises get value from 5G slicing: 1) enterprise tools and device integration, 2) turnkey network and cloud automation, and 3) compelling use cases. This is where choosing the right partner becomes critical, and Google is working across the ecosystem to ensure these attributes are seamlessly implemented.
The support of 5G network slicing capabilities for enterprises was validated in partnership with both Ericsson and Nokia through a successful integration of 5G radio access networks (RAN) and 5G core network solutions with test units of the recently released Pixel 6 smartphone running the Android 12 operating system. Additionally, Far EasTone (FET), Android, and Ericsson collaborated on an end-to-end demonstration of 5G network slicing with 5G URSP. This demonstration showcased URSP capability on the Google Pixel 6 to direct work apps traffic over the enterprise slice.
To unlock the value of 5G slicing, CSPs must consider how they want to participate across the wider 5G ecosystem, which is rapidly extending beyond mere connectivity. With Pixel devices, Android OS, Google Distributed Cloud, and applications, in partnership with CSPs, Google is creating optimal value for both 5G providers and enterprises.
Read More for the details.
Editor’s note: Today’s post is by Steven Kernahan, Portfolio/Solution Owner for Australia’s Woolworths Group and its 176 BIG W department stores across Australia. In collaboration with Woolworths Group IT, BIG W X improved in-store printing processes for delivery and online pickup, using Chrome OS and PaperCut software.
When BIG W’s customers embraced online pickup and delivery at the start of the COVID-19 pandemic, our store printers went into overtime. For every package that had to be picked up or shipped from our stores, we needed to print out a label. To keep pace with the demand for pickups and deliveries, printers had to run without a hitch, including printing out the standard tickets for in-store pricing. Now that we have Chrome OS devices and PaperCut software, our printers keep humming even at our peak, when the number of print jobs reaches over half a million per week.
In 2016, we made changes to printing storewide to keep pace with our growth. Now that most of the apps our team use day to day are web-based, we replaced PCs with more than 3,000 enrolled Acer Chromebooks that store workers use to manage printing. In 2018, our BIG W X Team figured out that employees spent too much time and extra keystrokes to print each label. They used to have to walk back and forth between two printers: one that printed out mailing labels, and one that printed out other packaging labels.
Then, once we chose the PaperCut Mobility Print solution for our multifunction printer devices, we were able to simplify all in-store printing significantly, customizing the solution’s internal APIs to make the printing process even easier and more robust. With in-house One Click Print functionality enabled by our move to Google, we were able to save some clicks per print job and streamline the user process. In fact, we’re working on adding PaperCut Mobility Print to many more of our in-house software solutions.
Right away, the BIG W X Team reduced the number of clicks it took to create a label, saving about 30 seconds per print job. That time adds up: we were able to dramatically speed up the process for packing and labeling pickups and deliveries, which has helped us get products into customers’ hands much faster. Combining Chromebases and PaperCut with our internal platforms helped us quickly develop a printing solution with greater resilience and stability, and increased efficiency for our team.
Reducing printer issues to just about nothing is a tremendous benefit that has a knock-on effect on everything from budgets to customer happiness. On top of the significant dollars saved with improving the team’s process, we’ve eliminated several thousand points of failure, like power cables and network dependencies. And the simplified printing architecture means we can diagnose issues faster, should they crop up.
Once we started using PaperCut Mobility Print and Chrome OS for BIG W’s online operations in January 2021, we tracked a 95% reduction in operational incidents for printing, including all the process efficiencies introduced in 2018, such as one-click print.
In fact—and we think this is amazing—we haven’t seen a single printing product or solution failure across 176 stores using PaperCut and Chrome OS since January. Our store support team tells me that printing issues in stores are virtually nonexistent and the switch to PaperCut was completely transparent to employees. Everything worked without a hitch.
Of course, the result of our trouble-free printing process is that we can get in-store pricing updated and purchases into the mail or outside for curbside pickup much faster. BIG W customers might not know what’s going on with label printing behind the scenes, but we know they’re happy when their orders show up quickly.
Read More for the details.
One challenge of migrating databases is lining up your environment so that you don’t end up with compatibility issues. So what happens when you want to move to a managed service in the cloud, like Cloud SQL, and you discover that your favorite extension isn’t supported? Of course we want to support everything, but each individual extension takes time to integrate into Cloud SQL without destabilizing anything.
Specifically, let’s chat about pg_cron, the PostgreSQL extension that gives you a crontab inside your database. It’s handy for all kinds of things, from pruning old unused data with vacuum, to truncating data that’s no longer needed, to a slew of other periodic tasks. Super handy extension.
For now, pg_cron isn’t supported, but wait, don’t go! Depending on what you want to do, it doesn’t have to be a heavy lift to reimplement the functionality. It may even make sense to break these tasks out into their own services, even once we do support pg_cron down the road, to isolate business logic from your data source. Today I’m talking about pg_cron, but thinking about moving business logic out of database extensions into separate services gives you the flexibility to shift your data wherever it needs to be without worrying about data-specific solutions.
Let’s walk through one way to break out pg_cron tasks.
The primary product we’ll be using to produce cron tasks is Cloud Scheduler. Long story short, it’s a crontab (mostly) for GCP products. Creating a new job in the console starts you off with the familiar cron interface for defining when you’d like your job to trigger, and you can define which timezone you want it to be in.
Next comes the different piece. Unlike normal cron, where you define the path to what you’d like to execute, in Scheduler you define a trigger target. You can hit an arbitrary HTTP URL, send a message to a predefined Pub/Sub topic, or send an HTTP message to an App Engine instance you’ve created. Naturally, which method you use depends entirely on the existing tasks you want to port over.
For example, if you have one job that needs to trigger multiple actions that aren’t necessarily related, it probably makes the most sense to send a message to Pub/Sub and have other services subscribed to the topic where the message will go. This mirrors a delegator pattern. Alternatively, if the job needs to trigger a set of related tasks, building an App Engine application as an endpoint that handles those tasks as a bundle may make the most sense. Lastly, and what I’m going to show here: if the job is a one-off and just needs to accomplish a small task, it may make sense to build a Cloud Function, or set up a container to run in Cloud Run. These serverless offerings scale to zero, so they won’t cost you anything while they aren’t running.
Let’s take a look at a simple example just to walk through one way to do this.
Say, for the sake of argument, you’ve got a pg_cron job that runs every night at 1 o’clock in the morning, after your backup has finished, and prunes older data from one of your tables to keep operational data at a 30-day window.
Step one is getting that functionality of our SQL query to remove our old data somewhere else. There’s a multitude of ways to do this in GCP as I mentioned. For this, I’m going to stick to Google Cloud Functions. They’re incredibly simple to stand up and this sort of one-off function is a perfect use-case.
There’s a very well written Codelab that walks through creating a Cloud Function which talks to a Cloud SQL instance. A couple of things need changing from the Codelab. First, change the stmt variable from the insert call in the code sample to the delete call from our pg_cron job. Second, don’t follow the Codelab’s suggestion to allow unauthenticated invocations of our Cloud Function. Nothing catastrophic would happen if you did allow unauthenticated requests, because we’re only deleting older data that we want gone anyway, but if someone gets ahold of the URL they can spam it, which could impact performance on the database as well as cost you some extra money in Cloud Function invocations.
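For reference, here is a rough sketch of what the finished function might look like. It assumes a hypothetical events table with a created_at column, the environment-variable-based connection setup from the Codelab, and the pg8000 driver over the Cloud SQL unix socket; names are placeholders.

```python
# main.py for the Cloud Function. Replaces a pg_cron job along the lines of:
#   SELECT cron.schedule('prune', '0 1 * * *',
#     $$DELETE FROM events WHERE created_at < now() - interval '30 days'$$);
import os
import sqlalchemy

# Connect over the Cloud SQL unix socket that Cloud Functions exposes,
# using the pg8000 driver (as in the Codelab).
db = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL.create(
        drivername="postgresql+pg8000",
        username=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
        query={
            "unix_sock": f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}/.s.PGSQL.5432"
        },
    )
)

def prune_old_rows(request):
    """HTTP-triggered entry point: delete rows older than 30 days."""
    stmt = sqlalchemy.text(
        "DELETE FROM events WHERE created_at < now() - interval '30 days'"
    )
    with db.begin() as conn:  # commits on successful exit
        result = conn.execute(stmt)
    return f"deleted {result.rowcount} rows", 200
```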
One other thing to note about this setup is that the Cloud SQL instance gets created with a public IP address. For the sake of this post staying focused on converting an extension into a microservice I’m not going to go into too much detail, but know that connectivity can become a bit sticky depending on your requirements for the Cloud SQL instance. In an upcoming post I’m going to cover connectivity around our serverless offerings to Cloud SQL in a bit more depth.
Okay, if you’re doing this inline while reading the post, go and do the Codelab with the changes I mentioned, then come back. I’ll wait.
All set? Awesome, back to our story.
So now we have a function set up, and when I tested/ran it, it correctly deleted entries older than a month from our database. Next up we’ve got to set up our Cloud Scheduler task to call our function.
Revisiting the creation page from earlier, now let’s dig in and get things rolling.
As it says in the UI, Frequency uses standard cron formatting. We want our cleanup script to fire every day at 1:00 AM, so set the frequency field to: 0 1 * * *
I created my Cloud SQL instance in us-west2, so I’ll set my timezone to Pacific Daylight Time (PDT).
Since we set up our Cloud Function to be triggered by HTTP, we set our Scheduler task to hit an HTTP endpoint. You can get the URL from the details of your Cloud Function you created.
Now, if you’ve set your Cloud Function to accept unauthenticated connections just to play around with it (please don’t do that in production), then you’re pretty much all set: hit Create at the bottom and, poof, it’ll just start working. If, however, you disabled that, then you’ll need to send along an Auth header with your request. Your two options are an OAuth token or an OIDC token. Broadly speaking, at least as far as GCP targets are concerned, if you’re hitting an API that lives on *.googleapis.com you’ll want an OAuth token; otherwise an OIDC token is preferred. So in our case, Cloud Functions can use an OIDC token. The service account you specify can be the same one you used for the Cloud Function if you want. Either way, the role the service account needs in order to successfully call the Cloud Function is the Cloud Functions Invoker role. Either create a new service account with that role, or add that role to your existing service account, and then specify the service account’s full email in the Scheduler field. The audience field is optional and you can ignore it for this service.
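If you would rather script this step than click through the console, here is a hedged sketch using the Cloud Scheduler Python client. The project, region, function URL, and service account email are placeholders, and the service account needs the Cloud Functions Invoker role as described above.

```python
# A sketch of creating the same job programmatically with google-cloud-scheduler.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/us-west2"  # placeholders

job = scheduler_v1.Job(
    name=f"{parent}/jobs/prune-old-rows",
    schedule="0 1 * * *",            # every day at 1:00 AM
    time_zone="America/Los_Angeles",
    http_target=scheduler_v1.HttpTarget(
        uri="https://us-west2-my-project.cloudfunctions.net/prune_old_rows",
        http_method=scheduler_v1.HttpMethod.POST,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-invoker@my-project.iam.gserviceaccount.com",
        ),
    ),
)

client.create_job(parent=parent, job=job)
```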
That should be it! Hit the create button and your Scheduler task will be created and will run on the specified schedule. When I tested this, I set my frequency to 5 * * * * and had my Cloud Function just output something to the console. That way I could check Logging to see whether it was firing. Once you click into the details of the Cloud Function you created, there’s a navigation tab for LOGS. Clicking that shows you a filtered view of your project’s logs for that function.
To be sure you’re not going to spam your database, I would suggest testing by creating a simple Hello World! Cloud Function first and triggering that with your scheduler.
That’s it then! Replacing a PostgreSQL extension with a microservice. While I showed you here how to do it for pg_cron and Cloud Scheduler, hopefully this sparks some thought around splitting some of that business logic away from the database and into services. This is a simple case of course, but this can help alleviate some load on your primary database.
Thanks for reading! If you have any questions or comments, please reach out to me on Twitter, my DMs are open.
Read More for the details.
Today, AWS Transit Gateway Network Manager launched new APIs that enable you to perform automated analysis of your global network and build your own topological views for visualization purposes. You can get an aggregated view of your global network resources, analyze routes, and retrieve telemetry data across AWS Regions using the following APIs (a short example of calling them follows the list):
Describe the network resources for the global network (GetNetworkResources)
Get the network health information of the global network (GetNetworkTelemetry)
Get the network routes of a specific route table (GetNetworkRoutes)
Get the network resource relationships of a specific resource (GetNetworkResourceRelationships)
Get the count of network resources for the global network (GetNetworkResourceCounts)
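Here is a hedged sketch of calling these APIs from Python with boto3; the global network ID and transit gateway route table ARN are placeholders.

```python
# A sketch of the Network Manager APIs listed above, via boto3.
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")
global_network_id = "global-network-0123456789abcdef0"  # placeholder

# Aggregated view of the resources registered to the global network.
resources = nm.get_network_resources(GlobalNetworkId=global_network_id)

# Network health (telemetry) data across Regions.
telemetry = nm.get_network_telemetry(GlobalNetworkId=global_network_id)

# Resource counts and relationships, useful for building a topology view.
counts = nm.get_network_resource_counts(GlobalNetworkId=global_network_id)
relationships = nm.get_network_resource_relationships(GlobalNetworkId=global_network_id)

# Route analysis for a specific transit gateway route table.
routes = nm.get_network_routes(
    GlobalNetworkId=global_network_id,
    RouteTableIdentifier={
        "TransitGatewayRouteTableArn": (
            "arn:aws:ec2:us-west-2:123456789012:"
            "transit-gateway-route-table/tgw-rtb-0123456789abcdef0"  # placeholder
        )
    },
)

print(counts["NetworkResourceCounts"])
```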
Read More for the details.
Amazon MemoryDB for Redis now supports AWS CloudFormation, enabling you to manage MemoryDB resources using CloudFormation templates. Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. CloudFormation makes it easier for you to create and manage MemoryDB resources without having to configure MemoryDB separately through the console. For example, you can create MemoryDB clusters, subnet groups, parameter groups, and users using CloudFormation templates.
Read More for the details.
Amazon Simple Email Service (Amazon SES) is pleased to announce the launch of the newly redesigned service console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer.
Read More for the details.
We’re excited to share the news that leading global research and advisory firm Forrester Research has named Google AppSheet a Leader in the recently released report The Forrester Wave™: Low-code Platforms for Business Developers, Q4 2021. It’s our treasured community of business developers—those closest to the challenges that line-of-business apps can solve—who deserve credit for not only the millions of apps created with AppSheet, but also the collaborative approach to no-code development that has helped further our mission of empowering everyone to build custom solutions to reclaim their time and talent.
AppSheet received the highest marks possible in the product vision and planned enhancements criteria, with Forrester noting in the report that “AppSheet’s vision for AI-infused-and-supported citizen development is unique and particularly well suited to Google. The tech giant’s deep pockets and ecosystem give AppSheet an advantage in its path to market, market visibility, and product roadmap.”
The report also states that AppSheet’s “features for process automation and AI are leading,” remarking that “The platform provides a clean, intuitive modeling environment suitable for both long-running processes and triggered automations, as well as a useful range of pragmatic AI services such as document ingestion.”
Many enterprise customers including Carrefour Property – Carmila, Globe Telecom, American Electric Power, and Singapore Press Holdings (SPH) choose AppSheet as their business developer partner, along with thousands of other organizations in every industry. We are honored to serve these customers and to be a Leader in the Forrester Wave™: Low-code Platforms for Business Developers. We look forward to continuing to innovate and to helping customers on their digital transformation journey. To download the full report, visit here and enter your email address. To learn more about AppSheet, visit our website.
Read More for the details.
With a multi-user account set up, organizers (aka Account Administrators) can now provide racers access to the AWS DeepRacer service under their account ID, monitor spending on training and storage, enable or disable training, and view and manage models for every user in their account from the AWS DeepRacer console.
Read More for the details.
The world’s resources are increasingly scarce and yet too often they are wasted. At Veolia, we use waste to produce new resources, helping to build a circular economy that redefines growth with a focus on sustainability.
Our sustainability mission transcends borders, and nearly 179,000 employees work in dozens of profit centers worldwide to bring it to life. It’s a massive operation that requires an IT architecture to match, which is why we’ve streamlined critical IT work across our global operations with Google Cloud. As Technical Lead and Product Manager for Veolia’s Google Cloud, it’s my team’s responsibility to standardize processes around security, governance, and compliance, and make sure our employees have all the right tools to do their best work, securely. Google Cloud’s Security Command Center (SCC) Premium is the core product that we use to protect our technology environment. We use it across 39 business units, spanning 31 countries worldwide.
In line with our sustainability motto “create once, then copy and adapt to reuse many times,” we encourage local teams to work autonomously. That includes their use of Google Cloud solutions. We use many Google Cloud products including BigQuery, Compute Engine, GKE, Cloud Functions, and Cloud Storage. Across the board, we’re working with Google Cloud in an agile and collaborative way to deliver smart water, waste, and energy solutions to communities globally.
But there’s no agility without security, and it’s my team’s responsibility to make sure our environment is secure at all times. Because our Google Cloud environment is extensive, and we give individual business units autonomy over their use of cloud solutions, we also set the parameters and policies for them to operate with all compliance and security controls in place in an organized way. SCC Premium is the common tool that all our business units use to keep their individual projects and assets secure. It helps us to gain visibility over our entire Google Cloud environment and identify threats, vulnerabilities, and misconfigurations with real-time insights.
Here’s that visibility in numbers: we use SCC Premium to monitor 2,800 projects with hundreds of thousands of assets. We continually observe our Google Cloud environment using SCC to quickly discover misconfigurations and respond to threats based on our latest findings. If an anomaly is revealed, we remediate incidents ourselves or alert respective business units. We’ve also started to consolidate our SCC findings in a global dashboard to give business units an overview of their security position, enabling them to take swift action.
As our risk management platform for Google Cloud, SCC enables us to streamline the process of security management. It provides findings in near real-time and with all its insights, we can decide on the next steps and alert relevant parties to remediate misconfigurations. I really like the context and recommended actions that SCC provides for each of the findings. These recommendations help us to remediate incidents ourselves or alert project owners. This new visibility has already helped us remediate misconfigurations that could adversely affect our cloud services. SCC, for example, enabled us to identify firewall misconfigurations and it saved us around 500 hours when compared to pre-SCC times.
Another benefit of the visibility we’ve gained with SCC is our ability to prioritize our security tasks and use our time more efficiently. As one of France’s biggest users of public cloud services, we have a lot of Google Cloud projects running, and a lot of ground to cover — from misconfigurations to imminent threats. Without SCC, it was difficult to identify patterns and adapt our priorities accordingly. Deleting unused service account keys, for example, used to be difficult, because we had to check service accounts for each project separately. With SCC, we identified unused keys and marked them for deletion. This has cut the time it takes us to delete unused service account keys by 1,000 hours. In addition, we use SCC to identify any misconfigurations like overly permissive roles associated with the service account and threats like service account self-investigation. Using SCC’s container threat detection, we can proactively identify threats like remote shell execution in our containers. For example, we were alerted to 1800 findings when a container with a remote shell inside had been duplicated. Thanks to SCC, we managed to identify the root cause and remediate these containers quickly.
SCC also helps us to strengthen our compliance standards. Our Google Cloud environment needs to align with the CIS Google Cloud Computing Foundations Benchmark v1.1, which helps our organization to improve our overall security posture. Often, a lack of compliance simply means a lack of training. With our SCC findings, we don’t only evaluate where we stand, we are also able to educate our workforce to address issues proactively that help make us more compliant.
We’ve already achieved a lot with SCC, and we are excited about the new capabilities we’re yet to explore. Currently, we’re working to implement auto-remediation to help us act on alerts immediately, whenever they occur. By connecting SCC with Pub/Sub, we’ll be able to trigger workflows that fix potential breaches automatically within minutes, by disabling accounts, for example. We also plan to use synergies with Google Workspace to send SCC findings directly to the project owners in real-time via Google Chat, ensuring that relevant employees are made aware of potential vulnerabilities right away.
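As an illustration of that pattern (not Veolia's actual implementation), here is a hedged Python sketch of a Pub/Sub-triggered Cloud Function that inspects Security Command Center findings published to a notification topic. The field names follow the SCC notification JSON and should be checked against the current documentation.

```python
# A sketch of an SCC-to-Pub/Sub remediation hook; the remediation itself is stubbed.
import base64
import json

def handle_scc_finding(event, context):
    """Pub/Sub-triggered entry point; event["data"] is base64-encoded JSON."""
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    finding = message.get("finding", {})

    category = finding.get("category")       # e.g. an open-firewall misconfiguration
    severity = finding.get("severity")
    resource = finding.get("resourceName")

    if severity in ("HIGH", "CRITICAL"):
        # A real workflow would call the relevant admin API here (for example,
        # disabling a compromised service account) or notify the owning
        # business unit via Google Chat.
        print(f"Remediation candidate: {category} on {resource}")
    else:
        print(f"Logged finding: {category} ({severity}) on {resource}")
```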
Like all our cloud solutions, we want to use SCC to empower our individual business units with the autonomy they need to pursue their own goals as part of our larger organization. It’s a great tool at their fingertips, helping us to reduce risk and cut down waste across our cloud environment as we work to resource the world more sustainably.
Read More for the details.
Let’s face it: in a globalized world that is now, more than ever, driven by digital demand, you need to scale and reach your customers right where they are. Translation is a critical piece of that, whether you’re translating a website into multiple languages or releasing a document, a piece of software, or training materials.
Manual translation does not scale, which is why machine translation, powered by machine learning (ML), is becoming more important to our customers. Machine translation has historically been challenging because of the sheer volume and breadth of content that can add value when translated into multiple languages. Companies acquire and share content in many languages and formats, and scaling translation to meet needs is a tall order due to multiple document formats, integrations with optical character recognition (OCR), and the need to correct for domain-specific terminology.
Our goal is to simplify translation services, while enabling flexibility and control for our customers’ unique needs across industries. Read on to learn more about recent features and updates.
In many cases, the layout of a document dictates how it should be interpreted—e.g., readers navigate text and discern meaning based on formatting, like bold or italicized text, or markups for headers, paragraphs, and columns. Previously, to automate translation of documents, text needed to be separated from these layout attributes, meaning the document’s structure was either lost or needed to be recreated later in the developer pipeline, after the text had been translated. This required translation teams to do a lot of extra work and maintain a lot of additional code. But now, those steps are unnecessary. Formatting can be retained throughout the translation process, handled directly by the Translation API Advanced.
This feature lets customers translate documents in 100+ languages and supports document types such as DOCX, PPTX, XLSX, and PDF while preserving document formatting.
And if your needs go beyond Document Translation, we can help you translate audio as well. For real-time streaming translation, check out the Media Translation API, and for offline transcription translation, combine the Translation API with the Video Intelligence API.
One of the biggest differentiators for Translation API Advanced’s document translation capabilities is the ability to do real-time, synchronous processing for a single file.
For example, if you are translating a business document such as HR documentation, online translation provides flexibility for smaller files and provides faster results. You can easily integrate with our APIs via REST or gRPC with mobile or browser applications, with instant access to 100+ language pairs so that content can be understandable in any supported language.
Meanwhile, batch translation allows customers to translate multiple files into multiple languages in a single request. For each request, customers can send up to 100 files with a total content size of up to 1 GB or 100 million Unicode codepoints, whichever limit is hit first.
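For the synchronous case, a call might look like the hedged sketch below, using the Cloud Translation API Advanced (v3) Python client. The project ID and file names are placeholders, and the response field names should be checked against the current client reference.

```python
# A sketch of online (synchronous) document translation that preserves formatting.
from google.cloud import translate_v3 as translate

client = translate.TranslationServiceClient()
parent = "projects/my-project/locations/global"  # "my-project" is a placeholder

with open("contract.pdf", "rb") as f:
    content = f.read()

response = client.translate_document(
    request={
        "parent": parent,
        "target_language_code": "fr",
        "document_input_config": {
            "content": content,
            "mime_type": "application/pdf",
        },
    }
)

# The translated bytes keep the original layout and formatting.
with open("contract_fr.pdf", "wb") as f:
    f.write(response.document_translation.byte_stream_outputs[0])
```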
To achieve the highest level of accuracy for your translations, we now support multiple options (a short code sketch follows this list):
Use Google’s SOTA translation models: Each year, Google invests heavily to improve the quality of our translations across Apps, Cloud APIs, and Chrome, as well as to enable multilanguage answers in Search. A popular metric for automatic quality evaluation of machine translation systems is the BLEU score, which is based on the similarity between the machine translation and reference translations generated by people. While we push out incremental improvements for individual models on a monthly cadence, there are also times when we make significant leaps. In the releases since 2019, we have improved our BLEU score by 5 points on average across 100+ languages, and by 7 points on low-resource languages.
Leverage glossaries for specific terms and phrases: Glossary is our terminology control feature. It allows you to import source content to define preferred translations, such as product names or department names. Then, when you call the glossary in the API request, your preferred translations will be enforced. This works for individual words as well as phrases.
Pick a pre-trained model with model selection: If you create custom models for machine translation, we don’t think you should have multiple client libraries and multiple APIs to maintain in order for you to use the best model for your needs. Translation API Advanced now supports Model Selection. Pick your pretrained model or pick your custom ML model built on AutoML for any language pair you’ve created and use the same API and the same client library.
Build custom translation models with AutoML: AutoML Translation is a suite of ML products that enable you to build high quality models for your own use case or data, with limited-to-no ML expertise or coding required. Bring your past human-validated translations to improve translation specificity for your domain.
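Here is the promised sketch: a hedged example of combining model selection and a glossary in a single translate_text request with the v3 Python client. The project, location, and glossary IDs are placeholders, and the glossary (or custom AutoML model) must already exist in the chosen regional location.

```python
# A sketch of translate_text with an explicit model and a glossary applied.
from google.cloud import translate_v3 as translate

client = translate.TranslationServiceClient()
project_id = "my-project"   # placeholder
location = "us-central1"    # glossaries and custom models are regional
parent = f"projects/{project_id}/locations/{location}"

response = client.translate_text(
    request={
        "parent": parent,
        "contents": ["The mainboard ships with two NICs."],
        "source_language_code": "en",
        "target_language_code": "de",
        # Model selection: a pretrained NMT model, or swap in an AutoML model path.
        "model": f"{parent}/models/general/nmt",
        # Glossary: enforce preferred translations for terms such as product names.
        "glossary_config": {"glossary": f"{parent}/glossaries/my-glossary"},
    }
)

for translation in response.glossary_translations:
    print(translation.translated_text)
```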
If you are a customer operating in the EU, we recently launched an endpoint specifically for EU regionalization. This is a configurable endpoint for customers to store and perform machine translation processing of customer data only in the EU multi region. For now, this only supports our pretrained translation models and glossary, but batch translations will be coming soon.
Historically, translations at Eli Lilly have been complicated: numerous translation vendors have been needed for different languages and organizations, all with their own processes and expectations. On top of that, translations have been costly and slow.
To solve this, Eli Lilly took a codified approach to enable users and systems to spend less time and resources to safely generate quality translations.
Learn more, and even catch a demo, from Thomas Griffin, Translation Tech Lead & Global Regulatory Architect for Eli Lilly.
To get started using Cloud Translation – Advanced, complete the setup and then try the Translate text (Advanced edition) quickstart.
Document Translation is priced per page. For more information, see Pricing.
Read More for the details.