Azure – Public preview: License Geo-redundant Disaster Recovery for SQL Managed Instance for free
Save on business continuity costs and license geographically redundant Disaster Recovery with Azure SQL Managed Instance.
Read More for the details.
Optimize business costs by licensing your Disaster Recovery secondary for free with SQL Server on Azure Virtual Machines.
Read More for the details.
As an organization increasingly relies on the cloud, its number of cloud projects can also increase. Over time, project sprawl creeps in, and an organization can be left with tens, or even hundreds, of unnecessary projects. While these projects can be deleted in bulk, it becomes challenging to determine which projects are no longer needed. As manual efforts to understand each project are undertaken, valuable resources are wasted performing this arduous task. Even worse, resources running in the superfluous projects could be increasing your costs, carbon footprint, and security risk.
Remora is a serverless solution that helps limit the number of unused projects in your organization. It works with the Unattended Project Recommender to notify project owners of their unused projects, escalate notifications, and then delete those projects if no action is taken after a predetermined period.
The way the solution works is straightforward. The unattended project recommender analyzes usage activity on projects in your organization to make recommendations about reclaiming or removing unattended projects. Taking those recommendations a step further, Remora was designed to identify owners of the unattended projects and then send them a customizable email notification or assign them a Jira ticket. You can establish a predefined cadence for sending notifications, designating an Essential Contact to be copied after the first email (e.g., the folder owner). A time-to-live (TTL) determines how long an unused project can stay unused before it is removed; after three emails have been sent for a given project and the TTL has elapsed, the project is deleted. Remora labels each project with an impending deletion date.
Remora was built with several essential capabilities to ensure it could be customized to help meet each organization’s unique requirements:
Dry-run mode: dry-run mode is enabled by default, which prevents Remora from deleting projects. Dry-run mode must be turned off for projects to be deleted by the solution.
Multiple notifications: owners of unused projects should have multiple opportunities to act on the recommendations. Remora notifies owners every time it runs, and Cloud Scheduler can be used to set up periodic Remora runs (e.g., once a week).
Summary notifications: an owner of multiple unused projects receives a single email notification listing all the projects identified.
Escalation of notifications: the first notification is always sent directly to the project owner(s). We’ve implemented two mechanisms for escalating subsequent notifications:
Essential Contacts: Remora escalates to the specified category of Essential Contacts for the project. If your identities are different from your email addresses, configuring Essential Contacts will inform Remora of the correct escalation email addresses.
Folder or organization admins: when an Essential Contacts category is not specified, Remora escalates to the admin of the project’s parent folder or organization (whichever is the parent in the resource hierarchy).
Time-to-live: organization admins can set the number of days an unused project can remain in the organization. Remora labels the projects with their impending deletion date and deletes them after the designated period and three notifications.
Notification mechanisms: Remora sends email notifications using Sendgrid or creates Jira tickets.
Deployment using Google Cloud CLI or Terraform: Remora can be deployed manually using gcloud commands or as a Terraform module.
The entire solution combines the components below.
Unattended Project Recommender
The unattended project recommender analyzes project usage and provides recommendations to remove unused projects. Generally, a project will be recommended for deletion when it has low usage for 30 days and no OAuth tokens used in the last 180 days. Remora will then label the unattended project for deletion.
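For reference, you can inspect these recommendations yourself with the gcloud CLI. A sketch follows (the recommender ID shown is the project-utilization recommender; verify it against the current Recommender documentation):

```
# Sketch: list unattended project recommendations for one project.
# Verify the recommender ID against the current Recommender documentation.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=global \
    --recommender=google.resourcemanager.projectUtilization.Recommender
```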
Google Cloud Workflows and Scheduler
Workflows is a service that lets you connect different Google Cloud services and APIs to create pipelines and process automation. Workflows are configured with a YAML or JSON file that lists a series of steps in their order of execution. For this solution, Workflows are used to create the initial BigQuery dataset and tables where recommendations will be tracked, retrieve the latest unattended project recommendations from the Recommender API, and call Pub/Sub to initiate the notification process to the owners of the identified unattended projects. The workflows execute on a schedule configured using Cloud Scheduler, Google Cloud’s crontab-as-a-service solution. Cloud Scheduler is where you configure how often you want Remora to process unattended project recommendations.
Cloud Functions
Cloud Functions is Google Cloud’s function-as-a-service offering that lets you execute lightweight functions without the need to manage any servers. Cloud Functions can execute programmatically when triggered by events from Cloud Storage, Pub/Sub, Firebase or HTTP requests. Here, a Cloud Function is triggered via Pub/Sub to alert the project owner via email using Sendgrid or via an issue in Jira.
Terraform
To simplify and streamline the deployment of Remora, we compiled the individual Google Cloud CLI commands into a Terraform module that creates all the resources needed to get Remora running. As a Terraform module, Remora can be deployed to the provided Google Cloud project and customized with just a few variables.
The module will handle the creation and configuration of Workflows, Cloud Scheduler, Cloud Functions, and a service account with custom role assignments on the project and organization IAM policy. The code used for the Cloud Functions is included in the module and is uploaded as an archive file to a Cloud Storage bucket.
Check out the documentation in the repository for more detailed usage information and examples. Here’s one simple example of what a module might look like:
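The original snippet is not reproduced here; as a rough sketch of what such a module might look like (the module source path and variable names below are illustrative assumptions, not the module’s actual interface; consult the repository for the real inputs):

```hcl
# Illustrative sketch only: the source path and variable names below are
# assumptions; consult the Remora repository for the module's actual inputs.
module "remora" {
  source = "github.com/GoogleCloudPlatform/remora" # hypothetical path

  project_id       = "remora-host-project"
  organization_id  = "123456789012"
  schedule         = "0 22 * * 0"  # Cloud Scheduler cron: every Sunday night
  time_zone        = "America/Los_Angeles"
  notifier         = "sendgrid"    # send email notifications via Sendgrid
  sendgrid_api_key = var.sendgrid_api_key
  is_dry_run       = true          # dry-run mode is on by default; no deletions
}
```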
The example above will retrieve recommendations from the unattended project recommender every Sunday night, then use Sendgrid to send an email to the unattended project owner.
As soon as Remora is deployed in your organization, Workflows will query the Recommender API based on a specified interval. With Sendgrid configured as the notifier, the project owner will receive a message like this:
After being notified, the unattended project owner will have two options: delete the project right away, or dismiss the recommendation so that it won’t be picked up by the Recommender API again.
If no action is taken after the first notification, the next notification will include your specified category of Essential Contacts. If the Essential Contacts category is not set, the next owner in the resource hierarchy (i.e., the folder or organization) is included instead. The second message will look like this:
Finally, if no action is taken after three notifications and the TTL has expired, the project is automatically shut down and marked for deletion when Remora runs. Just like shutting down a project manually, there is a 30-day period where the project can be restored in case it was deleted in error.
By leveraging the intelligence of Active Assist’s recommendation APIs, Workflows, and Cloud Functions, Remora will prune unattended projects to potentially resolve security risks, reduce your carbon footprint, and lower the associated costs of your cloud infrastructure without the overhead incurred from frequent manual auditing. Additionally, since Remora is an open-source project, you can examine and customize the logic used in the Workflows and Cloud Functions to tailor the solution to your organization’s needs. You can get started by checking out the project repository on GitHub and deploying Remora using the provided Terraform module. If you would like to learn more about Active Assist, please take a look at this YouTube playlist covering Active Assist and its intelligent features.
Read More for the details.
Today, we are excited to announce general availability of tooling support to build and deploy native AOT compiled .NET 7 applications to AWS Lambda. .NET 7 is the latest version of .NET and brings several performance improvements and optimizations, including support for the native AOT deployment model. Native AOT compiles .NET applications to native code. By using native AOT with AWS Lambda, you can enable faster application starts, resulting in improved end-user experience. You can also benefit from reduced costs through faster initialization times and lower memory consumption of native AOT applications on AWS Lambda.
Read More for the details.
Kubernetes is rapidly evolving, with frequent feature releases and bug fixes. You can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.24.
Read More for the details.
AWS announces the availability of AWS Fluent Bit container images for Windows Server on Amazon ECS and Amazon EKS, to help customers easily process and forward their container logs to various AWS and third-party destinations, such as Amazon CloudWatch, Amazon S3, Amazon Kinesis Data Firehose, Datadog, and Splunk. This capability helps customers centrally view, query, and manage their logs without needing to implement or manage a custom logging solution or agents to extract logs from their Windows containers. With this launch, customers have a common mechanism to process and route their logs across ECS and EKS for both Linux and Windows workloads. For more details about the supported Windows versions and the image tags for Fluent Bit, visit the public GitHub repository.
Read More for the details.
Starting today, you can use AWS Nitro Enclaves in Asia Pacific (Osaka) and Asia Pacific (Jakarta) regions.
Read More for the details.
One benefit of cloud migration is having access to managed services, which can reduce operational overhead. For organizations operating in Microsoft-centered environments, Google Cloud offers a highly-available, hardened Managed Service for Microsoft Active Directory running on Windows virtual machines. Managed Microsoft AD provides benefits such as automated AD server updates, maintenance, and default security configurations, and requires no hardware management or patching.
Most organizations adopting Managed Microsoft AD will be migrating from an existing on-premises AD deployment. Typically, when migrating Active Directory objects, existing users cannot continue to access resources in the new domain unless security identifier (SID) history is preserved. This can lead to additional work for administrators as the permissions need to be recreated post migration.
To make migrations more seamless and eliminate extra effort, we are excited to announce a new capability in Managed Microsoft AD: support for the migration of AD users with SID history, now available in Preview. Migrated users retain their historic Access Control List (ACL) entries, so they can access resources without administrators having to recreate permissions post-migration.
To get started, you can use the Active Directory Migration Tool (ADMT) to migrate an on-premises AD domain to Managed Microsoft AD while preserving SID history.
1. Prepare your on-premises Active Directory and Managed Microsoft AD
As a prerequisite for migration, users need to set up a two-way trust between existing on-premises AD domain and new Managed Microsoft AD domain.
Either a single user or a team within your organization can perform the migration activities. When a team is involved, we recommend adding the team members to a domain local group in Managed Microsoft AD. You can connect to the Managed Microsoft AD domain and use standard Active Directory tools such as Active Directory Users and Computers (ADUC), part of RSAT: Active Directory Domain Services, to add those users to the domain local group. Remember to add this domain local group to the pre-created groups in Managed Microsoft AD after enabling permissions as described in step 3.
2. Prepare a Google Compute Engine Virtual Machine and set up ADMT
As a next step, install and set up the Microsoft Active Directory Migration Tool (ADMT) and Microsoft SQL Server 2016 Express on a Compute Engine virtual machine. Make sure this VM is not a domain controller, then join the VM to the Managed Microsoft AD domain.
3. Enable permissions on Managed Microsoft AD
After you prepare the on-premises Active Directory and Managed Microsoft AD, enable the required permissions in Managed Microsoft AD to migrate users with SID history. You can use the following gcloud CLI command:
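The original post’s command is not reproduced here; a sketch follows (verify the exact command group and flag names against the current Managed Microsoft AD documentation, as this Preview surface may change):

```
# Sketch; confirm the command group and flags in the Managed Microsoft AD docs.
gcloud active-directory domains migration enable DOMAIN_NAME \
    --onprem-domains=ONPREM_DOMAIN_NAME
```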
After you enable the permissions, add the domain local group to the “Cloud Service Administrators” and “Cloud Service Migrate SID Administrators” groups (pre-created in Managed Microsoft AD) so that the designated users have the required permissions for Active Directory migration with SID history.
For more information, see Enable permissions for migrating an on-premises domain with SID history and also ensure you understand these security implications.
4. Configure ADMT and install PES service
To migrate passwords securely during the domain migration, you need to use the Microsoft Password Export Server (PES) service. Before installing PES on the on-premises domain controller, create the encryption key on the VM running ADMT using the following command:
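A sketch of the ADMT key-creation command, run from an elevated command prompt on the VM where ADMT is installed (replace SOURCE_DOMAIN and KEY_FILE_PATH with your own values, and confirm the syntax in the ADMT documentation):

```
admt key /option:create /sourcedomain:SOURCE_DOMAIN /keyfile:KEY_FILE_PATH /keypassword:*
```

The `/keypassword:*` switch should prompt for a password that protects the exported key file.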
It creates and stores the encryption key at the location specified by KEY_FILE_PATH. Copy the encryption key to the local drive of the on-premises domain controller where you have installed the PES service.
Note: It is recommended to run the PES service only when you migrate passwords.
5. Migrate AD users to Google Cloud Managed Microsoft AD
Now that the environment is ready and the required permissions are enabled, migrate the AD users and groups along with the SID history using the User Account Migration Wizard in the ADMT tool. For detailed steps, please refer to the ADMT guide. In this wizard, select the “Migrate user SIDs to target domain” option on the Account Transition Options page.
Note: You may want to migrate other AD objects as well, so as a best practice, we recommend that you maintain a migration checklist and verify managed objects in the checklist post-migration.
The migration process window will show the status of how many users, groups and computers have been migrated. You can click on “View Log” to examine the migration logs for errors and check the details of the operations.
After the migration process is complete, use the Active Directory Users and Computers tool to verify that the users exist in the Managed Microsoft AD domain. You can also log in as a migrated user to verify that access is set up correctly and that resource permissions based on SID history work as expected.
You have now successfully migrated AD objects, including users with SID history, from the on-premises AD domain to the new Managed Microsoft AD domain. After a successful migration, we recommend disabling the permissions that were enabled for migration.
We are continually evolving our Managed Microsoft AD features to help you more easily and effectively manage AD tasks. Here are additional resources where you can learn more about Managed Microsoft AD and these new features.
Managed Service for Microsoft AD documentation
Existing domain migration overview
Enable permissions for migrating an on-premises domain with SID history
Read More for the details.
Participate in retail evaluation now to ensure compatibility. The Azure Sphere team has also updated the trusted keystore of Azure Sphere devices, resulting in an additional reboot for production devices.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) for Oracle now supports integration with Amazon Elastic File System (EFS). You can now transfer files between the RDS for Oracle DB instance and Amazon EFS file system. Amazon EFS is designed for 99.999999999% (11 9s) of durability and up to 99.99% (4 9s) of availability. You can scale to petabytes on a single NFS file system.
Read More for the details.
Amazon Redshift concurrency scaling is leveraged by thousands of customers to support virtually unlimited concurrent users and queries, and to meet their SLAs for BI reports, dashboards, and other analytics workloads. In addition to read queries, Amazon Redshift concurrency scaling is now extended to support scaling of the most common write operations performed as part of workloads such as data ingestion and processing. Support for write workloads with concurrency scaling is available on Amazon Redshift RA3 instance types.
Read More for the details.
Companies using the Amazon Connect Customer Profiles APIs for custom agent applications and automated interactions (e.g., IVR) can now search for profiles using multiple search terms, making it easier to find the right profile. Using the enhanced SearchProfiles API, customers can search for profiles using up to 5 terms to narrow down or expand search results. For example, when dealing with common names, you can narrow your search results to one profile by searching for profiles that match more than one term, such as phone number and name. As another example, when uncertain which search term matches a specific profile, you can expand search results to all the profiles matching any of the terms provided, such as phone number, name, or social security number.
Read More for the details.
Amazon HealthLake announces new analytics capabilities, making it easier for customers to query, visualize, and build machine learning models on their HealthLake data. With this launch, HealthLake transforms customer data into an analytics-ready format in AWS Lake Formation in near real-time. This removes the need for customers to execute complex data exports and data transformations. Now customers can simply focus on querying the data with SQL using Amazon Athena, building visualizations using Amazon QuickSight or other third party tools, and using this data to build ML models with Amazon SageMaker.
Read More for the details.
AWS re:Post is a cloud knowledge service designed to help AWS customers remove technical roadblocks, accelerate innovation, and operate efficiently. re:Post has only supported English since the launch at re:Invent 2021. Today, re:Post has expanded the user experience to support five additional languages. Customers can now learn, design, build, and troubleshoot AWS technology by posting questions and consuming content in the following languages: Traditional Chinese, Simplified Chinese, French, Japanese, and Korean. Multi-lingual support makes the re:Post community more accessible to AWS enthusiasts globally, allowing them to collaborate and build connections with community members in their preferred or chosen language(s) and to locate the content they need faster.
Read More for the details.
The AWS Serverless Application Model (SAM) Command Line Interface (CLI) announces the preview of AWS Lambda local testing and debugging on Terraform. The AWS SAM CLI is a developer tool that makes it easier to build, test, package, and deploy serverless applications. Terraform is an infrastructure as code tool that lets you build, change, and version cloud and on-premises resources safely and efficiently.
Read More for the details.
Amazon Managed Service for Prometheus now supports 200M active metrics per workspace. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a Cloud Native Computing Foundation open source project for monitoring and alerting that is optimized for container environments such as Amazon EKS and Amazon ECS. With this release, customers can send up to 200M active metrics to a single workspace after filing a limit increase, and can create many workspaces per account, enabling the storage and analysis of billions of Prometheus metrics. To get started, customers can create an Amazon Managed Service for Prometheus workspace and increase their workspace active series limits by filing a limit increase in AWS Support Center or AWS Service Quotas.
Read More for the details.
AWS Elemental MediaConnect now supports RGB 10- and 12-bit 4:4:4 color spaces via AWS Cloud Digital Interface (AWS CDI), enabling workloads, such as color grading, that require high-fidelity color at low latencies. The RGB 10- and 12-bit 4:4:4 color spaces are in addition to the currently supported option of YCbCr 10-bit 4:2:2.
Read More for the details.
Amazon HealthLake Imaging is a new HIPAA-eligible capability now in preview that enables healthcare providers and their software partners to easily store, access, and analyze medical images at petabyte scale. With HealthLake Imaging, healthcare providers and their software partners can run their medical imaging applications in the cloud to increase scale while also reducing infrastructure costs.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) now supports the delivery of message attributes, which provide structured metadata about a message. RDS event attributes are separate from the message, but are sent with the message body. The message receiver can use this information to decide how to handle the message, enabling routing and filtering without having to process the message body first.
Read More for the details.
Every year, government agencies are responsible for distributing services and benefits that require processing billions of documents, images and forms. These documents are often expected to be manually transcribed into an electronic system error-free by a government employee to support time-sensitive tasks such as application intake, verification, enrollment, vendor management, and procurement.
While agencies want to process these documents quickly to support their citizens, they’re often challenged with the following:
1. Most applications require a variety of custom forms and verification documents (e.g. tax forms and utility bills) that are received at different times and in different formats (paper, images, etc.), making it difficult to track and match documents to a single applicant.
2. Handwritten entries and those in multiple languages add to the challenges of understanding and entering data quickly.
3. Document processing is labor intensive when scanning, reading, data entry, indexing, and matching are all done manually.
With Document AI, state, local and federal agencies can accelerate time to delivery of critical services by decreasing the manual labor needed to process documents.
This solution can assist government workers by automatically extracting content from unstructured and handwritten documents and keying data into their existing system of record. We have expanded the capability to understand documents more easily and accurately over the years. As such, we are excited to announce new functionality that addresses the unique challenges faced by public sector agencies and their employees.
Introducing pre-built Document AI models to support common government use cases
Document AI for government includes pre-built document models for many widely used government document types, including invoices, receipts, driver’s licenses, passports, payslips, tax forms, and more. There may be scenarios where you want to extract additional fields or improve accuracy with your own documents. We now offer the ability to use transfer learning to uptrain a pre-built model.
For scenarios involving custom documents for which a pre-built model does not exist, we offer Document AI Workbench. With the release of Document AI Workbench, government agencies can now create custom document models to extract fields from any document, image, or form, enabling you to accelerate and streamline almost any workflow that requires heavy document processing.
Finally, with Document AI Accelerator, government workers can now use a lightweight workflow tool to manage the flow of incoming applications by identifying and classifying documents, matching documents to a single constituent and proactively reaching out if there is missing or inaccurate information. We also provide the ability to integrate human reviews for accuracy and validation.
Document AI helps unlock insights for a variety of use cases at government agencies. Whether extracting fields from custom documents, from specific documents such as driver’s licenses, passports, and tax forms, or from text in over 200 languages, Document AI offers a way to more easily process and automate documents.
The State of Hawaii used Document AI to extract travel and health information from visitors to safely reopen to tourism. “You need technology that can scale up easily and continue to be fast. The system now routinely handles 25,000 or more per day,” said Doug Murdock, CIO, Office of Enterprise Technology Services, State of Hawaii.
Learn more about how the government is accelerating innovation at our Google Government Summit and upcoming coffee hour on Automated Document Processing.
Read More for the details.