Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in AWS Europe (Ireland) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Starting March 5, 2025, Amazon FSx for NetApp ONTAP eliminates SnapLock licensing fees for data stored in SnapLock volumes, making it more cost-effective for customers to protect their business-critical data from ransomware, unauthorized deletions, and malicious modifications.
SnapLock is an ONTAP feature that offers Write Once, Read Many (WORM) protection to prevent alteration or deletion of data for specified retention periods, enabling customers to meet regulatory compliance and improve data protection. After this billing change, volumes with SnapLock enabled will no longer incur licensing charges. This license removal requires no changes to customer applications and takes effect automatically for both new and existing SnapLock volumes.
The removal of SnapLock licensing fees applies to all FSx for ONTAP file systems across all AWS Regions where they are available. To learn more, visit the product page and SnapLock in the user guide.
Amazon Nova Pro foundation model now supports latency-optimized inference in preview on Amazon Bedrock, enabling faster response times and improved responsiveness for generative AI applications. Latency-optimized inference speeds up response times for latency-sensitive applications, improving the end-user experience and giving developers more flexibility to optimize performance for their use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times.
Latency-optimized inference for Amazon Nova Pro is available via cross-region inference in the US West (Oregon), US East (N. Virginia), and US East (Ohio) Regions. Learn more about Amazon Nova foundation models at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. Learn more about latency-optimized inference on Bedrock in the documentation. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.
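If you already call Nova Pro through the Bedrock Converse API, opting into the latency-optimized tier is a one-parameter change. The sketch below is a minimal example, assuming the us.amazon.nova-pro-v1:0 cross-region inference profile and the boto3 SDK; verify the profile IDs available in your account against the Bedrock documentation.
import boto3
# Hedged sketch: the inference profile ID and performanceConfig setting follow
# the Bedrock documentation; confirm both for your account and Region.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")
response = bedrock.converse(
    modelId="us.amazon.nova-pro-v1:0",           # cross-region inference profile
    messages=[{"role": "user", "content": [{"text": "Give me a one-line status summary."}]}],
    performanceConfig={"latency": "optimized"},  # opt in to latency-optimized inference
)
print(response["output"]["message"]["content"][0]["text"])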
Starting today, you can use AWS WAF in the AWS Asia Pacific (Thailand) and AWS Mexico (Central) Regions.
AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.
To see the full list of regions where AWS WAF is currently available, visit the AWS Region Table. Please note that only core AWS WAF features, such as AWS Managed Rules and custom rules, are currently available in these new Regions. For more information about the service, visit the AWS WAF page. AWS WAF pricing may vary between regions. For more information about pricing, visit the AWS WAF Pricing page.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Asia Pacific (Mumbai) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7g instances are available in the AWS Europe (Zurich) region. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on top of the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.
Amazon EC2 Graviton3 instances also use up to 60% less energy for the same performance than comparable EC2 instances, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
AWS Identity and Access Management (IAM) Access Analyzer now supports Internet Protocol version 6 (IPv6) addresses via our new dual-stack endpoints. The existing IAM Access Analyzer endpoints supporting IPv4 will remain available for backwards compatibility. The new dual-stack domains are available either from the internet or from within an Amazon Virtual Private Cloud (VPC) using AWS PrivateLink.
To learn more about best practices for configuring IPv6 in your environment, see the IPv6 on AWS whitepaper. Support for IPv6 on IAM Access Analyzer is available in the AWS Commercial Regions, the AWS GovCloud (US) Regions, and the China Regions. To get started with using IAM Access Analyzer to continuously monitor access to your resources and remove unused permissions, visit our documentation.
Today, AWS has announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now provides a default bootstrap container image that simplifies system setup tasks, eliminating the need for most customers to maintain their own container images for initial configuration. Bootstrap containers are special-purpose containers that handle pre-startup operations such as directory creation, environment variable setup, and node-specific configurations before the main application containers start.
This enhancement allows customers to focus on their startup scripts rather than container image maintenance and regional availability. Previously, customers needed to create, maintain, and update their own container images while managing separate image repositories for each AWS Region. By using Bottlerocket’s default bootstrap container image, customers can specify their configuration tasks through simple user data, while the system automatically handles image updates. The default image is maintained by AWS, reducing operational overhead and improving system security.
Is your legacy database sticking you with rising costs, frustrating downtime, and scalability challenges? For organizations that strive for top performance and agility, legacy database systems can become significant roadblocks to innovation.
But there’s good news. According to a new Forrester Total Economic Impact™ (TEI) study, organizations may realize significant benefits by deploying Spanner, Google Cloud’s always-on, globally consistent, multi-model database with virtually unlimited scale. What kind of benefits? We’re talking an ROI of 132% over three years, and multi-million-dollar benefits and cost savings for a representative composite organization.
Read on for more, then download the full study to see the results and learn how Spanner can help your organization increase cost savings and profit, as well as reliability and operational efficiencies.
The high cost of the status quo
Legacy, on-premises databases often come with a hefty price tag that goes far beyond initial hardware and software investments. According to the Forrester TEI study, these databases can be a burden to maintain, requiring dedicated IT staff and specialized expertise, as well as high capital expenditures and operational overhead. Outdated systems can also limit your ability to respond quickly to changing market demands and customer needs, such as demand spiking for a new game or a viral new product.
To quantify the benefits that Spanner can bring to an organization, Forrester used its TEI methodology, conducting in-depth interviews with seven leading organizations across the globe who had adopted Spanner. These organizations came from a variety of industries such as retail, financial services, software and technology, gaming, and transportation. Based on its findings, Forrester created a representative composite organization: a business-to-consumer (B2C) organization with revenue of $1 billion per year, and modeled the potential financial impact of adopting Spanner.
In addition to a 132% return on investment (ROI) with a 9-month payback period, Forrester found that the composite organization also realized $7.74M in total benefits over the three years, from a variety of sources:
Cost savings from retiring on-prem legacy database: By retiring the on-prem legacy database and transitioning to Spanner, the composite organization can save $3.8 million over three years. Savings came from reduced infrastructure capital expenditure, maintenance costs, and system licensing expenses.
“The system before migration was more expensive. It was the cost of the entire system including the application, database, monitoring, and everything. We paid within the $5 million to $10 million range for a mainframe, and I expect that the cost of it would almost double within the next few years. Currently, we pay 90% less for Spanner.” – Senior Principal Architect at a software and technology organization
Profit retention and cost savings from reduced unplanned downtime: Prior to adopting Spanner, organizations suffered unplanned database downtime triggered by technical malfunctions, human errors, data integration issues, or natural disasters. With up to 99.999% availability, Spanner virtually eliminates unplanned downtime. Forrester calculates that the composite organization achieves $1.2 million in cost savings and profit retention due to reduced unplanned downtime.
“In the last seven years since we migrated to Spanner, the total number of failures caused by Spanner is zero. Prior to Spanner, some sort of problem would occur about once a month including a major problem once a year.” – Tech Lead, gaming organization
Cost savings from reduced overprovisioning for peak usage: With on-prem database systems, long infrastructure procurement cycles and large up-front expenditures mean that organizations typically provision for peak usage — even if that means they are over-provisioned most of the time. Spanner’s elastic scalability allows organizations to start small and scale up and down effortlessly as usage changes. Databases can scale up for however long you need, and then down again, cutting costs and the need to predict usage. For the composite organization, this results in cost savings of $1 million over three years.
“The number of transactions we are able to achieve is one of the main reasons that we use Spanner. Additionally, Spanner is highly consistent, and we save on the number of engineers needed for managing our databases.” – Head of SRE, DevOps, and Infrastructure, financial services organization
Efficiencies gained in onboarding new applications: Spanner accelerates development of new applications by eliminating the need to preplan resources. This resulted in an 80% reduction in time to onboard new applications and $981,000 in cost savings for the composite organization.
Beyond the numbers
Beyond the quantifiable ROI, the Forrester TEI study highlights unquantified benefits that amplify Spanner’s value. These include:
Improved budget predictability, as Spanner shifts expenditures from capex to opex, enabling more effective resource allocation and forecasting.
Greater testing and deployment flexibility, allowing software development engineers to rapidly scale development environments for testing, conduct thorough load tests, and quickly shut down resources.
Expert Google Cloud customer service, providing helpful guidance to maximize Spanner’s benefits.
“The Spanner team are experts. They have a deep understanding of the product they’ve built with deep insights on how we’re using the product if we ask them.” – Head of Engineering, financial services organization
An innovation-friendly architecture, facilitating the design and implementation of new business capabilities and expansion, improving automation and customer satisfaction, all without incurring downtime.
Together, these strategic advantages contribute to organizational agility and long-term success.
Unlock the potential of your data with Spanner
We believe the Forrester TEI study clearly demonstrates that Spanner is more than just a database; it’s a catalyst for business transformation. By eliminating the constraints of legacy systems, Spanner empowers organizations to achieve significant cost savings, improve operational efficiencies, and unlock new levels of innovation. Are you ready to transform your data infrastructure and unlock your organization’s full potential?
It’s indisputable. Over just a few short years, AI and machine learning have redefined day-to-day operations across the federal government—from vital public agencies, to federally funded research NGOs, to specialized departments within the military—delivering results and positively serving the public good. We stand at a pivotal moment, a New Era of American Innovation, where AI is reshaping every aspect of our lives.
At Google, we recognize the immense potential of this moment, and we’re deeply invested in ensuring that this innovation benefits all Americans. Our commitment goes beyond simply developing cutting-edge technology. We’re focused on building a stronger and safer America.
Let’s take a closer look at just a few examples of AI-powered innovations and the transformative impact they are having across agencies.
The National Archives and Records Administration (NARA) serves as the U.S. Government’s central recordkeeper—digitizing and cataloging billions of federal documents and other historical records (starting with the original Constitution and Declaration of Independence) at the National Archives. As the sheer volume of these materials inevitably grows over time, NARA’s mission includes leveraging new technologies to expand—yet simplify—public access, for novice info-seekers and seasoned researchers alike.
Sifting through NARA’s massive repositories traditionally required some degree of detective work—often weaving archival terminology into complex manual queries. As part of a 2023 initiative to improve core operations, NARA incorporated Google Cloud’s Vertex AI and Gemini into their searchable database, creating an advanced level of intuitive AI-powered semantic search. This allowed NARA to more accurately interpret a user’s context and intent behind queries, leading to faster and more relevant results.
The Aerospace Corporation is a federally funded nonprofit dedicated to exploring and solving challenges within humankind’s “space enterprise.” Their important work extends to monitoring space weather—solar flares, geomagnetic storms and other cosmic anomalies, which can affect orbiting satellites, as well as communications systems and power grids back on earth. The Aerospace Corporation partnered with Google Public Sector to revolutionize space weather forecasting using AI. This collaboration leverages Google Cloud’s AI and machine learning capabilities to improve the accuracy and timeliness of space weather predictions, and better safeguard critical infrastructure and national security from the impacts of space weather events.
The Air Force Research Laboratory (AFRL) leads the U.S. Air Force’s development and deployment of new strategic technologies to defend air, space and cyberspace. AFRL partnered with Google Cloud to integrate AI and machine learning into key areas of research, such as bioinformatics, web application efficiency, human performance, and streamlined AI-based data modeling. By leveraging Google App Engine, BigQuery, and Vertex AI, AFRL has accelerated and improved performance of its research and development platforms while aligning with broader Department of Defense initiatives to adopt and integrate leading-edge AI technologies.
Google’s AI innovations are truly powering the next wave of transformation and mission impact across the public sector—from transforming how we access our history, to understanding the cosmos, to strengthening national defense back on Earth, with even more promise on the horizon.
At Google Public Sector, we’re passionate about supporting your mission. Learn more about how Google’s AI solutions can empower your agency and hear more about how we are accelerating mission impact with AI by joining us at Google Cloud Next 25 in Las Vegas.
As AI use increases, security remains a top concern, and we often hear that organizations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI in a secure, compliant, and private manner.
Today, we’re introducing a new solution that can help you mitigate risk throughout the AI lifecycle. We are excited to announce AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models — irrespective of the platforms you choose to use.
AI Protection helps teams comprehensively manage AI risk by:
Discovering AI inventory in your environment and assessing it for potential vulnerabilities
Securing AI assets with controls, policies, and guardrails
Managing threats against AI systems with detection, investigation, and response capabilities
AI Protection is integrated with Security Command Center (SCC), our multicloud risk-management platform, so that security teams can get a centralized view of their AI posture and manage AI risks holistically in context with their other cloud risks.
AI Protection helps organizations discover AI inventory, secure AI assets, and manage AI threats, and is integrated with Security Command Center.
Discovering AI inventory
Effective AI risk management begins with a comprehensive understanding of where and how AI is used within your environment. Our capabilities help you automatically discover and catalog AI assets, including the use of models, applications, and data — and their relationships.
Understanding what data supports AI applications and how it’s currently protected is paramount. Sensitive Data Protection (SDP) now extends automated data discovery to Vertex AI datasets to help you understand data sensitivity and data types that make up training and tuning data. It can also generate data profiles that provide deeper insight into the type and sensitivity of your training data.
Once you know where sensitive data exists, AI Protection can use Security Command Center’s virtual red teaming to identify AI-related toxic combinations and potential paths that threat actors could take to compromise this critical data, and recommend steps to remediate vulnerabilities and make posture adjustments.
Securing AI assets
Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak, data loss, malicious URLs, and offensive content. Model Armor can support a broad range of models across multiple clouds, so customers get consistent protection for the models and platforms they want to use — even if that changes in the future.
Model Armor provides multi-model, multicloud support for generative AI applications.
Today, developers can easily integrate Model Armor’s prompt and response screening into applications using a REST API or through an integration with Apigee. The ability to deploy Model Armor in-line without making any app changes is coming soon through integrations with Vertex AI and our Cloud Networking products.
“We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because we’re getting a unified security posture from Security Command Center. We can quickly identify, prioritize, and respond to potential vulnerabilities — without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and being able to centralize the monitoring of AI security threats alongside our other security findings within SCC is a game-changer,” said Jay DePaul, chief cybersecurity and technology risk officer, Dun & Bradstreet.
Organizations can use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls, designed with first-party knowledge of the Vertex AI architecture, define secure resource configurations and help organizations prevent drift or unauthorized changes.
Managing AI threats
AI Protection operationalizes security intelligence and research from Google and Mandiant to help defend your AI systems. Detectors in Security Command Center can be used to uncover initial access attempts, privilege escalation, and persistence attempts for AI workloads. New AI Protection detectors based on the latest frontline intelligence, which help identify and manage runtime threats such as foundation model hijacking, are coming soon.
“As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security — by its nature — necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance,” said Dr. Grace Trinidad, research director, IDC.
“Piecemeal solutions can leave and have left critical vulnerabilities exposed, rendering organizations susceptible to threats like adversarial attacks or data poisoning, and added to the overwhelm experienced by security teams. A comprehensive, lifecycle-focused approach allows organizations to effectively mitigate the multi-faceted risks surfaced by generative AI, as well as manage increasingly expanding security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers,” she said.
Complement AI Protection with frontline expertise
The Mandiant AI Security Consulting Portfolio offers services to help organizations assess and implement robust security measures for AI systems across clouds and platforms. Consultants can evaluate the end-to-end security of AI implementations and recommend opportunities to harden AI systems. We also provide red teaming for AI, informed by the latest attacks on AI services seen in frontline engagements.
Building on a secure foundation
Customers can also benefit from using Google Cloud’s infrastructure for building and running AI workloads. Our secure-by-design, secure-by-default cloud platform is built with multiple layers of safeguards, encryption, and rigorous software supply chain controls.
For customers whose AI workloads are subject to regulation, we offer Assured Workloads to easily create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of regulatory and emerging AI standards compliance. Confidential Computing can help ensure data remains protected throughout the entire processing pipeline, reducing the risk of unauthorized access, even by privileged users or malicious actors within the system.
Additionally, for organizations looking to discover unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity as well as prevent accidental and intentional exfiltration of sensitive data in gen AI applications.
Next steps
Google Cloud is committed to helping your organization protect its AI innovations. Read more in this showcase paper from Enterprise Strategy Group and attend our upcoming online Security Talks event on March 12.
To evaluate AI Protection in Security Command Center and explore subscription options, please contact a Google Cloud sales representative or authorized Google Cloud partner.
More exciting capabilities are coming soon and we will be sharing in-depth details on AI Protection and how Google Cloud can help you securely develop and deploy AI solutions at Google Cloud Next in Las Vegas, April 9 to 11.
In our day-to-day work, the FLARE team often encounters malware written in Go that is protected using garble. While recent advancements in Go analysis from tools like IDA Pro have simplified the analysis process, garble presents a set of unique challenges, including stripped binaries, function name mangling, and encrypted strings.
Garble’s string encryption, while relatively straightforward, significantly hinders static analysis. In this blog post, we’ll detail garble’s string transformations and the process of automatically deobfuscating them.
We’re also introducing GoStringUngarbler, a command-line tool written in Python that automatically decrypts strings found in garble-obfuscated Go binaries. This tool can streamline the reverse engineering process by producing a deobfuscated binary with all strings recovered and shown in plain text, thereby simplifying static analysis, malware detection, and classification.
Before detailing the GoStringUngarbler tool, we want to briefly explain how the garble compiler modifies the build process of Go binaries. By wrapping around the official Go compiler, garble performs transformations on the source code during compilation through Abstract Syntax Tree (AST) manipulation using Go’s go/ast library. Here, the obfuscating compiler modifies program elements to obfuscate the produced binary while preserving the semantic integrity of the program. Once transformed by garble, the program’s AST is fed back into the Go compilation pipeline, producing an executable that is harder to reverse engineer and analyze statically.
While garble can apply a variety of transformations to the source code, this blog post will focus on its “literal” transformations. When garble is executed with the -literals flag, it transforms all literal strings in the source code and imported Go libraries into an obfuscated form. Each string is encoded and wrapped behind a decrypting function, thwarting static string analysis.
For each string, the obfuscating compiler can randomly apply one of the following literal transformations. We’ll explore each in greater detail in subsequent sections.
Stack transformation: This method applies runtime encoding to strings stored directly on the stack.
Seed transformation: This method employs a dynamic seed-based encryption mechanism where the seed value evolves with each encrypted byte, creating a chain of interdependent encryption operations.
Split transformation: This method fragments the encrypted strings into multiple chunks, each to be decrypted independently in a block of a main switch statement.
Stack Transformation
The stack transformation in garble implements runtime encryption techniques that operate directly on the stack, using three distinct transformation types: simple, swap, and shuffle. These names are taken directly from garble’s source code. All three perform cryptographic operations with the string residing on the stack, but each differs in complexity and approach to data manipulation.
Simple transformation: This transformation applies byte-by-byte encoding using a randomly generated mathematical operator and a randomly generated key of equal length to the input string.
Swap transformation: This transformation applies a combination of byte-pair swapping and position-dependent encoding, where pairs of bytes are shuffled and encrypted using dynamically generated local keys.
Shuffle transformation: This transformation applies multiple layers of encryption by encoding the data with random keys, interleaving the encrypted data with its keys, and applying a permutation with XOR-based index mapping to scatter the encrypted data and keys throughout the final output.
Simple Transformation
This transformation implements a straightforward byte-level encoding scheme at the AST level. The following is the implementation from the garble repository. In Figure 1 and subsequent code samples taken from the garble repository, comments were added by the author for readability.
// Generate a random key with the same length as the input string
key := make([]byte, len(data))
// Fill the key with random bytes
obfRand.Read(key)
// Select a random operator (XOR, ADD, SUB) to be used for encryption
op := randOperator(obfRand)
// Encrypt each byte of the data with the key using the random operator
for i, b := range key {
data[i] = evalOperator(op, data[i], b)
}
Figure 1: Simple transformation implementation
The obfuscator begins by generating a random key of equal length to the input string. It then randomly selects a reversible arithmetic operator (XOR, addition, or subtraction) that will be used throughout the encoding process.
The obfuscation is performed by iterating through the data and key bytes simultaneously, applying the chosen operator between each corresponding pair to produce the encoded output.
Figure 2 shows the decompiled code produced by IDA of a decrypting subroutine of this transformation type.
Figure 2: Decompiled code of a simple transformation decrypting subroutine
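To make the recovery concrete, here is a minimal Python sketch of the decryption side of the simple transformation, assuming the encrypted bytes, the key, and the operator have already been recovered from a subroutine like the one in Figure 2 (the byte values in the example are made up for illustration):
# Inverse of the "simple" transformation: apply the reverse operator
# byte-by-byte with the recovered key.
INVERSE_OPS = {
    "xor": lambda c, k: c ^ k,           # XOR is its own inverse
    "add": lambda c, k: (c - k) & 0xFF,  # encryption added the key byte
    "sub": lambda c, k: (c + k) & 0xFF,  # encryption subtracted the key byte
}
def decrypt_simple(encrypted: bytes, key: bytes, op: str) -> bytes:
    inv = INVERSE_OPS[op]
    return bytes(inv(c, k) for c, k in zip(encrypted, key))
# Hypothetical example: "hi" XOR-encrypted with key bytes 0xAA, 0xBB
assert decrypt_simple(bytes([0x68 ^ 0xAA, 0x69 ^ 0xBB]), b"\xaa\xbb", "xor") == b"hi"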
Swap Transformation
Figure 3 shows the implementation of the swap transformation from the garble repository.
// Determines how many swap operations to perform based on data length
func generateSwapCount(obfRand *mathrand.Rand, dataLen int) int {
// Start with number of swaps equal to data length
swapCount := dataLen
// Calculate maximum additional swaps (half of data length)
maxExtraPositions := dataLen / 2
// Add a random amount if we can add extra positions
if maxExtraPositions > 1 {
swapCount += obfRand.Intn(maxExtraPositions)
}
// Ensure swap count is even by incrementing if odd
if swapCount%2 != 0 {
swapCount++
}
return swapCount
}
func (swap) obfuscate(obfRand *mathrand.Rand, data []byte) *ast.BlockStmt {
// Generate number of swap operations to perform
swapCount := generateSwapCount(obfRand, len(data))
// Generate a random shift key
shiftKey := byte(obfRand.Uint32())
// Select a random reversible operator for encryption
op := randOperator(obfRand)
// Generate list of random positions for swapping bytes
positions := genRandIntSlice(obfRand, len(data), swapCount)
// Process pairs of positions in reverse order
for i := len(positions) - 2; i >= 0; i -= 2 {
// Generate a position-dependent local key for each pair
localKey := byte(i) + byte(positions[i]^positions[i+1]) + shiftKey
// Perform swap and encryption:
// - Swap positions[i] and positions[i+1]
// - Encrypt the byte at each position with the local key
data[positions[i]], data[positions[i+1]] = evalOperator(op, data[positions[i+1]], localKey), evalOperator(op, data[positions[i]], localKey)
}
...
Figure 3: Swap transformation implementation
The transformation begins by generating an even swap count, determined by the data length plus a random number of additional positions (limited to half the data length). The compiler then generates a list of random swap positions of that length.
The core obfuscation process operates by iterating through pairs of positions in reverse order, performing both a swap operation and encryption on each pair. For each iteration, it generates a position-dependent local encryption key by combining the iteration index, the XOR result of the current position pair, and a random shift key. This local key is then used to encrypt the swapped bytes with a randomly selected reversible operator.
Figure 4 shows the decompiled code produced by IDA of a decrypting subroutine of the swap transformation.
Figure 4: Decompiled code of a swap transformation decrypting subroutine
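A minimal Python sketch of the inverse operation follows, assuming the swap positions, shift key, and operator have been recovered from the subroutine. Because encryption walked the position pairs in reverse order, decryption walks them forward, undoing each encrypt-and-swap with the same position-dependent local key:
def decrypt_swap(data: bytearray, positions: list[int], shift_key: int, inv) -> bytearray:
    # inv is the inverse of the chosen operator, e.g. lambda c, k: c ^ k for XOR.
    for i in range(0, len(positions) - 1, 2):  # forward order undoes the reverse-order encryption
        local_key = (i + (positions[i] ^ positions[i + 1]) + shift_key) & 0xFF
        a, b = positions[i], positions[i + 1]
        # Reverse the swap-and-encrypt: decrypt both bytes, then swap them back
        data[a], data[b] = inv(data[b], local_key), inv(data[a], local_key)
    return data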
Shuffle Transformation
The shuffle transformation is the most complicated of the three stack transformation types. Here, garble applies its obfuscation by encrypting the original string with random keys, interleaving the encrypted data with its keys, and scattering the encrypted data and keys throughout the final output. Figure 5 shows the implementation from the garble repository.
// Generate a random key with the same length as the original string
key := make([]byte, len(data))
obfRand.Read(key)
// Constants for the index key size bounds
const (
minIdxKeySize = 2
maxIdxKeySize = 16
)
// Initialize index key size to minimum value
idxKeySize := minIdxKeySize
// Potentially increase index key size based on input data length
if tmp := obfRand.Intn(len(data)); tmp > idxKeySize {
idxKeySize = tmp
}
// Cap index key size at maximum value
if idxKeySize > maxIdxKeySize {
idxKeySize = maxIdxKeySize
}
// Generate a secondary key (index key) for index scrambling
idxKey := make([]byte, idxKeySize)
obfRand.Read(idxKey)
// Create a buffer that will hold both the encrypted data and the key
fullData := make([]byte, len(data)+len(key))
// Generate random operators for each position in the full data buffer
operators := make([]token.Token, len(fullData))
for i := range operators {
operators[i] = randOperator(obfRand)
}
// Encrypt data and store it with its corresponding key
// First half contains encrypted data, second half contains the key
for i, b := range key {
fullData[i], fullData[i+len(data)] = evalOperator(operators[i], data[i], b), b
}
// Generate a random permutation of indices
shuffledIdxs := obfRand.Perm(len(fullData))
// Apply the permutation to scatter encrypted data and keys
shuffledFullData := make([]byte, len(fullData))
for i, b := range fullData {
shuffledFullData[shuffledIdxs[i]] = b
}
// Prepare AST expressions for decryption
args := []ast.Expr{ast.NewIdent("data")}
for i := range data {
// Select a random byte from the index key
keyIdx := obfRand.Intn(idxKeySize)
k := int(idxKey[keyIdx])
// Build AST expression for decryption:
// 1. Uses XOR with index key to find the real positions of data and key
// 2. Applies reverse operator to decrypt the data using the corresponding key
args = append(args, operatorToReversedBinaryExpr(
operators[i],
// Access encrypted data using XOR-ed index
ah.IndexExpr("fullData", &ast.BinaryExpr{X: ah.IntLit(shuffledIdxs[i] ^ k), Op: token.XOR, Y: ah.CallExprByName("int", ah.IndexExpr("idxKey", ah.IntLit(keyIdx)))}),
// Access corresponding key using XOR-ed index
ah.IndexExpr("fullData", &ast.BinaryExpr{X: ah.IntLit(shuffledIdxs[len(data)+i] ^ k), Op: token.XOR, Y: ah.CallExprByName("int", ah.IndexExpr("idxKey", ah.IntLit(keyIdx)))}),
))
}
Figure 5: Shuffle transformation implementation
Garble begins by generating two types of keys: a primary key of equal length to the input string for data encryption and a smaller index key (between two and 16 bytes) for index scrambling. The transformation process then occurs in the following four steps:
Initial encryption: Each byte of the input data is encrypted using a randomly generated reversible operator with its corresponding key byte.
Data interleaving: The encrypted data and key bytes are combined into a single buffer, with encrypted data in the first half and corresponding keys in the second half.
Index permutation: The key-data buffer undergoes a random permutation, scattering both the encrypted data and keys throughout the buffer.
Index encryption: Access to the permuted data is further obfuscated by XOR-ing the permuted indices with randomly selected bytes from the index key.
Figure 6 shows the decompiled code produced by IDA of a decrypting subroutine of the shuffle transformation.
Figure 6: Decompiled code of a shuffle transformation decrypting subroutine
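Reversing the shuffle is mechanical once the four steps above are understood. A minimal Python sketch, assuming the permutation, the per-byte inverse operators, and the plaintext length have been recovered from the subroutine (the index-key XOR cancels itself out at runtime, so it can be ignored here):
def decrypt_shuffle(shuffled: bytes, perm: list[int], inverses: list, n: int) -> bytes:
    # perm is the permutation applied at obfuscation time; n is the plaintext
    # length; inverses[i] reverses the operator used for byte i.
    out = bytearray(n)
    for i in range(n):
        enc = shuffled[perm[i]]      # encrypted byte i was scattered to perm[i]
        key = shuffled[perm[n + i]]  # its key byte was scattered to perm[n + i]
        out[i] = inverses[i](enc, key)
    return bytes(out)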
Seed Transformation
The seed transformation implements a chained encoding scheme where each byte’s encryption depends on the previous encryptions through a continuously updated seed value. Figure 7 shows the implementation from the garble repository.
// Generate random initial seed value
seed := byte(obfRand.Uint32())
// Store original seed for later use in decryption
originalSeed := seed
// Select a random reversible operator for encryption
op := randOperator(obfRand)
var callExpr *ast.CallExpr
// Encrypt each byte while building chain of function calls
for i, b := range data {
// Encrypt current byte using current seed value
encB := evalOperator(op, b, seed)
// Update seed by adding encrypted byte
seed += encB
if i == 0 {
// Start function call chain with first encrypted byte
callExpr = ah.CallExpr(ast.NewIdent("fnc"), ah.IntLit(int(encB)))
} else {
// Add subsequent encrypted bytes to function call chain
callExpr = ah.CallExpr(callExpr, ah.IntLit(int(encB)))
}
}
...
Figure 7: Seed transformation implementation
Garble begins by randomly generating a seed value to be used for encryption. As the compiler iterates through the input string, each byte is encrypted by applying the random operator with the current seed, and the seed is updated by adding the encrypted byte. In this seed transformation, each byte’s encryption depends on the result of the previous one, creating a chain of dependencies through the continuously updated seed.
In the decryption setup, as shown in the IDA decompiled code in Figure 8, the obfuscator generates a chain of calls to a decrypting function. For each encrypted byte starting with the first one, the decrypting function applies the operator to decrypt it with the current seed and updates the seed by adding the encrypted byte to it. Because of this setup, subroutines of this transformation type are easily recognizable in the decompiler and disassembly views due to the multiple function calls it makes in the decryption process.
Figure 8: Decompiled code of a seed transformation decrypting subroutine
Figure 9: Disassembled code of a seed transformation decrypting subroutine
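The decryption loop is short enough to sketch in Python; the only subtlety is that the seed advances by the encrypted byte, not the plaintext byte. The inputs here are assumed to have been recovered from the call chain:
def decrypt_seed(encrypted: bytes, seed: int, inv) -> bytes:
    # inv reverses the chosen operator; seed is the initial seed baked into the binary.
    out = bytearray()
    for enc in encrypted:
        out.append(inv(enc, seed) & 0xFF)  # decrypt with the current seed
        seed = (seed + enc) & 0xFF         # advance the seed by the *encrypted* byte
    return bytes(out)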
Split Transformation
The split transformation is one of the more sophisticated string transformation techniques used by garble, implementing a multilayered approach that combines data fragmentation, encryption, and control flow manipulation. Figure 10 shows the implementation from the garble repository.
func (split) obfuscate(obfRand *mathrand.Rand, data []byte) *ast.BlockStmt {
var chunks [][]byte
// For small input, split into single bytes
// This ensures even small payloads get sufficient obfuscation
if len(data)/maxChunkSize < minCaseCount {
chunks = splitIntoOneByteChunks(data)
} else {
chunks = splitIntoRandomChunks(obfRand, data)
}
// Generate random indexes for all chunks plus two special cases:
// - One for the final decryption operation
// - One for the exit condition
indexes := obfRand.Perm(len(chunks) + 2)
// Initialize the decryption key with a random value
decryptKeyInitial := byte(obfRand.Uint32())
decryptKey := decryptKeyInitial
// Calculate the final decryption key by XORing it with position-dependent values
for i, index := range indexes[:len(indexes)-1] {
decryptKey ^= byte(index * i)
}
// Select a random reversible operator for encryption
op := randOperator(obfRand)
// Encrypt all data chunks using the selected operator and key
encryptChunks(chunks, op, decryptKey)
// Get special indexes for decrypt and exit states
decryptIndex := indexes[len(indexes)-2]
exitIndex := indexes[len(indexes)-1]
// Create the decrypt case that reassembles the data
switchCases := []ast.Stmt{&ast.CaseClause{
List: []ast.Expr{ah.IntLit(decryptIndex)},
Body: shuffleStmts(obfRand,
// Exit case: Set next state to exit
&ast.AssignStmt{
Lhs: []ast.Expr{ast.NewIdent("i")},
Tok: token.ASSIGN,
Rhs: []ast.Expr{ah.IntLit(exitIndex)},
},
// Iterate through the assembled data and decrypt each byte
&ast.RangeStmt{
Key: ast.NewIdent("y"),
Tok: token.DEFINE,
X: ast.NewIdent("data"),
Body: ah.BlockStmt(&ast.AssignStmt{
Lhs: []ast.Expr{ah.IndexExpr("data", ast.NewIdent("y"))},
Tok: token.ASSIGN,
Rhs: []ast.Expr{
// Apply the reverse of the encryption operation
operatorToReversedBinaryExpr(
op,
ah.IndexExpr("data", ast.NewIdent("y")),
// XOR with position-dependent key
ah.CallExpr(ast.NewIdent("byte"), &ast.BinaryExpr{
X: ast.NewIdent("decryptKey"),
Op: token.XOR,
Y: ast.NewIdent("y"),
}),
),
},
}),
},
),
}}
// Create switch cases for each chunk of data
for i := range chunks {
index := indexes[i]
nextIndex := indexes[i+1]
chunk := chunks[i]
appendCallExpr := &ast.CallExpr{
Fun: ast.NewIdent("append"),
Args: []ast.Expr{ast.NewIdent("data")},
}
...
// Create switch case for this chunk
switchCases = append(switchCases, &ast.CaseClause{
List: []ast.Expr{ah.IntLit(index)},
Body: shuffleStmts(obfRand,
// Set next state
&ast.AssignStmt{
Lhs: []ast.Expr{ast.NewIdent("i")},
Tok: token.ASSIGN,
Rhs: []ast.Expr{ah.IntLit(nextIndex)},
},
// Append this chunk to the collected data
&ast.AssignStmt{
Lhs: []ast.Expr{ast.NewIdent("data")},
Tok: token.ASSIGN,
Rhs: []ast.Expr{appendCallExpr},
},
),
})
}
// Final block creates the state machine loop structure
return ah.BlockStmt(
...
// Update decrypt key based on current state and counter
Body: ah.BlockStmt(
&ast.AssignStmt{
Lhs: []ast.Expr{ast.NewIdent("decryptKey")},
Tok: token.XOR_ASSIGN,
Rhs: []ast.Expr{
&ast.BinaryExpr{
X: ast.NewIdent("i"),
Op: token.MUL,
Y: ast.NewIdent("counter"),
},
},
},
// Main switch statement as the core of the state machine
&ast.SwitchStmt{
Tag: ast.NewIdent("i"),
Body: ah.BlockStmt(shuffleStmts(obfRand, switchCases...)...),
}),
Figure 10: Split transformation implementation
The transformation begins by splitting the input string into chunks of varying sizes. Shorter strings are broken into individual bytes, while longer strings are divided into random-sized chunks of up to four bytes.
The transformation then constructs a decrypting mechanism using a switch-based control flow pattern. Rather than processing chunks sequentially, the compiler generates a randomized execution order through a series of switch cases. Each case handles a specific chunk of data, encrypting it with a position-dependent key derived from both the chunk’s position and a global encryption key.
In the decryption setup, as shown in the IDA decompiled code in Figure 11, the obfuscator first collects the encrypted data by going through each chunk in their corresponding order. In the final switch case, the compiler performs a final pass to XOR-decrypt the encrypted buffer. This pass uses a continuously updated key that depends on both the byte position and the execution path taken through the switch statement to decrypt each byte.
Figure 11: Decompiled code of a split transformation decrypting subroutine
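Putting those pieces together, here is a heavily simplified Python sketch of the recovery, under the assumption that the chunk contents, the visited state indexes, and the initial key have already been lifted from the switch statement:
def decrypt_split(chunks: list[bytes], states: list[int], key: int, inv) -> bytes:
    # chunks are in execution (reassembly) order; states are the switch indexes
    # visited before the decrypt case; inv reverses the chosen operator.
    data = bytearray(b"".join(chunks))   # reassemble the encrypted buffer
    for counter, state in enumerate(states):
        key ^= (state * counter) & 0xFF  # the key evolves with the execution path
    # Final pass: each byte is decrypted with a position-dependent key
    return bytes(inv(b, (key ^ y) & 0xFF) for y, b in enumerate(data))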
GoStringUngarbler: Automatic String Deobfuscator
To systematically approach string decryption automation, we first consider how this can be done manually. From our experience, the most efficient manual approach leverages dynamic analysis through a debugger. Upon finding a decrypting subroutine, we can manipulate the program counter to target the subroutine’s entry point, execute until the ret instruction, and extract the decrypted string from the return buffer.
To perform this process automatically, the primary challenge lies in identifying all decrypting subroutines introduced by garble’s transformations. Our analysis revealed a consistent pattern—decrypted strings are always processed through Go’s runtime_slicebytetostring function before being returned by the decrypting subroutine. This observation provides a reliable anchor point, allowing us to construct regular expression (regex) patterns to automatically detect these subroutines.
String Encryption Subroutine Patterns
Through analyzing the disassembled code, we have identified consistent instruction patterns for each string transformation variant. For each transformation on 64-bit binaries, rbx is used to store the decrypted string pointer, and rcx is assigned with the length of the decrypted string. The main difference between the transformations is the way these two registers are populated before the call to runtime_slicebytetostring.
Figure 12: Epilogue patterns of garble’s decrypting subroutines
Through the assembly patterns in Figure 12, we develop regex patterns corresponding to each of garble’s transformation types, which allows us to automatically identify string decrypting subroutines with high precision.
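As a simplified illustration of the idea (not the exact patterns shipped in GoStringUngarbler, which are written per transformation type), a regex over textual disassembly might anchor on the register setup that precedes the call:
import re
# Illustrative only: rbx receives the decrypted-string pointer and rcx/ecx the
# length right before runtime_slicebytetostring is called.
EPILOGUE = re.compile(
    r"lea\s+rbx,\s*\S+\s*\n"                 # rbx <- pointer to decrypted bytes
    r"mov\s+[er]cx,\s*\S+\s*\n"              # rcx <- decrypted string length
    r"call\s+\S*runtime_slicebytetostring",  # conversion of the byte slice
    re.IGNORECASE,
)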
To extract the decrypted string, we must find the subroutine’s prologue and perform instruction-level emulation from this entry point until runtime_slicebytetostring is called. For binaries of Go versions v1.21 to v1.23, we observe two main patterns of instructions in the subroutine prologue that perform the Go stack check.
Figure 13: Prologue instruction patterns of Go subroutines
These instruction patterns in the Go prologue serve as reliable entry point markers for emulation. The implementation in GoStringUngarbler leverages these structural patterns to establish reliable execution contexts for the unicorn emulation engine, ensuring accurate string recovery across various garble string transformations.
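A condensed sketch of that emulation step using the unicorn Python bindings follows. It assumes the image bytes and the subroutine entry and call-site addresses were located via the patterns above, and it glosses over details the real tool must handle, such as the Go stack-limit check in the prologue:
from unicorn import Uc, UC_ARCH_X86, UC_MODE_64
from unicorn.x86_const import UC_X86_REG_RBX, UC_X86_REG_RCX, UC_X86_REG_RSP
def emulate_decrypt(image: bytes, base: int, entry: int, call_site: int) -> bytes:
    mu = Uc(UC_ARCH_X86, UC_MODE_64)
    mu.mem_map(base, (len(image) + 0xFFF) & ~0xFFF)  # map the binary image
    mu.mem_write(base, image)
    mu.mem_map(0x7F000000, 0x10000)                  # scratch stack
    mu.reg_write(UC_X86_REG_RSP, 0x7F00F000)
    mu.emu_start(entry, call_site)                   # run up to the call instruction
    # At the call to runtime_slicebytetostring, rbx points at the decrypted
    # bytes and rcx holds their length.
    length = mu.reg_read(UC_X86_REG_RCX)
    return bytes(mu.mem_read(mu.reg_read(UC_X86_REG_RBX), length))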
Figure 14 shows the output of our automated extraction framework, where GoStringUngarbler is able to identify and emulate all decrypting subroutines.
From these instruction patterns, we have derived a YARA rule for detecting samples that are obfuscated with garble’s literal transformation. The rule can be found in Mandiant’s GitHub repository.
Deobfuscation: Subroutine Patching
While extracting obfuscated strings can aid malware detection through signature-based analysis, this alone is not useful for reverse engineers conducting static analysis. To aid reverse engineering efforts, we’ve implemented a binary deobfuscation approach leveraging the emulation results.
Although developing an IDA plugin would have streamlined our development process, we recognize that not all malware analysts have access to, or prefer to use, IDA Pro. To make our tool more accessible, we developed GoStringUngarbler as a standalone Python utility to process binaries protected by garble. The tool can deobfuscate and produce functionally identical executables with recovered strings stored in plain text, improving both reverse engineering analysis and malware detection workflows.
For each identified decrypting subroutine, we implement a strategic patching methodology, replacing the original code with an optimized stub while padding the remaining subroutine space with INT3 instructions (Figure 15).
xor eax, eax ; clear return register
lea rbx, <string addr> ; Load effective address of decrypted string
mov ecx, <string len> ; populate string length
call runtime_slicebytetostring ; convert slice to Go string
ret ; return the decrypted string
Figure 15: Function stub to patch over garble’s decrypting subroutines
Initially, we considered storing recovered strings within an existing binary section for efficient referencing from the patched subroutines. However, after examining obfuscated binaries, we found that there is not enough space within existing sections to consistently accommodate the deobfuscated strings. On the other hand, adding a new section, while feasible, would introduce unnecessary complexity to our tool.
Instead, we opt for a more elegant space utilization strategy by leveraging the inherent characteristics of garble’s string transformations. In our tool, we implement in-place string storage by writing the decrypted string directly after the patched stub, capitalizing on the guaranteed available space from decrypting routines:
Stack transformation: The decrypting subroutine stores and processes encrypted strings on the stack, providing adequate space through their data manipulation instructions. The instructions originally used for pushing encrypted data onto the stack create a natural storage space for the decrypted string.
Seed transformation: For each character, the decrypting subroutine requires a call instruction to decrypt it and update the seed. This is more than enough space to store the decrypted bytes.
Split transformation: The decrypting subroutine contains multiple switch cases to handle fragmented data recovery and decryption. These extensive instruction sequences guarantee sufficient space for the decrypted string data.
Figure 16 and Figure 17 show the disassembled and decompiled output of our patching framework, where GoStringUngarbler has deobfuscated a decrypting subroutine to display the recovered original string.
Figure 16: Disassembly view of a deobfuscated decrypting subroutine
Figure 17: Decompiled view of a deobfuscated decrypting subroutine
Downloading GoStringUngarbler
GoStringUngarbler is now available as an open-source tool in Mandiant’s GitHub repository.
The installation requires Python3 and the Python dependencies listed in the requirements.txt file.
Future Work
Deobfuscating binaries generated by garble presents a specific challenge—its dependence on the Go compiler for obfuscation means that the calling convention can evolve between Go versions. This change can potentially invalidate the regular expression patterns used in our deobfuscation process. To mitigate this, we’ve designed GoStringUngarbler with a modular plugin architecture. This allows new plugins to be easily added with updated regular expressions to handle variations introduced by new Go releases. This design ensures the tool’s long-term adaptability to future changes in garble’s output.
Currently, GoStringUngarbler primarily supports garble-obfuscated PE and ELF binaries compiled with Go versions 1.21 through 1.23. We are continuously working to expand this range as the Go compiler and garble are updated.
Acknowledgments
Special thanks to Nino Isakovic and Matt Williams for their review and continuous feedback throughout the development of GoStringUngarbler. Their insights and suggestions have been invaluable in shaping and refining the tool’s final implementation.
We are also grateful to the FLARE team members for their review of this blog post publication to ensure its technical accuracy and clarity.
Finally, we want to acknowledge the developers of garble for their outstanding work on this obfuscating compiler. Their contributions to the software protection field have greatly advanced both offensive and defensive security research on Go binary analysis.
Today, we are excited to announce that Amazon Q Business now supports the ingestion of audio and video data. This new feature enables Amazon Q customers to search through ingested audio and video content, allowing them to ask questions based on the information contained within these media files.
This enhancement significantly expands the capabilities of Amazon Q Business, making it an even more powerful tool for organizations to access and utilize their multimedia content. Customers can unlock valuable insights from their audio and video resources. Users can now easily search for specific information within recorded meetings, training videos, podcasts, or any other audio or video content ingested into Amazon Q Business. This capability streamlines information retrieval, enhances knowledge sharing, and improves decision-making processes by making multimedia content as searchable and accessible as text-based documents.
The audio and video ingestion feature uses Amazon Bedrock Data Automation to process customers’ multimodal assets. The feature is available for Amazon Q Business in the US East (N. Virginia) and US West (Oregon) AWS Regions. Customers can start using this feature in supported Regions to enhance their organization’s knowledge management and information discovery processes. To get started with ingesting audio and video data in Amazon Q Business, visit the Amazon Q console or refer to the documentation. For more information about Amazon Q Business and its features, please visit the Amazon Q product page.
As of February 14, 2025, SageMaker Flexible Training Plans support instant start times that allow customers to book a plan starting as soon as the next 30 minutes. Amazon SageMaker's Flexible Training Plan (FTP) makes it easy for customers to access GPU capacity to run ML workloads. Customers who use Flexible Training Plans can plan their ML development cycles with confidence, knowing they'll have the GPUs they need on a specific date for the amount of time they reserve. There are no long-term commitments, so customers get capacity assurance while only paying for the amount of GPU time necessary to complete their workloads.
With the ability to start a reservation within 30 minutes (subject to availability), Flexible Training Plan accelerates compute resource procurement for customers running machine learning workloads. The system first attempts to find a single, continuous block of reserved capacity that precisely matches a customer’s requirement. If a continuous block isn’t available, SageMaker automatically splits the total duration across two time segments and attempts to fulfill the request using two separate reserved capacity blocks. Additionally, with this release, Flexible Training Plan will return up to three distinct options, providing flexibility in compute resource procurement.
You can create a Training Plan using either the SageMaker AI console or programmatic methods. The SageMaker AI console offers a visual, graphical interface with a comprehensive view of your options, while programmatic creation can be done using the AWS CLI or SageMaker SDKs to interact directly with the training plans API. You can get started with the API experience here.
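For the programmatic path, the hedged boto3 sketch below searches for offerings and reserves the first one. Parameter and field names follow our reading of the SearchTrainingPlanOfferings and CreateTrainingPlan APIs; verify them against the current SDK, and adjust instance type, count, and duration for your workload.
import boto3
sm = boto3.client("sagemaker")
# Find capacity offerings (up to three distinct options per this release).
offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",
    InstanceCount=8,
    DurationHours=72,
    TargetResources=["training-job"],
)["TrainingPlanOfferings"]
# Reserve the first offering; with instant start it can begin within ~30 minutes,
# subject to availability.
plan = sm.create_training_plan(
    TrainingPlanName="my-instant-plan",
    TrainingPlanOfferingId=offerings[0]["TrainingPlanOfferingId"],
)
print(plan["TrainingPlanArn"])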
Amazon Lex now supports Confirmation and Alphanumeric slot types in Korean (ko-KR) locale. These built-in slot types help developers build more natural and efficient conversational experiences in Korean language applications.
The Confirmation slot type automatically resolves various Korean expressions into ‘Yes’, ‘No’, ‘Maybe’, and ‘Don’t know’ values, eliminating the need for custom slots with multiple synonyms. The Alphanumeric slot type enables capturing combinations of letters and numbers, with support for regular expressions to validate specific formats, making it easier to collect structured data like identification numbers or reference codes.
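As an example of the Alphanumeric regex support, the hedged boto3 sketch below defines a slot type in the ko-KR locale that accepts only values like "AB123456"; the bot identifiers are placeholders, and field names follow the Lex V2 CreateSlotType API.
import boto3
lex = boto3.client("lexv2-models")
# Extend the built-in AMAZON.AlphaNumeric slot type with a validating regex.
lex.create_slot_type(
    botId="BOTID12345",  # placeholder bot ID
    botVersion="DRAFT",
    localeId="ko-KR",
    slotTypeName="ReferenceCode",
    parentSlotTypeSignature="AMAZON.AlphaNumeric",
    valueSelectionSetting={
        "resolutionStrategy": "OriginalValue",
        "regexFilter": {"pattern": "[A-Z]{2}[0-9]{6}"},  # e.g., AB123456
    },
)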
Korean support for these slot types is available in all AWS regions where Amazon Lex V2 operates.
To learn more about implementing these features, visit the Amazon Lex documentation for Custom Vocabulary and Alphanumerics.
AWS Transfer Family has reduced service-side login latency from 1-2 seconds to under 500 milliseconds.
AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, FTP, and web browser-based transfers directly into and out of AWS storage services. With this launch, you benefit from significantly reduced latency from the service to initiate the transfer over SFTP. This optimization offers substantial benefits, particularly for high-frequency, low-latency use cases with automated processes or applications requiring rapid file operations.
Amazon Neptune Database is now available in the Asia Pacific (Malaysia) Region on engine versions 1.1.0.0 and later. You can now create Neptune clusters using R6g, R6i, T4g, and T3 instance types in the AWS Asia Pacific (Malaysia) Region.
Amazon Neptune Database is a fast, reliable, and fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on the W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production.
AWS Lambda now supports Amazon CloudWatch Logs Live Tail in VS Code IDE through the AWS Toolkit for Visual Studio Code. Live Tail is an interactive log streaming and analytics capability which provides real-time visibility into logs, making it easier to develop and troubleshoot Lambda functions.
We previously announced support for Live Tail in the Lambda console, enabling developers to view and analyze Lambda logs in real time. Now, with Live Tail support in VS Code IDE, developers can monitor Lambda function logs in real time while staying within their development environment, eliminating the need to switch between multiple interfaces for coding and log analysis. This makes it easier for developers to quickly test and validate code or configuration changes in real time, accelerating the author-test-deploy cycle when building applications using Lambda. This integration also makes it easier to detect and debug failures and critical errors in Lambda function code, reducing the mean time to recovery (MTTR) when troubleshooting Lambda function errors.
Using Live Tail for Lambda in VS Code IDE is straightforward. After installing the latest version of the AWS Toolkit for Visual Studio Code, developers can access Live Tail through the AWS Explorer panel. Simply navigate to the desired Lambda function, right-click, and select “Tail Logs” to begin streaming logs in real time.
AWS CodeBuild now supports non-container builds on Linux x86, Arm, and Windows on-demand fleets. You can run build commands directly on the host operating system without containerization. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.
With non-container builds, you can execute build commands that require direct access to the host system resources or have specific requirements that make containerization challenging. This feature is particularly useful for scenarios such as building device drivers, running system-level tests, or working with tools that require host machine access.
The non-container feature is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page.
To learn more about non-container builds, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
Amazon S3 Tables are now available in three additional AWS Regions: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Sydney).
S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query performance through continual table optimization compared to unmanaged Iceberg tables, and up to 10x higher transactions per second compared to Iceberg tables stored in general purpose S3 buckets.
You can use S3 Tables with AWS analytics services through the preview integration with Amazon SageMaker Lakehouse, as well as Apache Iceberg-compatible open source engines like Apache Spark and Apache Flink. Additionally, S3 Tables perform continual table maintenance to automatically expire old snapshots and related data files to reduce storage cost over time.