Azure – Public preview: Private subnet
Private subnet is a new feature that allows you to create subnets that block insecure default outbound access, so no default outbound public IP address is created for the subnet's virtual machines.
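For illustration, here is a minimal sketch of creating such a subnet with the Azure Python SDK, assuming an azure-mgmt-network release that exposes a default_outbound_access property on the Subnet model (the property name, resource names, and subscription ID below are placeholders and assumptions, not confirmed by this announcement).

```python
# Sketch: create a subnet with default outbound access disabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Subnet

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.subnets.begin_create_or_update(
    resource_group_name="my-rg",
    virtual_network_name="my-vnet",
    subnet_name="my-private-subnet",
    subnet_parameters=Subnet(
        address_prefix="10.0.1.0/24",
        # Disabling default outbound access makes this a private subnet: VMs get
        # no implicit outbound public IP and need an explicit egress path
        # (NAT gateway, load balancer outbound rules, or an assigned public IP).
        default_outbound_access=False,
    ),
)
subnet = poller.result()
```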
Read More for the details.
Amazon RDS for SQL Server now supports Microsoft SQL Server 2022 CU9 for Express, Web, Standard, and Enterprise Editions. You can now leverage SQL Server 2022 features such as Query Store Enhancements, Parameter Sensitive Plan Optimization, and SQL Server Ledger on your Amazon RDS for SQL Server DB instances.
Read More for the details.
Today, Amazon EBS announced the availability of Snapshot Lock, a new security feature that helps customers comply with their data retention policies and add another layer of protection against inadvertent or malicious deletions of data. Customers use EBS Snapshots to back up their EBS volumes for disaster recovery, data migration, and compliance purposes. Customers can set up multiple layers of data protection for EBS Snapshots, including copying them across multiple AWS regions and accounts, setting up IAM access policies as well as enabling Recycle Bin. With Snapshot Lock, customers can configure locks on individual snapshots so that they cannot be deleted by anyone, including the account owner, for a specified period of time. Customers have the flexibility of granting certain users access to modify snapshot lock configurations per their data governance guidelines or implementing stricter controls by ensuring that the lock configuration cannot be modified by anyone, including privileged users. Customers can also rely on this feature to store EBS Snapshots in a WORM (Write-Once-Read-Many) compliant format.
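As a rough sketch, locking a snapshot could look like the following with boto3, assuming a release that includes the new LockSnapshot API (the snapshot ID and lock settings are placeholders).

```python
# Sketch: lock an EBS snapshot so it cannot be deleted for a set period.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Governance mode: users with the right IAM permissions may still adjust the lock.
# Compliance mode (after its cool-off period) cannot be shortened or removed by anyone.
response = ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot ID
    LockMode="governance",
    LockDuration=30,                       # days the snapshot stays undeletable
)
print(response)
```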
Read More for the details.
Today, AWS CloudTrail Lake announces a one-year extendable retention pricing option that is optimized for flexible data retention needs. CloudTrail Lake is a managed data lake that lets you aggregate, immutably store, and analyze activity logs for audit, security, and operational investigations. With the one-year extendable retention pricing, the first year of retention is included with ingestion charges. You may choose to extend the retention period to a maximum of 10 years by paying extended retention charges after the first year. Compliance, audit, security, and operations teams can use the new pricing option for cost-effective retention of auditable data sources in alignment with compliance programs such as PCI-DSS, as well as for forensic and operational investigations.
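For illustration, creating an event data store on the new pricing option might look roughly like this with boto3, assuming the API exposes a BillingMode parameter for it (the value names below are assumptions based on this announcement).

```python
# Sketch: create a CloudTrail Lake event data store with one-year extendable retention pricing.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

response = cloudtrail.create_event_data_store(
    Name="audit-events",
    BillingMode="EXTENDABLE_RETENTION_PRICING",  # first year included with ingestion
    RetentionPeriod=3653,                        # up to ~10 years, billed after year one
)
print(response["EventDataStoreArn"])
```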
Read More for the details.
AWS introduces an enhanced co-sell experience for AWS Partners using APN Customer Engagements (ACE).
Read More for the details.
Amazon Managed Grafana now features a new self-service plugin management experience for Grafana community plugins. With this release, Amazon Managed Grafana administrators can discover and install Grafana community plugins directly from their workspace. Plugins enable you to extend your Grafana experience, unifying data from a wider variety of data sources with visualizations tailored to analyze your unique datasets.
Read More for the details.
Starting today, Amazon EC2 High Memory instances with 6TiB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are now available in Europe (London) and Europe (Spain) Regions. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.
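As a minimal sketch, launching one of these instances in Europe (London) with boto3 could look like the following (the AMI ID is a placeholder).

```python
# Sketch: launch a 6 TiB High Memory instance in Europe (London).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")  # Europe (London)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="u-6tb1.56xlarge",
    MinCount=1,
    MaxCount=1,
)
```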
Read More for the details.
Google Cloud Next is heading to Las Vegas in 2024 from April 9th to 11th, and I’m excited to announce that registration opens today. With a new location, fresh programming, hundreds of partners, and the latest in AI, analytics, security, infrastructure, collaboration and more, Google Cloud Next ‘24 is set to be our biggest event yet.
Early registration at the Google Cloud Next site is now open at a discounted rate of $999 USD* (that’s 50% off the full ticket price of $1,999 USD), and includes one year of Innovators Plus (limited quantity), so there’s no better time to register. Last year’s event with over 20,000 people sold out quickly — check out our 2 min recap video.
There has never been a more exciting time to learn more about cloud technology, and staying up-to-date on the latest in AI innovation from Google experts, partners, and your peers is critical for anyone looking to keep pace in this rapidly-evolving space. Throughout the conference, you’ll have opportunities to connect directly with Google engineers and experts, who can help you solve problems and brainstorm innovative solutions 1:1.
At Cloud Next ’24, we have learning opportunities for everyone.
For developers, there will be hundreds of learning sessions and an Innovators Hive with “how-to” content and demos, in addition to hands-on training on gen AI and other cutting-edge cloud technologies.
For leaders, we will highlight dozens of gen AI use cases in every function and industry and share details on how to move quickly from proof-of-concept to production, as well as expected outcomes.
For startups, we have a curated experience that includes startup sessions, networking opportunities, and the bustling Startup Lounge, where you can learn about how Google Cloud can help supercharge your startup’s growth.
For everyone, we will have a huge show floor featuring the latest and greatest from our Google Cloud partner ecosystem as well as a customer innovation showcase.
There will be dedicated sessions for all levels across all roles, including application developers, architects and IT professionals, data and database engineers, data scientists and data analysts, DevOps/SREs/ITOps/platform engineers, IT managers and business leaders, productivity/collaboration app makers, security professionals, startup leaders/founders – there’s truly something for everyone!
Register today at our dedicated Google Cloud Next ‘24 site.
*The $999 USD early bird price is valid through 11:59 PM PT on Wednesday, January 31, 2024 or until it’s sold out.
Read More for the details.
In October, AWS launched a self-guided experience in AWS Partner Central with automated tasks that accelerate the partner journey from registration to listing in AWS Marketplace. Now, we’ve extended that guidance throughout the Software Path, providing you with personalized tasks and guidance to help develop and mature your software offerings. As you progress through the four growth motions of build, market, sell, and grow, you’ll automatically become eligible for key ISV programs like AWS SaaS Factory and AWS ISV Accelerate. Relevant tasks will be surfaced to help you complete the requirements and unlock greater partner benefits and programs.
Read More for the details.
Today, we are announcing an improvement to the Amazon Elastic Block Store (EBS) performance of Amazon Compute-Optimized C6in, General Purpose M6in and M6idn, and Memory-Optimized R6in and R6idn EC2 instance types.
Read More for the details.
Every day, more and more small businesses and startups are leveraging generative AI for their internal and customer-facing needs, with over 70% of gen AI’s billion-dollar unicorns using Google Cloud. As the technology continues moving at a rapid pace, with almost daily announcements of products and features, businesses are discovering new use cases and ways to leverage gen AI tools.
In our monthly gen AI workshops, we’re meeting small businesses to provide them with a technical overview of our key gen AI products, AI techniques, and best practices. Our workshops are organized and led by a team of Specialist Customer Engineers, Monique Sullivan, Bruk Zewdie, Isaac Attuah, and myself. They ignite imagination, as together, we brainstorm and ideate with our customers to blend their creativity with Google Cloud’s hands-on experience deploying large models. Our workshop provides a collaborative environment to unearth innovative applications and use cases. Working with our team, we explore industry use cases, confront and assess common challenges, and identify successful techniques in deployment strategies.
If you are a startup or small business looking for some inspiration on how to use gen AI, this article will explore some of the prominent themes and trends from our most recent workshops—and explain how you can participate in a future edition.
Customers across a range of industries are using the power of generative AI, from doctors helping patients, to businesses empowering employees and customers, to individuals bridging knowledge gaps or enhancing visual creativity. Below are some of the industry use cases that have sparked excitement in our recent sessions.
Healthcare:
Carry medical assistance in your pocket: help doctors focus on patients rather than paperwork, with an AI physician’s assistant always at the ready.
Get a bespoke workout routine via an AI personal trainer that tailors its advice from a library of fitness videos and regimens.
Enable precision quality control by evaluating healthcare provider performance using statistical analysis of clinical data.
Employee Support:
Instill company wisdom with an HR chatbot answering policy questions 24/7, guiding employees with institutional knowledge.
Automate headshot quality by leveraging AI to ensure polished employee profile pics, optimized with studio-quality lighting and touch-ups.
Quickly find answers with a large language model (LLM)-generated knowledge base: a personalized, up-to-date employee handbook.
Customer Support:
Deliver instant, personalized support with chatbots that engage customers using natural conversations and contextual understanding.
Remember each customer via support bots that tap into full histories and preferences for tailored, seamless issue resolution.
Make better decisions with accurate forecasting at your fingertips, powered by an AI assistant that predicts outcomes using past data.
Visual-based:
Accelerate game development by leveraging AI to automatically generate graphics and visual assets.
Gain actionable insights by analyzing videos with AI to identify trends, movements, and anomalies.
Automate custom illustrations, using AI to create original drawings for children’s books to spark young imaginations.
Condense information into dynamic videos, transforming documents into shareable lessons with AI-generated animations.
Bridging Gaps:
Make data accessible: automatically translate complex financial reports into plain language for any audience.
Drive personalized recommendations, using AI to analyze assessment data and propose customized teaching strategies.
In addition to use cases, workshop participants are also eager to understand how to interact more effectively with generative models, that is, how to prompt them.
Prompting is the art of guiding an LLM with your input so it provides your desired response. Throughout our workshops, our attendees share and unlock effective techniques like:
Establish the LLM’s expertise and role upfront in your prompt so it understands its persona.
Have the LLM restate prompts to ensure it has correctly processed your input.
Use structured instructions for clarity: “summarize the input: [text].”
Ask the LLM to craft specific prompts tailored for your needs.
Streamline prompts like Google searches: no fluff.
Organize the prompt with lists or dictionaries.
Insert “variables” into the prompt, allowing users to customize the prompt by just changing the variable words (see the small sketch after this list).
Prompting is a skill that mixes your personal creativity with your industry knowledge, so try multiple approaches to get different results.
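For example, here is a small, model-agnostic sketch of the “variables” technique: the prompt structure stays fixed, and users customize it by changing only the variable values (the template and values are illustrative).

```python
# Sketch: a prompt template with named variables; the rendered string would be
# sent to whichever LLM you use.
PROMPT_TEMPLATE = (
    "You are an experienced {role}. "              # establish the persona upfront
    "Summarize the input for a {audience} audience "
    "in {num_bullets} bullet points:\n[{text}]"
)

prompt = PROMPT_TEMPLATE.format(
    role="financial analyst",
    audience="non-technical",
    num_bullets=3,
    text="Quarterly revenue grew 12% while operating costs fell 4%...",
)
print(prompt)
```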
See gen AI in action at our next Generative AI Workshop. Preview tools and services for startups and small businesses with our specialist team. Over two hours, you and your team will have the opportunity to see a deep dive and demos on the upcoming Google Cloud gen AI product portfolio. You’ll leave equipped with strategies to implement cutting-edge gen AI solutions that give your business a competitive advantage. The workshop will also give you the opportunity to receive free credits to labs with a sandbox environment so you can experiment and get started with your generative AI journey.
The future is generative – join us to seize it and propel your startup forward.
To learn more about how Google Cloud can help your startup, visit our page here for more information about our program, where startups can get up to $200,000 USD (up to $350,000 USD for AI startups) in Google Cloud and Firebase cost coverage over 2 years, as well as technical training, business support, and Google-wide offers.
Read More for the details.
Geographical redundancy is one of the keys to designing a resilient data lake architecture in the cloud. Some of the use cases for customers to replicate data geographically are to provide for low-latency reads (where data is closer to end users), comply with regulatory requirements, colocate data with other services, and maintain data redundancy for mission-critical apps.
BigQuery already stores copies of your data in two different Google Cloud zones within a dataset region. In all regions, replication between zones uses synchronous dual writes. This ensures that, in the event of either a soft (power failure, network partition) or hard (flood, earthquake, hurricane) zonal failure, no data loss is expected and you will be back up and running almost immediately.
We are excited to take this a step further with the preview of cross-region dataset replication, which allows you to easily replicate any dataset, including ongoing changes, across cloud regions. In addition to ongoing replication use cases, you can use cross-region replication to migrate BigQuery datasets from one source region to another destination region.
BigQuery provides a primary and secondary configuration for replication across regions:
Primary region: When you create a dataset, BigQuery designates the selected region as the location of the primary replica.
Secondary region: When you add a dataset replica in a selected region, BigQuery designates this as a secondary replica. The secondary region could be a region of your choice. You can have more than one secondary replica.
The primary replica is writeable, and the secondary replica is read-only. Writes to the primary replica are asynchronously replicated to the secondary replica. Within each region, the data is stored redundantly in two zones. Network traffic never leaves the Google Cloud network.
Although replicas reside in different regions, they share the same name. This means that your queries do not need to change when referencing a replica in a different region.
The following diagram shows the replication that occurs when a dataset is replicated:
The following workflow shows how you can set up replication for your BigQuery datasets.
To replicate a dataset, use the ALTER SCHEMA ADD REPLICA DDL statement.
You can add a single replica to any dataset within each region or multi-region. After you add a replica, it takes time for the initial copy operation to complete. You can still run queries referencing the primary replica while the data is being replicated, with no reduction in query processing capacity.
To confirm that the secondary replica has been created successfully, you can query the creation_complete column in the INFORMATION_SCHEMA.SCHEMATA_REPLICAS view.
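As a sketch, adding a replica and checking its status with the BigQuery Python client might look like the following; the project, dataset, and region names are placeholders, and the exact DDL option names should be confirmed against the preview documentation.

```python
# Sketch: add a cross-region replica to a dataset and poll its creation status.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Add a secondary replica of the dataset in another region.
client.query(
    """
    ALTER SCHEMA my_dataset
    ADD REPLICA `us-west1`
    OPTIONS (location = 'us-west1')
    """
).result()

# creation_complete turns true once the initial copy has finished.
rows = client.query(
    """
    SELECT schema_name, replica_name, creation_complete
    FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA_REPLICAS
    WHERE schema_name = 'my_dataset'
    """
).result()
for row in rows:
    print(row.schema_name, row.replica_name, row.creation_complete)
```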
Once initial creation is complete, you can run read-only queries against a secondary replica. To do so, set the job location to the secondary region in query settings or the BigQuery API. If you do not specify a location, BigQuery automatically routes your queries to the location of the primary replica.
If you are using BigQuery’s capacity reservations, you will need to have a reservation in the location of the secondary replica. Otherwise, your queries will use BigQuery’s on-demand processing model.
To promote a replica to be the primary replica, use the ALTER SCHEMA SET OPTIONS DDL statement and set the primary_replica option. You must explicitly set the job location to the secondary region in query settings.
After a few seconds, the secondary replica becomes primary, and you can run both read and write operations in the new location. Similarly, the primary replica becomes secondary and only supports read operations.
To remove a replica and stop replicating the dataset, use the ALTER SCHEMA DROP REPLICA DDL statement. If you are using replication for migration from one region to another region, delete the replica after promoting the secondary to primary. This step is not required, but is useful if you don’t need a dataset replica beyond your migration needs.
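Continuing the sketch above, promoting the secondary replica and then dropping the old one might look roughly like this (again with placeholder dataset, region, and replica names).

```python
# Sketch: promote the secondary replica (e.g., for a region migration), then
# drop the replica that is no longer needed.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Promote the us-west1 replica to primary; the job location must be the secondary region.
client.query(
    """
    ALTER SCHEMA my_dataset
    SET OPTIONS (primary_replica = 'us-west1')
    """,
    location="us-west1",
).result()

# Once writes have moved, drop the now-secondary replica in the original region.
client.query(
    """
    ALTER SCHEMA my_dataset
    DROP REPLICA `us-east1`
    """
).result()
```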
We are super excited to make the preview for cross-region replication available for BigQuery, which will allow you to enhance your geo-redundancy and support region migration use cases. Looking ahead, we will include a console-based user interface for configuring and managing replicas. We will also offer a cross-region disaster recovery (DR) feature that extends cross-region replication to protect your workloads in the rare case of a total regional outage. You can also learn more about BigQuery and cross-region replication in the BigQuery cross-region dataset replication QuickStart.
Read More for the details.
Since we launched Memorystore for Redis Cluster in preview, customers across a variety of industries including banking, retail, ads, manufacturing, and social media have taken advantage of the performance and scalability of Memorystore for Redis Cluster.
Today we’re thrilled to announce the general availability (GA) of Memorystore for Redis Cluster, providing a 99.99% SLA with replicas enabled. With Memorystore for Redis Cluster, you get a fully-managed and fully open-source software (OSS) compatible Redis Cluster offering with zero downtime scaling (in or out), providing up to 60 times more throughput than Memorystore for Redis, with microsecond latency. Memorystore for Redis Cluster intelligently places primaries and replica nodes across availability zones and manages automatic failovers to maximize availability and reduce complexity.
Performance sensitive customers like PLAID, Wright Flyer Studios, and Reddit are excited about this GA and the 99.99% SLA.
“Memorystore for Redis Cluster provides us the scalability and manageability required for our workloads, allowing us to use Memorystore in far more meaningful ways. The OSS cluster client compatibility also provides an easier migration path from existing self-managed Redis Clusters. Now that it is GA with a 99.99% SLA, we look forward to adopting it for our caching needs on Google Cloud!” – Stanley Feng, Senior Engineering Manager at Reddit
Memorystore for Redis Cluster takes Memorystore to the next level of scale and performance, supporting 10 times more data and providing up to 60 times more throughput than Memorystore for Redis, with microsecond latency. Memorystore’s vetted, production-ready infrastructure provides a pre-configured and optimal Redis platform to supercharge your performance. With zero downtime scaling, you can start small and grow over time, adding only the capacity required based on workload demands, without taking downtime and interrupting your production applications. Unlike client-sharded caching architectures, which are prone to data loss during scaling operations, the simple scaling APIs of Memorystore for Redis Cluster let you grow or shrink your clusters and optimize costs without data loss. The direct access (proxy-less) architecture of Memorystore for Redis Cluster ensures that throughput scales linearly as nodes are added to the cluster, providing ultra-low and predictable latencies and avoiding the cost and latency overhead of a proxy-based architecture.
Applibot, a CyberAgent company, produces and operates smartphone games and services, and depends heavily on Redis and its ultra-fast performance:
“After relying on Memorystore, we’re excited for the launch of Memorystore for Redis Cluster, which will dramatically improve performance with its ultra low and predictable latencies. We’re also very excited to take advantage of Memorystore for Redis Cluster’s easy-to-use APIs which make scaling clusters in or out simple, fast, and non-disruptive, enabling us to dynamically size the clusters based on workload demands.” – Naoto Ito, Lead of Backend Engineering at Applibot
Memorystore for Redis Cluster is built with a robust control plane that provides high availability with data replication across multiple fault-domains (zones), automatic failovers, and intelligent node placement for both primary and replica nodes. Memorystore for Redis Cluster can handle both node and zonal outages by efficiently failing over to the replica nodes, promoting them to become primaries, and automatically repairing failed nodes with seamless orchestration. Memorystore for Redis Cluster’s durable architecture is designed for reliability, as Redis clients directly access the shards (primary and replica nodes). This design avoids the risk of a single point of failure (as each shard is designed to fail independently), which is inherent in proxy-based architectures.
In addition, Memorystore for Redis Cluster’s control plane utilizes Create-Before-Destroy and graceful shutdown maintenance strategies to eliminate downtime or workload interruptions. Instead of upgrading active cluster nodes in-place, Memorystore first creates an entirely new replica with the new software (at no extra charges to you), then coordinates a lossless failover from the old node to the new node, and lastly removes the old node gracefully in a gradual process that ensures minimal-to-no impact to your application.
NTT DOCOMO, Japan’s largest telecommunications company, is thrilled with the general availability of Memorystore for Redis Cluster:
“Memorystore for Redis Cluster’s resilient design and 99.99% SLA gives us confidence that our clusters are durable and resilient. Memorystore’s sophisticated control plane and maintenance orchestration ensures minimal impact to our application during maintenance and we’re very excited to utilize this managed offering so we can focus our efforts on creating value for our customers.” – Masatoshi Kato, Manager, Service Design Department, NTT DOCOMO
Memorystore for Redis Cluster is built with Private Service Connect (PSC) for private and secure connectivity by default. With PSC, we simplified the provisioning experience so you can easily configure private networking with an automated single-step process, avoid security issues of bidirectional access, and not be limited by the quota issues posed by Virtual Private Cloud (VPC) peering-based implementations. We also simplified cluster endpoint management by only requiring two IP addresses for any sized cluster (even for 250 Redis nodes!), thereby addressing IP exhaustion and contiguous expansion problems. With PSC, client applications can easily access the cluster from any region and there are advanced security controls available, like separation of permissions for Network and Redis admin and control of the IP address space used for cluster endpoints.
Memorystore for Redis Cluster comes with built-in integration with Google Identity and Access Management (IAM) so you can easily manage and control access to your clusters, especially for a microservice based client architecture. We also provide out of the box Audit Logging, in-transit encryption with TLS, and integration with Cloud Monitoring, so your cluster metrics are accessible and actionable.
For many customers, it’s the fully-managed nature of the Memorystore for Redis Cluster that entices them to migrate. You can offload the tedious and unrewarding work of tuning Redis performance on Compute Engine VMs, orchestrating complex cluster topologies, optimizing networking implementations, managing maintenance, and striving for self-managed high availability in the face of outages and upgrades. Customers can rely on Google to continually invest in building and supporting new Redis features and delivering them to customers with non-disruptive maintenance operations. Now, you can focus on building value for your organizations instead of focusing on managing Redis Clusters. To make adoption even more enticing, Memorystore Committed Use Discounts (CUDs) are already available for Memorystore for Redis Cluster and are fungible with Memorystore for Redis as well as Memorystore for Memcached, enabling savings of 40% with a three-year CUD and 20% with a one-year CUD.
Waze, a free navigation and driving app active in more than 185 countries is adopting Memorystore for Redis Cluster to take advantage of the fully-managed offering and supercharge their applications’ performance:
“Waze is excited to use Memorystore for Redis Cluster as our primary cache solution, taking advantage of the 99.99% SLA, zero-downtime scalability, and flexible Redis data types. The ultra-fast performance has been instrumental in helping us scale our platform and provide a better experience for our users.” – Yuval Kamran, DevOps Engineer
At this point, you may be wondering how to get started or how much effort it would require to migrate to the Memorystore for Redis Cluster. Good news! We’ve already published a zero-downtime migration blog using OSS RIOT with code snippets and a detailed step-by-step walkthrough so you can quickly and easily migrate your Redis implementations from any source, including self-managed or third-party offerings without interrupting your applications.
You can easily get started today by heading over to the Cloud console and creating a cluster, and then scaling in or out with just a few clicks. Please let us know if you have any feedback by reaching out to us at cloud-memorystore-pm@google.com.
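For example, once a cluster exists, connecting from an application with the OSS redis-py cluster client might look like the following minimal sketch (the discovery endpoint address is a placeholder for the one shown in the console).

```python
# Sketch: talk to a Memorystore for Redis Cluster endpoint with a cluster-aware client.
from redis.cluster import RedisCluster

client = RedisCluster(host="10.0.0.3", port=6379)  # PSC discovery endpoint (example)

# Reads and writes are routed to the correct shard by the cluster-aware client.
client.set("session:12345", "cart=3-items")
print(client.get("session:12345"))
```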
Read More for the details.
This year, Google has seen an increase in the number of vulnerabilities impacting central processing units (CPU) across hardware systems. Two of the most notable of these vulnerabilities were disclosed in August, when Google researchers discovered Downfall (CVE-2022-40982) and Zenbleed (CVE-2023-20593), affecting Intel and AMD CPUs, respectively.
This trend is only intensifying as time goes on. Left unmitigated, these types of vulnerabilities can impact billions of personal and cloud computers.
Today, we’re detailing the findings of Reptar (CVE-2023-23583), a new CPU vulnerability that impacts several Intel desktop, mobile, and server CPUs. Google’s Information Security Engineering team reported the vulnerability to Intel, who disclosed the vulnerability today. Thanks to the thoughtful collaboration between Google, Intel, and industry partners, mitigations have been rolled out, and Googlers and our customers are protected.
A Google security researcher identified a vulnerability related to how redundant prefixes are interpreted by the CPU, which, if exploited successfully, leads to bypassing the CPU’s security boundaries. Prefixes allow you to change how instructions behave by enabling or disabling features. The full rules are complicated, but in general, if you use a prefix that doesn’t make sense or that conflicts with other prefixes, we call it redundant. Usually, redundant prefixes are ignored.
The impact of this vulnerability is most severe in a multi-tenant virtualized environment, where an exploit on a guest machine causes the host machine to crash, resulting in a denial of service to other guest machines running on the same host. Additionally, the vulnerability could potentially lead to information disclosure or privilege escalation.
You can read more technical details about the vulnerability at our researcher’s blog.
Our security teams were able to identify this vulnerability and responsibly disclose it to Intel. Google worked with industry partners to identify and test a successful mitigation so all users are protected from this risk in a timely manner. In particular, Google’s response team ensured a successful rollout of the mitigation to our systems before it posed a risk to our customers, mainly Google Cloud and ChromeOS customers.
As Reptar, Zenbleed, and Downfall suggest, computing hardware and processors remain susceptible to these types of vulnerabilities. This trend will only continue as hardware becomes increasingly complex. This is why Google continues to invest heavily in CPU and vulnerability research. Work like this, done in close collaboration with our industry partners, allows us to keep users safe and is critical to finding and mitigating vulnerabilities before they can be exploited.
We look forward to continuing this proactive cybersecurity work, and encourage others to join us on this journey to create a more secure and resilient technology ecosystem.
Read More for the details.
Savimbo was founded to help stop deforestation of the Amazon rainforest by directly paying small farmers and indigenous groups to be good stewards of the land. Small farmers control 80% of the world’s tropical forests, guarding 70% of its carbon stores. Indigenous groups are 5% of the world’s population guarding 80% of its biodiversity. Our goal is to employ one billion small farmers within ten years to support our mission.
Our company, a B Corporation, was started by and for indigenous small farmers in the Colombian Amazon who have responsibly conserved their privately held lands for generations. To support these efforts, we take advantage of the latest technologies including Google Cloud so that small farmers can better preserve and replant the rainforest, retain biodiversity, and protect wildlife, while earning a fair income.
In addition to working with small farmers, we provide a product that large global corporations need: premium-quality and high-value biodiversity, water, tree, and carbon credits that are audited and reliable. All of these things come together to further corporate environmental, social, and governance (ESG) commitments.
Drea Burbank, Co-founder and CEO, Savimbo
Removing intermediaries with direct payments to small farmers
Our fair-trade biodiversity credits are a win-win for large corporations and digitally under-represented indigenous peoples alike. We eliminate the middlemen and complexity for small farmers. Standardizing on the best technologies makes our streamlined approach possible, both for managing the biodiversity credits and for handling small farmer land enrollments, direct-to-farmer payments, and tracking and certification for reforestation and wildlife preservation.
For example, we will be using Firebase Realtime Database to store tree-tracking data we gather from our small farmers and other sources. Changes to the data, such as when a new native tree is planted, are logged on smartphones, even while offline in rural areas of Putumayo, Colombia. Realtime Database SDKs sync the data automatically when the mobile phones connect to the internet later.
It needs to be easy for indigenous peoples and local communities to participate directly in our programs, eliminating the intermediary cost burdens that most other programs entail. That means that the methods need to be straightforward, with any complex calculations automated in code. We use Google Earth Engine for that, including access to satellite imagery and geospatial data that inform our analytics for tracking progress.
With BigQuery and Looker Studio we will use AI and ML for predictive analytics, cultivate insights using real-time data, and prepare visualizations of our findings. This helps us continue to improve our approach to promoting biodiversity richness and offer a premium product that better protects the Earth.
Savimbo team members
Empowering self-sufficiency in ecological silviculture
A big part of Savimbo’s mandate is to train local talent with trustless verification methods instead of relying on outsiders to come in and manage projects. We have a core group of very bright, talented young people. Some of our team members who are doing Google Earth coding today were pig farming about six months ago.
We also have a consulting network of hundreds of people in 27 countries that speak 15 languages to advise Savimbo, many of them working pro bono. These include the five indigenous taitas (local medicine doctors) that were the genesis of our company, a Nobel Peace Prize winner, and a rocket scientist. Getting these diverse perspectives has allowed us to continue to be thoughtful as we build out our company.
Our three founders bring complementary backgrounds. Jhony is an animal rights activist and jaguar expert who has worked with the World Wildlife Foundation and in conservation for two decades. Drea has a background in science, medicine, and technology, plus experience in the field as a ‘hotshot’ wildland firefighter for the US Forest Service and for the British Columbia Ministry of Forests in Canada. Our third co-founder, Fernando Lezama, has been an indigenous rights activist and taita for 30 years.
We now have leaders from eighteen indigenous groups, 300 small farming families, and several governmental and non-governmental agencies partnering with us to protect 16,500 hectares so far. And we’re exploring expansion soon to Ecuador and Mexico.
Launching our company with support from the Google for Startups Cloud Program was really helpful. We received cloud credits that gave us a jumpstart on building our infrastructure. We also just joined the Google Startups for Sustainable Development program and are very excited about the mentorship, funding connections, and technology support it provides. We appreciate that Google Cloud understands our impact metrics and is on board with the idea of doing good while running a successful business, and we appreciate working with the cleanest cloud in the industry.
We hope to be a role model for planetary consciousness. If we are going to address the climate challenges in front of us, we have to think and collaborate beyond national borders and work together to be smarter about resources on a global level.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Read More for the details.
Today, AWS announced a new feature in SageMaker Pipelines, the ML workflow management service, that enables users to run their desired steps in a pipeline as a sub-workflow. The new feature, called Selective Execution, allows you to run selected steps in a pipeline while avoiding rerunning the entire pipeline. As a data scientist, applied scientist, or ML engineer iterating on a pipeline for experimentation and deployment of ML models at scale, you can use this feature to initiate a pipeline execution on your desired steps, save hours of processing time, and simplify managing the code used for executions.
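As an illustrative sketch with the SageMaker Python SDK, a selective execution might look like the following, assuming a pipeline that already exists (the pipeline name, step names, and execution ARN are placeholders).

```python
# Sketch: re-run only selected steps of an existing SageMaker pipeline,
# reusing outputs from a previous execution.
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.selective_execution_config import SelectiveExecutionConfig

pipeline = Pipeline(name="my-training-pipeline")

selective_config = SelectiveExecutionConfig(
    source_pipeline_execution_arn=(
        "arn:aws:sagemaker:us-east-1:123456789012:pipeline/"
        "my-training-pipeline/execution/abc123"
    ),
    selected_steps=["TrainModel", "EvaluateModel"],  # only these steps re-run
)

execution = pipeline.start(selective_execution_config=selective_config)
```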
Read More for the details.
Amazon Athena has updated its integration with Apache Hudi to support new features and the latest 0.8.0 community release. Hudi is an open-source data management framework used to simplify incremental data processing in S3 data lakes. The updated integration enables you to use Athena to query Hudi 0.8.0 tables managed via Amazon EMR, Apache Spark, Apache Hive or other compatible services and includes new support for snapshot queries and reading bootstrapped tables.
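For illustration, a snapshot-style query against a Hudi table can be submitted to Athena with boto3 as in this minimal sketch (database, table, and output location are placeholders).

```python
# Sketch: run a query against a Hudi table registered in the Glue Data Catalog.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT * FROM my_hudi_table WHERE ds = '2021-07-01' LIMIT 10",
    QueryExecutionContext={"Database": "my_datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```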
Read More for the details.
Azure VMware Solution has expanded availability to Australia Southeast. With this release Australia Southeast is now the second region within the Australian sovereign area to become available (joining Australia East).
Read More for the details.
Over the course of just a few years, the ways in which we consume content have changed dramatically.
In order to compete in this new landscape and to adapt to the technological change that underpins it, media studios and other content producers should consider providing relatively open access to their proprietary content. This necessitates a cultural change across the industry.
Cable television cancellation, or “cord cutting,” has increased significantly since 2010, and with the pandemic accelerating the trend, there are now more than 30 million cord-cutter U.S. households. The American digital content subscriber now watches streaming content across an average of more than three paid services. For several years, more video content has been uploaded to streaming services every 30 days than the major U.S. television networks have created in 30 years.
With an abundance of content readily available across a growing number of platforms, each accessible from a plethora of different devices, media providers should invest in making it easier for consumers to find the video content they want to watch. If a viewer can’t access and stream something with minimal effort, they’ll likely move on to one of the countless alternatives readily at their disposal.
Think about voice-based assistants and search services. When prompted to find a piece of content, these services sift through a multitude of third-party libraries, where access is permitted, and remove friction from the user experience. It’s important for media companies to evolve from siloed, closed-off content libraries to participation in digital ecosystems, where a host of partnership opportunities can precipitate wider reach and revenue opportunities. Ultimately, joining these communities facilitates the delivery of the right experience on the right device at the right time to the right consumer.
Legacy silos prevalent in the media and entertainment industry must be broken down to make way for richer viewing experiences. It’s critical that studios roll out content faster, distribute it more securely, and better understand their audiences so they can provide customers the content they want in the contexts they want. In order to achieve these goals, publishers must leverage technology that’s purpose-built for the demands of a more dynamic, competitive landscape.
Publishers should consider embracing application programming interfaces, or APIs, to better connect with viewers and maximize return on content production. APIs, which facilitate interoperability between applications, allow publishers’ content to be consumed by more developers and publishing partners, who subsequently create more intelligent, connected experiences surrounding that content for the viewers.
This new content value chain should leverage an API management tool that resides on top of cloud infrastructure to manage the partnerships that ultimately ensure media can easily make its way to the consumer on their ideal viewing platform. APIs let content owners and distributors interact with partner technologies to drive value from social interactions and attract a wider audience via insights derived from data and analytics.
Perhaps most important is the ability for APIs to allow content to follow users as they start watching on one device, stop, and transfer to another. Content is increasingly separated from the device. APIs enable experiential continuity to be maintained when devices are changed, facilitating more seamless experiences across devices of different form factors and screen sizes. Consumers expect content to follow them wherever they go.
Last year, streaming services produced more original content than the entire television industry did in 2005—so for many media producers, adjusting to consumers’ new media consumption habits involves not only making content available on more devices but also producing more content, faster.
Studios should explore solutions that help them collaborate globally and produce great content more securely and efficiently. In the content value chain, APIs are used to seamlessly connect artists and production crews to necessary resources and assets across multiple production technologies and locations. For example, via APIs, a film crew in one country can record, encode, and collaborate and share content with another studio in another country. These cloud-based production environments can offer a single destination for all contributors to access the assets they need while also keeping those assets accessible only to the right people in the right contexts.
In addition, creating and distributing content requires a complex supply chain. APIs let multiple parties, each responsible for a different core function (such as content purchasing, storage, payments, physical media delivery, customer service, etc.), meld into a seamless experience for the customer. Rather than reimagining their strategy when it comes to these backend tasks, studios can leverage third-party APIs to expedite getting content in front of the right people and ultimately execute each of those functions more efficiently than they could on their own.
Besides tapping into partner APIs, savvy media and entertainment companies can accelerate consumption of content by developing their own public APIs to securely provide access to their asset libraries, pricing, and other relevant information. This is important, as it lets media creators use the same API to serve content to a variety of services and device types, thus helping them scale content distribution without simultaneously having to scale security resources.
Media companies’ APIs can also be implemented to deliver better customer experiences. Because APIs are involved each time a customer streams a video and every time a developer integrates a media asset into a new app or digital experience, API usage analytics can provide powerful insights into where, when, by whom, and on what devices different types of media—from traditional movies to augmented reality and other interactive content—are being accessed.
In order for studios to quickly adapt to a content value chain and distribute their content across multiple platforms, it’s important that they implement an API management tool on top of the cloud environment that powers content creation and distribution. For instance, Google Cloud offers Apigee, which sits on top of its public cloud. This added layer facilitates the integration between a studio’s proprietary environment and the strategic partnerships that APIs make possible.
The API lifecycle can be rather complex, especially when multiple APIs are leveraged. It can include:
Planning, design, implementation, testing, publication, operation, consumption, maintenance, versioning, and retirement of APIs
Launch of a developer portal to target, market to, and govern communities of developers who leverage APIs
Runtime management
Estimation of APIs’ value
Analytics to understand patterns of API usage
Using a management layer such as Apigee increases the likelihood that media and entertainment companies can combine the ability offered by public clouds and APIs to adapt to the requirements of new devices and protocols. It brings next-generation technology together to ensure studios can scale, secure, monitor, and analyze digital content creation and distribution.
Note: You can find an interesting article on cord-cutting statistics and trends here: https://cordcuttinglab.com/cord-cutting-statistics/
As the world continues to adapt to the changes brought on by the COVID-19 pandemic, cyber threats are evolving as well. From mimicking stimulus payments, to providing purchase opportunities for items in short supply, bad actors are tailoring attacks to mimic authoritative agencies or exploit fear of the pandemic.
Last month, we posted about the large number of COVID-19-related attacks we were seeing across the globe. At that time, Gmail was seeing 18 million daily malware and phishing emails, and more than 240 million spam emails, specifically using COVID-19 as a lure. To keep you updated on where the threat landscape stands, today we’d like to share some additional email threat examples and trends, highlight some ways we’re trying to keep users safe, and provide some actionable tips on how organizations and users can join the fight.
As COVID-19 attacks continue to evolve, over the past month we’ve seen the emergence of regional hotspots and threats.
Specifically, we’ve been seeing COVID-19-related malware, phishing, and spam emails rising in India, Brazil, and the UK. These attacks and scams use regionally relevant lures, financial incentives, and fear to create urgency and entice users to respond.
Let’s look at some examples from these countries.
India
In India, we’ve seen an increase in the number of scams targeting Aarogya Setu, an initiative by the Indian Government to connect the people of the country with essential health services.
Also, as India is opening back up and employees are getting back to their workplaces, we’re starting to see more attacks masquerading as COVID-19 symptom tracking.
And with more and more people looking to buy health insurance in India, phishing scams targeting insurance companies have become more prevalent. Often these scams rely on quoting established institutions, and getting viewers to click on malicious links.
The United Kingdom
With the UK government announcing measures to help businesses get through the COVID-19 crisis, attackers are imitating government institutions to try to gain access to personal information.
These attackers often try to masquerade as Google, as well. But whether they’re imitating the government or Google, these attacks are automatically blocked.
Brazil
With the increased popularity of streaming services, we’re seeing increased phishing attacks targeting these services.
Here’s another example that relies on fear, suggesting that the reader will be subject to fines if they don’t respond.
Overall, Gmail continues to block more than 99.9% of spam, phishing, and malware from reaching our users. We’ve put proactive monitoring in place for COVID-19-related malware and phishing across our systems and workflows. In many cases, however, these threats are not new—rather, they’re existing malware campaigns that have simply been updated to exploit the heightened attention on COVID-19.
While we’ve put additional protections in place, our AI-based protections are also built to naturally adapt to an evolving threat landscape, picking up new trends and novel attacks automatically. For example, the deep-learning-based malware scanner we announced earlier this year continues to scan more than 300 billion documents every week, and boosts detection of malicious scripts by more than 10%.
These protections, newly developed and already existing, have allowed us to react quickly and effectively to COVID-19-related threats, and will allow us to adapt quickly to new ones. Additionally, as we uncover threats, we assimilate them into our Safe Browsing infrastructure so that anyone using the Safe Browsing APIs can automatically stop them. Safe Browsing threat intelligence is used across Google Search, Chrome, Gmail, Android, as well as by other organizations across the globe.
G Suite protections
Our advanced phishing and malware controls come standard with every version of G Suite, and are automatically turned on by default. This is a key step as we move forward to a safe-by-default methodology for Google Cloud products. Our anti-abuse models look at security signals from attachments, links, external images, and more to block new and evolving threats.
Keeping email safe for everyone
While many of the defenses in Gmail leverage our technology and scale, we recognize that email as a whole is a large and complex network. This is why we’re working not just to keep Gmail safe, but to help keep the entire ecosystem secure.
We’re doing this in many ways, from developing and contributing to standards like DMARC (Domain-based Message Authentication, Reporting, and Conformance) and MTA-STS (Mail Transfer Agent Strict Transport Security), to making our technology available to others, as we have with Safe Browsing and TensorFlow Extended (TFX). We’re also contributing to working groups where we collaborate and share best practices with others in the industry.
For example, Google is a long-time supporter and contributor to the Messaging, Malware, and Mobile Anti-Abuse Working Group (M3AAWG), an industry consortium focused on combating malware, spam, phishing, and other forms of online exploitation. The M3AAWG community often comes together to support important initiatives, and today we’re co-signing a statement on the importance of authentication. You can help keep email safe for everyone by bringing authentication to your organization.
Bringing authentication to your organization
Speaking of authentication, as we mentioned above, Gmail recommends senders adopt DMARC to help prevent spam and abuse. DMARC uses Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) to help ensure that platforms receiving your email have a way to know that it originally came from your systems. Adopting DMARC has many benefits, including:
It can provide a daily report from all participating email providers showing how many messages were authenticated, how often invalidated messages were seen, and what kind of policy actions were taken on those messages
It helps create trust with your user base—when a message is sent by your organization, the user receiving it can be sure it’s from you
It helps email providers such as Gmail handle spam and abuse more effectively
By using DMARC, we all contribute to creating a safe email ecosystem between providers, organizations, and users. In our previous post, we shared that we worked with the WHO to clarify the importance of an accelerated implementation of DMARC. The WHO has now completed the transition of the entire who.int domain to DMARC and has been able to stop the vast majority of impersonated emails within days after switching to enforcement. You can find more information on setting up DMARC here.
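As an illustrative aside, you can verify that a domain publishes a DMARC policy with a short script, for example using the dnspython package (the record shown in the comment is a typical example, not a recommendation for any particular organization).

```python
# Sketch: look up a domain's DMARC policy record.
#
# Example published record at _dmarc.example.com (TXT):
#   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
import dns.resolver

def get_dmarc_record(domain: str) -> str | None:
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None

print(get_dmarc_record("example.com"))
```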
As a user, there are also steps you can take to become even more secure:
Take the Security Checkup. We built this step-by-step tool to give you personalized and actionable security recommendations and help you strengthen the security of your Google account.
Avoid downloading files that you don’t recognize; instead, use Gmail’s built-in document preview.
Check the integrity of URLs before providing login credentials or clicking a link—fake URLs generally imitate real ones and include additional words or domains.
Report phishing emails.
Turn on 2-step verification to help prevent account takeovers, even in cases where someone obtains your password.
Consider enrolling in Google’s Advanced Protection Program (APP)—we’ve yet to see anyone in the program be successfully phished, even if they’re repeatedly targeted.
Be thoughtful about sharing personal information such as passwords, bank account or credit card numbers, and even your birthday.
Safety and security are a priority for us at Google Cloud, and we’re working to ensure all our users have a safe-by-default experience, no matter what new threats come our way.
Read More for the details.
Finally, from time to time make sure your password is strong enough. How? You can check it with useful tools on the Internet, such as this one: https://www.safetydetectives.com/password-meter.