Azure – Public Preview: Set Java log levels in Azure Container Apps
Java application log levels can now be set dynamically in Azure Container Apps.
Read More for the details.
You can now have easy access to fast, ephemeral, sandboxed compute on Azure without managing infrastructure.
Read More for the details.
Azure App Configuration extension for Azure Kubernetes Service (AKS) allows you to install and manage Azure App Configuration Kubernetes Provider on your AKS cluster via Azure Resource Manager (ARM).
Read More for the details.
The Amazon Web Services (AWS) Advanced Python Wrapper driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible edition database clusters. This database driver provides support for faster switchover and failover times, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).
The AWS Advanced Python Wrapper driver wraps the open-source Psycopg and MySQL Connector/Python drivers and supports Python 3.8 and newer. You can install the aws-advanced-python-wrapper package using the pip command along with either the psycopg or mysql-connector-python open-source packages. The wrapper driver relies on monitoring database cluster status and being aware of the cluster topology to determine the new writer. This approach reduces switchover and failover times from tens of seconds to single-digit seconds compared to the open-source drivers.
The AWS Advanced Python Wrapper driver is released as an open-source project under the Apache 2.0 License. Check out the project on GitHub to view installation instructions and documentation.
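As a rough illustration, the following sketch shows what wiring the wrapper around Psycopg for an Aurora PostgreSQL cluster might look like. The cluster endpoint and credentials are placeholders, and the connection and plugin options shown here are assumptions based on the project's documented usage, so confirm them against the GitHub README.

```python
# pip install aws-advanced-python-wrapper psycopg
import psycopg
from aws_advanced_python_wrapper import AwsWrapperConnection

# Placeholder Aurora PostgreSQL cluster endpoint and credentials.
with AwsWrapperConnection.connect(
    psycopg.Connection.connect,
    host="my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="app_user",
    password="app_password",
    plugins="failover",  # enable the topology-aware failover plugin
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
```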
Read More for the details.
AWS re:Post Private is now available in five new regions: US East (N. Virginia), Europe (Ireland), Canada (Central), Asia Pacific (Sydney), and Asia Pacific (Singapore).
re:Post Private is a secure, private version of AWS re:Post, designed to help organizations get started with the cloud faster, remove technical roadblocks, accelerate innovation, and improve developer productivity. With re:Post Private, it is easier for organizations to build an organizational cloud community that drives efficiencies at scale and provides access to valuable knowledge resources. Additionally, re:Post Private centralizes trusted AWS technical content and offers private discussion forums to improve how organizational teams collaborate internally, and with AWS, to remove technical obstacles, accelerate innovation, and scale more efficiently in the cloud. On re:Post Private, you can convert a discussion thread into a support case and centralize AWS Support responses for your organization’s cloud community. Learn more about using AWS re:Post Private on the product page.
Read More for the details.
Today, AWS announced the opening of a new AWS Direct Connect location within the Coresite CH1 data center in Chicago, Illinois. By connecting your network to AWS at the new Illinois location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones. This is the fourth AWS Direct Connect site within the Chicago metropolitan area and the 44th site in the United States.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. The new Direct Connect location at Coresite CH1 offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.
For more information on the over 140 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex, M7i, and C7i instances are available in the AWS GovCloud (US-East) Region. In addition, Amazon EC2 M7i-flex, M7i, and R7i instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads, and they deliver up to 19% better price-performance compared to M6i. M7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don’t fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices.
M7i, C7i, and R7i instances deliver up to 15% better price-performance compared to the prior-generation M6i, C6i, and R6i instances. They offer larger instance sizes, up to 48xlarge, can attach up to 128 EBS volumes, and come in two bare-metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators (Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology) that facilitate efficient offload and acceleration of data operations and optimize performance for workloads.
Read More for the details.
AWS CloudFormation launches a new parameter called DeletionMode for the DeleteStack API. This new parameter allows customers to safely delete their CloudFormation stacks that are in the DELETE_FAILED state.
Today, customers create, update, delete, and re-create CloudFormation stacks when iterating on their cloud infrastructure in their dev-test environments. Customers can use the DeleteStack CloudFormation API to delete their stacks and stack resources. However, certain stack resources can prevent the DeleteStack API from completing successfully, for example when customers attempt to delete non-empty Amazon S3 buckets. In such scenarios, the stack can enter the DELETE_FAILED state. With this launch, customers can pass the FORCE_DELETE_STACK value in the new DeletionMode parameter to delete such stacks.
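For example, with the AWS SDK for Python (boto3), retrying deletion of a stack that is stuck in DELETE_FAILED could look like the following sketch (the stack name is illustrative):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Retry deletion of a stack whose previous delete failed, e.g. because it
# contained a non-empty Amazon S3 bucket, using the new force-delete mode.
cloudformation.delete_stack(
    StackName="my-dev-stack",           # illustrative stack name
    DeletionMode="FORCE_DELETE_STACK",  # new parameter; the default mode is STANDARD
)
```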
Read More for the details.
The first incarnation of search indexes in BigQuery focused on fast and efficient lookups on STRING data elements, either in standalone STRING scalar columns, or within an ARRAY, STRUCT, or JSON column. Our previous blog posts showcased the orders-of-magnitude performance gains achievable when utilizing indexes with the SEARCH function and other functions and operators.
Today, we are announcing the public preview of numeric search indexes, which enable optimized lookups on INT64 and TIMESTAMP data types. With this change, equality (=) and IN operations on these data types can utilize search indexes to reduce byte scans for improved performance. So now your lookups for account IDs, transaction IDs, or log timestamps can get faster and cheaper.
In this blog, we demonstrate the gains on real data, showcasing index creation and queries on a 100TB log table called log_table that contains Google Cloud Logging data for an internal Google test project.
The base table details are as follows:
The table has the following columns of interest:
jsonPayload: type JSON
This jsonPayload has a leaf field named threadId of type JSON number.
sourceLocation: type RECORD (or STRUCT) with two sub-fields of interest:
file: type STRING, containing the name of the file producing the log entry
line: type INT64, containing the line number in the file where the log entry was produced.
By default, a search index is created for the STRING data only. If you want to index INT64 or TIMESTAMP, you need to provide them in the index option called data_types. In the following example, all data of type STRING and INT64 in the log_table table will be indexed.
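A sketch of that index creation, submitted through the BigQuery Python client, might look like the following; the dataset and index names are placeholders, and the exact OPTIONS syntax should be checked against the preview documentation.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Index all STRING and INT64 data in log_table.
# Dataset and index names are placeholders.
ddl = """
CREATE SEARCH INDEX log_table_index
ON my_dataset.log_table (ALL COLUMNS)
OPTIONS (data_types = ['STRING', 'INT64'])
"""
client.query(ddl).result()
```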
JSON field search
In this first example, we want to search for log entries that have the thread ID 12104 in the JSON payload.
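One possible form of that lookup, run through the BigQuery Python client, is sketched below; the dataset name is a placeholder, and extracting the JSON number with INT64() is one way to write the predicate (the exact form used in the original benchmark may differ).

```python
from google.cloud import bigquery

client = bigquery.Client()

# Look up log entries whose JSON threadId equals 12104. INT64() converts the
# JSON number so the equality predicate can benefit from the numeric index.
query = """
SELECT *
FROM my_dataset.log_table
WHERE INT64(jsonPayload.threadId) = 12104
"""
for row in client.query(query).result():
    print(row)
```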
We compare having a search index against having no index. Given that log entries with this thread ID are very rare, the results show dramatic improvements on all three metrics:
Metrics                Without Index         With Index      Improvement
Execution Time (ms)    48,790                4,664           10x
Processed Bytes        2,174,758,158,336     774,897,664     2,806x
Slot Usage (ms)        25,735,222            7,300           3,525x
STRUCT nested field search
In the second example, we count how many log entries are produced from a certain line of code (line 813 in the file borg/borgletlib/borgletlib.cc).
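A sketch of that count, using the same placeholder dataset name as above:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Count log entries produced by a specific source line. The STRING predicate on
# sourceLocation.file and the INT64 predicate on sourceLocation.line can both
# use the search index once INT64 is included in data_types.
query = """
SELECT COUNT(*) AS matching_entries
FROM my_dataset.log_table
WHERE sourceLocation.file = 'borg/borgletlib/borgletlib.cc'
  AND sourceLocation.line = 813
"""
print(list(client.query(query).result()))
```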
Note that sourceLocation.file is a STRING field. A search index on the STRING data type alone can already improve query performance, as shown below. However, indexing the INT64 data type as well improves performance further.
Metrics                No Index              With Index (STRING only)    With Index (STRING & INT64)
Execution Time (ms)    57,169                11,571 (4.9x)               7,982 (7.1x)
Processed Bytes        1,703,843,725,312     976,230,547,456 (1.7x)      682,560,061,440 (2.4x)
Slot Usage (ms)        38,947,660            25,595,348 (1.5x)           8,256,218 (4.7x)
(Improvement relative to no index is shown in parentheses.)
While partitioning and clustering can optimize filtering and lookups, they have certain limitations. For instance, partitioning can only be done on a single column, and clustering allows up to four columns per table. However, clustering is most effective when filtering on the first clustering column, as subsequent columns often provide minimal pruning power. Furthermore, both partitioning and clustering are limited to top-level columns.
Search indexes on INT64/TIMESTAMP complement these BigQuery features by enabling lookup optimizations on any number of columns. In addition, as demonstrated above, they cover struct nested fields, array elements, and JSON leaf fields.
This feature is currently in preview. For more information, refer to Optimize with numeric predicates.
Read More for the details.
Today, we’re announcing two new solar power purchase agreements in Japan that bring us closer to our goal to run on 24/7 carbon-free energy on every grid where we operate by 2030. These Power Purchase Agreements (PPAs) with Itochu’s partner Clean Energy Connect and Shizen Energy are our first in the country, and together they will add a combined 60 megawatts (MW) of new solar energy capacity to the Japanese grid. This will not only support our data centers in the region, but also align with Japan’s clean energy ambitions.
Our PPA with Clean Energy Connect, a partner of Itochu Corporation, involves constructing a network of roughly 800 small-scale solar plants across multiple grid regions in Japan. This novel, distributed approach is a creative solution to the challenge of limited land availability for large-scale solar projects in the country. It will generate a significant 40 MW of clean energy to support our operations in Japan.
The PPA with Shizen Energy, a leading renewable energy company in Japan, focuses on the development of a 20 MW utility-scale solar project situated in the same power grid as our recently opened data center in Inzai City, Chiba prefecture.
Through these agreements, we will procure the renewable energy generated from these solar farms across Japan, along with the associated energy attribute certificates. This will significantly reduce our carbon footprint in the region.
These projects are expected to be fully operational within four years and underscore our commitment to invest nearly $690 million (nearly 100 billion yen) into sustainable infrastructure in Japan.
Signing these PPAs is just the beginning of our decarbonization journey in Japan. We aim to continue our efforts in the region by collaborating with local partners and exploring even more innovative solutions to accelerate the country’s clean energy transition. To learn more about Google data centers, visit google.com/datacenters. To learn more about our sustainability work, visit sustainability.google.
Read More for the details.
AWS CloudFormation enhances the troubleshooting experience for stack operations with a new AWS CloudTrail deep-link integration. This feature enables quicker resolution of stack provisioning errors. It directly links stack operation events in the CloudFormation Console to relevant CloudTrail events. These links provide detailed visibility into the errors, thus speeding up the dev-test cycle for developers.
When you create, update, or delete a stack, your operation can encounter provisioning errors, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a stack provisioning error in the CloudFormation Console was a multi-step process. It involved opening the CloudFormation stack events tab, clicking ‘Detect Root Cause’ to highlight the likely root cause of the error, and then going to the AWS CloudTrail events dashboard. There, you had to manually set filters, such as the timestamp period, to find the detailed history of the stack provisioning API events. Now, clicking ‘Detect Root Cause’ highlights the likely root cause of a stack provisioning error and provides a pre-configured AWS CloudTrail deep-link to API events generated by your stack operation. This provides you with additional context to diagnose and resolve errors and eliminates multiple manual steps from the troubleshooting process.
Read More for the details.
Google Cloud customers are increasingly moving their generative AI workloads from proof-of-concept into production, and are now seeing real-world business impact from their AI investments. Many of these customers have worked with Google Cloud Consulting to apply AI in important and helpful ways. For example, Bristol Myers Squibb developed a new AI-powered interface to help its clinical study teams more easily find important information and generate documents, and Palo Alto Networks launched several new AI tools that utilize Gemini to streamline and enhance user experience in its copilots, improving productivity of security practitioners.
Moving these workloads into production requires a deep understanding of generative AI systems design, large language model architectures, prompt engineering, evaluation, and much more. Now, we’re bringing Google’s expertise in these areas to our customers at scale, with the launch of a new service offering: Generative AI Ops. This new offering, delivered by either Google Cloud Consulting or via our comprehensive partner ecosystem, will help organizations mature their gen AI prototypes into production-grade solutions and provide support in important areas like security, model tuning and feedback, and optimization.
With the launch of Generative AI Ops, Google Cloud now offers customers both an open and optimized technology stack for building AI, and a comprehensive set of services to support customers at every stage of their AI transformations — from exploration to production.
The new Generative AI Ops services offering moves customers through the steps required to make AI applications production-ready. These include:
Prompt engineering, design, and optimization: Designing well-optimized prompts is important to ensure models can provide high-quality outputs and build user trust. Using best practices for prompt engineering, and techniques such as ReAct, retrieval augmented generation (RAG), and chain of thought, Google Cloud Consulting can help customers build solutions to improve the performance of their gen AI applications and the outputs of models. Importantly, different models are often suited to different use cases, and each of these models may require different prompting structures. Our expert teams will also help customers apply the right model to the right use case, and to apply the right prompting technique to the right model.
Performance and system evaluation: Successfully putting AI into production requires constant evaluation and feedback to improve the performance of models and applications. This services offering helps customers design and deploy an evaluation framework tailored to their applications, and build mechanisms for automated evaluation metrics using tools like AutoSxS and GenAI Eval, human evaluation, as well as hybrid approaches.
Model optimization and continuous tuning: Once a framework for performance and system evaluation is in place, gen AI applications and models still require continuous tuning and optimization. Gen AI Ops provides solutions and managed services for optimizing and tuning models based on human feedback and benchmarking. This includes improving system architecture and model selection, reducing latency and costs, and incorporating the latest APIs and available tools to orchestrate and build AI agents using LangChain or DIY orchestrators to ensure applications run optimally.
Monitoring and observability: Having a robust monitoring solution in place is critical to ensuring AI applications are production-ready. Google Cloud Consulting can help customers build observability solutions to constantly monitor the operations and performance of their gen AI applications on a wide variety of factors, like model accuracy and hallucination, latency, throughput, hardware utilization, model drift, traffic, and costs.
Business integration and testing: It is critical that a customer’s applications and models perform well in real-world scenarios and integrate well with their business processes. Google Cloud Consulting can help customers through the careful planning required to achieve this, including setting up a scalable and secure environment on Google Cloud, designing APIs to efficiently manage interactions with various models, and implementing rigorous unit, integration, and load testing to evaluate their models’ performance under various conditions.
On top of the business planning and technical steps required to bring AI applications into production, training and team enablement are also critical priorities for customers wanting to see success in their cloud deployments. Through the Google Cloud Skills Boost Platform, Google Cloud offers a broad range of trainings, hands-on labs, bootcamps, and coursework to help upskill teams on generative AI, to ensure that customer teams can build, deploy, use, and manage new AI applications.
Ready to learn more? Discover how Google Cloud Consulting can help you learn, build, operate and succeed.
Read More for the details.
Today, Google announced new investments in digital infrastructure and security initiatives designed to increase digital connectivity, accelerate economic growth, and deepen resilience across Africa.
To help increase the reach and reliability of digital connectivity for Africa, today we’re announcing Umoja, the first ever fiber optic route to directly connect Africa with Australia.
Anchored in Kenya, the Umoja cable route will pass through Uganda, Rwanda, Democratic Republic of the Congo, Zambia, Zimbabwe, and South Africa, including the Google Cloud region, before crossing the Indian Ocean to Australia. Umoja’s terrestrial path was built in collaboration with Liquid Technologies to form a highly scalable route through Africa, including access points that will allow other countries to take advantage of the network.
Umoja, which is the Swahili word for unity, joins Equiano in an initiative called Africa Connect. Umoja will enable African countries to more reliably connect with each other and the rest of the world. Establishing a new route distinct from existing connectivity routes is critical to maintaining a resilient network for a region that has historically experienced high-impact outages.
We are grateful for the partnership from leaders across Africa and Australia to deliver Africa Connect to people, businesses, and governments in Africa and around the world.
“Access to the latest technology, supported by reliable and resilient digital infrastructure, is critical to growing economic opportunity. This is a meaningful moment for Kenya’s digital transformation journey and the benefits of today’s announcement will cascade across the region.” – Meg Whitman, U.S. Ambassador to Kenya
“I am delighted to welcome Google’s investment in digital connectivity, marking a historic milestone for Kenya, Africa, and Australia. The new intercontinental fiber optic route will significantly enhance our global and regional digital infrastructure. This initiative is crucial in ensuring the redundancy and resilience of our region’s connectivity to the rest of the world, especially in light of recent disruptions caused by cuts to sub-sea cables. By strengthening our digital backbone, we are not only improving reliability but also paving the way for increased digital inclusion, innovation, and economic opportunities for our people and businesses.” – H.E. Dr. William S. Ruto, President of the Republic of Kenya
“Diversifying Australia’s connectivity and supporting digital inclusion across the globe are both incredibly important objectives, and Google’s Umoja cable will help to do just that. Australia welcomes Google’s investment and congratulates all those involved in undertaking this crucial initiative.” – Hon Michelle Rowland MP, Australian Minister for Communications
“Africa’s major cities including Nairobi, Kampala, Kigali, Lubumbashi, Lusaka, and Harare will no longer be hard-to-reach endpoints remote from the coastal landing sites that connect Africa to the world. They are now stations on a data superhighway that can carry thousands of times more traffic than currently reaches here. I am proud that this project helps us deliver a digitally connected future that leaves no African behind, regardless of how far they are from the technology centers of the world.” – Strive Masiyiwa, Chairman and founder of Liquid
In addition to today’s infrastructure announcement, Google will sign a Statement of Collaboration with Kenya’s Ministry of Information Communications and The Digital Economy to accelerate joint efforts in cybersecurity, growing data-driven innovation, digital upskilling, and responsibly and safely deploying AI for societal benefits.
As part of the collaboration, Google Cloud and Kenya are announcing that they intend to work together on strengthening Kenya’s cybersecurity. The Department of Immigration & Citizen Services is evaluating Google Cloud’s CyberShield solution and Mandiant expertise to strengthen the defense of its eCitizen platform. CyberShield enables governments to build enhanced cyberthreat capabilities, protect web-facing infrastructure, and helps teams develop skills and processes that drive effective security operations.
Google has long recognized the critical role investments in secure technology infrastructure have on connecting communities, expanding education, and driving healthy economic development within Africa and around the world.
Since Google opened our first Sub-Saharan Africa office in Nairobi in 2007, we have partnered with governments from countries across Africa on numerous digital initiatives. In 2021, we committed to invest $1 billion in Africa over five years to support a range of efforts, from improved connectivity to investment in startups, to help boost Africa’s digital transformation. Since then, Google has invested more than $900 million in the region, and we expect to fulfill our commitment by 2026. The collaboration introduced this week is the latest step towards delivering on our broader commitment to support Africa’s digital transformation, continued economic growth, and innovation.
Supporting economic growth: Between 2021 and 2023, third party estimates show that Google’s products and services provided more than $30 billion of economic activity across Sub-Saharan Africa. Africa’s internet economy has the potential to grow to $180 billion by 2025 – 5.2% of the continent’s GDP, according to a report by the International Finance Corporation. Investments like Umoja, coupled with developing talent who can benefit from, and add to, this growing digital economy, will help ensure citizens can access government services and critical information, while enabling businesses to thrive and generate durable economic growth for the local economy.
Skilling: Our training and certification initiatives help entrepreneurs get more out of the web, using digital technologies to build and sustain businesses and, in doing so, help generate durable economic growth for the local economy. For example, the Google Hustle Academy, a five-day bootcamp launched in 2022 focusing on subjects including leadership, business strategy, and e-commerce, has supported the growth of more than 3,500 small businesses in Kenya.
AI innovations created in Africa, for Africa: Through our AI Research Centers in Ghana and Nairobi, as well as the Product Development Center in Kenya, Google is continuing to build products and services to help tackle challenges across the continent. For example, in Kenya, Google partnered with Jacaranda Health to improve maternal health outcomes with expanded access to ultrasounds. Google is also working with Kenyan health organizations, including IntelliSOFT, Ona, and Medtronic Labs, to enhance the interoperability of digital health solutions. In addition, Google is running workshops with Kabarak University as part of Google’s efforts to support digital health innovation in Kenya.
Google is as committed as ever to partnering with communities, businesses, and governments in Africa to help foster even more innovation across the continent, and we are excited about this next chapter for Kenya and the region.
Read More for the details.
Editor’s note: Today’s post is authored by Jeff Nichols, Chief Technology Officer at Alden Network, a senior care provider with nearly 50 locations in Illinois and Wisconsin. Alden Network chose ChromeOS devices for their 1,000 clinicians who need secure access to patient records on the go.
In Alden Network’s senior and therapeutic care centers, our goal is to enable residents to live independent lives while ensuring the care they need. Equipped with ChromeOS devices, our providers can spend more time providing attentive care to patients and less time learning or troubleshooting technology.
Removing barriers between caregivers and patients
Before ChromeOS, we had Windows devices that were meant to be shared. The Windows machines took a long time to boot up, provided an inconsistent user experience, and required a lot of maintenance and troubleshooting, so much so that if a user found one they liked, they would hold onto it instead of returning it to the cart. We wanted technology that supported caregivers, instead of being a barrier to doing their jobs.
Quick and easy deployment
Faced with the high cost of buying and managing more Windows laptops, we turned to ChromeOS. We started small, with a deployment in just one of our rehabilitation and healthcare centers. We replaced the Windows laptops with Chromebooks all at once—about 20 to 30 machines per building, with multiple floors and multiple nursing stations. Equipped with the positive feedback from users, we expanded our Chromebook deployments. It was important for us to complete the Chromebook adoption quickly to realize the savings and simplify and streamline the user experience.
More uptime, one-third of the cost
The Chromebooks cost about one-third of the price of PC laptops, saving the company more than $460,000. We also saw that we didn’t have to deal with the complications of patching or updating since ChromeOS updates automatically.
Easy to use, for IT administrators and employees
With ChromeOS device management, the deployment and adoption proceeded smoothly. For existing and new employees, we simply give them a Chromebook, which is ready to use in just a few minutes. We add the device to the correct profile in the Google Admin console and new devices join our centrally managed fleet. This process couldn’t be easier. With the Windows laptops, the process would take hours, a big burden on the IT staff. Not only has deployment been simplified, but the effort required to support the ChromeOS devices has dropped drastically, as shown by fewer support calls.
Attentive care and a lean IT team
No matter which device caregivers find themselves on, their experience is consistent. They sign in and are taken straight to Chrome browser and the PointClickCare sign-on. With ChromeOS device management, we set all Chromebooks to guest mode through the Managed Guest Sessions setting. This prevents Chrome browser from saving browser activity, yet if caregivers switch devices they can add or access EMR data just as they did before.
ChromeOS devices give caregivers more confidence in their work. We’re less likely to hear frustrations about technology savviness, such as, this device hates me, or I’m just not very good at tech stuff. ChromeOS is simple to use and that’s changing how the caregivers feel. The technology can help them focus on what they’re best at—giving attentive care to patients.
To learn more about ChromeOS, you can visit our website or get in touch with a ChromeOS expert today.
Read More for the details.
View your key performance metrics for your AKS resources in context with the Azure Portal
Read More for the details.
Container insights now supports out-of-the-box visualizations using only managed Prometheus.
Read More for the details.
Today, Amazon Simple Email Service (SES) announces the general availability of Mail Manager, a suite of email management features designed to streamline complex email operations for businesses of all sizes. With Mail Manager, companies can centralize their email infrastructure, applying unified policies and rules to manage both inbound and outbound email flows through a single interface.
Mail Manager allows organizations to set up dedicated email ingress endpoints, enforce sophisticated email traffic filtering policies such as IP filters, and utilize a powerful rules engine to process and route emails to intended destinations. Mail Manager also provides customers with archiving capabilities to meet compliance needs for records retention and data protection.
At launch, Mail Manager will offer three initial Email Add Ons, developed with Spamhaus, Abusix, and Trend Micro, to provide email security features. These add-ons offer additional layers of protection and control, enhancing the overall security posture of your email operations.
Mail Manager is generally available, and you can use it in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt), Asia Pacific (Tokyo, Sydney).
To learn more, see the documentation on Mail Manager in the Amazon SES Developer Guide and the blog post. To start using Mail Manager, visit the Amazon SES console.
Read More for the details.
Starting today, Amazon Redshift is making snapshot isolation the default for provisioned clusters when you create a new cluster or restore a cluster from a snapshot. The database isolation level will remain unchanged on your existing provisioned clusters unless explicitly changed. You can switch to serializable isolation at any time if that is your preferred database isolation level. This change makes the product experience consistent for both Redshift Provisioned and Redshift Serverless, which already uses snapshot isolation as the default.
Amazon Redshift offers two database isolation levels — serializable and snapshot — to handle concurrent transactions within your data warehouse. Serializable isolation provides strict correctness guarantees equivalent to running your operations serially. Most data warehousing applications do not need these strict guarantees that limit concurrency on operations. Unlike serializable, snapshot isolation gives you better performance by allowing for more concurrency of operations on the same table when processing large volumes of data. You can change the isolation level for your database using CREATE DATABASE or ALTER DATABASE commands.
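As a rough sketch using the open-source redshift_connector driver, switching a database's isolation level might look like the following; the cluster endpoint, credentials, and database names are placeholders, and Redshift places restrictions on when a database's isolation level can be changed, so check the ALTER DATABASE documentation before running it.

```python
import redshift_connector

# Placeholder cluster endpoint and credentials.
conn = redshift_connector.connect(
    host="my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="my_password",
)
conn.autocommit = True  # run the ALTER DATABASE outside an explicit transaction

cursor = conn.cursor()
# Switch the (placeholder) analytics database back to serializable isolation
# if that is the preferred level; new provisioned clusters default to snapshot.
cursor.execute("ALTER DATABASE analytics ISOLATION LEVEL SERIALIZABLE")
```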
Read More for the details.
Now in its fourth year, the Google Cloud Research Innovators program is proud to announce 23 new participants who have been selected for their innovative ideas and commitment to solving some of today’s most difficult challenges using AI and cloud computing technology. This year’s cohort will collaborate with each other and Google experts to accelerate their most groundbreaking projects. Participants will get access to Google Cloud technology and a community of support, including cloud credits and networking opportunities. By working together across disciplines, program participants can amplify their research, accelerate discoveries, and inspire new ways to advance scientific computing.
This year’s cohort includes researchers from a wide variety of institutions across the nation, from the University of Houston to Florida International University. They meet quarterly with mentors and participate in research conferences and special training days. Mentors for this cohort include Alexander Titus, Principal Scientist for Transformative AI at the University of Southern California; Somalee Datta, Director of Research IT, Technology, and Digital Solutions at Stanford University; and Astitva Chopra, AI Science and Sustainability Program Manager at Google.
The new Research Innovators will focus on solving real-world problems with Google Cloud. For example, Jacob Fisher at Michigan State University plans to use neuroscience and machine learning to better understand the neural underpinnings of attention to messages and digital environments. Haiying Shen at the University of Virginia hopes to improve the performance of cloud computing by focusing on resource management and job scheduling. Weiqiang Zhu at the University of California, Berkeley aims to use AI and ML analytics to gain insights from an 800T database of seismic data and show how cloud computing can better predict earthquakes.
This year’s Research Innovators join an esteemed list of researchers who previously participated in this program and have developed technology solutions that improve everyday life for researchers, consumers, and commuters: Mohammad Shahrad at The University of British Columbia developed a new framework for serverless computing that saves costs and resources for researchers across domains. Ignacio Carlucho at Heriot-Watt University used Google Cloud’s graph neural network models to train robots to collaborate. Abhishek Dubey of Vanderbilt University worked with Tennessee public transit agencies to make regional transportation systems more efficient with AI and real-time data analytics.
Congratulations to the 2024 Research Innovators:
Opeyemi Emmanuel Ajibuwa, North Carolina A&T State University
Zeynettin Akkus, Mayo Clinic
Spencer A. Bruce, NY State Department of Health
Jacob Fisher, Michigan State University
Alasdair Gent, Duke University
Sishuai Gong, Purdue University
Steven N. Hart, Mayo Clinic
David Jimenez-Morales, Stanford University
Rabimba Karanjai, University of Houston
Suresh Kondeti, University of Nebraska Medical Center
Ying Mao, Fordham University
Marc Melcher, Stanford University
Ronald Metoyer, University of Notre Dame
Mike Mylrea, University of Miami
Giri Narasimhan, Florida International University
Ramesh Natarajan, Yeshiva University
Rahul Suryakant Sakhare, Purdue University
Haiying Shen, University of Virginia
Cheng Tan, Northeastern University
Gautam Malviya Thakur, Oak Ridge National Laboratory
Yue Zhao, University of Southern California
Mikhail Zhizhin, Colorado School of Mines
Weiqiang Zhu, University of California at Berkeley
If you’re a researcher interested in exploring the benefits of the cloud for your projects, apply here for access to the Google Cloud research credits program in eligible countries. If you’d like more information about Google Cloud’s Research Innovator program, see here. If you want to read more about how Google Cloud’s technologies are transforming research and education, you’ll find more case studies here.
Read More for the details.
Amazon Managed Workflows for Apache Airflow (MWAA) now offers Federal Information Processing Standard (FIPS) 140-2 validated endpoints to help you protect sensitive information. These endpoints terminate Transport Layer Security (TLS) sessions using a FIPS 140-2 validated cryptographic software module, making it easier for you to use Amazon MWAA for regulated workloads.
Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. FIPS-compliant endpoints on Amazon MWAA help companies contracting with the US and Canadian federal governments meet the FIPS security requirement to encrypt sensitive data in supported Regions.
FIPS 140-2 compliant endpoints for Amazon MWAA are available in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), and Canada (Central) Regions. To learn more about Amazon MWAA visit the Amazon MWAA documentation.
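As a rough sketch, one way to reach a FIPS endpoint from the AWS SDK for Python is to enable FIPS endpoint resolution in the client configuration (the Region and API call are illustrative):

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve the FIPS endpoint for Amazon MWAA in this Region.
mwaa = boto3.client(
    "mwaa",
    region_name="us-east-1",
    config=Config(use_fips_endpoint=True),
)

# List Amazon MWAA environments over the FIPS 140-2 validated endpoint.
print(mwaa.list_environments())
```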
Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
Read More for the details.