Azure – General availability: LDAP signing
Feature has reached general availability.
Read More for the details.
We are excited to announce the general availability of hardware connectivity modules powered by AWS IoT ExpressLink, which are developed and offered by AWS Partners such as Espressif, Infineon, and u-blox. These modules enable easy AWS cloud connectivity and implement AWS-mandated security requirements for device-to-cloud connections. By integrating these wireless modules into their hardware designs, customers can accelerate the development of their Internet of Things (IoT) products, including consumer products and industrial and agricultural sensors and controllers.
Read More for the details.
Anyone can now search and find publicly available data sets on AWS Data Exchange along with more than 3,000 existing data products from category-leading data providers across industries, all in one place.
Read More for the details.
AWS App2Container (A2C) now supports Azure DevOps for setting up a CI/CD pipeline to automate building and deploying container applications on AWS. With this release, customers can use App2Container to automate the setup of an Azure DevOps pipeline that manages the automated build and deployment of containerized applications. App2Container automates the build pipeline setup by installing the required tooling, such as the AWS Toolkit and the Docker engine. In addition, App2Container sets up the release pipeline using existing Azure DevOps Services accounts to deploy the containerized image to AWS container services. This is in addition to the AWS CodePipeline and Jenkins support already included in App2Container.
Read More for the details.
AWS Well-Architected Tool now allows customers to preview custom lens content before publishing, add additional URLs to helpful resources and improvement plans, and use tags to assign metadata to their custom lenses.
Read More for the details.
Amazon QuickSight launches custom subtotals at all levels on Pivot Tables. QuickSight authors can now customize how subtotals are displayed in a Pivot Table, with options to display subtotals for the last level, all levels, or a selected level. This customization is available for both rows and columns. To learn more about custom subtotals, see here.
Read More for the details.
AWS WAF Captcha is now available for all customers. AWS WAF Captcha helps block unwanted bot traffic by requiring users to successfully complete challenges before their web requests are allowed to reach AWS WAF protected resources. You can configure AWS WAF rules to require WAF Captcha challenges to be solved for specific resources that are frequently targeted by bots, such as login, search, and form submissions. You can also require WAF Captcha challenges for suspicious requests based on the rate, attributes, or labels generated from AWS Managed Rules, such as AWS WAF Bot Control or the Amazon IP Reputation list. WAF Captcha challenges are simple for humans while remaining effective against bots. WAF Captcha includes an audio version and is designed to meet WCAG accessibility requirements.
Read More for the details.
Today, Amazon Elastic Container Registry (Amazon ECR) launched support for AWS PrivateLink in the Asia Pacific (Osaka) Region. Now you can access the Amazon ECR API from your Amazon Virtual Private Cloud (Amazon VPC) in the Osaka Region without using public IPs and without requiring the traffic to traverse the internet.
Read More for the details.
AWS WAF now supports evaluating multiple headers in the HTTP request, without the need to specify each header individually in AWS WAF rules. You can also use this new capability to easily inspect all cookies in the HTTP request, without the need to specify each cookie in WAF rules. This capability helps you protect your applications or API endpoints from attacks that try to exploit a custom header or cookie, or a common header for which you may not have created a WAF rule. You can also limit the scope of inspection to only included or excluded headers, and inspect only the keys or only the values for the headers or cookies you want to inspect.
Read More for the details.
Visualization is the key to understanding massive amounts of data. Today we have BigQuery and Looker to analyze petabyte-scale data and extract insights in sophisticated ways. But what about monitoring data that changes every second? In this post, we will walk through how to build a real-time dashboard with Cloud Run and Firestore.
There are many business use cases that require real-time updates, for example inventory monitoring in retail stores, security cameras, and MaaS (Mobility as a Service) applications such as ride sharing. In the MaaS business area, vehicle locations are very useful for making business decisions. In this post, we are going to build a mobility dashboard that monitors vehicles on a map in real time.
The dashboard should be accessible from a web browser without any setup on the client side. Cloud Run is a good fit because it generates a URL for the service and, of course, scales to handle millions of users. Now we need an app that can plot geospatial data, and a database that can broadcast its updates. Here are my choices and the architecture.
Cloud Run — hosts the web app (dashboard)
streamlit — a library to visualize data and build web apps
pydeck — a library to plot geospatial data
Firestore — a fully managed database that keeps your data in sync
The diagram below illustrates a brief architecture of the system. In the production environment, you may also need to implement a data ingestion and transform pipeline.
Before going to the final form, let’s take some steps to understand each component.
streamlit is an OSS web app framework that lets you create beautiful data visualization apps without front-end knowledge (e.g. HTML, JS). If you are familiar with pandas DataFrames for your data analytics, it won't take long to implement. For example, you can easily visualize your DataFrame in a few lines of code.
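A minimal sketch (the data here is made up purely for illustration):

import pandas as pd
import streamlit as st

# Toy data for illustration; replace with your own DataFrame.
df = pd.DataFrame({"city": ["Tokyo", "Osaka", "Nagoya"], "rides": [120, 80, 45]})

st.title("My first dashboard")
st.dataframe(df)                    # render the DataFrame as an interactive table
st.bar_chart(df.set_index("city"))  # quick chart from the same data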
Making this app runnable on Cloud Run is easy. Just add streamlit to requirements.txt, and create a Dockerfile from a typical Python web app image. If you are not familiar with Docker, buildpacks can do the job: instead of writing a Dockerfile, create a Procfile with just one line, as below.
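The single line would look something like this (the exact streamlit flags are an assumption and may need adjusting for your app):

web: streamlit run app.py --server.port $PORT --server.address 0.0.0.0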
To summarize, the minimum required files are just the following.
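Assuming the file names used in this post, a sketch of the minimal project is:

app.py: the streamlit script (for example, the snippet shown earlier)
requirements.txt: a single line listing streamlit (plus any other libraries you import)
Procfile: the one-line web process definition shown above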
Deployment is also easy. You can deploy this app to Cloud Run with a single command.
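A minimal form of that command (the service name and region here are assumptions) is:

gcloud run deploy dashboard --source . --region us-central1 --allow-unauthenticated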
This command builds your image with buildpacks and Cloud Build, so you don't need to set up a build environment on your local system. Once deployment is complete, you can access your web app at the generated URL, which looks like https://xxx-[…].run.app. Copy and paste the URL into your web browser, and you will see your first dashboard web app.
In Step 1, you can visualize your data with fixed conditions or interactively with streamlit's UI functions. Now we want it to update by itself.
Firestore is a scalable NoSQL database that keeps your data in sync across client apps through real-time listeners. Firestore is available on Android and iOS, and also provides SDKs for major programming languages. Since we use streamlit in Python, let's use the Python client.
Although we don't cover detailed usage of Firestore in this post, it is easy to implement a callback function that is called when a specific “Collection” changes. [reference]
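A minimal sketch of such a listener with the Python client (a sketch only, not the full app):

from google.cloud import firestore

db = firestore.Client()

def on_snapshot(col_snapshot, changes, read_time):
    # Called from a background thread whenever the watched Collection changes.
    for change in changes:
        print(f"{change.type.name}: {change.document.id}")

# Watch the users Collection; keep the returned watch object and call
# unsubscribe() on it when you no longer need updates.
watch = db.collection("users").on_snapshot(on_snapshot)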
In this code, the on_snapshot callback function is called when the users Collection changes. You can also watch changes to a Document.
Since Firestore is a fully managed database, you don't need to provision the service ahead of time. You only need to choose a “mode” and a location. To use the real-time sync functionality, select “Native mode”. Also select the nearest or desired location.
Using Firestore with streamlit
Now let's use Firestore with streamlit. We add the on_snapshot callback and update a chart with the latest data sent from Firestore. One quick note when using the callback function with streamlit: on_snapshot is executed in a sub thread, whereas UI manipulation in streamlit must be executed in the main thread. Therefore, we use a Queue to pass the data between threads. The code will look something like the following.
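Here is a minimal sketch of that pattern (the vehicles collection name and the displayed fields are assumptions):

import queue

import pandas as pd
import streamlit as st
from google.cloud import firestore

# Queue used to hand data from the Firestore listener thread to the
# main streamlit thread, where all UI updates must happen.
data_queue = queue.Queue()

def on_snapshot(col_snapshot, changes, read_time):
    # Runs in a sub thread; push the latest snapshot into the queue.
    data_queue.put([doc.to_dict() for doc in col_snapshot])

db = firestore.Client()
watch = db.collection("vehicles").on_snapshot(on_snapshot)

st.title("Real-time dashboard")
placeholder = st.empty()

# Main thread: block until the listener delivers new data, then redraw.
while True:
    rows = data_queue.get()
    placeholder.dataframe(pd.DataFrame(rows))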
Deploy this app and write something in the collection you refer to. You will see the updated data on your webapp.
We learned how to host web apps on Cloud Run, then how to update data with Firestore. Now we want to know how to plot geospatial data with streamlit. streamlit has multiple ways to plot geospatial data that includes latitude and longitude; here we use st.pydeck_chart(). This function is a wrapper around deck.gl, a geospatial visualization library.
For example, provide the latitude and longitude data to plot, and add layers to visualize them, as in the sketch below.
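A minimal sketch with a ScatterplotLayer (the sample coordinates are made up):

import pandas as pd
import pydeck as pdk
import streamlit as st

# Made-up points (latitude/longitude) to plot.
df = pd.DataFrame([{"lat": 35.681, "lon": 139.767}, {"lat": 35.658, "lon": 139.702}])

layer = pdk.Layer(
    "ScatterplotLayer",
    data=df,
    get_position="[lon, lat]",
    get_radius=200,
    get_fill_color=[255, 0, 0],
)
view = pdk.ViewState(latitude=35.68, longitude=139.73, zoom=10)

st.pydeck_chart(pdk.Deck(layers=[layer], initial_view_state=view))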
pydeck supports multiple map platforms; here we chose CARTO. If you would like to see more great examples using CARTO and deck.gl, please refer to this blog.
We are very close to the goal. Now we want to plot the locations of vehicles. pydeck supports several ways to plot data, and TripsLayer is a good fit for mobility data.
TripsLayer visualizes location data in time sequence: when you select a specific timestamp, it plots lines from the location data at that time, including the last n periods. It also draws an animation when you step through timestamps in sequential order.
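A rough sketch of such a layer (the path coordinates, timestamps, and styling values below are made up):

import pydeck as pdk

# Hypothetical trip data: one row per vehicle with a path of
# [longitude, latitude] points and a matching list of timestamps (seconds).
trips = [{
    "path": [[139.767, 35.681], [139.760, 35.684], [139.752, 35.687]],
    "timestamps": [0, 60, 120],
}]

trips_layer = pdk.Layer(
    "TripsLayer",
    data=trips,
    get_path="path",
    get_timestamps="timestamps",
    get_color=[253, 128, 93],
    width_min_pixels=4,
    current_time=120,  # show trails up to this timestamp
    trail_length=180,  # keep the last n seconds of each trail visible
)

# Pass trips_layer to pdk.Deck(...) and st.pydeck_chart(...) as in the earlier sketch.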
In the final form, we also add an IconLayer to identify the latest location. This layer is also useful when you want to plot a static location; it works like a “pin” on Google Maps.
Now we need to think about how to use this plot with Firestore. Let's create one Document per vehicle, and save only the latest latitude, longitude, and timestamp of each vehicle. Why not save the history of locations? For that, we should rather use BigQuery; here we just want to see the latest locations, updated in real time.
Firestore is useful and scalable, but it is NoSQL. Note that NoSQL has its good fits and bad fits.
Finally, we are here. Now let’s ride in a car and record data… if possible.
For demo purposes, we ingest dummy data into Firestore instead. It is easy to write data using a client library, as in the sketch below.
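For example, a dummy writer could look like this (the vehicles collection, document ID, and field names are assumptions that must match what the dashboard reads):

import random
import time
from datetime import datetime, timedelta, timezone

from google.cloud import firestore

db = firestore.Client()

# One Document per vehicle, holding only the latest position and a
# timestamp advanced faster than real time for the demo.
lat, lon = 35.681, 139.767
ts = datetime.now(timezone.utc)

for _ in range(100):
    lat += random.uniform(-0.001, 0.001)
    lon += random.uniform(-0.001, 0.001)
    ts += timedelta(minutes=1)
    db.collection("vehicles").document("vehicle-001").set(
        {"latitude": lat, "longitude": lon, "timestamp": ts}
    )
    time.sleep(1)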
While the dummy data is being written, open the web page hosted on Cloud Run. You will see the map update as new data arrives.
Note that we used dummy data and manipulated the timestamps, so the location data updates much faster than real time. This goes away once you use real data and a proper update cycle.
In this post, we learned how to build a dashboard updated in real-time with Cloud Run and Firestore. Let us know when you find other use-cases with those nice Google Cloud products.
Find out more automotive solutions here.
Haven’t used Google Cloud yet? Try it from here.
Check out the source code on GitHub
Read More for the details.
Indonesia’s largest hyperlocal company, Gojek has evolved from a motorcycle ride-hailing service into an on-demand mobile platform, providing a range of services that include transportation, logistics, food delivery, and payments. A total of 2 million driver-partners collectively cover an average distance of 16.5 million kilometers each day, making Gojek Indonesia’s de-facto transportation partner.
To continue supporting this growth, Gojek runs hundreds of microservices that communicate across multiple data centers. Applications are based on an event-driven architecture and produce billions of events every day. To empower data-driven decision-making, Gojek uses these events across products and services for analytics, machine learning, and more.
To make sense of large amounts of data — and to better understand customers for app development, customer support, growth, and marketing purposes — data must first be ingested into a data warehouse. Gojek uses BigQuery as its primary data warehouse. But ingesting events at Gojek’s scale, with rapid changes, poses the following challenges:
With multiple products and microservices offered, Gojek releases new Kafka topics almost every day and they need to be ingested for analytical purposes. This can quickly result in significant operational overhead for the data engineering team that is deploying new jobs to load data into BigQuery and Cloud Storage.
Frequent schema changes in Kafka topics require consumers of those topics to load the new schema to avoid data loss and capture more recent changes.
Data volumes can vary and grow exponentially as people start building new products and logging new activities on top of a new topic. Each topic can also have a different load during peak business hours. Customers need to handle the rising volume of data to quickly scale per their business needs.
Firehose and Google Cloud to the rescue
To solve these challenges, Gojek uses Firehose, a cloud-native service to deliver real-time streaming data to destinations like service endpoints, managed databases, data lakes, and data warehouses like Cloud Storage and BigQuery. Firehose is part of the Open Data Ops Foundation (ODPF), and is fully open source. Gojek is one of the major contributors to ODPF.
Sinks – Firehose supports sinking stream data to the log console, HTTP, GRPC, PostgresDB (JDBC), InfluxDB, Elastic Search, Redis, Prometheus, MongoDB, GCS, and BigQuery.
Extensibility – Firehose allows users to add a custom sink with a clearly defined interface, or choose from existing sinks.
Scale – Firehose scales in an instant, both vertically and horizontally, for a high-performance streaming sink with zero data drops.
Runtime – Firehose can run inside containers or VMs in a fully-managed runtime environment like Kubernetes.
Metrics – Firehose always lets you know what’s going on with your deployment, with built-in monitoring of throughput, response times, errors, and more.
Using Firehose for ingesting data in BigQuery and Cloud Storage has multiple advantages.
Reliability
Firehose is battle-tested for large-scale data ingestion. At Gojek, Firehose streams 600 Kafka topics into BigQuery and 700 Kafka topics into Cloud Storage. On average, 6 billion events are ingested daily into BigQuery, resulting in more than 10 terabytes of daily data ingestion.
Streaming ingestion
A single Kafka topic can produce up to billions of records in a day. Depending on the nature of the business, scalability and data freshness are key to ensuring the usability of that data, regardless of the load. Firehose uses BigQuery streaming ingestion to load data in near-real-time. This allows analysts to query data within five minutes of it being produced.
Schema evolution
With multiple products and microservices offered, new Kafka topics are released almost every day, and the schema of Kafka topics constantly evolves as new data is produced. A common challenge is ensuring that as these topics evolve, their schema changes are adjusted in BigQuery tables and Cloud Storage. Firehose tracks schema changes by integrating with Stencil, a cloud-native schema registry, and automatically updates the schema of BigQuery tables without human intervention. This reduces data errors and saves developers hundreds of hours.
Elastic infrastructure
Firehose can be deployed on Kubernetes and runs as a stateless service. This allows Firehose to scale horizontally as data volumes vary.
Organizing data in Cloud Storage
Firehose GCS Sink provides capabilities to store data based on specific timestamp information, allowing users to customize how their data is partitioned in Cloud Storage.
Built for flexibility and reliability, Google Cloud products like BigQuery and Cloud Storage are made to support a multi-cloud architecture. Open source software like Firehose is just one of many examples that can help developers and engineers optimize productivity. Taken together, these tools can deliver a seamless data ingestion process, with less maintenance and better automation.
Development of Firehose happens in the open on GitHub, and we are grateful to the community for contributing bug fixes and improvements. We would love to hear your feedback via GitHub discussions or Slack.
Read More for the details.
In order to better serve their customers and users, digital applications and platforms continue to store and use sensitive data such as Personally Identifiable Information (PII), genetic and biometric information, and credit card information. Many organizations that provide data for analytics use cases face evolving regulatory and privacy mandates, ongoing risks from data breaches and data leakage, and a growing need to control data access.
Data access control and masking of sensitive information are even more complex for large enterprises that are building massive data ecosystems. Copies of datasets are often created to manage access for different groups; sometimes those copies are obfuscated while other copies aren’t. This creates an inconsistent approach to protecting data, which can be expensive to manage. Our automatic DLP can help scan your data across your entire organization to give you general awareness of what data you have, and specific visibility into where sensitive data is stored and processed. But to fully address these concerns, sensitive data needs to be protected with the right defense mechanism at the base table itself, so that data is kept secure throughout its entire lifecycle.
Today, we’re excited to introduce two new capabilities in BigQuery that add a second layer of defense on top of access controls to help secure and manage sensitive data.
BigQuery column-level encryption SQL functions enable you to encrypt and decrypt data at the column level in BigQuery. These functions unlock use cases where data is natively encrypted in BigQuery and must be decrypted when accessed. It also supports use cases where data is externally encrypted, stored in BigQuery, and must then be decrypted when accessed. SQL functions support industry standard encryption algorithms AES-GCM (non-deterministic) and AES-SIV (deterministic). Functions supporting AES-SIV allow for grouping, aggregation, and joins on encrypted data.
In addition to these SQL functions, we also integrated BigQuery with Cloud Key Management Service (Cloud KMS). This gives you additional control: you can manage your encryption keys in Cloud KMS, and you get on-access secure key retrieval as well as detailed logging. An additional layer of envelope encryption enables the generation of wrapped keysets for decrypting data. Only users with permission to access the Cloud KMS key and the wrapped keyset can unwrap the keyset and decrypt the ciphertext.
“Enabling dynamic field level encryption is paramount for our data fabric platform to manage highly secure, regulated assets with rigorous security policies complying with several regulations including FedRAMP, PCI, GDPR, CCPA and more. BigQuery column-level encryption capability provides us with a secure path for decrypting externally encrypted data in BigQuery unblocking analytical use cases across more than 800+ analysts,” said Kumar Menon, CTO of Equifax.
Users can also leverage available SQL functions to support both non-deterministic encryption and deterministic encryption to enable joins and grouping of encrypted data columns.
For example, the non-deterministic SQL functions (such as AEAD.DECRYPT_STRING) decrypt AES-GCM ciphertext, while the deterministic functions (such as DETERMINISTIC_DECRYPT_STRING) decrypt AES-SIV ciphertext and also allow grouping and joining on the encrypted columns.
Extending BigQuery’s column-level security, dynamic data masking allows you to obfuscate sensitive data and control user access while mitigating the risk of data leakage. This capability selectively masks column level data at query time based on the defined masking rules, user roles and privileges. Masking eliminates the need to duplicate data and allows you to define different masking rules on a single copy of data to desensitize data, simplify user access to sensitive data, and protect against compliance, privacy regulations, or confidentiality issues.
Dynamic data masking allows for different transformations of underlying sensitive data to obfuscate data at query time. Masking rules can be defined on the policy tag in the taxonomy to grant varying levels of access based on the role and function of the user and the type of sensitive data. Masking adds to the existing access controls to allow customers a wide gamut of options around controlling access. An administrator can grant a user full access, no access or partial access with a particular masked value based on data sharing use case.
For the preview of data masking, three masking policies are supported:
ALWAYS_NULL. Nullifies the content regardless of column data types.
SHA256. Applies SHA256 to STRING or BYTES data types. Note that the same restrictions apply to the SHA256 function.
Default_VALUE. Returns the default value based on the data type.
To query a BigQuery table, a user must first have all of the permissions necessary to run a query job against it. In addition, to view the masked data of a column tagged with a policy tag, users need the MaskedReader role.
Common scenarios for using data masking or column level encryption are:
Protecting against unauthorized data leakage
Managing access control
Complying with data privacy laws for PII, PHI, and PCI data
Creating safe test datasets
Specifically, masking can be used for real-time transactions whereas encryption provides additional security for data at rest or in motion where real-time usability is not required.
Any masking policies or encryption applied on the base tables are carried over to authorized views and materialized views, and masking or encryption is compatible with other security features such as row-level security.
Dynamic data masking is available in preview in the U.S. and E.U. regions, and will soon be available in Asia-Northeast. Our introductory guide to dynamic data masking will help you get started. To learn more about the general availability of our BigQuery column-level encryption SQL functions, check out the documentation and try it out now.
Read More for the details.
June is Pride Month—a time for us to come together to bring visibility and belonging, and celebrate the diverse set of experiences, perspectives, and identities of the LGBTQ+ community. This month, Lindsey Scrase, Managing Director, Global SMB and Startups at Google Cloud, is showcasing conversations with startups led by LGBTQ+ founders and how they use Google Cloud to grow their businesses. This feature highlights bunny.money and its founders, Fabien Lamaison, CEO, Thomas Ramé, Technology Lead, and Cyril Goust, Engineering Lead.
Lindsey: Thanks Fabien, Thomas, and Cyril. It’s great to connect with you and talk about bunny.money. I love how you’re bringing a creative twist to fintech and giving back to communities. What inspired you to found the company?
Fabien: One of my favorite childhood toys was an old-fashioned piggy bank. I remember staring at it and trying to figure out how much of my allowance should be saved, spent, or given to charity. As you can imagine, there were lots of ideas racing through my mind but saving and giving back were always important to me. Years later, I realized I could combine my passions for banking, technology, and helping others by creating a fintech service that makes it easy for people to save while donating to their favorite causes.
Lindsey: My brothers and I did something similar where we allocated a portion of any money we made as kids to giving. And I too had a piggy bank – a beautiful one that could only be opened by breaking it. Needless to say it was a good saving mechanism! It’s inspiring to see you carrying your personal value forward into bunny.money to help others do the same. Tell us more about bunny.money?
Fabien: bunny.money plays with the concept of reimagining saving—and offers a way to positively disrupt conventional banking. For us bunnybankers, financial and social responsibility go hand in hand. We empower people to build more sustainable, inclusive financial futures. Looking ahead, we not only want to help people set up recurring schedules for saving and donating, but also offer more options for socially responsible investing and help companies better match employee donations to charitable causes and build out retirement plans.
Lindsey: It sounds like you’re not only disrupting traditional banking services but also how people manage their finances. How does bunny.money serve its customers?
Fabien: bunny.money is a fintech company founded on the principles of providing easy, free, and ethical banking services. Our comprehensive banking platform enables customers to quickly open a savings wallet and schedule recurring deposits.
Thomas: bunny.money is also a fintech bridge that connects people and businesses to the communities and causes they care about. With bunny.money, customers can make one-time or recurring donations to the nonprofits of their choice. bunny.money doesn’t charge recipients fees to process donations. We give customers the option of offering us a tip, but it’s not required.
Lindsey: So with bunny.money, what are some of the nonprofits people can donate to?
Fabien: Over 30 organizations have already joined bunny.money’s nonprofit marketplace, including StartOut, TurnOut, Trans Lifeline, and Techqueria. Some are seeing donations increase by up to 20 percent as they leverage bunny.money to gamify fundraising, promote social sharing, and encourage micro-donations from their members and supporters.
Cyril: bunny.money also helps people discover local causes and nonprofits such as food banks requesting volunteers, parks that need to be cleaned, and mentoring opportunities. I’m particularly excited to see bunny.money help people build a fairer, greener society by donating to environmental nonprofits, including Carbon Lighthouse, Sustainable Conservation, Public Land Water Association, back2earth, and FARMS. We also decided to “lead by the example” and pledge to give 1% of our revenues to 1% for the Planet.
Lindsey: Given your business and the services you offer, I imagine you’ve encountered immense complexity along the way. What were some of the biggest challenges that you had to overcome?
Fabien: One of our biggest challenges was helping people understand saving for good, and purpose-led banking, which is a relatively new idea in fintech. Although there are plenty of mobile banking apps, most don’t offer an easy way for people to improve their personal finances and donate to their favorite causes in one convenient place.
Cyril: On the technical side, we needed to comply with strict industry regulations, including all applicable requirements under the Bank Secrecy Act and the USA PATRIOT Act. These regulations protect sensitive financial data and help fight against fraudulent activities such as money laundering.
Lindsey: Can you talk about how Google Cloud is helping you address these challenges?
Thomas: Protecting client data is a top priority for us, so we built bunny.money on the highly secure-by-design infrastructure of Google Cloud. Google Cloud automatically encrypts data in transit and at rest, and the solutions comply with all major international security standards and regulations right out of the box. Although we serve customers in the U.S. today, Google Cloud distributed data centers will allow us to meet regional security requirements and eventually reach customers worldwide with quality financial services.
Thomas Ramé, Technology Lead at bunny.money
Fabien: We wanted to build a reliable, feature-rich fintech platform and design a responsive mobile app with an intuitive user interface (UI). We knew from experience that Google Cloud is easy to use and offers integrated tools, APIs, and solutions. We also wanted to tap into the deep technical knowledge of the Google for Startups team to help us scale bunny.money and affordably trial different solutions with Google for Startups Cloud Program credits.
Cyril: As a Certified Benefit Corporation™ (B Corp™), it is also important for us to work with companies that align with the values we champion such as diversity and environmental sustainability. Google Cloud is carbon neutral and enables us to accurately measure, report, and reduce our cloud carbon emissions.
Lindsey: This is exactly how we strive to support startups at all stages – with the right technology, offerings, and support to help you scale quickly and securely, all while being the cleanest cloud in the industry. Can you go into more detail about the Google Cloud solutions you use—and how they all come together to support your business and customers?
Fabien: Our save for good® mobile app enables customers to securely create accounts, verify identities, and connect to external banks in just under four minutes.
Thomas: With Google Cloud, bunny.money consistently delivers a reliable, secure, and seamless banking experience. Since recently launching our fintech app, we’ve already seen an incredible amount of interest in our services that enable people to grow financially while contributing to causes they are passionate about. Right now, we’re seeing customers typically allocate about 10 percent of each deposit to their favorite charities.
Cyril: The extensive Google Cloud technology stack helps us make it happen. We can use BigQuery to unlock data insights, Cloud SQL to seamlessly manage relational database services, and Google Kubernetes Engine (GKE) to automatically deploy and scale Kubernetes. These solutions enable us to cost-effectively scale bunny.money and build out a profitable fintech platform.
Thomas: In addition to the solutions Cyril mentioned, we use Cloud Scheduler to manage cron job services, Dataflow to unify stream and batch data processing, and Container Registry to securely store Docker container images. We’re always innovating, and Google Cloud helps our small team accelerate the development and deployment of new services.
Lindsey: It’s exciting to hear your story and the many different ways that Google Cloud technology has been able to support you along the way. You’re creating something that affects change on many levels—from how people save and give to how businesses and nonprofits can engage.
Since it is also Pride month, I want to change focus for a minute and talk about how being part of the LGBTQ+ community impacted your approach to starting bunny.money?
Fabien: I believe we all belong to several communities (family, friends “tribes,” sports, groups of interest) that are different layers of our own identity and way of life. I’m part of the LGBTQ+ community, and I’m also an immigrant, for example. I’m now a French-American, as is my husband, and we live in San Francisco. But even as a couple, we still had to live apart for several years—he in Paris and I in San Francisco—as we worked through issues with his U.S. work visa (same-sex marriage was not recognized at the federal level at that time, so we couldn’t be on the same visa application).
Fortunately, the LGBTQ+ community can be like an extended family, both professionally and personally. Personally, I’ve had the support of friends as my husband and I dealt with immigration and work challenges. And professionally, I’ve experienced incredible support in the startup world with nonprofits such as StartOut, which provides key resources to help LGBTQ+ entrepreneurs grow their businesses.
Lindsey: I can only imagine the emotional toll that being apart created for you and your husband and I’m so glad that it eventually worked out. My wife is Austrian and while we are fortunate to be here together, this intersectionality has created an additional layer of complexity for us over the years as we have started a family.
Do you have any advice for others in the LGBTQ+ community looking to start and grow their own companies? You mentioned StartOut, and I know there are additional organizations LGBTQ+ entrepreneurs can turn to for help, including Lesbians who Tech, Out in Tech, High Tech Gays (HTG) – Queer Silicon Valley, and QueerTech NYC (Meetup).
Fabien: I would suggest really exploring what you’re passionate about. I’ve enjoyed focusing on saving and finances since I was young and have always been passionate about giving back. Being part of the LGBTQ+ community—or really any community that’s viewed as an “outsider”—gives you the opportunity to think differently. When you bring your passion and life experiences together, you can start to imagine new ways of doing things. By engaging in your communities, it can be easier to find others who share your experiences, interests, and even values. You bring the best from each world.
Since LGBTQ+ founders and entrepreneurs might belong to several groups, it’s good to explore all available avenues and resources, including the organizations you mentioned earlier. We can always learn and accomplish more when we work together. I’ve experienced that in the LGBTQ+, immigrant, and fintech communities.
Lindsey: The importance of community underlies so many aspects of your identity as a founder, as someone who has moved to the US from France, and as a member of the LGBTQ+ community. I’m so glad that you’ve sought out – and received – support along the way. I agree it’s so important for others to seek out this community and support.
And to close, would you be able to share any next steps for bunny.money?
Fabien: We’re looking forward to helping customers build more sustainable and inclusive financial futures on our platform. We’ll continue contributing to positive change in the world by rolling out new AI-powered services to enable ethical investing and personalized giving and impact programs. As we build this first banking app for personal and workplace giving, our goal is to benefit all communities by bridging the gap between businesses and people—which is why we’re excited to continue working with partners like Google for Startups and GV (GV offers us valuable mentor sessions during our accelerator program at StartOut).
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Read More for the details.
In recognition of Global Accessibility Awareness Day last month, I wanted to provide a follow-up to last year’s work and share more recent updates to improve accessibility in the Maps JavaScript API and Maps Embed API.
Our work since last year has continued to focus on some fundamental improvements in the Embed API, including ‘tab’ order, keyboard and screen reader interactivity, adding accessibility labels, and increasing color contrast of various map controls. These updates enable more inclusive Maps for vision-impaired users, along with anyone using a screen reader or keyboard navigation. Here’s a deeper look at a few of the improvements we’ve been able to achieve.
UI visibility in high contrast mode
We also improved maps in high color contrast mode to make some buttons and checkboxes easier to see. We implemented this by making changes to the CSS in our codebase to help adapt the map to high contrast scenarios.
InfoWindow improvements
Moreover, we continued adding improvements to one of the most-used UI components on maps: InfoWindow. Developers now have the ability to set an accessibility label and programmatically set focus on InfoWindows when they become visible.
Screen reader support for markers
And finally, we added screen reader instructions for marker keyboard navigation. This is especially useful for first-time users who don’t know upfront how to navigate through interactive markers (those that have a registered click event listener) using a keyboard. See our “Make a marker accessible” guide and “Marker Accessibility” code sample to learn how to make markers more accessible.
Help us improve accessibility
We hope you will try out these new improvements, give us feedback on the changes, and file new bugs to help us prioritize the areas that will have the most impact. Please +1 existing bugs that impact your websites and file new bug reports.
Accessibility is a complex topic that affects many different people and communities in a variety of ways, and we rely on your feedback to help guide our efforts to make Google Maps Platform features more accessible for everyone. Please also stay informed to get up-to-date information about accessibility features and improvements in the Maps JavaScript and Embed APIs.
What’s Next
Every day across the web, millions of people around the world use the Google Maps basemap provided by the Maps JavaScript API. Our goal is to give developers the tools they need to ensure the map is built for everyone.
We plan to continue making accessibility improvements to the Maps JavaScript and Embed UIs and APIs, and we know that we still have much more work to do. You can track the progress of the Maps JavaScript and Embed APIs on our Release Notes page.
For more information on Google Maps Platform, visit our website.
Read More for the details.
This is part one of a two-part series with practical tips to start your AI/ML journey.
Machine learning (ML) and artificial intelligence (AI) are creating more personalized and easier digital experiences for constituents. According to recent studies, 92% of U.S. citizens1 report that improved digital services would positively impact their view of government. At the same time, automation of federal government employee tasks could save between 96.7 million and 1.2 billion hours annually2. So the question for many public servants is, “How do I get started with AI/ML?”
In part one of this blog, I’ve outlined three key steps to start your journey.
1. Get trained. Invest in a couple of training sessions and consider joining an online community. I highly recommend taking our Machine Learning Crash Course and diving into our Machine Learning Universal Guides that provide a great set of guidelines and application worksheets. To learn from others, join our Public Sector Community, where government leaders and technologists meet to share their own AI best practices and lessons learned.
2. Pick a use case. Identify a use case where AI can help scale to provide immediate value to your team (and your entire organization). From a technical perspective, consider the following:
Does this use case have a lot of rules and/or use unstructured data like video? Would it benefit from one of the AI building blocks, such as vision for detecting objects, conversation, translation, text analysis, or tabular data with lots of rules?
Do you have existing data (preferably labeled) that you can use to build an AI/ML model? For instance, imagine taking all of the information from a Frequently Asked Questions section of your website and using it to create a virtual agent that proactively serves your constituents.
Does the use case fit your organization’s privacy and ethical principles? AI has significant potential to solve some of the greatest challenges, and to realize this potential it is important to apply it responsibly. As an example, here is a link to our AI principles.
3. Experiment. Once you’ve selected your use case, it’s time to take the data you have available and create a machine learning model. Not sure how to build a model? With the recent availability of low-code and no-code tools, it’s become much easier for anyone to get started. With Google Cloud’s Vertex AI, you can train and compare models in a simple workflow using our no-code tool AutoML. This process works by simply loading your data, defining your goal and budget, and letting Google take care of all the other steps (feature engineering, architecture design, hyperparameter tuning, …) to build an optimized model ready for deployment. As an example, the U.S. Navy, along with our partner, Simple Technology Solutions, rapidly built an AI-based corrosion-detection and analysis system with AutoML. Additionally, for a quick and fun experiment, try Teachable Machine and build a model in a few minutes.
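As a rough sketch of that AutoML workflow using the Vertex AI Python SDK (the project, dataset, and column names below are placeholders, not a real deployment):

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Load a tabular dataset (a BigQuery table or a CSV in Cloud Storage).
dataset = aiplatform.TabularDataset.create(
    display_name="support-requests",
    bq_source="bq://my-project.demo.requests",
)

# Define the goal and budget; AutoML handles feature engineering,
# architecture design, and hyperparameter tuning.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="request-triage",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="category",
    budget_milli_node_hours=1000,  # one node hour
)

# Deploy the optimized model to an endpoint for online predictions.
endpoint = model.deploy(machine_type="n1-standard-4")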
AI in Action: City of Memphis and State of Illinois
Another example is the City of Memphis, which used AI to automatically detect potholes, helping to create safer streets for residents and visitors of the city. The Memphis team used unstructured video data from their public buses. Meanwhile, the State of Illinois used Contact Center AI to rapidly deploy virtual agents to help more than one million residents file unemployment claims.
Other examples may also include automating a Help Desk assignment/resolution time, extracting key information from documents, or communicating with your constituents in natural language via a chatbot.
AI/ML can be a powerful toolset enabling agencies of all sizes to address many of their short and long term challenges. I hope this short blog helps you to take the first step and get started to scale your mission. And look for my upcoming blog where I will be discussing some key questions that you want to explore before moving your model(s) to production.
References
1. 2022 digital trends: The rise of the citizen consumer
2. AI-augmented government: Using cognitive technologies to redesign public sector work
Read More for the details.
Beyond running your mission-critical serverless apps at global scale, Google Cloud provides a vast array of products that you can leverage to add valuable features to your apps. Use Node.js Cloud Client Libraries to reduce and simplify the amount of JavaScript or TypeScript code you need to write for accessing a product through its Application Programming Interface (API).
You want to create a Node.js service that uses a Google Cloud API.
Use the appropriate library from a list of more than a hundred available Node.js Cloud Client Libraries for connecting to a specific API.
The Google Cloud Client Libraries team crafts libraries for Node.js to:
Efficiently handle low-level communication details (including authenticating API requests)
Provide an idiomatic and consistent “best-practices” JavaScript and TypeScript programming experience
You can access a Cloud API with just a few lines of code using one of the Node.js Cloud Client Libraries. Each library for each Cloud API follows the same pattern for initialization.
Start with the Node.js Cloud Client Libraries reference page to find the library for a specific API.
For this demo, use the Cloud Translation library. You can find the link in the Libraries table or in the left sidebar (shortened to “translate”) to navigate to the specific reference page.
Each library reference page will have a Before you begin section. Here are the first three steps:
Select or create a Cloud Platform project.
Enable billing for your project (learn more about Google’s Free Tier).
Enable the Cloud Translation API.
Skip step #4. We won’t cover local testing in this article.
Enable the following APIs to be able to build and store the container image for your app, and then run the app container on Cloud Run.
Enable the Artifact Registry API
Enable the Cloud Run Admin API
Tip
You can find and enable Cloud APIs for your project on this console page.
The library reference page tells you which library you need to install. For the Translate client enter the following:
npm install @google-cloud/translate
const {Translate} = require('@google-cloud/translate').v2;
Or using import syntax with TypeScript (or modern JavaScript modules support):
import {v2} from '@google-cloud/translate';
const {Translate} = v2;
You’ll need the Project ID from the Google Cloud project you created above in Step 2.1. If you need to find it again, go to your cloud project dashboard.
Create an API client instance.
const projectId = 'tpujals-node-client-demo';
const client = new Translate({projectId});
Now you can invoke methods on the client. For the Translate client, there are a number of translate methods. Here’s a very simple translate method you can try:
const text = 'Hello, world!';
const targetLanguage = 'ja';
const [translation] = await client.translate(text, targetLanguage);
console.log(`${text} => ${translation}`);
Tip
It might not be immediately obvious, but whenever you’re looking for reference documentation for different versions of clients and any related classes, go to the specific API reference page and look at the items under the expanded library selection in the left sidebar. Aside from Quickstart and Overview, the other items are the class documentation for each client version.
Cloud Translation is available in a Basic edition (v2) and an Advanced edition (v3). To demonstrate using the client library, we’re using the Basic edition. If you want to know more, see Compare Basic and Advanced.
As you can see from the reference documentation, the translate method returns a Promise (all of the client’s methods do) for obtaining a result asynchronously.
In this example, the result you’re interested in is the first element of a tuple (the translated string). You await completion of the promise and use destructuring assignment syntax to set the translation variable.
There are a number of things that can go wrong here, from being unable to connect or failing authentication to passing bad arguments. For production code, you’ll want to wrap your calls in appropriate try/catch statements.
After following the getting started steps in the previous section, you’re ready to launch the demo. This GitHub repo for the demo shows how to build an app that uses the Node.js client library for the Translation API.
The app backend (under src/server) is a Node.js server that uses the Express framework and the frontend (under src/client) is an Angular client.
The bulk of the application is standard boilerplate. The essential parts to understand are the following:
The Express server serves the user interface and also an API (/api/translate) for making translation requests. Aside from API request handling boilerplate, the translation code is essentially the same that was shown in the Getting started section.
The frontend implements an Angular service (translate.service.ts) used by the user interface to communicate with the backend to request translations. The frontend is served as a static Single Page Application (SPA) from the backend server’s public directory. Note the use of location.origin to build the actual URL for requesting a translation so that the app works whether served from localhost or Cloud Run.
The arguments posted from the frontend are the text and the target (language code) for translation. Depending on the response status, the response data contains the translation or an error message.
Using your terminal, clone the demo repo and then change directory into it.
git clone https://github.com/subfuzion/demo-nodejs-cloud-client.git
cd demo-nodejs-cloud-client
You only need to enter the following command if you modify the client UI. Otherwise, you can skip this step.
./scripts/ng-build
You need the Project ID, region, and service name that you want to use. For example, to deploy to demo-project in us-central1 to create a Cloud Run service called translate-demo, enter the following commands in your terminal:
export PROJECT=demo-project
export REGION=us-central1
export SERVICE=translate-demo
./scripts/run-deploy
You will be prompted to create an Artifact Registry for storing your app image:
Deploying from source requires an Artifact Registry Docker repository to store built containers. A repository named [cloud-run-source-deploy] in region
[us-central1] will be created.
Do you want to continue (Y/n)?
Answer Y (or just press Enter) to continue.
You should see output that looks something like this:
Building using Dockerfile and deploying container to Cloud Run service [translate-demo] in project [demo-project] region [us-central1]
✓ Building and deploying new service… Done.
✓ Uploading sources…
✓ Building Container…
Logs are available at
[https://console.cloud.google.com/cloud-build/builds/f9eeb697-…?project=…].
✓ Creating Revision…
✓ Routing traffic…
✓ Setting IAM Policy… Done.
Service [translate-demo] revision [translate-demo-00001-pid] has been deployed and is serving 100 percent of traffic.
Service URL: https://translate-demo-ao23awv5ca-uc.a.run.app
The Service URL contains the link to the running app, where you should now be able to see it in action.
Yes. However, using the appropriate library from the Node.js Cloud Client Libraries is the recommended way to access a Google Cloud API. A dedicated team at Google Cloud focuses on optimizing, testing, and maintaining these libraries.
These libraries can access Google Cloud APIs using gRPC under the hood for communication efficiency, while simplifying authentication and communication details. Most Cloud APIs offer significantly better performance in throughput and CPU usage over a gRPC interface. Accessing an API using gRPC can increase throughput per CPU by as much as a factor of ten compared to the standard REST API. Using the Node.js Cloud Client Libraries is the easiest way to leverage these performance gains.
If you need to access an API not available as one of the supported Cloud Client Libraries (such as Maps or YouTube), then you might be able to use the Google APIs Node.js Client instead. This is an older (but still actively maintained) client that is auto-generated from Google API endpoint specifications (see Google APIs Explorer).
However, keep in mind that the Google APIs Node.js Client only supports communication over REST, not gRPC, interfaces. Furthermore, since the REST interface code is auto-generated, working with this client is generally not quite as intuitive or idiomatic as working with dedicated, hand-crafted Cloud Client Libraries.
You can write your own JSON over HTTP code to access the REST interface exposed by different Cloud APIs. For gRPC-enabled Cloud APIs, you can generate your own gRPC client using the API’s protocol buffers service definition (check the GitHub repository).
However, since Cloud APIs only accept secure requests using TLS encryption, you will be responsible for authenticating with Google and handling many of the low level communication details that are automatically handled for you by Cloud Client Libraries. See HTTP guidelines and gRPC Authentication for relevant details.
Node.js Cloud Client Libraries are tuned for performance and simplify underlying low level protocol, authentication, reliability, and error handling management.
Pro tip
Using the Node.js Cloud Client Libraries is a best practice for accessing Google Cloud APIs. They generally offer the best performance, while saving you coding time and effort.
Read More for the details.
For over a decade, the cloud has presented developers, data scientists, and engineers an incredible opportunity to deploy and run applications faster, while maintaining developer-centric tooling, higher-level abstractions, and click-to-deploy solutions. At Google Cloud, we’ve continued to design, refine, and iterate our experience so that our customers can be as productive as the world demands. That has meant designing a user experience rich with cloud tooling, a great CLI experience, and simple workflows. Often that means balancing reducing the steps needed to get to your destination without sacrificing the information needed to make informed decisions.
We’ve heard from you all that the Google Cloud dashboard could use some improvements. Our existing dashboard presents graphs, links to documentation, and news. Though the information is helpful, we know we could do better. We’ve heard the homepage needs to be fast, simple and helpful for getting back to recently used products. As they say, sometimes “less is more.”
We’re thrilled to introduce a new homepage on Google Cloud. We have redesigned the page for simplicity, performance, and navigation. The new page gives you a clean and streamlined experience that helps you navigate to your most relevant destinations. The streamlined UI also helps you stay organized and is designed to help you complete your task. You can find this page by heading straight to console.cloud.google.com and signing in.
The new homepage includes:
A clean header that clearly displays the project you’re working in, with the ability to copy the project number and ID with a click of a button.
Actionable recommendations to better secure your environment or help reduce spending.
Key actions to quickly create a Compute Engine VM, GKE cluster, or Storage bucket, or run a query in BigQuery.
Quick access to head straight to your most used product pages. This is based on featured products or content you recently viewed across all your projects, enabled by Google Cloud ML-powered Active Assist.
A link to the All products page that showcases all of the Google Cloud products and key partner products in one, easy-to-navigate place.
Links to the original dashboard page.
Our goal is to give you clear, actionable information and reduce the time it takes to get to your most relevant destinations. With the new page layout, you’ll know exactly which organization and project you are working in, which is useful for those of you operating under multiple organizations/projects or in larger resource hierarchies.
With the key action buttons at the top, you no longer need to go to the product page first to do common tasks like spinning up a VM or GKE cluster.
The new homepage is faster in multiple ways, saving you time. The new homepage loads approximately 40% faster than the dashboard. The homepage also reduces the time for you to navigate. We are seeing people navigate up to 43% faster. This means from the time you head to the homepage, you’re able to head to your destination/task faster than with the project dashboard.
To give you a customized experience1, we’re surfacing your most relevant product pages based on what you’ve recently viewed via quick access, powered by Active Assist. Quick access gets you back to your previous task or to a portion of the console you visit frequently. You can head straight to those pages without scrolling through the side menu. Quick access is a cross-project view that helps you switch back to a page you were on, possibly in another project. This is one of our first cross-project views, so let us know how you like it!
When you want to explore more products not listed in quick access cards, you still have access to the side menu or All products page and can pin products to the top of your side menu.
To further simplify your experience and ensure consistency across our products, Google Cloud Platform is now called Google Cloud. In addition, the mobile app name (previously called Cloud Console app) is renamed to Google Cloud app, and you’ll see this update reflected across most of our website and documentation.
If you have any feedback or suggestions, reach out to me at @stephr_wong. And join us on a Twitter Space @googlecloudtech on June 23, 2022 to learn more about this new page from our product team.
Interested in developer news and tools? Check out our Developer Center and join Google Cloud Innovators.
Footnote:
1: Quick access is only personalized for those with personalization toggled on. For quick access to be personalized to you, both your Google Cloud personalization and Web & App Activity controls need to be toggled on.
Read More for the details.
Digital channels and on-demand banking have led customers to expect instant and helpful access to managing their finances, with minimal friction. Google Cloud built Contact Center AI (CCAI) and DialogFlow CX to help banks and other enterprises deliver these services, replacing phone trees or sometimes confusing digital menus with intelligent chatbots that let customers interact conversationally, just as they would with human agents.
Leaders at Germany-based Commerzbank, which operates in over 50 countries, saw potential for these technologies to enhance customer experiences, providing more curated and helpful interactions that would build trust in and satisfaction with their brand. Commerzbank’s implementation speaks to how conversational artificial intelligence (AI) services can help businesses better serve customers, and in this article, we’ll explore their story and what their example means for your business.
Tokyo, 7:00 AM. Vanessa is on a business trip in Japan, closing a new deal for her company, one of Commerzbank’s more than 30,000 corporate customers throughout Germany. She has been preparing for weeks, and is going through her points a final time in a downtown coffee shop. Glancing at her watch, she realizes she must leave immediately to get to the meeting.
Intending to pay, she realizes the chip in her credit card is not functioning. Due to the time difference with Germany, Vanessa is now concerned she will not be able to contact someone from customer support. She opens the Commerzbank mobile app and contacts the customer center through chat. The access point she needs is available, but how can it help her most efficiently?
Customers like Vanessa need an answer right away. With that in mind, Commerzbank aims to provide customers with integrated support via the use of chatbots in the quest to deliver efficiency, high quality, and information consistency. This goal is where the Google Cloud virtual agent platform Dialogflow CX comes into play, providing us with an enormous number of features to build conversation dialogue through accurate intent recognition, a robust visual flow creator, and automated testing—all while significantly improving our time to market.
In just nine weeks, the Commerzbank team set up an agile proof-of-value project by developing a chatbot solution designed to deliver a reliable conversation experience. The Commerz Direktservices Chatbot Agent is now able to identify the touchpoint the customer is using (App or Web) and detect more than 100 suitable FAQs and answer them properly. The Chatbot Agent also identifies leads and sales prospects, enabling it to provide support on open questions in relation to products and services, then perform a graceful handover to the human agent enriched with value parameters. Commerz Direktservices has also broadened the ability of the Chatbot to handle different customer types (keyword-based vs. context-based customers) by constructing an intelligent dialog architecture that lets the Chatbot Agent flow elegantly through prompts and intent questioning.
Commerzbank has integrated Google Dialogflow CX with Genesys Platform, helping to make use of the full capabilities of the existing contact center infrastructure and more efficiently orchestrate the incoming interactions. A very versatile architecture bridges the potential of Google Cloud with a variety of on-premise applications and components, while also providing system resiliency and supporting data security compliance. The support of the entire Google team has been invaluable to accelerate the bank’s journey to the cloud. Commerzbank is seeing a number of benefits as it expands its AI platform, including:
Enhanced ability to deliver innovation
Improved operational efficiencies
Better customer experience through reduced wait times and self-serve capabilities, leading to reduced churn
Greater productivity for Commerzbank employees who are able to support customer queries with enriched Google CCAI data
The creation of an integrated cross-channel strategy
Now, Commerzbank wants to move beyond great customer support to continue to increase the value-add to the customer. Customers like Vanessa are looking for their bank to go the extra mile by optimizing their finances, providing personalized financial products and solutions, and offering more control over their investment portfolio, among other needs. With this in mind, Commerzbank aims to continue moving away from a scenario where chatbots are only passive entities waiting to be triggered, into a new and more innovative one whereby they become an active key enabler of enhanced customer interactions across the customer value chain.
Commerzbank is already mapping active dialog paths to:
Make tailored product suggestions to prospects, giving them the possibility to acquire a product that suits their particular needs
Identify customer requirements for financing or investment, inviting them to get advice and benefit from the existing opportunities
Generate prospects based on the business potential, thus providing the human agents with a framework to prioritize their interactions
Commerzbank leaders anticipate the impact of this solution will be significant. It will let the company fulfill the first advisory touchpoint for financial needs and perform a fast conversation hand-over to specialists as soon as the customer requires it. As a result, leaders expect to exponentially increase conversion rates via more fruitful customer journeys.
Going back to Vanessa’s example: how can Commerzbank help Vanessa efficiently? When she contacts support through chat, the chatbot welcomes her and offers help with any question she may have. Vanessa explains the situation and the digital agent explains that delivering a replacement card would take many days, and that the most practical solution would be to activate a virtual debit card, e.g., with Google Pay on her phone. Vanessa gladly accepts this solution, prompting the Chatbot to deliver a short explanation on how to carry out the process, as well as two additional links: one for downloading the Google Pay App from the Google Play Store and another for digital self-service in the Commerzbank App, which she can intuitively use to synchronize the Commerzbank App and Google Pay. After just 5 minutes, Vanessa is able to pay comfortably using her phone and get to her meeting in time.
This engagement is how Commerzbank wants to deliver digital customer experiences that fascinate their customers, allowing their customers to perform their daily banking activities faster, better, and easier. To learn more about how Google Cloud AI solutions can help your company, visit the product page or check out this report that explores the total economic impact of Google Cloud CCAI.
Read More for the details.
We are excited to announce the general availability of the automated chatbot designer in Amazon Lex, enabling developers to automatically design chatbots from conversation transcripts in hours rather than weeks. Introduced at re:Invent in December 2021, the automated chatbot designer enhances the usability of Amazon Lex by automating conversational design, minimizing developer effort and reducing the time it takes to design a chatbot.
Read More for the details.
AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) now provides you the flexibility to update your directory settings, making it easier to meet your specific security and compliance requirements across all new and existing directories. Starting today, you can update your directory settings, and AWS Managed Microsoft AD applies the updated settings to all domain controllers automatically. You can accomplish this using the AWS console, or automate it with the AWS Command Line Interface (AWS CLI) and/or API.
Read More for the details.