GCP – Creating marketing campaigns using BigQuery and Gemini models
Creating marketing campaigns is often a complex and time-consuming process. Businesses aim to create real-time campaigns that are highly relevant to customer needs and personalized to maximize sales. Doing so requires real-time data analysis, segmentation, and the ability to rapidly create and execute campaigns. Achieving this high level of agility and personalization gives businesses a significant competitive advantage.
Successful marketing campaigns have always hinged on creativity and data-driven insights. Generative AI is now amplifying both of these elements and has the potential to revolutionize how campaigns are created. By leveraging real-time data, generative AI can produce personalized content at scale, such as targeted emails, social media ads, or website copy tailored to the customer’s situation and the available imagery. This stands in contrast to the current state of affairs, where marketers are constrained by manual processes and limited creative resources.
While traditional marketing methods have their place, the sheer volume of content needed in today’s landscape demands a smarter approach. Generative AI can help marketing teams launch campaigns quickly, efficiently, and with a level of personalization that was previously impossible, leading to increased engagement, conversions, and customer satisfaction.
In this blog, we walk through the steps data and marketing teams can take to harness the power of multimodal large language models (LLMs) in BigQuery to create and launch more effective and intelligent marketing campaigns. For this demonstration, we reference Data Beans, a fictional technology company that provides a SaaS platform built on BigQuery to coffee sellers. Data Beans leverages BigQuery’s integration with Vertex AI to access Google’s AI models such as Gemini Pro 1.0 and Gemini Pro Vision 1.0, accelerating creative workflows while delivering customized campaigns at scale.
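As a point of reference, BigQuery can call these Vertex AI models directly from SQL through remote models. The snippet below is a minimal, hypothetical sketch of that integration; the dataset, connection, and model names are placeholders, and the demo itself calls the models through Python helper functions rather than SQL.

# Hypothetical illustration of BigQuery's Vertex AI integration (not part of the
# Data Beans notebook). A remote Gemini model is assumed to exist, e.g.:
#   CREATE OR REPLACE MODEL `my_dataset.gemini_pro`
#     REMOTE WITH CONNECTION `us.my_vertex_connection`
#     OPTIONS (endpoint = 'gemini-pro');
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `my_dataset.gemini_pro`,
  (SELECT 'Write a one-line tagline for a mocha pancake special.' AS prompt),
  STRUCT(0.8 AS temperature, TRUE AS flatten_json_output))
"""

for row in client.query(sql).result():
    print(row.ml_generate_text_llm_result)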
Demonstration overview
This demonstration highlights three steps of Data Beans’ marketing launch process that leverages Gemini models to create visually appealing, localized marketing campaigns for selected coffee menu items. First, we use Gemini models to brainstorm and generate high-quality images from the chosen menu item, ensuring the images accurately reflect the original coffee items. Next, we use those same models to craft tailored marketing text for each city in their native language. Finally, this text is integrated into styled HTML email templates, and the entire campaign is then stored in BigQuery for analysis and tracking.
Steps 1 and 1.1: Craft the prompt and create an image
We start the marketing campaign by creating the initial image prompt and the associated image using Imagen 2. This generates a fairly basic image that might not be totally relevant, as we have not supplied all the necessary information to the prompt at this stage.
# ImageGen() and the gcs_storage_* variables come from the full notebook
# (see the Colab link at the end of this post).
from PIL import Image
import IPython.display

picture_description = "Mocha Pancakes with Vanilla Bean ice cream"

llm_human_image_prompt = f"Create an image that contains a {picture_description} that I can use for a marketing campaign."
human_prompt_filename = ImageGen(llm_human_image_prompt)
img = Image.open(human_prompt_filename)
llm_human_image_json = {
    "gcs_storage_bucket" : gcs_storage_bucket,
    "gcs_storage_path" : gcs_storage_path,
    "llm_image_filename" : human_prompt_filename
}
img.thumbnail([500, 500])
IPython.display.display(img)
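The ImageGen() helper used above is defined in the accompanying notebook rather than shown in this post. Below is a minimal sketch of what such a helper might look like with the Vertex AI SDK; the model version, project settings, and filename convention are assumptions, not the notebook's actual implementation.

# A sketch of an ImageGen()-style helper, assuming the Vertex AI Imagen SDK.
import uuid

import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

def ImageGen(prompt: str) -> str:
    """Generates one image with Imagen 2 and returns the local PNG filename."""
    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = ImageGenerationModel.from_pretrained("imagegeneration@005")  # Imagen 2
    result = model.generate_images(prompt=prompt, number_of_images=1)
    filename = f"{uuid.uuid4()}.png"
    result.images[0].save(location=filename)
    return filename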
Step 1.2. Enhance the prompt
Once we have the initial image, we focus on improving it by crafting a better prompt. To do this, we use Gemini Pro 1.0 to rewrite our earlier Imagen 2 prompt in ten more creative variations.
example_return_json = '[ { "prompt" : "response 1" }, { "prompt" : "response 2" }, { "prompt" : "A prompt for good LLMs" }, { "prompt" : "Generate an image that is awesome" }]'

llm_generated_image_prompt = f"""For the below prompt, rewrite it in 10 different ways so I get the most creative images to be generated.
Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem.
Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.
Embrace unconventional ideas and mutate the prompt in a way that surprises and inspires unique variations.
Think outside the box and develop a mutator prompt that encourages unconventional approaches and fresh perspectives.
Return the results in JSON with no special characters or formatting.
Limit each json result to 256 characters.

Example Return Data:
{example_return_json}

Prompt: "Create a picture of a {picture_description} to be used for a marketing campaign."
"""

llm_success = False
temperature = .8
while llm_success == False:
    try:
        llm_response = GeminiProLLM(llm_generated_image_prompt, temperature=temperature, topP=.8, topK=40)
        llm_success = True
    except:
        # Reduce the temperature for more accurate generation
        temperature = temperature - .05
        print("Regenerating...")

print(llm_response)
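GeminiProLLM() is another thin wrapper assumed by these snippets. A sketch of a possible implementation using the Vertex AI SDK is shown below; the notebook's real wrapper may differ, for example in how it cleans up the response.

# A sketch of a GeminiProLLM()-style helper, assuming the Vertex AI SDK.
from vertexai.generative_models import GenerationConfig, GenerativeModel

def GeminiProLLM(prompt: str, temperature: float = 0.8, topP: float = 0.8, topK: int = 40) -> str:
    """Sends a text prompt to Gemini Pro 1.0 and returns the text response."""
    model = GenerativeModel("gemini-1.0-pro")
    response = model.generate_content(
        prompt,
        generation_config=GenerationConfig(temperature=temperature, top_p=topP, top_k=topK),
    )
    # Strip markdown code fences in case the model wraps the JSON in ```json ... ```
    return response.text.replace("```json", "").replace("```", "").strip()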
Step 1.3. Generate images from LLM-generated prompts
Now that we have enhanced prompts, we will use these to generate images. Effectively, we are using LLMs to generate prompts to feed into LLMs to generate images.
import json

llm_json = json.loads(llm_response)

# Add an image to the generation that will not contain any food items. We will test this later.
llm_json.append({'prompt' : 'Draw a coffee truck with disco lights.'})

image_files = []

for item in llm_json:
    print(item["prompt"])
    try:
        image_file = ImageGen(item["prompt"])
        image_files.append({
            "llm_prompt" : item["prompt"],
            "gcs_storage_bucket" : gcs_storage_bucket,
            "gcs_storage_path" : gcs_storage_path,
            "llm_image_filename" : image_file,
            "llm_validated_by_llm" : False
        })
    except:
        print("Image failed to generate.")
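The image records above carry a Cloud Storage bucket and path, but the upload itself happens elsewhere in the notebook. A minimal sketch of that step with the Cloud Storage client might look like this; the helper name is ours, not the notebook's.

# A sketch of uploading a generated image to Cloud Storage.
from google.cloud import storage

def upload_image_to_gcs(local_filename: str) -> str:
    """Uploads a local PNG to the campaign bucket and returns its gs:// URI."""
    client = storage.Client()
    blob = client.bucket(gcs_storage_bucket).blob(f"{gcs_storage_path}/{local_filename}")
    blob.upload_from_filename(local_filename)
    return f"gs://{gcs_storage_bucket}/{gcs_storage_path}/{local_filename}"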
Steps 2-3. Verify images and perform quality control
We are now going to use LLMs to verify the output that was generated. Effectively, we ask the LLM to check whether each generated image actually contains the coffee or food items from our prompt. This helps us catch images that are too abstract, and it also lets us assess image quality and confirm that the image is visually appealing, a must for a marketing campaign.
example_return_json = '{ "response" : true, "explanation" : "Reason why..." }'

llm_validated_image_json = []
number_of_valid_images = 0

llm_validated_image_prompt = f"""Is the attached image a picture of "{picture_description}"?
Respond with a boolean of true or false and place in the "response" field.
Explain your reasoning for each image and place in the "explanation" field.
Return the results in JSON with no special characters or formatting.

Place the results in the following JSON structure:
{example_return_json}
"""

for item in image_files:
    print(f"LLM Prompt   : {item['llm_prompt']}")
    print(f"LLM Filename : {item['llm_image_filename']}")
    imageBase64 = convert_png_to_base64(item['llm_image_filename'])

    llm_success = False
    temperature = .4
    while llm_success == False:
        try:
            llm_response = GeminiProVisionLLM(llm_validated_image_prompt, imageBase64, temperature=temperature)
            print(f"llm_response : {llm_response}")
            llm_json = json.loads(llm_response)
            llm_success = True
        except:
            # Reduce the temperature for more accurate generation
            temperature = temperature - .05
            print("Regenerating...")

    # Mark this item as useable
    if llm_json["response"] == True:
        item["llm_validated_by_llm"] = True
        number_of_valid_images = number_of_valid_images + 1

    item["llm_validated_by_llm_explanation"] = llm_json["explanation"]

    llm_validated_image_json.append({
        "llm_prompt" : item["llm_prompt"],
        "gcs_storage_bucket" : item["gcs_storage_bucket"],
        "gcs_storage_path" : item["gcs_storage_path"],
        "llm_image_filename" : item["llm_image_filename"],
        "llm_validated_by_llm" : item["llm_validated_by_llm"],
        "llm_validated_by_llm_explanation" : item["llm_validated_by_llm_explanation"]
    })
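GeminiProVisionLLM() is the multimodal counterpart of the earlier wrapper, taking a text prompt plus a base64-encoded image. A sketch of a possible implementation follows; again, the real helper lives in the notebook and may differ.

# A sketch of a GeminiProVisionLLM()-style helper, assuming the Vertex AI SDK.
import base64

from vertexai.generative_models import GenerationConfig, GenerativeModel, Part

def GeminiProVisionLLM(prompt: str, imageBase64: str, temperature: float = 0.4,
                       topP: float = 0.8, topK: int = 40) -> str:
    """Sends a text prompt and one PNG image to Gemini Pro Vision 1.0."""
    model = GenerativeModel("gemini-1.0-pro-vision")
    image_part = Part.from_data(data=base64.b64decode(imageBase64), mime_type="image/png")
    response = model.generate_content(
        [prompt, image_part],
        generation_config=GenerationConfig(temperature=temperature, top_p=topP, top_k=topK),
    )
    return response.text.replace("```json", "").replace("```", "").strip()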
Step 4. Rank images
Now that we have gone through verification and quality control, we can choose the best image for our needs. Through careful prompting, we can have Gemini Pro Vision 1.0 do this for us, an approach that scales to thousands of generated images. We ask Gemini to rank each image based on its visual impact, clarity of message, and relevance to the Data Beans brand, and then select the one with the highest score.
image_prompt = []

for item in image_files:
    if item["llm_validated_by_llm"] == True:
        print(f"Adding image {item['llm_image_filename']} to taste test.")
        imageBase64 = convert_png_to_base64(item['llm_image_filename'])
        new_item = {
            "llm_image_filename" : item['llm_image_filename'],
            "llm_image_base64" : f"{imageBase64}"
        }
        image_prompt.append(new_item)

example_return_json = '[ {"image_name" : "name", "rating" : 10, "explanation": ""}]'

llm_taste_test_image_prompt = f"""You are going to be presented with {number_of_valid_images} images.
You must critique each image and assign it a score from 1 to 100.
You should compare the images to one another.
You should evaluate each image multiple times.

Score each image based upon the following:
- Appetizing images should get a high rating.
- Realistic images should get a high rating.
- Thought provoking images should get a high rating.
- Plastic looking images should get a low rating.
- Abstract images should get a low rating.

Think the rating through step by step for each image.

Place the result of the scoring process in the "rating" field.
Return the image name and place in the "image_name" field.
Explain your reasoning and place in the "explanation" field in less than 20 words.

Place the results in the following JSON structure:
{example_return_json}
"""

llm_success = False
temperature = 1
while llm_success == False:
    try:
        llm_response = GeminiProVisionMultipleFileLLM(llm_taste_test_image_prompt, image_prompt, temperature=temperature)
        llm_success = True
    except:
        # Reduce the temperature for more accurate generation
        temperature = temperature - .05
        print("Regenerating...")

print(f"llm_response : {llm_response}")
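The ranking response is JSON, so picking the winner is a small post-processing step. That selection isn't shown in this post, but it could look something like the sketch below, assuming the model echoes back the filename it was given in the "image_name" field; this sets the highest_rated_llm_image_filename variable used in the next step.

# A sketch of selecting the highest-rated image from the taste-test response.
import json

ratings = json.loads(llm_response)
best = max(ratings, key=lambda rating: rating["rating"])
highest_rated_llm_image_filename = best["image_name"]
print(f"Highest rated image: {highest_rated_llm_image_filename} ({best['rating']}/100)")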
Step 5. Generate campaign text
Now that we have selected the best image, let's generate the marketing text to go with it. Because all of the generated data is stored in BigQuery, we ask the model to return the text as JSON, and we prompt it to incorporate promotions and other relevant details.
example_return_json_1 = '{ "city" : "New York City", "subject" : "email subject" , "body" : "email body" }'
example_return_json_2 = '{ "city" : "London", "subject" : "marketing subject" , "body" : "The body of the email message" }'
imageBase64 = convert_png_to_base64(highest_rated_llm_image_filename)

llm_marketing_campaign = []

# Loop for each city
for city_index in range(0, 4):
    print(f"Processing city: {city_names[city_index]}")

    prompt = f"""You run a fleet of coffee trucks in {city_names[city_index]}.
    We need to craft a compelling email marketing campaign for the below image.
    Embrace unconventional ideas and thinking that surprises and inspires unique variations.
    We want to offer exclusive discounts or early access to drive sales.
    Look at the image and come up with a savory message.
    The image contains "{picture_description}".

    The marketing campaign should be personalized for {city_names[city_index]}.
    Do not mention the weather in the message.
    Write the response using the language "{city_languages[city_index]}".
    The campaign should be less than 1500 characters.
    The email message should be formatted in text, not HTML.
    Sign the email message "Sincerely, Data Beans".
    Mention that people can find the closest coffee truck location by using our mobile app.

    Return the results in JSON with no special characters or formatting.
    Double check for special characters especially for Japanese.
    Place the value of "{city_names[city_index]}" in the "city" field of the JSON response.

    Example Return Data:
    {example_return_json_1}
    {example_return_json_2}

    Image of items we want to sell:
    """

    llm_success = False
    temperature = .8
    while llm_success == False:
        try:
            llm_response = GeminiProVisionLLM(prompt, imageBase64, temperature=temperature, topP=.9, topK=40)
            llm_json = json.loads(llm_response)
            print(llm_response)
            llm_success = True
        except:
            # Reduce the temperature for more accurate generation
            temperature = temperature - .05
            print("Regenerating...")

    llm_marketing_campaign.append(llm_json)

    llm_marketing_prompt.append(prompt)
    llm_marketing_prompt_text.append(llm_response)
In addition, notice how we can use the Gemini model to localize the marketing message into each city's native language.
Step 6. Create an HTML email campaign
The generated artifacts are displayed as part of a web application. To simplify distribution, we also need to create an HTML email with inline styling and an embedded image. Again, we use Gemini Pro 1.0 to convert the marketing text into HTML based on the images and text we created in the previous steps.
imageBase64 = convert_png_to_base64(highest_rated_llm_image_filename)

for item in llm_marketing_campaign:
    print(f'City: {item["city"]}')
    print(f'Subject: {item["subject"]}')

    prompt = f"""Convert the below text into a well-structured HTML document.
Refrain from using <h1>, <h2>, <h3>, and <h4> tags.
Create inline styles.
Make the styles fun, eye-catching and exciting.
Use "Helvetica Neue" as the font.
All text should be left aligned.
Avoid fonts larger than 16 pixels.
Do not change the language. Keep the text in the native language.

Include the following image in the html:
- The image is located at: https://REPLACE_ME
- The image url should have a "width" of 500 and "height" of 500.

Double check that you did not use any <h1>, <h2>, <h3>, or <h4> tags.

Text:
{item["body"]}
"""

    llm_success = False
    temperature = .5

    while llm_success == False:
        try:
            llm_response = GeminiProLLM(prompt, temperature=temperature, topP=.5, topK=20)

            if llm_response.startswith("html"):
                llm_response = llm_response[4:]  # in case the response is formatted like markdown

            # Replace the image with an inline image (this avoids a SignedURL or public access to GCS bucket)
            html_text = llm_response.replace("https://REPLACE_ME", f"data:image/png;base64, {imageBase64}")
            item["html"] = html_text

            # print(f'Html: {item["html"]}')
            filename = str(item["city"]).replace(" ", "-") + ".html"
            with open(filename, "w", encoding='utf8') as f:
                f.write(item["html"])
            print("")

            llm_success = True
            llm_marketing_prompt_html.append(prompt)
            llm_marketing_prompt_html_text.append(html_text)

        except:
            # Reduce the temperature for more accurate generation
            temperature = temperature - .05
            print("Regenerating...")
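As noted in the overview, the finished campaign is stored in BigQuery for analysis and tracking. That load isn't shown in this post; a minimal sketch with the BigQuery client might look like the following, where the table name is a placeholder.

# A sketch of storing the generated campaign in BigQuery (table name is hypothetical).
from google.cloud import bigquery

bq_client = bigquery.Client()
table_id = "my-project.data_beans.marketing_campaign"

rows = [
    {
        "city": item["city"],
        "subject": item["subject"],
        "body": item["body"],
        "html": item["html"],
        "llm_image_filename": highest_rated_llm_image_filename,
    }
    for item in llm_marketing_campaign
]

errors = bq_client.insert_rows_json(table_id, rows)
if errors:
    print(f"Encountered errors while inserting rows: {errors}")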
Conclusions and resources
The integration of LLMs into creative workflows is transforming how marketing content gets made. By brainstorming and generating many candidate assets, LLMs give creators a variety of quality images for their campaigns, speed up text generation with automatic localization, and help analyze large amounts of data. Moreover, AI-powered quality checks ensure that generated content meets the desired standards.
While LLMs’ creativity can sometimes produce irrelevant images, the Gemini Pro Vision 1.0 “taste test” functionality lets you choose the most appealing results, and the model provides insightful explanations of its decision-making process. Gemini Pro 1.0 expands audience engagement by generating content in local languages, and its support for code generation eliminates the need to hand-write HTML.
To experiment with the capabilities showcased in this demonstration, please see the complete Colab Enterprise notebook source code. To learn more about these new features, check out the documentation. You can also use this tutorial to apply Google’s best-in-class AI models to your data, deploy models, and operationalize ML workflows without ever having to move your data from BigQuery. Finally, check out this demonstration on how to build an end-to-end data analytics and AI application directly from BigQuery while harnessing the potential of advanced models such as Gemini. As a bonus, you’ll get a behind-the-scenes take on how we made the demo.
Googlers Luis Velasco, Navjot Singh, Skander Larbi, and Manoj Gunti contributed to this blog post. Many Googlers contributed to make these features a reality.