Game-changing assets: Making concept art with Google Cloud’s generative AI
Developing games is unique in that it requires a large variety of media assets, such as 2D images, 3D models, audio, and video, to come together in a development environment. However, small game teams, such as those just getting started or “indie” teams, rarely have enough people to produce such a wide variety and volume of assets. The resulting asset shortage can become a bottleneck that throttles the entire game development team.
In this blog, we demonstrate how easy it is for gaming developers to deploy generative AI services on Google Cloud, showcase the available tooling of Model Garden on Vertex AI (including partner integrations like Hugging Face and Civitai), and highlight their potential for scaling game-asset creation.
Solution
Google Cloud offers a diverse range of generative AI models, accessible to users for various use cases. This solution focuses on how game development teams can harness the capabilities of Model Garden on Vertex AI, which incorporates partner integrations such as Hugging Face and Civitai.
Many artists run these models on their local machine, e.g., Stable Diffusion through a local instance of the AUTOMATIC1111 web UI. However, given the cost of high-end GPUs, not everyone has access to the hardware required to do so. Running these models in the cloud provides access to the necessary compute while avoiding a large upfront investment in high-end hardware.
Our primary objective is to explore how these tools can streamline and scale game-asset creation.
Concept or pre-production assets
Assets are the visual and audio elements that make up a game’s world. They have a significant impact on the player’s experience, contributing to the creation of a realistic and immersive environment. There are many different types of game assets, including:
2D and 3D models
Textures
Animations
Sounds and music
Here’s the typical life journey of a 3D game asset, such as a character:
Concept art: Initial design of the asset
3D modeling: Creation of a three-dimensional model of the asset
Texturing: Adding color and detail to the model in alignment with the game’s style
Animation: Bringing movement to the asset (if applicable)
Sound effects: Adding audio elements to enhance the asset
Import to game engine: Integration of the asset into the game engine that powers the gameplay
Generative AI can streamline the asset-creation process by generating initial designs, 3D models, and high-quality textures tailored to the game’s style. In this way, game artists can quickly provide assets that unblock the rest of the game team in the short term, while freeing themselves to focus on long-term goals like art direction and finalized assets.
Read on to learn how to accomplish the first step of game asset creation – generating concept art – on Google Cloud using Vertex AI and Model Garden with Stable Diffusion. We’ll cover how to access and download popular LoRA (Low-Rank Adaptation) adapters from Hugging Face or Civitai, and serve them alongside the stabilityai/stable-diffusion-xl-base-1.0 model (from Model Garden) on Vertex AI for online prediction. The resulting concept art images will be stored in a Google Cloud Storage bucket for easy access and further refinement by artists.
Infrastructure setup
1. Prerequisites:
Google Cloud Project: Select or create a project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Billing enabled: Verify that billing is enabled for your project.
APIs enabled: Enable both the Vertex AI and Compute Engine APIs.
2. Storage and authentication:
Cloud Storage Bucket: Create a bucket to store downloaded LoRA adapters and experiment outputs.
Service Account: Create a service account with the following roles:
Vertex AI User
Storage Object Admin
We’ll use this service account with our Python notebook for model creation and storage management.
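As a minimal sketch of how the notebook might use that service account, the snippet below authenticates with a downloaded key file and initializes the Vertex AI SDK. The project ID, region, bucket name, and key-file path are all placeholders; the actual notebooks handle authentication in their own way.

```python
def staging_bucket_uri(bucket_name: str) -> str:
    """Build the gs:// URI for the bucket that stores LoRAs and outputs."""
    return f"gs://{bucket_name}"


def init_vertex(project_id: str, region: str, bucket_name: str, key_path: str) -> None:
    """Authenticate as the service account and initialize the Vertex AI SDK."""
    # Imported here so the helper can be defined without the SDK installed;
    # requires: pip install google-cloud-aiplatform
    from google.cloud import aiplatform
    from google.oauth2 import service_account

    credentials = service_account.Credentials.from_service_account_file(
        key_path,
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    aiplatform.init(
        project=project_id,
        location=region,
        staging_bucket=staging_bucket_uri(bucket_name),
        credentials=credentials,
    )


# Example call (all values are placeholders):
# init_vertex("my-gcp-project", "us-central1", "my-game-assets", "sa-key.json")
```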
3. Colab Enterprise setup:
Runtime template: Create a runtime template in Colab Enterprise following the instructions at https://cloud.google.com/vertex-ai/docs/colab/create-runtime-template
Runtime instance: Create a runtime instance based on your Runtime template created above. Follow instructions at https://cloud.google.com/vertex-ai/docs/colab/create-runtime.
Upload notebooks: Download the three notebooks from git and upload them to Colab Enterprise.
4. Running your notebooks:
Connecting notebooks: Once you’ve uploaded the notebooks, ensure they are connected to the runtime you created in step 3 above. This ensures your notebooks have access to the necessary resources for execution.
Cloud NAT: If your runtime environment requires internet access to download packages, create a Cloud NAT gateway by following the Cloud NAT setup instructions.
This completes the infrastructure setup. You’re ready to run your Jupyter notebooks to deploy a LoRA model with stabilityai/stable-diffusion-xl-base-1.0 on a Vertex AI prediction endpoint.
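To give a feel for what the deployment notebook does under the hood, here is a hedged sketch using the Vertex AI SDK. The serving container URI, the environment-variable names, and the machine shape below are placeholder assumptions; the Model Garden notebook supplies the real values.

```python
def resource_id(resource_name: str) -> str:
    """Extract the trailing ID from a full Vertex AI resource name,
    e.g. 'projects/p/locations/us-central1/endpoints/123' -> '123'."""
    return resource_name.rsplit("/", 1)[-1]


def deploy_sdxl_with_lora(display_name: str, serving_image_uri: str,
                          lora_gcs_uri: str):
    """Upload a model resource backed by a Stable Diffusion serving container
    and deploy it to a GPU endpoint; returns (model_id, endpoint_id)."""
    # requires: pip install google-cloud-aiplatform, after aiplatform.init(...)
    from google.cloud import aiplatform

    model = aiplatform.Model.upload(
        display_name=display_name,
        serving_container_image_uri=serving_image_uri,  # placeholder image URI
        serving_container_environment_variables={
            # Env-var names are assumptions; match your serving container.
            "MODEL_ID": "stabilityai/stable-diffusion-xl-base-1.0",
            "LORA_ID": lora_gcs_uri,
        },
    )
    endpoint = model.deploy(
        machine_type="g2-standard-8",   # example GPU shape, not prescriptive
        accelerator_type="NVIDIA_L4",
        accelerator_count=1,
    )
    return resource_id(model.resource_name), resource_id(endpoint.resource_name)
```

The returned IDs correspond to the “Model ID” and “Endpoint ID” values the notebook prints, which the later notebooks consume.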
Execution
Upon successful execution of all the above steps, you should see three Jupyter notebook files in Colab Enterprise as follows:
1. Create_mg_pytorch_sdxl_lora.ipynb
This notebook contains steps to download popular LoRA (Low-Rank Adaptation) adapters from either huggingface.co or civitai.com. It then serves the adapter alongside the stabilityai/stable-diffusion-xl-base-1.0 model on Vertex AI for online prediction.
In this notebook, set the following variables to begin:
HUGGINGFACE_MODE: If enabled, the LoRA will be downloaded from Hugging Face. Otherwise, it will be downloaded from Civitai.
Upon successful execution, this notebook will print “Model ID” and “Endpoint ID.” Save these values for use in the following notebooks.
If HUGGINGFACE_MODE is unchecked or disabled, ensure you update the Civitai variables within the notebook.
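For illustration, fetching an adapter from either source might be sketched as below. The Hugging Face path uses the huggingface_hub library; the Civitai URL pattern is an assumption based on its public download API, and the repo ID, filename, and version ID are placeholders you supply.

```python
import os


def civitai_download_url(version_id: str) -> str:
    """Build a Civitai download URL for a model-version ID (assumed pattern)."""
    return f"https://civitai.com/api/download/models/{version_id}"


def download_lora(huggingface_mode: bool, hf_repo_id: str = "",
                  hf_filename: str = "", civitai_version_id: str = "",
                  out_dir: str = "loras") -> str:
    """Fetch a LoRA adapter and return its local file path."""
    os.makedirs(out_dir, exist_ok=True)
    if huggingface_mode:
        # requires: pip install huggingface_hub
        from huggingface_hub import hf_hub_download
        return hf_hub_download(repo_id=hf_repo_id, filename=hf_filename,
                               local_dir=out_dir)
    # requires: pip install requests
    import requests
    path = os.path.join(out_dir, f"{civitai_version_id}.safetensors")
    with requests.get(civitai_download_url(civitai_version_id),
                      stream=True, timeout=120) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return path
```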
2. GenerateGameAssets.ipynb
This notebook contains code to convert text to images. Set the following variables to begin:
ENDPOINT_ID: Obtained from successful execution of “Create_mg_pytorch_sdxl_lora.ipynb” (notebook 1).
Upon successful execution, you should see the following results:
Concept art images will be uploaded to your configured GCS storage bucket.
Images will be displayed for reference.
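As a sketch of what this notebook does: send a prompt to the deployed endpoint and write the returned image to your bucket. The instance and response schema (a "prompt" field in, a base64-encoded PNG under an "output" key out) is an assumption; match it to your serving container’s actual contract.

```python
import base64


def image_gcs_uri(bucket_name: str, blob_name: str) -> str:
    """Build the gs:// URI where a generated image will be stored."""
    return f"gs://{bucket_name}/{blob_name}"


def generate_and_store(endpoint_id: str, prompt: str, bucket_name: str,
                       blob_name: str, project: str,
                       region: str = "us-central1") -> str:
    """Run online prediction and upload the resulting PNG to Cloud Storage."""
    # requires: pip install google-cloud-aiplatform google-cloud-storage
    from google.cloud import aiplatform, storage

    endpoint = aiplatform.Endpoint(
        f"projects/{project}/locations/{region}/endpoints/{endpoint_id}")
    response = endpoint.predict(instances=[{"prompt": prompt}])
    # "output" as a base64-encoded PNG is an assumed response key.
    image_bytes = base64.b64decode(response.predictions[0]["output"])
    storage.Client(project=project).bucket(bucket_name).blob(
        blob_name).upload_from_string(image_bytes, content_type="image/png")
    return image_gcs_uri(bucket_name, blob_name)
```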
3. CleanupCloudResources.ipynb
Execute this notebook to clean up resources, including the endpoint and model.
Before executing, set the following variables:
MODEL_ID and ENDPOINT_ID: Obtained from successful execution of “Create_mg_pytorch_sdxl_lora.ipynb” (notebook 1).
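The cleanup amounts to undeploying and deleting the endpoint, then deleting the model, so you stop paying for idle serving resources. A minimal sketch, assuming the project and region placeholders below:

```python
def endpoint_resource_name(project: str, region: str, endpoint_id: str) -> str:
    """Build the full Vertex AI resource name for an endpoint ID."""
    return f"projects/{project}/locations/{region}/endpoints/{endpoint_id}"


def cleanup(model_id: str, endpoint_id: str, project: str,
            region: str = "us-central1") -> None:
    """Undeploy and delete the endpoint, then delete the model."""
    # requires: pip install google-cloud-aiplatform
    from google.cloud import aiplatform

    endpoint = aiplatform.Endpoint(
        endpoint_resource_name(project, region, endpoint_id))
    endpoint.undeploy_all()
    endpoint.delete()
    aiplatform.Model(
        f"projects/{project}/locations/{region}/models/{model_id}").delete()
```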
Congratulations! You’ve successfully deployed the stabilityai/stable-diffusion-xl-base-1.0 model from Model Garden on Vertex AI, generated concept art for your games, and responsibly deleted models and endpoints to manage costs.
Final thoughts
Integrating Stable Diffusion-generated images into a game requires careful planning:
Legal rights: Ensure you have the necessary permissions to use generated images. Always consult a legal professional if you have any questions about image usage rights.
Customization: Edit and refine the images to match your game’s style and technical needs.
Optimization: Optimize images for in-game performance and smooth integration into your game engine.
Testing: Thoroughly test for quality and performance after incorporating the assets.
Ethics and compliance: Prioritize ethical considerations and legal compliance throughout the entire process.
Documentation and feedback: Maintain detailed records, backups, and be responsive to player feedback after your game’s release.
References
Explore AI models in Model Garden https://cloud.google.com/vertex-ai/docs/start/explore-models
Your guide to generative AI support in Vertex AI https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-model-garden-and-generative-ai-studio