NASA Hackathon

Style transfer with Stable Diffusion

On October 2nd and 3rd I participated in NASA Space Apps, a NASA hackathon in which more than 30,000 people around the world spent 36 hours simultaneously solving different challenges proposed by NASA and other space agencies.

The challenges covered all kinds of space-related problems using real NASA data: preventing a future Carrington event, creating a map of moonquakes, creating a game to learn about the James Webb telescope, and so on, up to 23 different challenges.

We took on the challenge "The Art in Our Worlds", which consisted of presenting images of space in a creative and artistic way using Machine Learning and Artificial Intelligence techniques. We intended to use generative image models, but we were not sure what to do with them.

During the presentation day we met Lola Cadierno, who worked for NASA. When we told her about the challenge we wanted to tackle, she said she knew an artist who specialized in art of the cosmos, often using NASA photographs as inspiration. She pulled out her phone right there and called her, and we sketched out our idea: use her pictures to train an artificial intelligence model so it could learn from them and create more images of space in her unique style. JuliaArt loved the idea and sent us plenty of material to work with.

We spent the next day in "The Ship", a very fitting name for the coworking space. There was very little time and a lot of work: in an ideal world we wanted to build a complete application in which, through a text input, you could create images of space in the style of JuliaArt. We had to build a frontend and a server, finish the model, make a presentation, a video...

For the model we used Stable Diffusion, an image generation model similar to DALL·E 2, Imagen, or Midjourney, but with the great advantage of being open source, so we could modify it and fine-tune it with our own data. For the fine-tuning we used DreamBooth, a new method from Google for fine-tuning diffusion models.
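
To give an idea of what this looks like in practice, here is a minimal sketch of sampling from a DreamBooth fine-tuned checkpoint with the Hugging Face diffusers library. The checkpoint path is hypothetical, standing in for the weights produced by the fine-tuning run; DreamBooth ties the new style to a rare identifier token, which is why all the prompts further down contain "sks juliaart".

```python
# Minimal sketch: sampling from a DreamBooth fine-tuned Stable Diffusion
# checkpoint with Hugging Face diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to the weights produced by DreamBooth fine-tuning
# on JuliaArt's images.
pipe = StableDiffusionPipeline.from_pretrained(
    "./juliaart-dreambooth",
    torch_dtype=torch.float16,
).to("cuda")

# "sks" is the rare identifier token DreamBooth binds to the new style
# during fine-tuning, so it has to appear in the prompt.
prompt = "milky way in the style of sks juliaart"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("milky_way_juliaart.png")
```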

By nightfall we had the model generating images in the style of JuliaArt, a React frontend with the NASA logo, a small server, and the slides and the video half done.
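
The server itself was just a thin wrapper around the model. As a purely hypothetical sketch (the framework, endpoint, and response format here are illustrative, not our actual code), a small FastAPI service could look like this:

```python
# Hypothetical sketch of a small image-generation server; the endpoint
# and response format are illustrative, not the project's actual API.
import io

import torch
from diffusers import StableDiffusionPipeline
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

# Hypothetical fine-tuned checkpoint, loaded once at startup.
pipe = StableDiffusionPipeline.from_pretrained(
    "./juliaart-dreambooth",
    torch_dtype=torch.float16,
).to("cuda")

@app.get("/generate")
def generate(prompt: str):
    # Run the fine-tuned model and return the result as a PNG.
    image = pipe(prompt).images[0]
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    buf.seek(0)
    return StreamingResponse(buf, media_type="image/png")
```

The React frontend then only needs to send the user's text to an endpoint like this and display the returned image.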

The next day we had to present the project at 16:00, so we didn't have much time left, although we managed to train a slightly improved model using a larger number of images. We had to focus on the presentation and on finishing the deliverables.

What matters is not so much that everything is finished and perfect, but having a prototype that validates an idea and communicating it as well as possible.



Real artwork by JuliaArt:

NGC 1977 Corredor (from JuliaArt)

NGC 7380 MAGO (from JuliaArt)

NGC 6357 Guerra y Paz (from JuliaArt)

NGC 2174 Cabeza de mono (from JuliaArt)

AI-generated images in the style of JuliaArt:

milky way in the style of sks juliaart

deep space in the style of sks juliaart

cosmos in the style of sks juliaart

a constelation in the style of sks juliaart

Forest in the style of sks juliaart

dog in the style of sks juliaart

City skyline in the style of sks juliaart

Surface of mars in the style of sks juliaart

In the end we didn't make it to the final, as there were a lot of very good projects, but we are very happy with what we achieved. I hope you liked it too and are curious to create your own customized models!


More info about the project:

Github Repo: https://github.com/MrRobert91/NASASpaceApp_Challenge_2022

Slides: https://github.com/MrRobert91/NASASpaceApp_Challenge_2022/blob/develop/space_artai_documents/Space%20Artai.pdf

Team Members:

Rebeca Olcina

David Leirado

Miguel Hidalgo

David Robert

JuliaArt