But when you look closely, what do you see?
There's no single creature in these images.
And AI augments my creative process by allowing me to distill and recombine textures.
And that's something that would otherwise take me months to draw by hand.
Plus I'm actually terrible at drawing.
So you could say, in a way, what I'm doing is a contemporary version of something that humans have already been doing for a long time, even before cameras existed.
In medieval times, people went on expeditions, and when they came back they would describe what they had seen to an illustrator.
And the illustrator, having never seen what was being described, would draw based on the creatures they had previously seen, creating hybrid animals of some sort in the process.
So an explorer might describe a beaver, but having never seen one, the illustrator might give it the head of a rodent, the body of a dog and a fish-like tail.
In the series "Artificial Natural History", I took thousands of illustrations from natural history archives, and I fed them to a neural network to generate new versions of them.
But up until now, all my work was done in 2D.
And together with my studio partner, Feileacan McCormick, we decided to train a neural network on a data set of 3D-scanned beetles.
But I must warn you that our first results were extremely blurry, and they looked like the blobs you see here.
And this could be due to many reasons, but one of them being that there aren't really a lot of openly available data sets of 3D insects.
And also, we were repurposing a neural network that is normally used to generate 2D images to generate 3D forms instead.
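The talk doesn't say how the image network was actually adapted, but one common trick, which I'm offering purely as an assumption and a sketch, is to voxelize each 3D scan and treat the slices of the resulting grid as image channels, so a 2D network can process 3D data:

```python
import numpy as np

def voxelize(points, resolution=32):
    """Convert a point cloud (N, 3) with coordinates in [0, 1) into a binary voxel grid."""
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    # map each point to its voxel index, clamping to the grid bounds
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# a toy "scan": a random point cloud standing in for a 3D beetle mesh
cloud = np.random.rand(500, 3)
vox = voxelize(cloud)
print(vox.shape)  # (32, 32, 32)

# an image network can read this as an "image" with 32 channels:
# axis 0 is the channel axis, each slice is a 32x32 grayscale layer
```

The `voxelize` function and the 32³ resolution are illustrative choices, not anything confirmed by the talk.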
So believe it or not, these are very exciting blobs to us.
But with time and some very hacky solutions like data augmentation, where we threw in ants and other beetle-like insects to enhance the data set, we ended up getting this, which we've been told looks like grilled chicken.
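Beyond adding related species, a small 3D data set can also be stretched by adding transformed copies of each scan. As a minimal sketch of that idea, assuming point-cloud data and NumPy (the function names are mine, not the studio's code):

```python
import numpy as np

def random_rotation_z(points):
    """Rotate a point cloud (N, 3) by a random angle around the z-axis."""
    theta = np.random.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ rot.T

def augment(dataset, copies=4):
    """Expand a small data set by adding randomly rotated copies of each scan."""
    augmented = list(dataset)
    for scan in dataset:
        for _ in range(copies):
            augmented.append(random_rotation_z(scan))
    return augmented

# toy "scans": random point clouds standing in for 3D beetle scans
scans = [np.random.rand(100, 3) for _ in range(10)]
bigger = augment(scans, copies=4)
print(len(bigger))  # 10 originals + 40 rotated copies = 50
```

Rotation around the vertical axis is a cheap augmentation because it preserves the insect's shape while giving the network new orientations to learn from.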
But hungry for more, we pushed our technique, and eventually they ended up looking like this.
We used something called 3D style transfer to map textures onto them, and we also trained a natural language model to generate scientific-sounding names and anatomical descriptions.
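The talk doesn't specify which language model was used; the studio presumably trained a neural one. But as a toy illustration of the idea of generating scientific-sounding names from a corpus of real ones, even a character-level Markov chain works (the corpus and function names below are my own illustrative stand-ins):

```python
import random
from collections import defaultdict

def train_markov(names, order=3):
    """Build a character-level Markov model from a list of species names."""
    model = defaultdict(list)
    for name in names:
        # pad with start markers and an end marker so sampling knows where to begin and stop
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3, max_len=20):
    """Sample a new name character by character from the model."""
    state, out = "^" * order, []
    while len(out) < max_len:
        ch = random.choice(model[state])
        if ch == "$":
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out).capitalize()

# toy corpus of real beetle genus names
corpus = ["scarabaeus", "lucanus", "dynastes", "goliathus", "cetonia"]
model = train_markov(corpus)
new_name = generate(model)
print(new_name)
```

Each output recombines letter sequences from the real names, much as the generated beetles recombine features of real ones.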
And eventually we even found a network architecture that could handle 3D meshes.
So they ended up looking like this.