An example of my workflow – Scientific American illustrations


Earlier this year I contributed to Scientific American’s article “Rise of the Mammals” by illustrating various key animals in the mammalian family tree for the story’s internal spread (see their June 2016 edition). This spread ranges from the stem mammaliaform Morganucodon, to one of the largest early mammals, Repenomamus, to an early primate relative, Torrejonia, with examples of early gliders, swimmers, and diggers in between. The well-known and well-respected James Gurney of Dinotopia fame did the cover illustrations for both the June edition and the article. I regard James Gurney as a role model and am delighted to have worked on the same project as him.


I always enjoy reading about other artists’ workflows, so here I am with a brief show and tell as to how I went about these illustrations.


All of my flesh reconstructions start out as a list of measurements taken from the specimen itself or from the published scientific literature. Sometimes these lists are nicely complete, but not always. In the case of Repenomamus, there were several specimens to choose from, and you can see in the animated GIF below where I distinguish between the specimens.




Rough Sketch
The rough sketch is to get an idea of proportion and posture and is indeed very rough. I usually hand draw these, scan them, alter them, print them back out, draw over them, etc.


Refined sketch
The sketch may go through several rounds of scanning, digital altering (this leg is too short, its neck is too long, etc.), printing, redrawing, and double-checking measurements. At this point I seek feedback/critique on the sketch, and it may go through another round of alteration.


Flat color
Flat color is pretty straightforward. I choose a color palette and decide what markings the animal may have.


Shadows and Form Light
Light and shadow are what give an image form. To make it easy, I paint the form light layer on a clipping mask with the mode set to Screen. I do the same with the shadow layer with the mode set to Multiply or Color Burn.
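For the curious, the math behind those layer modes is simple. Here is a minimal sketch in Python of the standard Screen and Multiply blend formulas (channel values normalized to 0–1; this is the generic blend math, not anything specific to my painting software):

```python
def multiply(base, layer):
    """Multiply blend: darkens; white (1.0) leaves the base unchanged."""
    return base * layer

def screen(base, layer):
    """Screen blend: lightens; black (0.0) leaves the base unchanged."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

# A mid-grey base with a dark shadow painted on a Multiply layer:
print(multiply(0.5, 0.4))  # 0.2 -- darker than either input
# The same base with a bright form light on a Screen layer:
print(screen(0.5, 0.8))    # 0.9 -- lighter than either input
```

This is why Multiply is natural for shadows (it can only darken) and Screen is natural for form light (it can only lighten).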


Fur, Details, and Ambient Occlusion
It is good practice to work with the image as a whole first, then work your way down to the details. This is the fun part, where I start giving the fur texture and refining the facial features. Ambient occlusion is the shadow that occurs where two objects meet – such as in the mouth, ears, and nose, or where its feet meet the ground – and this is also a Multiply layer.


Reflected sky light and other effects
All that is left at the end are the artistic touches. For these illustrations I decided to add a reflected sky light – that bit of blue reflecting off the back – because I felt it created more dimensionality. I also opted to include an ‘atmospheric’ effect, making the legs on the far side of the animal a little lighter so that they would recede a bit.


If you enjoy these illustrations and would like a print, sticker, or even a phone case featuring one of these images, visit my RedBubble page! I have several other animals from the Scientific American project available there too.

Workflow Part 2: preparing a model for 3D printing


Before I begin Part 2, I’d like to make some updates, since such a long time has passed since I made Part 1 and I now have much more experience with using our GE v|tome|x s industrial CT scanner.


First, MakerBot has moved on to the 5th generation printers, so you can no longer purchase a Replicator 2 like the one we have. There are some pros and cons to this. For most users, the 5th generation machines are more likely to function directly out of the box. The con is that you cannot open up and fix your 5th generation machine without voiding the warranty. Our Replicator 2, on the other hand, can be taken apart and put back together without much difficulty, so I foresee it working for years to come, even if parts occasionally need to be replaced.


Second, I understand that Osirix is no longer a free option for segmenting CT data. However, Drishti and SPIERS are free software options.


Lastly, I have realized that some terms used in CT scanning have very different meanings for medical CT users and industrial CT users. The most problematic of these is ‘resolution.’ In medical CT scanning you can’t change the voxel size (are there exceptions to this?), so resolution refers to the grey value range. That range of grey values is called contrast in industrial CT scanning. Because you can change the voxel size in industrial CT scanners (by adjusting the distances between the specimen, source, and detector), this is what is referred to as resolution. Because I use an industrial CT scanner, this is the terminology I will use throughout. Hopefully this prevents future confusion.


On to how to prepare a model for 3D printing!


There are two basic rules for making a printable model. First, the model must be watertight (that is, it has no holes), and second, it must have “clean” geometry. In this case, clean requires that any given edge has exactly two polygons attached, all normals face the same direction, and all polygons have an area. The animation and gaming industries often require a “clean” model to be made of quads instead of triangles. This is not necessary for 3D printing, and all of the segmentation software I have seen will export models using triangles. Unfortunately, some if not most segmentation software will not necessarily export clean models (omitting the quad rule). In my experience, the anatomy most likely to result in unclean geometry is anything swiss-cheese-like, such as bone matrix.

This complex, swiss cheese-like anatomy results in a messy model with several problems: 1) non-manifold (messy) geometry, 2) discontinuous geometry (floaters), and 3) more polygons than are necessary.
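The “every edge has exactly two polygons” rule is easy to check programmatically. Here is a toy sketch in Python (pure standard library; real tools like Meshlab or Meshmixer do this check and much more):

```python
from collections import Counter

def edge_counts(triangles):
    """Count how many triangles share each edge.

    `triangles` is a list of (a, b, c) vertex-index tuples.
    Each edge is stored with its endpoints sorted, so that
    (a, b) and (b, a) count as the same edge.
    """
    counts = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(edge))] += 1
    return counts

def is_watertight_manifold(triangles):
    """Clean, watertight geometry: every edge borders exactly 2 triangles."""
    return all(n == 2 for n in edge_counts(triangles).values())

# A tetrahedron (4 triangles forming a closed surface) passes the check:
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight_manifold(tetra))       # True
# Remove one face and the mesh now has boundary edges (a hole):
print(is_watertight_manifold(tetra[:-1]))  # False
```

An edge with only one triangle is a hole in the surface; an edge with three or more is non-manifold geometry, and both will trip up a slicer.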

In cases like this, you will need to decide where the boundary of your 3D print should be. You can clean this up at the segmentation stage, either by changing the thresholding for that region or by going through slice by slice and filling in the air. Much of the time, the human eye is better at detecting an edge than the software is, especially when two objects are pressed up against one another and there isn’t enough contrast for the software to detect the edge.
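Thresholding itself is conceptually simple: every voxel whose grey value falls inside a chosen range gets marked as “bone” (or matrix, or air). Here is a toy Python sketch on a single slice, with made-up grey values; real segmentation packages run this over the whole stack and layer region-growing and manual editing on top:

```python
def threshold_slice(slice_2d, lo, hi):
    """Return a binary mask: 1 where the grey value falls in [lo, hi]."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in slice_2d]

# Hypothetical 8-bit slice: air ~5, matrix ~60, fossil ~200
slice_2d = [
    [  5,  60,  60,   5],
    [ 60, 200, 210,  60],
    [ 60, 195, 205,  60],
    [  5,  60,  60,   5],
]
mask = threshold_slice(slice_2d, 150, 255)
for row in mask:
    print(row)
# Only the four dense "fossil" voxels survive the threshold.
```

When fossil and matrix grey values overlap, no choice of `lo` and `hi` separates them cleanly, which is exactly when the slice-by-slice manual editing described above becomes necessary.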


You can also clean up the geometry once a model has been generated. The best tool that I have seen yet is the Wrap tool, available only in Mimics and 3-matic. With this tool, you specify the size of the triangles you want the model to be wrapped in and how large a gap-closing distance you want it to detect. This eliminates internal geometry, and not once in four years have I seen it result in messy geometry. Reduction tools are also really good for preparing a model for printing. These reduce the number of polygons, making it easier and faster to “slice” the model for printing. Also keep in mind the resolution of the printer you are using: if the printer’s resolution is 0.2 mm, it doesn’t make sense to print a model that is accurate to 20 microns.


You can also use generic 3D modeling packages like Maya, Blender, or 3ds Max (not to be confused with the CT software VGStudio MAX) to clean up your geometry. Meshlab and Meshmixer offer some automated tools for cleaning models too.


The next step is slicing the model, which converts the polygon model into the path the 3D printer will travel. MakerBot provides a free program, formerly called MakerWare and now called MakerBot Desktop, but there are several alternatives as well. Generally these display a virtual representation of the print platform; you drop in your model and place and orient it on this platform. Consider how much support material different orientations will require – Meshmixer can automatically calculate the orientation that requires the least support material. You may also want delicate parts of your model oriented up, so that you don’t risk breaking them when removing the support material. After this, you can slice and print your model! If you decide that you want to share your model with the world, consider uploading it to Thingiverse or Shapeways – and let me know if you do! With Shapeways you have the added benefit of being able to order your model in many different materials, even if you decide not to make the models public.


Happy printing!

3D printed Echidna shoulder girdle, with supports intact

PaleoArt Community List – building a paleoart community

For a field as small and specialized as paleo art and paleo illustration, it is both surprising and a shame that there isn’t a stronger community among the people involved. At last year’s SVP (Society of Vertebrate Paleontology) conference I introduced myself to several other artists, and most agreed with this sentiment. In an attempt to start the community-building process, using the preparators group within SVP as a model, Erin Fitzgerald and I have started a PaleoArt Community List listserv.


I copy the group’s statement below, but the foundation of this group is that it is a place to build connections, ask questions, and share insights and resources. It is not a place to criticize people or their work, though constructive criticism is welcome when asked for.


PaleoArt Community List Statement:
Welcome to the Paleoart Community List. We are glad that you are interested in this community of artists around the world. Our goal is to bring paleoartists, illustrators, painters, and sculptors, both professional and amateur, together in seeking support, advice, and interest in paleoart. Over the years, my colleagues and I have found a lack of support in the paleoart community and would like to see some change take place. Out of respect for others, we ask that this list not be a place for showcasing art, but rather for giving advice and offering those who have hit dead ends with their questions a place they can turn for guidance. That being said, we strongly encourage conversations that pertain to the advancement of skills, with the support of learning as a main objective. This list is a private and monitored place; racial, gender, sexual, and any other form of harassment will not be tolerated, is strictly prohibited, and will result in removal from the group. Personal attacks and negative criticism against others and their artwork will also not be tolerated. This is a place for respectful conversations only! Feel free to post a paragraph about yourself so we can get to know you a little better.


If you would like to join, email a request to

2015 Sketches: May, June, and part of July


Sketches from Lincoln Park Zoo. Birds and rhinos.


Some reference sketches for skeletal reconstructions done at the Field Museum of Natural History.


A study of fur texture focusing on the transition of lightly haired skin to full fur.


Quick color sketch of Brushtail Possum.


Dog-rhino sketch done while enjoying the beach.


Just for fun creature sketch.

The perfect marriage of CT scanning and 3D printing

3D printing is steadily becoming more available to amateur users, with units appearing in schools and public libraries across the country. At the same time, industrial CT scanners are becoming more accessible and cost-effective for researchers. With a good CT scan, a paleontologist can observe detailed anatomy, such as bone matrix, or otherwise hidden anatomy, such as developing teeth. Taken together, these two technologies make a powerful team.

Our lab got a Makerbot Replicator 2 in 2013. Its inaugural project was to print a Hadrocodium skull for the documentary, “Your Inner Fish”. Following that, the Replicator was frequently used to print models generated from CT scans. These included models that were magnified replicas of the fossil, anatomical components digitally dissected out from the scan, or reconstructions of bones that were broken or warped in the specimen. These models were useful for observing details on small and delicate bones, demonstrating anatomy to students, and testing possible articulation.

In 2014 our department got an industrial CT scanner, perfect for scanning small to medium-sized fossils and bones. Surprisingly, there are very few resources available addressing the problem of mounting specimens. The scanner itself is equipped with a lathe chuck, which is not well suited to attaching samples directly. Instead, a range of attachments is needed to accommodate different-sized specimens. This is where a 3D printer comes in very handy. Falcon tubes are useful for small samples, and more complex clamps could be made in a machine shop, but with a 3D printer, custom-built platforms, containers, and cradles can be produced quickly and cheaply. What follows are some examples of the attachments we have made, in whole or in part, using a 3D printer.


The lineup of the mounting materials. Top shelf: bisected water cooler; middle shelf: foams; bottom shelf: holders. Not shown: a variety of tapes, a hot glue gun, and more foams.


This holder was custom printed to hold multiple small bird skulls.


These U-shaped holders have been printed in a variety of sizes to accommodate slab specimens, or anything else that will fit in them, and are very versatile.


These holders are more generic and use a combination of found materials (a clear plastic cylinder, a falcon tube, and a tupperware container) and 3D printed parts. The hot glue gun is very useful for these.

2015 Sketches: March and April

In February I decided to take a short animal illustration class at Lillstreet Art Center and found the experience rewarding, mostly for the community, inspiration, and motivation that I got from it. The first two illustrations here came from that class. I plan on staying involved in this lively community and have signed up for an embroidery class in May, which I plan to use as another outlet for biological visualization, and for fun. The second two images here are samplings of sketches I did on trips to the Lincoln Park Zoo. Though it can be difficult to capture animals in motion, I found the experience very relaxing and enjoyed spending time observing the way the animals move and behave.

Sketches: 2015 thus far

Some of you may be familiar with the recent proliferation of the five-day art challenge on Facebook. If not, it is, as the name states, a five-day challenge to post three or more pieces of new or old art. When I was nominated, I found that the combination of looking through my old work and sharing new work energized me to sketch and create more. It was a valuable experience, I think. While I don’t plan on posting work here every day, I will make it a goal to post some work (sketches, works in progress, even past work) monthly or bi-monthly. Today, I’d like to share some sketches that I’ve played around with since the beginning of the year.







Golden Lion Tamarin: a study in light and color

I started this illustration from a sketchbook sketch to practice color, light, and fur techniques. I believe that because the initial painting phase was fast, this illustration has a more painterly style than I normally achieve, though I’m pleased with the texture. I plan on doing more small illustrations like this one in the future to explore color, composition, movement, story, etc.



Chrysochloris Anatomy Reference

About the scan
    This CT scan was provided to the Luo Lab by the folks at Digimorph, at the University of Texas at Austin. Many thanks to them and their public database. Chrysochloris (the golden mole) has pronounced digging adaptations, such as broad but short hands, powerful forelimbs (note the large lever arm created by the olecranon process and the large scapula), and a shovel-like snout. The original information about this scan can be found here. These images were created using the segmentation software Mimics.
Why I’m sharing
    I’m sharing this because I have found it very useful, from both an artistic and an anatomical point of view. I find it particularly interesting to see how the muscle mass sits on the bone, especially around the snout. It is also good to note the volume of hair and skin. I hope others find the following images useful.

Workflow Part 1: CT scan to 3D model

Our lab bought a MakerBot Replicator 2 a couple of months back. I was super excited about using it, and still am excited about maker technology, but quickly realized that this was not like a regular printer; you could not simply take it out of the box, plug it in, and expect it to work. It has been a steep learning curve getting it to cooperate, and I think I’m getting closer to a consistent success rate with the prints. Since taking charge of the printer, I have walked a couple of people through the process of creating a 3D print from a CT scan, which is what has inspired this series of posts. Hopefully someone finds these helpful for their own endeavors.

Part 1: CT scan to 3D model

Fossils are usually scanned using an industrial scanner that can pump a higher-than-medically-allowed dose of radiation into the specimen. Denser materials need more radiation to penetrate them, and fossils are far denser than flesh and bone. If it is a micro CT scanner (microtomography), the resulting images can also have a great deal more resolution than a typical medical scanner produces.


A fossil sits on the rotating table in front of the X-ray tube.

After the fossil has been scanned, the outcome is usually a stack of .tiff or .jpg files; a medical scanner would produce a DICOM file. The difference is this: a DICOM file comes with metadata describing the voxel size and orientation of the scan, whereas for a .tiff stack the voxel size and orientation need to be entered manually. A voxel is a three-dimensional pixel. To produce an accurate model, you need to know the exact dimensions of the voxels; otherwise the model may be squished or stretched. The X and Y voxel sizes (which should always be the same) can also be calculated by dividing the field of view (the physical size represented by the image width) by the number of pixels in that width.
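That calculation is just a division. A quick sketch in Python, with made-up numbers for illustration:

```python
def voxel_size(field_of_view_mm, pixels_across):
    """X/Y voxel size = field of view divided by the image width in pixels."""
    return field_of_view_mm / pixels_across

# Hypothetical scan: a 50 mm field of view captured at 1000 x 1000 pixels
size = voxel_size(50.0, 1000)
print(f"{size} mm per voxel")  # 0.05 mm, i.e. 50 microns
```

Entering 0.05 mm when the true voxel size is, say, 0.04 mm would scale the whole model up by 25 percent, which is why getting this number right matters.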

There are several programs you can use, once you have obtained the dataset, to extract the information (that is, to make a 3D model from it). The more common ones I have encountered are Mimics, Amira, Avizo, Osirix, and VG Studio. The bulk of my experience is with Mimics, as this is what we use in the lab. However, Mimics is a professional piece of software and likely out of budget for a casual user. If you happen to be extracting CT data for your own use and use a Mac, I recommend Osirix, which has a free license. By the way, if you have ever had a CT scan done of yourself, you have the right to ask your doctor for that data – just make sure you ask for the DICOM dataset. You can also request that your patient information be removed from the file you receive, for privacy purposes. This is just one situation in which a casual user might find a need for Osirix.

Next you need to section out the relevant geometry. This process differs between software packages, but all are based on selecting a portion of the data according to the density of the sample. This can be tough if the fossil and matrix have similar densities. Samples that contain metallic elements can also be problematic: these high-density irregularities can cause flaring and make it difficult to get a wide range of greys (as is desirable) across the fossil and matrix. In effect, the metal throws an outlying cluster of high density that tips the histogram in its direction.
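A toy example in Python shows why a single metallic inclusion is such a problem: one outlier stretches the grey-value range, so the fossil and matrix end up squeezed into a narrow sliver of it. The grey values here are made up for illustration:

```python
def grey_range(values):
    """The full span of grey values in the scan; outliers stretch it."""
    return min(values), max(values)

scan = [60] * 50 + [200] * 50   # matrix ~60, fossil ~200
with_metal = scan + [4000]      # the same scan plus one metallic voxel

lo, hi = grey_range(scan)
print(hi - lo)                  # 140: fossil vs matrix span the whole range
lo, hi = grey_range(with_metal)
print(hi - lo)                  # 3940: fossil and matrix now occupy a sliver
```

With the metal present, the 140 grey values separating matrix from fossil are only about 3.5 percent of the total range, leaving far less contrast to threshold on.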

Once you have your model sectioned out, it gets exported as an .stl file. In the next post, I will discuss how to clean and process the .stl file for 3D printing.

Click here for Part 2: preparing a model for 3D printing