
Digitizing Apollo 17 Part 16 – New Apollo17.org, 44th Anniversary Edition



New Features

Recreating the Moon – Lunar Reconnaissance Orbiter

This spring, after the wonderful attention from the release of the site last December had died down, I came across some incredible data pertaining to the Apollo landing sites, recently published by the LRO mission team.
The data includes very high-resolution photography and topography data of the valley of Taurus-Littrow, where Apollo 17 landed 44 years ago today.
I realized that I could use this data to recreate the landing area in 3D and possibly simulate some of the segments of the Apollo 17 mission that did not include television transmissions.

During the three-day stay on the lunar surface, the remotely controlled TV camera only transmitted a signal when the Lunar Roving Vehicle (LRV) was stationary (the camera and transmission antenna were both attached to the LRV).
When they drove from place to place on the lunar surface (they traveled a total distance of 35.9 km), the high-gain television antenna needed for the video transmission bounced around too much to maintain a transmission to Earth.
Because of this, only audio transmissions are available for much of the crew’s time working on the surface.

I contacted Dr. Noah Petro, Deputy Project Scientist of the LRO mission at Goddard, and asked for his assistance in understanding the data available online.
Dr. Petro and his colleagues very generously provided me with even higher-resolution data and photography and helped me enormously in understanding how to work with it.
I pulled this data into Cinema 4D, a piece of professional 3D rendering software, to see if I could create a workable simulation of being in close proximity to the Apollo 17 astronauts as they drove around on the surface.

High-resolution topography data of the Taurus-Littrow valley

Test displacement mesh using topography data

The first step was to displace a 3D mesh using the LRO topography data.
In theory, this would recreate the relief of the lunar surface, down to individual craters.
The image above shows one of my first tests.
You can see the valley floor and, if you know where to look, the landing site of the Lunar Module.
I was on to something.
The topography data provided to me by Noah and his team is at 16-bit resolution, meaning each elevation sample can take one of 65,536 discrete values, so the height of each point on the ground is represented with extreme precision.
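The displacement itself was done inside Cinema 4D, but for anyone curious about the underlying idea, here is a minimal Python sketch of turning a 16-bit heightmap into a displaced grid mesh. The file name, ground sample distance, and height scale below are placeholders, not values from the actual LRO products.

```python
import numpy as np
from PIL import Image  # Pillow reads 16-bit TIFF/PNG elevation rasters

# Placeholder file name and scale factors -- not the real LRO product values.
HEIGHTMAP = "taurus_littrow_dem_16bit.tif"
METERS_PER_PIXEL = 5.0   # assumed ground sample distance of the DEM
HEIGHT_SCALE = 0.5       # assumed meters of elevation per 16-bit data number

dem = np.asarray(Image.open(HEIGHTMAP), dtype=np.float32)  # shape (rows, cols)
rows, cols = dem.shape

# One vertex per DEM sample: x/y from the pixel grid, z from the 16-bit value.
ys, xs = np.mgrid[0:rows, 0:cols]
vertices = np.column_stack([
    xs.ravel() * METERS_PER_PIXEL,
    ys.ravel() * METERS_PER_PIXEL,
    dem.ravel() * HEIGHT_SCALE,
])

# Two triangles per grid cell, so the displaced surface can be exported as a mesh.
faces = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        faces.append((i, i + 1, i + cols))
        faces.append((i + 1, i + cols + 1, i + cols))

print(f"{len(vertices):,} vertices, {len(faces):,} triangles")
```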

High-resolution photography of the Taurus-Littrow valley

Close-up of Apollo 17 landing area

The next step was to texture the surface using the high-resolution photography taken by the LRO.
This would have the added benefit of including real sun shading from the lunar surface contained in the photos themselves.
I wouldn’t have to artificially light the 3D scene at all.

The images captured by the LRO are in such high resolution that you can see the lander and even the footprints left behind by the crews of all of the Apollo missions.
The close-up image above shows the Apollo 17 landing site.
Note the footprints running east and west from the lander.

Top: textured surface mesh previsualization; Bottom: view from the Lunar Rover previsualization

The result of combining the displaced mesh with the photography is a textured mesh that is a replica of the real lunar surface to an accuracy of approximately 60 cm per pixel.
The dataset is so large (and therefore so detailed) that it requires 64 GB of RAM just to open the texture file.
In the image above, you can see the terrain mesh textured with the lunar photography.
Here I’m showing the surface mesh lines to illustrate how the data works together to create a realistic reconstruction of the lunar surface.
In the photo, you can also see small models of the Lunar Module and the LRV.
I found these models online and scaled them appropriately to the image data.
In fact, I used the actual tracks on the lunar surface left by the real LRV to verify that I got the scale right!
Incredible.
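As a rough illustration of that kind of scale check, here is a tiny Python sketch. The pixel measurement is a hypothetical placeholder, and the LRV's roughly 1.8 m wheel track is an approximate published figure rather than anything measured in this post.

```python
# Rough scale sanity check: compare the wheel-track spacing measured in the
# imagery against the LRV's approximate real-world wheel track (~1.8 m).
METERS_PER_PIXEL = 0.6          # resolution of the textured surface
LRV_WHEEL_TRACK_M = 1.8         # approximate distance between left/right wheel ruts

measured_track_pixels = 3.0     # hypothetical: pixels between the two ruts in the image
measured_track_m = measured_track_pixels * METERS_PER_PIXEL

error = abs(measured_track_m - LRV_WHEEL_TRACK_M) / LRV_WHEEL_TRACK_M
print(f"measured {measured_track_m:.2f} m vs {LRV_WHEEL_TRACK_M:.2f} m ({error:.0%} off)")
```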

Actual photo taken by the crew of the same view as the 3D model above

To test the accuracy of the result, I placed a virtual camera in the 3D scene at the approximate position of Jack Schmitt’s Hasselblad camera on the LRV.
Schmitt was tasked with taking LRV traverse photos as they drove around on the Moon.
Each of these photos has the TV camera that was mounted at the front of the LRV at the bottom center of the frame.
For this test, I pointed the LRV northeast from the landing site and compared the virtual 3D “photo” with a real mission photo.
You can see the same hills in the background at roughly the same scale.
The lighting is different, but that’s because the LRO scanned the area at a different time of lunar day than when the crew visited the valley in 1972.
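The camera match was done by eye, but the field of view of the virtual camera can be estimated from the lens and film format. The roughly 60 mm lens and roughly 55 mm square frame below are approximations for the surface Hasselblad, not figures taken from the mission documentation.

```python
import math

# Approximate horizontal field of view for a virtual camera matched to the
# surface Hasselblad, assuming a ~60 mm lens and a ~55 mm square film frame.
FOCAL_LENGTH_MM = 60.0
FRAME_WIDTH_MM = 55.0

fov_deg = math.degrees(2 * math.atan(FRAME_WIDTH_MM / (2 * FOCAL_LENGTH_MM)))
print(f"field of view = {fov_deg:.1f} degrees")  # roughly 49 degrees
```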

Follow the Path

Apollo 17 Traverse Map

The next step was to understand, in detail, exactly where the crew traveled on the lunar surface. Using original post-mission analysis documents from the 1970s (shown above), I roughly plotted out the path followed from station to station. This wasn’t detailed enough to account for every small deviation and turn made by Cernan as he drove. To reach the next level of granular detail, I “timed out” the traverses using the transcript and audio recordings I had previously restored for apollo17.org.

Here is an example of a timing transcript. This is the first traverse from around the Lunar Module to Station 1.


ALSEP to Station 1
0:00:00 - 121:36:14 - TV off
0:07:29 - 121:43:43 - LRV starts moving first turns northwest a bit, then east
0:08:32 - 121:44:46 - avoiding a crater
0:09:29 - 121:45:43 - LRV at LM
0:09:45 - 121:45:59 - LRV starts moving to SEP site
0:11:10 - 121:47:24 - Stopped at SEP bearing 278, 003
0:14:48 - 121:51:02 - On the move to Station 1, turning to 181
0:15:27 - 121:51:41 - Trident east
0:16:07 - 121:52:21 - "whoa, watch it" (falling in crater)
0:16:26 - 121:52:40 - "you've got another hole on your right here"
0:16:33 - 121:52:47 - "why don't you go left there"
0:17:03 - 121:53:17 - "we need to head south"
0:17:31 - 121:53:45 - headed due south
0:18:00 - 121:54:14 - abreast of, or just above Trident crater
0:18:00 - 121:54:14 - 330, 0.3
0:19:48 - 121:56:02 - turning to 181 degrees
0:22:28 - 121:58:42 - 0.7km from steno
0:23:24 - 121:59:38 - abreast of Powell
0:23:56 - 122:00:10 - 342, 0.9
0:24:57 - 122:01:11 - Steno at 9 o'clock
0:26:47 - 122:03:01 - 346, 1.1
0:27:48 - 122:04:02 - Parked at 180
0:30:35 - 122:06:49 - TV on

This shows that a 30-minute, 35-second video is required for the traverse and includes every movement mentioned by the crew along the way. Using it as the basis for a 3D animation, I plotted the timing of the LRV’s movement over the course mapped in the 1970s documents. This first round of timed movement showed the rover moving at a constant rate from timed event to timed event. The LRV’s speed is known to have averaged 13 km/h (with a maximum of 18 km/h going downhill), so I could error-check the timed course animation I created from the raw data by making sure it corroborated the LRV’s actual historical speed. Many errors in the course were caught and corrected this way. Further, the photography taken by Schmitt on each traverse could be compared directly with the animated orientation of the rover as the traverse progressed. These snapshots captured not only the terrain in front of the rover, but also the rover’s orientation on the surface. Having timed these photos last year, I could use them as a basis for understanding how the LRV zigged and zagged across the surface from station to station.
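The timing and error-checking were done inside the animation software, but the bookkeeping can be sketched in a few lines of Python: parse the "video offset - GET - note" lines from the timing transcript, then check the implied average speed between two waypoints against the LRV's known driving speed. The 1.5 km distance below is a hypothetical placeholder, not a measured leg length.

```python
import re

# Parse "video offset - GET - note" lines like those in the timing transcript above.
LINE = re.compile(r"(\d+:\d{2}:\d{2}) - (\d+:\d{2}:\d{2}) - (.+)")

def to_seconds(hms: str) -> int:
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def parse(transcript: str):
    events = []
    for line in transcript.strip().splitlines():
        m = LINE.match(line.strip())
        if m:
            offset, get, note = m.groups()
            events.append((to_seconds(offset), get, note))
    return events

events = parse("""
0:14:48 - 121:51:02 - On the move to Station 1, turning to 181
0:27:48 - 122:04:02 - Parked at 180
""")

# Hypothetical straight-line distance for this leg, in km (placeholder value).
distance_km = 1.5
elapsed_h = (events[-1][0] - events[0][0]) / 3600
print(f"implied average speed: {distance_km / elapsed_h:.1f} km/h")  # compare against ~13 km/h
```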

Rendering the Moon

Rendering the Moon

Rendering 3D animation is very computer-intensive work, especially an animation with such high-resolution textures and meshes. Cinema 4D has a great feature: the ability to distribute the rendering workload across multiple machines. I borrowed as many computers as I could muster from friends and colleagues. These ranged in capability and age, but every little bit of CPU power helped. With this makeshift render farm, I heated my basement for about six weeks this summer rendering out the animations of each traverse, 14 in all.

The resulting animations far exceeded my expectations. The last step was to drop them into the video projects I had used to originally reconstruct the transcripts and re-export them to YouTube.

Here are links to the LRO animated LRV traverses for those interested:


