At the end of 2016, I was brought in on another HTC Vive project with Dot Dot Dash. This time, HTC was announcing a new virtual reality device called the Vive Tracker, which could be attached to any physical object to turn it into a controller.
The software development team was split in two: Zach Krausnick was tasked with developing the virtual experience, while I handled the electrical engineering and software that would make a custom controller possible.
Exoplanet launched at the start of CES 2017.
February 2016
Portland-based experience design company Dot Dot Dash brought me in to work on their Court Vision project with Mountain Dew and HTC Vive. For this first-ever Vive exhibit outside of a tech conference, I served as the lead virtual reality technologist. I developed a method of capturing and viewing paintings in Tilt Brush using a touch screen, as well as a dynamically updating overlay that helped viewers queueing for a chance to experience virtual reality understand how long the wait would be until their turn.
March-August 2015
About a year after I arrived at Wild Blue Technologies, they launched a side business called Avatarium 3D, whose primary focus was creating 3D models through photogrammetry. The studio contained an array of 88 SLR cameras that could capture a 360x180 degree snapshot in an instant.
The system worked well, but it required customers to travel to the main office to be scanned. With the rise in popularity of 3D printing, Avatarium 3D decided to expand its reach by offering 3D prints of its scans and by developing the capability to take scanning on the road.
With this shift in focus, I was given the opportunity to design the electronic infrastructure and operating software capable of capturing over 100 images at once in a setup small enough to be packed up, moved, and installed within a couple of days.
Many of the mobile scanning systems on the market at the time used Kinects or scanning wands to capture information, but they required the subject to sit very still while their image was being taken. Using a combination of Python and PHP, I set up a system that mirrored the instantaneous quality of the larger SLR array but could be controlled via a custom API over a closed network.
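The gist of the trigger path can be sketched in Python like this; the port, broadcast address, and camera call below are placeholders rather than the production code:

```python
# Minimal sketch of the trigger path (not the production Python/PHP code):
# a coordinator broadcasts one UDP packet over the closed network and every
# camera node fires at the same moment. Port, address, and camera call are
# placeholders.
import socket

CAPTURE_PORT = 9000                    # hypothetical port the camera nodes listen on
BROADCAST_ADDR = "192.168.1.255"       # broadcast address of the closed network

def trigger_capture(session_id: str) -> None:
    """Send a single broadcast packet; every listening node fires at once."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(f"CAPTURE {session_id}".encode(), (BROADCAST_ADDR, CAPTURE_PORT))
    sock.close()

def camera_node() -> None:
    """Runs on each camera computer: wait for the trigger, then take a photo."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", CAPTURE_PORT))
    while True:
        message, _ = sock.recvfrom(1024)
        if message.startswith(b"CAPTURE"):
            take_photo(message.split()[1].decode())

def take_photo(session_id: str) -> None:
    # stand-in for the real camera call
    print(f"capturing frame for session {session_id}")
```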
January-August 2015
While working at Wild Blue Technologies I was part of a creative team responsible for designing and developing novel hands-on advertisement experiences.
We wanted a self-guided way for customers to try a new product without requiring store staff to watch over the display. The result was the Smile Sampler: when a guest approaches the kiosk, they are prompted to smile. Upon doing so, the kiosk dispenses a predetermined confection.
The smile detection serves a two-fold purpose. First, in the customer's mind, smiling becomes associated with the candy they receive. Second, it provides a reason to have a video feed: when a product is dispensed, a snapshot of the user's face is processed into biometric information. Storing this information allows the system to lock that person out, preventing them from taking another sample for a limited amount of time.
In this version of the Smile Sampler, I developed the backend hardware and software that allowed a Bluetooth-enabled device to dispense products with a simple API call.
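To give a sense of how the lockout and dispensing fit together, here is a rough sketch; the lockout window, similarity threshold, and dispense call are placeholders, not the shipped code:

```python
# Rough sketch of the lockout-and-dispense logic. The lockout window,
# similarity threshold, face-embedding input, and dispense call are all
# placeholders, not the shipped code.
import math
import time

LOCKOUT_SECONDS = 60 * 60      # assumed one-hour lockout per face
MATCH_THRESHOLD = 0.6          # assumed similarity cutoff
recent_faces = []              # list of (embedding, timestamp) pairs

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_locked_out(embedding, now=None):
    """True if a matching face was already served within the lockout window."""
    now = now or time.time()
    recent_faces[:] = [(e, t) for e, t in recent_faces if now - t < LOCKOUT_SECONDS]
    return any(distance(embedding, e) < MATCH_THRESHOLD for e, _ in recent_faces)

def handle_smile(embedding):
    if is_locked_out(embedding):
        return "come back later"
    recent_faces.append((embedding, time.time()))
    dispense()                 # the Bluetooth/API dispense call would go here
    return "enjoy!"

def dispense():
    print("dispensing one confection")
```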
March 2015
A classic background prop in any science fiction movie, the mysterious flashing panel is ubiquitous among blockbusters and B films alike.
This little light box was designed for our office Secret Santa gift exchange (although the exchange didn't happen until March 5th). I wasn't sure what to get my giftee at first, but then I realized that with a little imagination, I could give them a device that could do everything from driving a space station, to flying the Enterprise, to keeping Darth Vader alive. It could probably walk the dog and bathe the kids for all we know. It's so mysterious!
Ideally, gifts are supposed to be made or bought for around $20, so this baby is really low tech. After Christmas, Target was selling all of its small Christmas electronics for about $3, so I picked up these twinkling lights, which were battery operated, and some USB LED Christmas trees. I took it all apart and made the lights run on USB. The white screen is actually the diffuser panel from a broken LCD they were trashing at work. The frame is hand-cut black-on-black gator board.
Total size is 6 x 6 x 1.75 inches and it weighs 5.5oz.
Battery Powered LEDs: $3
USB Christmas Tree: $2
Gatorboard: $5 (I used about 1/4 of it)
Diffuser: Free
I have plans to remake one of these for myself that will use FadeCandy LEDs, laser-cut plastic, and some kind of wireless interface for updating images, possibly an Arduino Yun. That'll be far later down the line though.
February 23rd, 2015
This project aims to help users create their own printable sound portraits, either from a pre-selected sound file or from sound they record themselves, entirely in a web browser. Currently the project outputs SVG files, but I feel these are unfamiliar to the typical user, so I am working to have it output PDF files as well. The P5.js, Paper.js, and GreenSock JavaScript libraries were all used in various aspects of this ongoing project.
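The real pipeline runs in the browser with those libraries, but the core idea (mapping sampled amplitudes to printable vector geometry) can be sketched in a few lines of Python; the file name and scaling below are assumptions:

```python
# Rough Python analogue of the browser pipeline: map sampled audio amplitudes
# (values between -1 and 1) to a printable SVG waveform. The file name and
# scaling are assumptions, not the project's actual output settings.
def amplitudes_to_svg(amplitudes, width=800, height=200, path="portrait.svg"):
    step = width / max(len(amplitudes) - 1, 1)
    mid = height / 2
    points = " ".join(
        f"{i * step:.1f},{mid - a * mid:.1f}" for i, a in enumerate(amplitudes)
    )
    svg = (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<polyline points="{points}" fill="none" stroke="black" stroke-width="1"/>'
        "</svg>"
    )
    with open(path, "w") as f:
        f.write(svg)

amplitudes_to_svg([0.0, 0.4, -0.3, 0.8, -0.6, 0.1])
```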
January 12th, 2015
A few years back I began creating tools for Adobe Illustrator using a plugin called Scriptographer. With this plugin, I could manipulate Illustrator files and objects using JavaScript. The Voronoi portion of this was originally written by Jonathan Puckey; I have repurposed his translation of Raymond Hill's work into a tool that allows a user to quickly sketch out shapes and then fill them with Voronoi patterns.
Today, Scriptographer no longer works with Illustrator, so I am recreating this process using Paper.js, Scriptographer's web-based sibling. Working this way, I will be able to use any browser to create my jewelry, and visitors to my site will be able to design their own pieces.
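The tool itself lives in JavaScript, but the core step is easy to illustrate in Python, with SciPy standing in for Puckey's code; this is purely a sketch of the idea, not the tool:

```python
# Illustrative Python stand-in for the core step (the real tool does this in
# Paper.js using Puckey's port of Raymond Hill's code): scatter points inside
# a region and compute the Voronoi edges that become the fill pattern.
import numpy as np
from scipy.spatial import Voronoi

def voronoi_edges(n_points=50, width=100, height=100, seed=0):
    rng = np.random.default_rng(seed)
    points = rng.uniform([0, 0], [width, height], size=(n_points, 2))
    vor = Voronoi(points)
    edges = []
    for a, b in vor.ridge_vertices:
        if a != -1 and b != -1:                  # keep only the finite edges
            edges.append((vor.vertices[a], vor.vertices[b]))
    return edges                                  # each edge is a (start, end) pair

for start, end in voronoi_edges()[:5]:
    print(start, "->", end)
```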
November 8th, 2014
Inspired by various object-based portraits around the internet (LEGO bricks, candy, Rubik's Cubes), I wanted to try my hand at it using dice. After some prep work in Photoshop, a program I wrote in Processing converts the image to a CSV file. Opening this CSV in Excel, I can color and size the cells in the spreadsheet (seen at the bottom of the image) to print out a guide at the exact size of the final portrait.
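The conversion program itself was written in Processing; a rough Python equivalent of the same idea looks like this (the dice count and brightness-to-face mapping are illustrative):

```python
# Rough Python equivalent of the Processing conversion step: downsample the
# photo, map each pixel's brightness to a die face (1 = dark, 6 = light), and
# write the grid to a CSV. Dice count and mapping are illustrative.
import csv
from PIL import Image

def image_to_dice_csv(image_path, out_path="dice.csv", dice_across=40):
    img = Image.open(image_path).convert("L")            # grayscale
    dice_down = round(dice_across * img.height / img.width)
    img = img.resize((dice_across, dice_down))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for y in range(dice_down):
            row = [1 + img.getpixel((x, y)) * 5 // 255 for x in range(dice_across)]
            writer.writerow(row)
```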
June 28th, 2013
As a continuing part of the National Aquarium's renovations, a new exhibit was created to let guests share reflections on what they learned about their relationship with water. Using a magnetic-poetry-style interface, users can create a phrase from a set of preselected words. Once finished, they are given the option of taking their own photo or choosing from several photos of animals, which can then be posted to the larger 22-foot screen.
The interactive kiosks are programmed in Flash, and the main screen was built in Unity 3D. I was not the primary programmer for this exhibit, but I did write the 2D and 3D mechanics for animating the ripple motion of the words on the larger screen. I also handled the primary software installation and quality assurance for the exhibit.
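The exhibit code itself is Flash and Unity, but the flavor of the ripple math is easy to sketch; the constants below are illustrative rather than the exhibit's tuned values:

```python
# Sketch of the flavor of ripple math involved: a damped sine wave radiating
# out from a disturbance point offsets each word's position over time.
# The constants are illustrative, not the exhibit's tuned values.
import math

def ripple_offset(word_pos, source_pos, t, speed=200.0, wavelength=80.0,
                  amplitude=20.0, damping=1.5):
    """Vertical offset for a word at time t (seconds) after the ripple starts."""
    dist = math.hypot(word_pos[0] - source_pos[0], word_pos[1] - source_pos[1])
    phase = (dist - speed * t) * 2 * math.pi / wavelength
    falloff = math.exp(-damping * t) / (1.0 + dist / wavelength)
    return amplitude * math.sin(phase) * falloff

print(ripple_offset((300, 120), (0, 0), t=0.5))
```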
November 9th, 2012
During the recent renovation of the National Aquarium in Baltimore, the primary fish tank was drained and rebuilt. To fill the space with content, the Richard Lewis Media Group was hired to create three animations to be projected onto the construction space. As part of this project, I wrote software to keystone and synchronize these animations in real time using Processing and OSC.
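The synchronization half of that job boils down to a master clock that every projection client follows. Sketched in Python rather than the original Processing (the client addresses and frame rate are assumptions):

```python
# Sketch of the synchronization idea: a master clock broadcasts the current
# frame number over OSC and each projection client seeks its animation to
# that frame. Client addresses and frame rate are assumptions.
import time
from pythonosc.udp_client import SimpleUDPClient

FPS = 30
clients = [SimpleUDPClient(ip, 8000) for ip in ("10.0.0.11", "10.0.0.12", "10.0.0.13")]

start = time.time()
while True:
    frame = int((time.time() - start) * FPS)
    for client in clients:
        client.send_message("/sync/frame", frame)    # clients seek to this frame
    time.sleep(1.0 / FPS)
```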
October 6th, 2012
“Health Happens Here” was an educational exhibit aimed at teaching children how to live better lives by making healthy decisions. On this project, I was the sole software developer for the “Si Se Puede” cooperative video game pictured above. Along with another team member, I also shared responsibility for installing all of the media pieces.
“Si Se Puede” was made with four computers running a Flash front-end interface. The computers were networked using Open Sound Control and synchronized by taking commands from a single primary station. That station also controlled lights within the physical game structure through commands sent to a show controller.
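The single-station pattern looks roughly like this when sketched in Python (the station addresses, OSC paths, and cue names are placeholders, not the exhibit's actual commands):

```python
# Sketch of the single-station pattern (station addresses, OSC paths, and cue
# names are placeholders): the primary station pushes game commands to the
# four player stations and light cues to the show controller.
from pythonosc.udp_client import SimpleUDPClient

stations = [SimpleUDPClient(f"10.0.0.{n}", 9001) for n in (11, 12, 13, 14)]
show_controller = SimpleUDPClient("10.0.0.50", 9002)

def start_round(round_number: int) -> None:
    for station in stations:
        station.send_message("/game/start_round", round_number)
    show_controller.send_message("/lights/cue", f"round_{round_number}_intro")

start_round(1)
```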
For the last semester of my graduate work, I served as a producer for a team working with Microsoft and the Xbox Live team on a project pitched to the ETC. Not only was I responsible for leading the team and keeping it on track, I was also responsible for creating 2D and 3D assets. Microsoft's idea was to have students from the ETC come to Redmond, Washington and create content for the large-format video walls found in every Microsoft Store. We were initially asked to produce four separate products that would build off of each other, but as we progressed, we realized the four concepts could be distilled down into two main goals.
The first goal was to create a theme for the wall that showcased animated versions of Xbox Live avatars. This was a unique step because the avatars are typically only seen moving on the Xbox or on Windows-based phones. While we were still learning the technical end of how to render and manipulate the avatars, one of our test themes got picked up for use at Microsoft Store openings.
Our second goal was unique for both the Microsoft video walls and the Xbox Live avatars: to create an interactive piece that could be played on the wall and that used the avatars. We came up with a game that could be played using a Windows Phone 7. When a user entered their gamertag (a unique ID for every player) and a special code found only in the store into a custom application on their phone, they could then play a game using their phone as the controller.
I was on the S.C.I.F.I. (Society for Collaborative Investigation into Futuristic Inventions) team in the Fall semester of 2010, where I served as both a hardware designer and programmer. The project was sponsored by Lockheed Martin. As pitched by the client, “the goal of this project is to take a science fiction story (of the team's choosing) and create a prototype of any three items or technologies in that book, movie, radio show, etc. (also of the team's choosing).” As a team of four, we were responsible not only for coming up with the concepts for the project, but also for creating prototypes of those ideas. After quite a few rounds of brainstorming, we came up with these three ideas:
The Arrow Of Apollo (inspired by the 2004 Battlestar Galactica remake)
A shoulder-mounted, hands-free location-finding device that projects an arrow-shaped laser mark onto the ground in front of the wearer. The arrow points to a pre-selected waypoint and automatically adjusts itself based on the wearer's own directional heading.
The Cardio Gauntlet (inspired by Star Trek tricorders)
A glove with built-in ECG (electrocardiography) capabilities. The wearer simply touches the patient with the ECG-enabled fingers, and the patient's heart rate is displayed on an Android phone mounted on the wearer's wrist.
The Dradis (also inspired by the 2004 Battlestar Galactica remake, along with the maps seen in first-person shooter games)
This device consists of two separate components. One is a hardware device that gathers data on local atmospheric conditions and relays information to other individuals who have a Dradis. The second is software for a Droid X phone that controls the Dradis, Cardio Gauntlet, and Arrow of Apollo via Bluetooth.
To give these devices a little more purpose, we also came up with a simple scenario describing their real-world functions. The basic premise is that the devices are designed as aids in an above-ground search and rescue or disaster response situation. We designed them under the assumption that roads are blocked or non-existent and cellular phone service is unavailable.
The first job of our hypothetical rescue worker is to get to their destination. If the user enters a predetermined GPS location into the Arrow of Apollo using their Droid X, the Arrow's laser will point them in the direction they need to walk to arrive at the given coordinates.
Once they arrive at the disaster or search area, the rescue worker may have to give medical aid to injured individuals at the scene. A very quick assessment of a patient's heart condition can be obtained by firing up the Cardio Gauntlet's software and placing the ECG components embedded in the glove on a bare patch of the patient's skin.
Finally, if the rescue worker needs to mark a location as hazardous, or share a point of interest they have found, all they have to do is mark it with the Dradis, which then relays the new information to other individuals carrying a Dradis device. Besides relaying marked locations to teammates, the Dradis can also relay the wearer's own location, allowing all teammates to keep track of each other in real time.
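Of the three devices, the Arrow of Apollo has the most self-contained math: compute the bearing from the wearer to the waypoint, then subtract the wearer's compass heading to get the rotation for the projected arrow. A rough sketch of that calculation (not the actual device code):

```python
# Rough sketch of the math behind the Arrow of Apollo (not the actual device
# code): compute the bearing from the wearer to the waypoint, then subtract
# the wearer's compass heading to get the rotation for the projected arrow.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def arrow_rotation(wearer_lat, wearer_lon, heading, target_lat, target_lon):
    """Degrees (clockwise) to rotate the arrow relative to where the wearer faces."""
    return (bearing_to(wearer_lat, wearer_lon, target_lat, target_lon) - heading) % 360

# wearer facing due north, waypoint due east of them -> roughly 90 degrees
print(arrow_rotation(40.4443, -79.9436, 0.0, 40.4443, -79.9300))
```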
This project was a student pitch that I joined because the team needed a hardware designer. Our aim was to create a low-budget, motion-controlled robot capable of time-lapse photography and of accurately reproducing motions in three dimensions (aka Motion Controlled Time Lapse, or MoCoTiLa). Early tests used a microcontroller to trigger a camera's shutter. This worked well at the beginning, but we soon found that we needed more control over the camera and decided to upgrade from a Canon EOS Rebel to a Canon 5D. Not only did this camera capture much higher resolution images, it could also shoot video and relay preview footage back to a computer in real time.
As the semester went along, we continued to develop the various parts of the robot simultaneously. Tom Corbett, one of the team's producers, and I worked on finding the components we would need to make the robot work properly, while Charles Daniel, our lead programmer, worked on controlling those parts from a computer.
At the end of the semester we did have a working model, albeit of prototype quality. Our machine was able to move along standard movie production track and repeat motions created through a custom web interface. Our biggest shortcoming was a lack of stability and resolution in the rotary joints. We also experienced environmental interference when filming outside: our motors were simply not strong enough to withstand anything more than a light breeze.
More extensive documentation can be found on the MoCoTiLa Project’s Website.
April 11th, 2010
Tools: Arduino, Processing, Ableton Live
As part of my Building Virtual Worlds class at the Entertainment Technology Center, I built a laser harp using an Arduino and some Sharp range sensors. Initially, the laser harp was intended to run through a MIDI interface and control Ableton Live. I did get this working, but due to complications with the sensors, I needed more processing power than the Arduino readily provided.
The second round of the harp yielded software written in Processing that pulled data from the sensors and published the cleaned-up information to the network via Open Sound Control. This was then picked up by a program built in Panda3D and Python by Danielle Holstine specifically for a performance piece, which, unfortunately, never made it into the BVW show. Instead, the harp and a third program were presented at the BVW after-party. That third program, which I also wrote in Processing, allowed users to control the height of fountains of color, the gravity that affects them, and the volume of musical notes (this is seen in the images above as well as the video below, with the sound removed).
The video to the right demonstrates the use of OSC to control a separate program (in this case, the same program used at the BVW after-party). The window on the left gets information from the laser harp, interprets it, and then publishes it to the network using the OSC protocol (indicated by green “laser” lines). When the laser harp is unavailable, the lasers appear red, indicating that the program is in its laser harp simulation mode. In this mode, the software generates messages as if it were hooked to the harp, with the messages controlled by the sliders below the lasers.
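For anyone curious about the plumbing, the harp's data path sketches out roughly like this in Python (the original was Processing; the serial port, baud rate, message format, and OSC address are all assumptions):

```python
# Rough Python sketch of the harp's data path (the original was written in
# Processing): read Sharp IR range values from the Arduino over serial, smooth
# them, and publish them over OSC. The serial port, baud rate, message format,
# and OSC address are all assumptions.
import serial                                    # pyserial
from pythonosc.udp_client import SimpleUDPClient

arduino = serial.Serial("/dev/ttyUSB0", 9600)    # assumed port and baud rate
osc = SimpleUDPClient("127.0.0.1", 12000)
smoothed = {}

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    try:
        string_id, raw = (int(v) for v in line.split(","))
    except ValueError:
        continue                                 # skip malformed readings
    # exponential smoothing to tame the noisy IR sensors
    smoothed[string_id] = 0.8 * smoothed.get(string_id, raw) + 0.2 * raw
    osc.send_message(f"/harp/string/{string_id}", smoothed[string_id])
```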
March 3rd, 2010
Tools: Processing
This was an attempt at creating an interface that would “spin” the user's drawing around a defined object. I drew inspiration from the way a spider spins silk around its prey.
February 17th, 2010
Tools: Processing
The concept behind this project was to explore a more dynamic method of brush painting. There are controls for the shape of the brush, as well as the overall width (the program is pressure sensitive when used with a tablet), length, and total number of segments between the start and end of the brush.
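The sketch itself was written in Processing, but the brush geometry boils down to something like this (the widths and taper are illustrative):

```python
# Rough outline of the brush geometry (the original sketch was written in
# Processing): interpolate a fixed number of segments between the stroke's
# start and end, scaling each segment's width by tablet pressure and a taper.
def brush_segments(start, end, pressure, base_width=20.0, segments=12):
    points = []
    for i in range(segments + 1):
        t = i / segments
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        width = base_width * pressure * (1.0 - t)    # taper to a point at the tip
        points.append((x, y, width))
    return points

for x, y, w in brush_segments((0, 0), (100, 40), pressure=0.7):
    print(f"({x:.1f}, {y:.1f}) width {w:.1f}")
```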
May 13, 2006
Tools: Maya
In the sophomore year of my undergraduate degree (2005), I was recruited to be a texturer, renderer, rigger, and programmer for the senior project of animator/designer Greg Bliss. These videos showcase some of the work I did during that time.
Modeling by Pieter Wessels
Animation by Greg Bliss
All other work by myself