The interns turn up the heat and create a Rubens' tube karaoke game. Start with a long metal tube, drill holes every few centimeters, then connect a speaker to one side and pump propane into the other. Add a spark and you get a sonic wave of fire!
The goal of this project was to create a karaoke system with a Rubens' tube fire visualizer. Rubens' tubes have been around since 1905, so they are nothing new, but we wanted to see how we could bring it into the 21st century.
To make a Rubens' tube, you need to drill regularly spaced holes in a metal pipe. Propane feeds into the tube through a valve at one end and slowly leaks out of the holes along the top of the pipe. This gas is then ignited, forming a line of standing flames. A speaker sealed to the other end of the tube emits sound waves into the pipe, vibrating the horizontal column of gas inside. When you play a tone whose half-wavelength divides evenly into the length of the tube, you create a standing wave, and the flame heights trace out a sine pattern in the fire. For our system, we connected the speaker to the audio output of an NI myDAQ device so we could play any sound we wanted from the computer.
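For a tube closed at both ends, those resonant frequencies fall at f_n = n·v/(2L). A minimal Python sketch of that relationship, where the 2 m tube length and the roughly 258 m/s speed of sound in propane are illustrative assumptions rather than measurements from our build:

```python
def resonant_frequencies(length_m, speed_m_s=258.0, n_modes=4):
    """First n_modes standing-wave frequencies (Hz) of a tube closed
    at both ends: f_n = n * v / (2 * L)."""
    return [n * speed_m_s / (2.0 * length_m) for n in range(1, n_modes + 1)]

# A hypothetical 2 m tube filled with propane.
print(resonant_frequencies(2.0))  # [64.5, 129.0, 193.5, 258.0]
```

Any tone near one of these frequencies produces a clean, stationary flame pattern; tones in between just flicker.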
Our karaoke program utilizes three NI myDAQ devices, connected as seen in the diagram to the right. Two myDAQ devices serve as audio in/out for each player, connecting to a microphone and a set of headphones. The third myDAQ device takes in the song to be played and outputs the music to the speaker attached to the Rubens' tube. The myDAQs then connect to the computer via USB.
LabVIEW 2011 or Later
DAQmx 9.3 or Later
The note being sung is displayed right next to the LEDs. The plot on top shows the frequency being detected, along with a history of previously detected frequencies, since the x-axis is time. The next plot shows the notes being sung. Keep in mind these plots show all three channels at once: the song in yellow, Player 1 in red, and Player 2 in blue. The chart below that is the high score chart, which holds a record of all player names and scores in a text file saved on the computer.
This is the code for acquiring data from the NI myDAQ devices. It uses a producer/consumer architecture to maximize performance while we collect, process, and display data. Below is the code where we retrieve the microphone data from the singers and place it in their respective queues.
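Our actual loops are LabVIEW block diagrams, but the producer/consumer pattern translates directly to text code. A minimal Python sketch, with a fake microphone standing in for the myDAQ read and `sum` standing in for the real processing step:

```python
import queue
import threading

def producer(read_block, out_q, n_blocks):
    # Producer loop: grab a block of microphone samples and hand it off
    # immediately, so acquisition never waits on processing.
    for _ in range(n_blocks):
        out_q.put(read_block())
    out_q.put(None)  # sentinel tells the consumer we're done

def consumer(in_q, process, results):
    # Consumer loop: runs in parallel, dequeues blocks, processes them.
    while True:
        block = in_q.get()
        if block is None:
            break
        results.append(process(block))

# Usage with a fake microphone that yields three sample blocks.
q = queue.Queue()
blocks = iter([[1, 2], [3, 4], [5, 6]])
results = []
t = threading.Thread(target=consumer, args=(q, sum, results))
t.start()
producer(lambda: next(blocks), q, 3)
t.join()
print(results)  # [3, 7, 11]
```

The queue decouples the two loops: the producer runs at the hardware's acquisition rate while the consumer catches up whenever processing is slow.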
Once we have this data, we queue it for the consumer loop to process. The consumer loop, which runs in parallel with the producer loop shown above, dequeues our time-domain data and converts it to the frequency domain using the Fast Fourier Transform shown below. The scaling factor of 8 was tuned to the magnitude of the signal we were getting from the microphones.
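As a rough text-code equivalent of that LabVIEW step, here is a pitch detector sketched with NumPy's FFT. Picking the single strongest spectral bin is a simplification (real voices have harmonics that can outweigh the fundamental), and the sample rate and test tone are just illustrative values:

```python
import numpy as np

def fundamental_frequency(samples, sample_rate):
    # Magnitude spectrum via the real FFT; the strongest bin (ignoring DC)
    # is taken as the fundamental.
    spectrum = np.abs(np.fft.rfft(samples))
    spectrum[0] = 0.0  # drop the DC offset
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * sample_rate / len(samples)

# One second of a 440 Hz sine sampled at 8 kHz gives 1 Hz bin spacing.
sr = 8000
t = np.arange(sr) / sr
print(fundamental_frequency(np.sin(2 * np.pi * 440.0 * t), sr))  # 440.0
```

The bin spacing is sample_rate / block_length, so longer blocks give finer pitch resolution at the cost of slower response.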
Once we have the fundamental frequency, f0, we use the above code to determine which note the player is singing as well as how far they are from that note. We display the closest note the player is singing, but we use the distance from the note for scoring.
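The frequency-to-note mapping follows from equal temperament, where each semitone is a factor of 2^(1/12) away from A4 at 440 Hz. A sketch of that conversion (the note-name table and the cents-based "distance" are standard conventions, not our exact LabVIEW implementation):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(f0, a4=440.0):
    """Return (note, octave, cents off) for a fundamental frequency f0."""
    semitones_from_a4 = 12 * math.log2(f0 / a4)
    nearest = round(semitones_from_a4)
    cents_off = 100 * (semitones_from_a4 - nearest)  # distance used for scoring
    midi = 69 + nearest  # A4 is MIDI note 69
    return NOTE_NAMES[midi % 12], midi // 12 - 1, cents_off

print(nearest_note(440.0))  # ('A', 4, 0.0)
```

The note name and octave drive the display, while the cents offset (100 cents per semitone) gives a smooth measure of how far off-pitch the singer is.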
We do this for both players and compare their notes to the calculated notes of the song to compute the score in VS mode. The score increases the closer and longer a player matches the note of the song. This is represented by the three LEDs next to each player's name.
For duet scoring we use the same algorithm, except that we compare the two singers to each other and score them on how closely their notes match.
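Both modes can share one proximity function, since only the comparison target changes. A minimal sketch; the linear taper and the 50-cent tolerance here are illustrative choices, not the tuned values from our LabVIEW VI:

```python
def proximity_score(sung_cents, target_cents, tolerance=50.0):
    # Full credit on the target pitch, tapering linearly to zero at
    # `tolerance` cents away.
    distance = abs(sung_cents - target_cents)
    return max(0.0, 1.0 - distance / tolerance)

def vs_score(player_cents, song_cents):
    # VS mode: each frame the player is compared against the song's pitch,
    # so holding the right note longer keeps accumulating points.
    return sum(proximity_score(p, s) for p, s in zip(player_cents, song_cents))

def duet_score(p1_cents, p2_cents):
    # Duet mode: same algorithm, but the singers are compared to each other.
    return sum(proximity_score(a, b) for a, b in zip(p1_cents, p2_cents))

# Three frames: dead-on, 25 cents sharp, then well off pitch.
print(vs_score([0.0, 25.0, 60.0], [0.0, 0.0, 0.0]))  # 1.5
```

Summing per-frame scores is what rewards both accuracy and duration: a note held on pitch for many frames earns more than a brief perfect hit.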
Once we are done singing, we press the 'Done' button, which writes the players' scores to the high score chart and ends the game.
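Persisting the chart is just an append-and-sort over a text file. A sketch of that step, with a tab-separated layout and a temporary file as assumptions (our actual file format may differ):

```python
import os
import tempfile

def append_score(path, name, score):
    # One tab-separated line per finished game, appended to the text file.
    with open(path, "a") as f:
        f.write(f"{name}\t{score}\n")

def high_scores(path):
    # Read the file back and sort best-first for the high score chart.
    entries = []
    with open(path) as f:
        for line in f:
            name, score = line.rstrip("\n").split("\t")
            entries.append((name, float(score)))
    return sorted(entries, key=lambda e: e[1], reverse=True)

# Usage with a temporary file standing in for the saved chart.
path = os.path.join(tempfile.mkdtemp(), "high_scores.txt")
append_score(path, "Player 1", 82.5)
append_score(path, "Player 2", 91.0)
print(high_scores(path))  # [('Player 2', 91.0), ('Player 1', 82.5)]
```

Because the file is append-only, every game ever played stays in the record, and the chart simply re-sorts on load.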