Posts with «speech recognition» label

Start a 1976 Jeep with voice commands using a MacBook and an Arduino

After being given a 2009 MacBook, John Forsyth decided to use it to start a 1976 Jeep via voice control.

The build uses the laptop’s Enhanced Dictation functionality to convert speech into text, and when a Python program receives the proper keywords, it sends an “H” character over serial to an Arduino Uno to activate the vehicle.

The Uno uses a transistor to control a 12V relay, which passes current to the Jeep’s starter solenoid. After a short delay, the MacBook then transmits an “L” command to have it release the relay, ready to do the job again when needed!
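The Arduino side of that protocol needs only a few lines. Here is a minimal sketch of the idea, assuming the relay transistor is driven from pin 7 at 9600 baud (both details are guesses, not specifics from Forsyth’s build):

// Minimal sketch of the H/L relay protocol described above.
// Pin 7 and 9600 baud are assumptions, not details from the original build.
const int RELAY_PIN = 7;

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, LOW);    // relay released at power-up
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == 'H') digitalWrite(RELAY_PIN, HIGH);  // engage the starter relay
    if (c == 'L') digitalWrite(RELAY_PIN, LOW);   // release it, ready for next time
  }
}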

As a fan of Iron Man, Forsyth channeled his inner Tony Stark and even programmed the system to respond to “JARVIS, let’s get things going!”

Arduino Clock Is HAL 1000

In the movie 2001: A Space Odyssey, HAL 9000 — the neurotic computer — had a birthday in 1992 (for some reason, in the book it is 1997). In the late 1960s, that date sounded impossibly far away, but now it seems like a distant memory. The only thing is, we are only now starting to get computers with practical voice I/O, and even they are a far cry from HAL.

[GeraldF6] built an Arduino-based clock. That’s nothing new, but thanks to a MOVI board (ok, shield), this clock has voice input and output, as you can see in the video below. Unlike most modern speech-enabled devices, the MOVI board (and, thus, the clock) does not use an external server in the cloud or any remote processing at all. On the other hand, the speech quality isn’t what you might expect from any of the modern smartphone assistants that talk. We estimate it might be about 1/9 the power of the HAL 9000.

You might wonder what you have to say to a clock. You’ll see in the video you can do things like set and query timers. Unlike HAL, the device works like a Star Trek computer. You address it as Arduino. Then it beeps and you can speak a command. There’s also a real-time clock module.

Setting up the MOVI is simple:

 recognizer.init(); // Initialize MOVI (waits for it to boot)
 recognizer.callSign("Arduino"); // Train callsign Arduino (may take 20 seconds)
 recognizer.addSentence(F("What time is it ?")); // Add sentence 1
 recognizer.addSentence(F("What is the time ?")); // Add sentence 2
 recognizer.addSentence(F("What is the date ?")); // Add sentence 3
...

Then a call to recognizer.poll will return a numeric code for anything it hears. Here is a snippet:

// Get result from MOVI, 0 denotes nothing happened,
// negative values denote events (see docs)
signed int res = recognizer.poll();

// Tell current time
if (res == 1 || res == 2) { // Sentence 1 or 2
  if (now.hour() > 12)
    recognizer.say("It's " + String(now.hour() - 12) + " " +
        (now.minute() < 10 ? "O" : "") + String(now.minute()) + " P M"); // Speak the time
...

Fairly easy.

HAL, being a NASA project (USSC, not NASA, and HAL was a product of a lab at the University of Illinois Urbana-Champaign – ed.), probably cost millions, but the MOVI board is $70-$90. It also isn’t likely to go crazy and try to kill you, so that’s another bonus. Maybe we’ll build one in a different casing. We recently talked about neural networks improving speech recognition and synthesis. This is a long way from that.


Filed under: Arduino Hacks, clock hacks

Speech Recognition and Synthesis with Arduino

In my previous post, I showed how to control a few LEDs using an Arduino board and BitVoicer Server. In this post, I am going to make things a little more complicated.

read more

Speech Recognition with Arduino and BitVoicer Server

In this post I am going to show how to use an Arduino board and BitVoicer Server to control a few LEDs with voice commands. I will be using the Arduino Micro in this post, but you can use any Arduino board you have at hand.

The following procedures will be executed to transform voice commands into LED activity:

read more

Roomba, I Command Thee: Use Raspberry Pi for Voice Control

Take advantage of these open source resources to set up voice control with Raspberry Pi and bark orders at your home appliances.

Read more on MAKE

The post Roomba, I Command Thee: Use Raspberry Pi for Voice Control appeared first on Make: DIY Projects, How-Tos, Electronics, Crafts and Ideas for Makers.

Speech / Voice Recognition – remix.

It’s about the right time to release one more “remix” of one of my blog posts, published almost a year ago. I didn’t get many comments on the topic, not as many as I was expecting. There are some reasons that would explain this phenomenon, but I’d better start by outlining what I did in the new release; some of you who tried the old version will be impressed by the progress I’ve made!

The basic structure was left almost intact. The essential parts of the project, 2D filtering and cross-correlation, are the same, so please read my old post first if you come across this one without having seen it. What differs is the “preprocessing” before we get to the filtering stage. First, I ported the code to the Leonardo board. The ease of connecting an electret mic to the Leonardo just didn’t give me a choice! I already played with the Leonardo’s ADC and electret mics in my previous post, and I assure you these guys were designed to work as a team. Uno followers will have to solder a pre-amplifier; not a big deal, but at the same time not really interesting. The code will run on the Uno, except for the Timer and ADC settings, which you can always copy and paste from the old version. Feel free, be my guest.

The analog front-end is exactly the same as the one I used in my Sound Localization project. You need only one mic here, so just reduce the number of electrical components accordingly.

The sampling subroutine is based on Timer 4, with the Arduino Leonardo’s internal PGA set to a gain of x40. There are comments in the code, so you can adjust the gain up or down depending on the sensitivity of your mic.
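For Uno followers, here is a minimal sketch of what a timer-driven 4 kHz sampler looks like on the ATmega328p, using Timer 1 in CTC mode (an illustration only; the actual Leonardo code uses Timer 4 and the 32U4’s PGA, and a mid-rail biased mic is assumed here):

// Illustrative 4 kHz sampler for an ATmega328p (Uno), Timer 1 in CTC mode.
// The real sketch uses Timer 4 and the x40 PGA on the Leonardo.
volatile int sample;   // consumed by the FFT stage in the real sketch

void setup() {
  analogRead(A0);                       // configure the ADC mux and reference once
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = (1 << WGM12) | (1 << CS11);  // CTC mode, prescaler 8
  OCR1A = 499;                          // 16 MHz / 8 / (499 + 1) = 4 kHz
  TIMSK1 = (1 << OCIE1A);               // enable compare-match interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {
  sample = ADC - 512;                   // read last conversion, remove mid-rail DC bias
  ADCSRA |= (1 << ADSC);                // start the next conversion
}

void loop() {}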

The windowing LUT is a slightly modified Hamming/Hann cosine function; I’d say my table is an “intermediate” version of both mentioned inventors’ windows.
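In generalized-cosine form, Hann and Hamming differ only in a single coefficient, so an “intermediate” table is easy to generate. A minimal sketch of the idea (the 0.52 coefficient is an assumed value between Hann’s 0.50 and Hamming’s 0.54, not the one from the actual LUT):

// Generalized cosine window: a0 = 0.50 gives Hann, a0 = 0.54 gives Hamming.
// An in-between a0 (0.52 here, an assumption) yields an "intermediate" window.
#define FFT_SIZE 128
int windowLUT[FFT_SIZE];

void buildWindow() {
  const float a0 = 0.52;
  for (int n = 0; n < FFT_SIZE; n++) {
    float w = a0 - (1.0 - a0) * cos(2.0 * PI * n / (FFT_SIZE - 1));
    windowLUT[n] = (int)(w * 32767);   // Q15 fixed point for integer math
  }
}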

The FFT is my best achievement of this year: RADIX-4. Compared to the old code, it’s about 3x faster. This is why I was able to increase FFT_SIZE up to 128 while still having plenty of time. The magnitude calculation is based on an approximation and is very fast, because no square root extraction is required. Accuracy in the worst-case scenario is ~95%, which is more than enough for this project.
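One standard way to skip the square root is the “alpha max plus beta min” estimate, which approximates sqrt(re^2 + im^2) from the larger and smaller of |re| and |im|. A minimal sketch with the classic integer-friendly coefficients (the actual sketch may use different constants):

// Alpha-max-plus-beta-min magnitude estimate: no square root needed.
// alpha = 1, beta = 1/2 is the classic integer-friendly pair; the real
// sketch may use other constants for better accuracy.
int magnitude(int re, int im) {
  int a = abs(re);
  int b = abs(im);
  int mx = max(a, b);
  int mn = min(a, b);
  return mx + (mn >> 1);   // ~ sqrt(re*re + im*im), worst-case error ~12%
}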

I changed the non-linear compression algorithm, as there are now more bins – 64 to pack into 16 bands. The math is simple and, I hope, doesn’t need extensive comments. Packing is necessary due to memory limits: a 1-second password in the current configuration (16 bands at a 4 kHz sampling rate) occupies 1 kByte, the full size of the EEPROM on the Leonardo or Uno boards.
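To picture the packing, think of keeping single low-frequency bins and merging progressively wider groups toward the top, so 64 bins collapse into 16 bands. A minimal sketch of one such mapping (the band edges below are illustrative guesses, not the ones used in the actual sketch):

// Pack 64 FFT bins into 16 bands, wider toward high frequencies.
// These band edges are illustrative; the real sketch's mapping may differ.
const byte bandEdge[17] = { 0, 1, 2, 3, 4, 5, 6, 8, 10, 12,
                            16, 20, 26, 34, 44, 54, 64 };

void packBands(const int bins[64], byte bands[16]) {
  for (byte b = 0; b < 16; b++) {
    int sum = 0;
    for (byte i = bandEdge[b]; i < bandEdge[b + 1]; i++)
      sum += bins[i];
    bands[b] = constrain(sum >> 2, 0, 127);  // scale into 0..127 (one byte per band)
  }
}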

The Command Line Interface is preserved. Here are the instructions for getting everything up and running.

And here is how the spectrogram looks in LibreOffice, for the test phrase “Front right” (OS: Linux Ubuntu 12.04):


Instructions.

Got your microphone wired and connected? Everything checked at least a couple of times with a multimeter, and the voltages look OK? Good, now you are ready to start! (Don’t forget to upload the sketch to your Arduino / Leonardo -);

1. X. First of all, open the serial monitor window and check the baud rate. It should be 115200. Next, type “x” and Enter. Get some response? Does it look like a table? Excellent, the data you are looking at is the “raw” sampling data. Probably just noise, acoustical or electrical.

2. F. Second test: type “f” and Enter. Again, the Arduino will print out a table. This time the data represents the analog signal processed by the FFT. Each bin corresponds to a frequency range of 32 Hz. If you have a signal generator, now is the right time to do a detailed check of the microphone, wiring, and software. If you don’t have one, you can use your computer’s sound card and some program; there are plenty of them available on-line free of charge. Connect the generator to the PC USB speakers and run a single tone, anything in the audible range 16 – 2000 Hz. Check again using the “f” command whether the Arduino registers a signal and it’s in the right bin. Due to the “windowing” function, even a pure single tone will show up in at least 2-3 neighboring bins. The amplitude depends on the sensitivity of the mic and the volume of the sound. By sending “x” you can confirm that there is no “clipping”: too many “–511” or “+511” values.

3. S. Next, if all went well up to this step, you have probably already noticed that whenever the mic picks up a sound, the yellow on-board LED lights up. Now it’s better to shut down your TV, iPad, and radio. Make your environment as quiet as possible. Try to say something with your own voice, and see if the LED lights up when you start talking and then goes off after ~1 sec. Repeat a few times, adjusting the volume and/or distance to the mic, so the LED goes on/off reliably. Send the “s” command while the LED is off. Don’t worry, there should be no report on screen until you say a word. The procedure is simple: send “s”, then say something. After you talk and get a printout, look carefully. Your objective is to get a “spectrogram” which consists of a few spots / blobs of digits, randomly distributed all over the “surface”. Repeat a few times with one word, then try another one and so on. With short words, the reported data will very likely be concentrated at the top (you may need to scroll up to see the beginning); if this is the case, it’s better to choose longer words or talk at a slower tempo. Just remember, there are three dimensions: time, frequency, and volume. You can use a signal generator here again; the spectrogram should look like a vertical line, maybe 2 - 3 parallel lines for low-frequency test tones. By changing the frequency, you can even get some curves. What is important: there must be no negative numbers. If you see them, “overloading” has happened; decrease the volume. The dynamic range is limited to 127; the closer you can get to this value without negatives, the better. And the last dimension is frequency. For a man, it can be a little bit hard to “detach” the spectrogram from the left border. That’s true for everyone who is not an opera singer, me being no exception....

4. R and G. After you have practiced enough and received many nice-looking spectrograms, it’s time to check whether the Arduino agrees with you / thinks similarly. Send “r” (recording) and say the word you got your best spectrum with. Wait a few seconds; writing to EEPROM takes time. Now send a “g” and say the same word, in the same manner, tonality, and volume. Look at the output for the cross-correlation factor you received. More than 50% is very good for a beginning. Less? Try again a few more times, sending “g” and repeating the same word, maybe slightly varying the pronunciation. Try to reach the best recognition; “crack” your own password code! (Note: negative numbers must be present on the “G” report form.) If there’s no luck, try another password code. Now you know the drill: S – repeat, repeat, repeat....., R, G – repeat, repeat, repeat..... (joke). Can’t get a good match? Try your computer’s test sounds: beeps, horns, clicks, barks, whatever your OS has. It doesn’t have to be shorter than 1 sec, but only the first second will be stored / compared. My computer is able to repeat the same sound track (speaker test, “front right”) with an enormously high cross-factor of 99%! I’m not a computer; my best shot is 86% so far...

5. P. This command simply reads the content of the EEPROM, so you can always verify what you stored last time. By editing / reformatting this data, you can store a table in the Arduino’s FLASH memory using PROGMEM, enough for about 10 – 15 commands. Of course, storing data on an external SD card or EEPROM could greatly increase the “vocabulary”; at the same time, the design of a fast cross-correlation algorithm with multiple “patterns” would be another brain-teasing puzzle -);

Have fun!

Link to arduino sketch, VOR (VOice Recognition).

 


Speech recognition on an Arduino

Speech recognition is usually the purview of fairly high-powered computers chugging along at hundreds of Megahertz with megabytes of RAM. Bringing speech recognition to the low-power microcontroller you’d find in an Arduino sounds like the work of a mad scientist or Ph.D. candidate, but that’s exactly what [Arjo Chakravarty] did. He developed the μSpeech library for the Arduino to allow for speech recognition for a limited set of voice commands.

Where most speech recognition systems use FFT and very fancy math to determine what phonemes a user is saying, [Arjo]’s system does away with this unnecessary complexity in favor of using very, very basic integral and differential calculus.
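The core idea can be sketched in a few lines: over a short window, compare the summed absolute derivative of the signal against its summed absolute amplitude. Hissy fricatives like /s/ and /f/ change fast relative to their loudness, while vowels change slowly. The snippet below is an illustration of that idea, not the actual μSpeech API:

// Illustration of derivative/amplitude phoneme classing (not the uSpeech API).
// A high ratio suggests a "hissy" fricative, a low ratio a vowel-like sound.
const int WINDOW = 32;

float complexity(const int samples[]) {
  long sumAbs = 0, sumDiff = 0;
  for (int i = 1; i < WINDOW; i++) {
    sumAbs  += abs(samples[i]);                  // integral of |signal|
    sumDiff += abs(samples[i] - samples[i - 1]); // integral of |derivative|
  }
  if (sumAbs == 0) return 0;
  return (float)sumDiff / (float)sumAbs;
}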

From [Arjo]’s user guide for μSpeech (PDF warning) we can see it’s possible to connect a small microphone to the analog input of an Arduino and accept voice commands such as ‘left’, ‘right’, and ‘stop’. The accuracy is pretty good, as well – 80% if μSpeech is trying to recognize words, and 30-40% if μSpeech is programmed to recognize single phonemes.

Sadly we couldn’t find a demo video of μSpeech in action, but you’re more than welcome to grab it via github for your own project. Send us a video of μSpeech in action and we’ll put it up.


Filed under: arduino hacks

Speech / Voice Recognition. Arduino project, next in a series FFT and Arduino.

Finally, I’d like to present the most sophisticated project I’ve done so far, built around the idea of turning an Arduino board into a DSP. The results are really impressive for a small microprocessor with low memory size and low MIPS. IMHO, the Arduino provides better results than the Windows Vista VR system on 1 GB / 2.2 GHz hardware, for short one- or two-word commands, of course.
No HMM, neural networks, or other very popular and “scientifically sounding” theories were considered for implementation in the algorithm. Google brings up millions of links on the topic, just ask, but only a few of them are designed on a really scientific concept rather than dumb database “sharpening”. I’m not saying they are completely wrong, and I’m not an expert in the field, but they are not smart either. My decision is simple 2D cross-correlation. Basically, the heart of the recognition algorithm is similar to an image-matching program, which works the same way for voice/sound. To create a spectrogram image, the Arduino continuously monitors the sound level via the microphone and starts capturing data when the VOX threshold is exceeded. After the input array “X” fills up, the data is transferred to the next level to calculate the FFT. The same “conveyor belt” works between the FFT and filtering stages: flags are raised when data is ready and lowered when the process is finished. The only difference is the speed; the conveyor belt runs faster passing data from ADC to FFT, and slower at the filter-correlation stage, as it takes 64 regular cycles to complete a spectrogram image in one SuperCycle. The most time-consuming part is the edge enhancement / HPF filtering of the spectrogram. I’m still looking for ways to improve the performance of this stage, as it holds the whole process back from being fully “real time”.
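In outline, the “conveyor belt” is just a pair of ready flags checked in the main loop while the sampling interrupt fills one buffer and the foreground code chews on the previous one. A minimal sketch of that hand-off (variable names and structure are illustrative, not from the actual sketch):

// Sketch of the flag-driven pipeline described above (names are illustrative).
#include <string.h>

#define FFT_SIZE 64
volatile bool captureReady = false;  // raised by the sampling ISR (not shown)
bool fftReady = false;               // raised after the FFT stage

volatile int captureBuf[FFT_SIZE];   // filled at 4 kHz by the sampler
int fftBuf[FFT_SIZE];

void setup() {}

void loop() {
  if (captureReady) {                // ADC -> FFT hand-off
    memcpy(fftBuf, (const void *)captureBuf, sizeof(fftBuf));
    captureReady = false;            // lower the flag: the belt moves on
    // ...run the FFT on fftBuf here...
    fftReady = true;
  }
  if (fftReady) {                    // FFT -> filter/correlation hand-off
    fftReady = false;
    // ...add one spectrogram column; after 64 columns, filter and correlate...
  }
}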
Specification:
-  4 kHz sampling rate: 2 kHz voice frequency range;
-  64-point FFT subroutine: 62.5 Hz spectral resolution;
-  16 x 64 spectrogram image: around 1 second max voice password;
-  duration of the cross-correlation: < 5 milliseconds;
-  duration of the FFT + SQRT + compression: < 4 milliseconds;
-  duration of the edge enhancement: ~ 35 milliseconds.

The main cycle time frame is 16 milliseconds; it’s defined by the sampling period x FFT size: 0.25 ms x 64 = 16 milliseconds. The super-cycle of 1.024 seconds is needed only because the edge enhancement prevents all processes from being completed in less than 16 milliseconds. There are resources left to increase the sampling rate up to 8 or even 12 kHz; I just had no time to conduct experiments on whether that would be beneficial.

There is a Command Line Interface built into the software, which controls the “record” and debug “print” functions; 7 commands for now:
if (incomingByte == 'x') {           // INPUT ADC DATA
if (incomingByte == 'f') {           // FFT OUTPUT
if (incomingByte == 's') {           // SPECTROGRAM PRE-FILTERED
if (incomingByte == 'g') {           // SPECTROGRAM POST-FILTERED
if (incomingByte == 'r') {           // RECORD SPECTROGRAM TO EEPROM
if (incomingByte == 'p') {           // PLAY SPECTROGRAM FROM EEPROM
if (incomingByte == 'm') {           // FREE MEMORY BYTES

The software is written for the ATmega328p microprocessor, i.e. the Arduino Uno board or similar. For other boards, all referenced registers have to be replaced with the appropriate names for the microprocessor. It compiles on the 022 IDE; there are some conflicts with the 1.0 IDE that I haven’t felt ready to troubleshoot yet. For a better understanding of some of the math background, have a look at my previous posts.

Link to download a sketch:   Voice_Recognition_24_01

The analog front-end is the same as the one I used in my first project: Color Organ.
There is not much that could be improved in this part, and I again used both inputs: from the microphone, to do tests with my own voice, and from the “line” input, for single-tone tests generated by the computer during debugging. The next picture shows the “s” command print-out in the serial monitor window after I pronounced the word “Spectrogram”. Due to the limited size of the window, the data is printed rotated 90 degrees: left-right is the frequency band direction, and up-down is time. Lower frequencies are on the left side (60 Hz) and higher (2 kHz) on the right. At the same time, 3D images are generated at the right viewing angle.

This is how the spectrogram looks after the “g” command is entered in the serial monitor, with the word spoken right after that:

The next couple of images were created with a single-tone frequency (320 Hz), just to show more clearly the “internal properties” of the filtering; again, the “s” and “g” commands were entered:

Well, as the tone sounds continuously, it shows the filtering in one direction only, and it’s not the best tutorial on edge-enhancement theory (“home brew” lab limits). At the same time, the last picture shows that each “peak” in the original spectrogram becomes surrounded by smaller negative peaks, resulting in a “0” overall sum over the 3×3 footprint, and consequently over the whole map. In electronics this goes under the HPF name, and the essence of the process is to remove the DC component, plus attenuate low frequencies.
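The classic zero-sum example of such a filter is a 3x3 Laplacian-style kernel: a positive center weight balanced by eight negative neighbors. A minimal sketch over a 16 x 64 spectrogram (kernel weights and border handling here are illustrative, not taken from the actual sketch):

// Zero-sum 3x3 high-pass (edge enhancement) over the 16 x 64 spectrogram.
// Center weight 8, neighbors -1: the kernel sums to 0, so flat areas
// (DC and slow trends) come out as 0 and peaks get ringed by negatives.
void edgeEnhance(const int8_t in[64][16], int out[64][16]) {
  for (int t = 1; t < 63; t++) {      // time axis, skipping borders
    for (int f = 1; f < 15; f++) {    // frequency axis, skipping borders
      int sum = 8 * in[t][f];
      for (int dt = -1; dt <= 1; dt++)
        for (int df = -1; df <= 1; df++)
          if (dt != 0 || df != 0) sum -= in[t + dt][f + df];
      out[t][f] = sum;
    }
  }
}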
Excellent on-line book

Short manual:
to be completed later

