YouTuber “The Mixed Signal” has come up with a fun way to make music: spinning a gear-like ferromagnetic tonewheel next to a homemade coil pickup.
A stepper motor turns the wheel using a CNC shield under Arduino control. When set up, it’s simply a matter of programming in the proper speed via G-code to create the correct sound.
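The relationship between wheel speed and pitch is simple enough to sketch: a tonewheel with N teeth passing the pickup at R revolutions per second induces a signal at roughly N × R Hz. A quick back-of-the-envelope calculation (the 32-tooth count below is a made-up example, not the wheel from the video):

```python
# Hypothetical example: how fast must the wheel spin to produce a given note?
def rpm_for_note(freq_hz, teeth):
    """RPM needed so a wheel with `teeth` teeth induces freq_hz in the pickup."""
    return freq_hz / teeth * 60

# A4 (440 Hz) on an assumed 32-tooth wheel:
print(round(rpm_for_note(440, 32)))  # 825 RPM
```

From there it's just a matter of commanding the stepper to hold that speed for as long as the note should sound.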
The concept isn’t entirely new, as this type of assembly was used in Hammond organs produced in the middle of the last century. The Mixed Signal’s project, however, is a very interesting take on this technology, with the use of 3D-printed parts including the iron-embedded tonewheel, as well as the integration of a MIDI keyboard.
Trigonometry is a struggle for some students. Perhaps one of the reasons for this is that instruction can be something of a one-way street, and concepts can be hard to grasp until more technical building blocks are learned.
As seen here, researchers at the Universidad del Desarrollo in Chile aim to change that with a trigonometry tabletop display called TAMI, or Tangible Mathematics Interface. This nearly horizontal screen shows mathematical relationships, while allowing students to interact with them using physical controls.
The most prominent controller here is a large rotary wheel. Students rotate this to modify the angle shown in the middle, and observe how concepts like sine and cosine react to this manipulation. An Arduino Leonardo takes input from this and other controls, and passes it along to a computer. This then handles on-screen info and even plays sounds as needed!
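As a rough illustration of what such a display computes (this is not TAMI's actual code, and the encoder resolution is an assumption), mapping the wheel's position to an angle and its sine and cosine looks like:

```python
import math

# Illustrative sketch only: turn a rotary wheel's encoder count into the
# angle, sine, and cosine values a tabletop display might show.
COUNTS_PER_REV = 1024  # hypothetical encoder resolution

def wheel_to_trig(count):
    angle = (count % COUNTS_PER_REV) / COUNTS_PER_REV * 2 * math.pi
    return {"degrees": math.degrees(angle),
            "sin": math.sin(angle),
            "cos": math.cos(angle)}

# A quarter turn of the wheel lands on 90 degrees: sin = 1, cos = 0
print(wheel_to_trig(256))
```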
We’re just days away from Maker Faire Rome — The European Edition, taking place October 18-20th at Fiera di Roma. This year’s Arduino booth, which will be located inside Pavilion 8, will be broken up into three areas:
Makers: We will be showcasing the Dark Side Rover, the grand prize winner of the Ultimate Arduino Challenge contest. Nicolas Gilbert and his class, the authors of the project, will be at the booth to run practical demonstrations with the robots.
Arduino IoT Cloud and connected products: In this section, we’ll highlight two demos connected to the Arduino IoT Cloud. There will also be an exhibition of our MKR family boards, featuring a range of connectivity options like WiFi, Bluetooth, NB-IoT, Sigfox and LoRa.
The Arduino booth will include an interactive basketball installation as well — all visitors will have the opportunity to win some Arduino boards, goodies, and much more!
Finally, members of the Arduino team will join Maker Faire Rome’s program all weekend long.
On Saturday, October 19th, Massimo will give his traditional ‘State of Arduino,’ discussing the latest developments and future challenges for the company (4pm, Stage B in Pavilion 8).
If you are planning to attend, please visit Maker Faire’s Rome website to find the full agenda, plus other important information. Finally, don’t forget to invite your friends and post on social media tagging @Arduino and using the hashtag #MFR19.
Arduino is on a mission to make machine learning simple enough for anyone to use. We’ve been working with the TensorFlow Lite team over the past few months and are excited to show you what we’ve been up to together: bringing TensorFlow Lite Micro to the Arduino Nano 33 BLE Sense. In this article, we’ll show you how to install and run several new TensorFlow Lite Micro examples that are now available in the Arduino Library Manager.
The first tutorial below shows you how to install a neural network on your Arduino board to recognize simple voice commands.
Next, we’ll introduce a more in-depth tutorial you can use to train your own custom gesture recognition model for Arduino using TensorFlow in Colab. This material is based on a practical workshop held by Sandeep Mistry and Dan Coleman, an updated version of which is now online.
If you have previous experience with Arduino, you may be able to get these tutorials working within a couple of hours. If you’re entirely new to microcontrollers, it may take a bit longer.
We’re excited to share some of the first examples and tutorials, and to see what you will build from here. Let’s get started!
Note: The following projects are based on TensorFlow Lite for Microcontrollers which is currently experimental within the TensorFlow repo. This is still a new and emerging field!
Microcontrollers and TinyML
Microcontrollers, such as those used on Arduino boards, are low-cost, single-chip, self-contained computer systems. They’re the invisible computers embedded inside billions of everyday gadgets like wearables, drones, 3D printers, toys, rice cookers, smart plugs, e-scooters, and washing machines. The trend to connect these devices is part of what is referred to as the Internet of Things.
Arduino is an open-source platform and community focused on making microcontroller application development accessible to everyone. The board we’re using here has an Arm Cortex-M4 microcontroller running at 64 MHz with 1 MB of Flash memory and 256 KB of RAM. This is tiny compared with cloud, PC, or mobile platforms, but reasonable by microcontroller standards.
There are practical reasons you might want to squeeze ML on microcontrollers, including:
Function – wanting a smart device to act quickly and locally (independent of the Internet).
Cost – accomplishing this with simple, lower cost hardware.
Privacy – not wanting to share all sensor data externally.
Efficiency – smaller device form-factor, energy-harvesting or longer battery life.
There’s a final goal which we’re building towards that is very important:
Machine learning can make microcontrollers accessible to developers who don’t have a background in embedded development
On the machine learning side, there are techniques you can use to fit neural network models into memory constrained devices like microcontrollers. One of the key steps is the quantization of the weights from floating point to 8-bit integers. This also has the effect of making inference quicker to calculate and more applicable to lower clock-rate devices.
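To make the quantization step concrete, here's a minimal pure-Python sketch of symmetric 8-bit quantization – the arithmetic only, not what the TensorFlow Lite converter actually does internally:

```python
# Minimal sketch of symmetric post-training quantization: map float weights
# onto the int8 range [-127, 127] with a single scale factor.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the 8-bit values
    return [v * scale for v in q]

w = [-0.82, 0.01, 0.5, 0.33]
q, s = quantize(w)
print(q)                  # [-127, 2, 77, 51] -- 1 byte per weight instead of 4
print(dequantize(q, s))   # close to, but not exactly, the original weights
```

The small rounding error visible in the dequantized values is the accuracy cost you trade for a 4x smaller model and integer-only math.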
TinyML is an emerging field and there is still work to do – but what’s exciting is there’s a vast unexplored application space out there. Billions of microcontrollers, combined with all sorts of sensors in all sorts of places, could lead to some seriously creative and valuable TinyML applications in the future.
The Arduino Nano 33 BLE Sense board we’re using here includes a range of onboard sensors, such as:
Environmental – temperature, humidity, and pressure
Light – brightness, color, and object proximity
Unlike the classic Arduino Uno, the board combines a microcontroller with onboard sensors, which means you can address many use cases without additional hardware or wiring. The board is also small enough to be used in end applications like wearables. As the name suggests, it has Bluetooth LE connectivity so you can send data (or inference results) to a laptop, mobile app, or other BLE boards and peripherals.
Tip: Sensors on a USB stick – Connecting the BLE Sense board over USB is an easy way to capture data and add multiple sensors to single board computers without the need for additional wiring or hardware – a nice addition to a Raspberry Pi, for example.
TensorFlow Lite for Microcontrollers examples
The inference examples for TensorFlow Lite for Microcontrollers are now packaged and available through the Arduino Library Manager, making it possible to include and run them on Arduino in a few clicks. In this section we’ll show you how to run them. The examples are:
micro_speech – speech recognition using the onboard microphone
magic_wand – gesture recognition using the onboard IMU
person_detection – person detection using an external ArduCam camera
For more background on the examples you can take a look at the source in the TensorFlow repository. The models in these examples were previously trained. The tutorials below show you how to deploy and run them on an Arduino. In the next section, we’ll discuss training.
How to run the examples using Arduino Create web editor
Once you connect your Arduino Nano 33 BLE Sense to your desktop machine with a USB cable, you will be able to compile and run the following TensorFlow examples on the board using the Arduino Create web editor:
Focus on the speech recognition example: micro_speech
One of the first steps with an Arduino board is getting the LED to flash. Here, we’ll do it with a twist by using TensorFlow Lite Micro to recognise voice keywords. It has a simple vocabulary of “yes” and “no”. Remember, this model is running locally on a microcontroller with only 256 KB of RAM, so don’t expect commercial ‘voice assistant’ level accuracy – it has no Internet connection and on the order of 2000x less local RAM available.
Note that the board can be battery powered as well. Since the Arduino can be connected to motors, actuators, and more, this offers the potential for voice-controlled projects.
How to run the examples using the Arduino IDE
Alternatively, you can try the same inference examples using the Arduino IDE application.
First, follow the instructions in the next section Setting up the Arduino IDE.
In the Arduino IDE, you will see the examples available via the File > Examples > Arduino_TensorFlowLite menu.
Select an example and the sketch will open. To compile, upload, and run the example on the board, click the arrow icon:
For advanced users who prefer a command line, there is also the arduino-cli.
Training a TensorFlow Lite Micro model for Arduino
Next we will use ML to enable the Arduino board to recognise gestures. We’ll capture motion data from the Arduino Nano 33 BLE Sense board, import it into TensorFlow to train a model, and deploy the resulting classifier onto the board.
The idea for this tutorial was based on Charlie Gerard’s awesome Play Street Fighter with body movements using Arduino and Tensorflow.js. In Charlie’s example, the board streams all sensor data from the Arduino to another machine, which performs the gesture classification in Tensorflow.js. We take this further and “TinyML-ify” it by performing gesture classification on the Arduino board itself. This is made easier in our case as the Arduino Nano 33 BLE Sense board we’re using has a more powerful Arm Cortex-M4 processor and an onboard IMU.
We’ve adapted the tutorial below, so no additional hardware is needed – the sampling starts on detecting movement of the board. The original version of the tutorial adds a breadboard and a hardware button to press to trigger sampling. If you want to get into a little hardware, you can follow that version instead.
Setting up the Arduino IDE
Following the steps below sets up the Arduino IDE application, which is used both to upload inference models to your board and to download training data from it in the next section. There are a few more steps involved than using the Arduino Create web editor, because we need to download and install the specific board and libraries in the Arduino IDE.
First, we need to capture some training data. You can capture sensor data logs from the Arduino board over the same USB cable you use to program the board with your laptop or PC.
Arduino boards run small applications (also called sketches) which are compiled from .ino format Arduino source code, and programmed onto the board using the Arduino IDE or Arduino Create.
We’ll be using a pre-made sketch IMU_Capture.ino which does the following:
Monitor the board’s accelerometer and gyroscope
Trigger a sample window on detecting significant linear acceleration of the board
Sample for one second at 119Hz, outputting CSV format data over USB
Loop back and monitor for the next gesture
The sensors we choose to read from the board, the sample rate, the trigger threshold, and whether we stream data output as CSV, JSON, binary or some other format are all customizable in the sketch running on the Arduino. There is also scope to perform signal preprocessing and filtering on the device before the data is output to the log – we can cover that in another blog post. For now, you can just upload the sketch and get sampling.
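As a rough desktop-side sketch of the capture logic described above (simplified, with faked sensor readings – the real sketch reads the onboard IMU):

```python
import random

# Simplified simulation of the IMU_Capture.ino flow: wait for significant
# acceleration, then emit one second of samples as CSV rows.
SAMPLE_RATE = 119          # Hz, as in the sketch
THRESHOLD = 2.5            # |ax|+|ay|+|az| (in g) that triggers a capture
WINDOW = SAMPLE_RATE       # one second of samples

def read_imu():
    # Placeholder for reading the real accelerometer + gyroscope (6 values)
    return tuple(random.uniform(-4, 4) for _ in range(6))

def capture_one_gesture():
    rows = []
    while True:                                   # monitor until triggered
        sample = read_imu()
        if sum(abs(v) for v in sample[:3]) >= THRESHOLD:
            rows.append(sample)
            break
    while len(rows) < WINDOW:                     # fill the 1 s window
        rows.append(read_imu())
    return rows

samples = capture_one_gesture()
print("aX,aY,aZ,gX,gY,gZ")
for r in samples[:3]:
    print(",".join(f"{v:.3f}" for v in r))
print(f"... {len(samples)} samples total")
```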
To program the board with this sketch in the Arduino IDE:
Compile and upload it to the board with Sketch > Upload
Visualizing live sensor data log from the Arduino board
With that done, we can now visualize the data coming off the board. We’re not capturing data yet; this is just to give you a feel for how the sensor data capture is triggered and how long a sample window is. This will help when it comes to collecting training samples.
In the Arduino IDE, open the Serial Plotter: Tools > Serial Plotter
If you get an error that the board is not available, reselect the port:
Tools > Port > portname (Arduino Nano 33 BLE)
Pick up the board and practice your punch and flex gestures
You’ll see it only samples for a one-second window, then waits for the next gesture
You should see a live graph of the sensor data capture (see GIF below)
When you’re done be sure to close the Serial Plotter window – this is important as the next step won’t work otherwise.
Capturing gesture training data
To capture data as a CSV log to upload to TensorFlow, you can use Arduino IDE > Tools > Serial Monitor to view the data and export it to your desktop machine:
Reset the board by pressing the small white button on the top
Pick up the board in one hand (picking it up later will trigger sampling)
In the Arduino IDE, open the Serial Monitor: Tools > Serial Monitor
If you get an error that the board is not available, reselect the port:
Tools > Port > portname (Arduino Nano 33 BLE)
Make a punch gesture with the board in your hand (Be careful whilst doing this!)
Make the outward punch quickly enough to trigger the capture
Return to a neutral position slowly so as not to trigger the capture again
Repeat the gesture capture step 10 or more times to gather more data
Copy and paste the data from the Serial Monitor into a new text file called punch.csv
Clear the console window output and repeat all the steps above, this time with a flex gesture, saving the data in a file called flex.csv
Make the inward flex fast enough to trigger the capture, returning slowly each time
Note: the first line of your two CSV files should contain the fields aX,aY,aZ,gX,gY,gZ.
Linux tip: If you prefer you can redirect the sensor log output from the Arduino straight to a .csv file on the command line. With the Serial Plotter / Serial Monitor windows closed use:
$ cat /dev/cu.usbmodem[nnnnn] > sensorlog.csv
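Once you have the two CSV files, loading them for training is ordinary CSV parsing. A hedged sketch (the helper name is ours, not from the tutorial):

```python
import csv, io

# Parse a captured gesture log: each row holds the six aX..gZ readings,
# and the whole file maps to one gesture label.
def load_gesture_log(text, label):
    reader = csv.DictReader(io.StringIO(text))
    samples = [[float(row[k]) for k in ("aX", "aY", "aZ", "gX", "gY", "gZ")]
               for row in reader]
    return [(label, s) for s in samples]

punch_csv = "aX,aY,aZ,gX,gY,gZ\n0.1,-0.2,1.0,3.5,-1.2,0.8\n"
data = load_gesture_log(punch_csv, "punch")
print(data)  # [('punch', [0.1, -0.2, 1.0, 3.5, -1.2, 0.8])]
```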
Training in TensorFlow
We’re going to use Google Colab to train our machine learning model using the data we collected from the Arduino board in the previous section. Colab provides a Jupyter notebook that allows us to run our TensorFlow training in a web browser.
The Colab will step you through the following:
Set up Python environment
Upload the punch.csv and flex.csv data
Parse and prepare the data
Build and train the model
Convert the trained model to TensorFlow Lite
Encode the model in an Arduino header file
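The last step above – encoding the converted model as a C header – can be sketched in a few lines of Python (the variable names are illustrative; the Colab's actual output may differ):

```python
# Turn TensorFlow Lite flatbuffer bytes into a C header an Arduino sketch
# can compile in. In the real flow, model_bytes is the converter's output.
def to_c_header(model_bytes, var_name="model"):
    hex_bytes = ", ".join(f"0x{b:02x}" for b in model_bytes)
    return (f"const unsigned char {var_name}[] = {{ {hex_bytes} }};\n"
            f"const unsigned int {var_name}_len = {len(model_bytes)};\n")

# Tiny stand-in payload just to show the output shape:
print(to_c_header(b"TFL3"))
```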
The final step of the Colab generates the model.h file to download and include in our Arduino IDE gesture classifier project in the next section:
Create a new tab in the IDE. When asked, name it model.h
Open the model.h tab and paste in the version you downloaded from Colab
Upload the sketch: Sketch > Upload
Open the Serial Monitor: Tools > Serial Monitor
Perform some gestures
The confidence of each gesture will be printed to the Serial Monitor (0 = low confidence, 1 = high confidence)
Congratulations, you’ve just trained your first ML application for Arduino!
For added fun, the Emoji_Button.ino example shows how to create a USB keyboard that prints an emoji character in Linux and macOS. Try combining the Emoji_Button.ino example with the IMU_Classifier.ino sketch to create a gesture-controlled emoji keyboard.
It’s an exciting time with a lot to learn and explore in TinyML. We hope this blog has given you some idea of the potential and a starting point to start applying it in your own projects. Be sure to let us know what you build and share it with the Arduino community.
The unit, which is made out of a wine box, is unlocked by three servos that actuate rods to release a trio of clasps. His not-yet-fiancé had to first input the correct sequence on a keypad, then turn potentiometers to the right position, and finally traipse to the accurate location—sensed via GPS—for it to open up.
As the project’s I/O requirements went beyond a single Uno, Robertson linked a pair together using the I2C protocol, allowing the master to read GPS coordinates and control a small LCD screen, while the second Arduino takes care of user input and servo actuation.
Normally, guitar pedals take in a signal from your instrument, apply some modification, and pass it on to an amplifier. ElectroSmash’s open source device, however, looks like a guitar pedal and connects to a guitar and amp like a guitar pedal, but actually leaves the signal unmodified. Instead, it displays a variety of info about what you’re playing on its 16 x 16 LED matrix.
The Arduino Audio Meter uses an Uno for control and analysis, and acts as a VU meter by reading the incoming audio and creating LED animations. It also features a tuner function, visual metronome, frequency detector, and a simple lamp, which could all certainly be useful when playing.
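As a rough illustration of the VU meter idea (this is not ElectroSmash's firmware): compute the RMS level of a window of audio samples, then scale it to a bar height on the 16 x 16 matrix:

```python
import math

# Illustrative sketch: map a window of audio samples to the number of
# LED rows a 16x16 VU meter bar would light up.
MATRIX_ROWS = 16

def vu_rows(samples, full_scale=1.0):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    level = min(rms / full_scale, 1.0)   # clamp to the meter's range
    return round(level * MATRIX_ROWS)

quiet = [0.01 * math.sin(i / 5) for i in range(100)]
loud = [0.9 * math.sin(i / 5) for i in range(100)]
print(vu_rows(quiet), vu_rows(loud))   # louder signal lights more rows
```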
User input (besides the 1/4-inch audio jack) is via a potentiometer and encoder, and it even has a few games available if you need to blow off some steam between sets! Build kits are available here if you’d like to make your own.
Instead, they’re using a system of tubular actuators made out of heat-sensitive liquid crystal elastomer sheets. Heating elements are placed between two layers of elastomer, which is then rolled up into a cylinder, allowing the tubular digit to bend and contract.
With this novel method, they’ve been able to build a three-jaw gripper, as well as a robot that walks independently with four legs under Arduino control. While the grippers are slow at this point, taking 30 seconds to bend and minutes to return to their original position, the eventual goal is to have them react at the speed of human muscles.
As shown in the video below, Tristan Calderbank is a very talented singer and guitar player, but what’s perhaps most interesting about his performance is the percussion section. Instead of a person (or an entire band) standing beside him, a robotic shaker, tambourine, snare drum and bass drum all play together under MIDI control.
Each device is activated by an HS-311 servo—or two in the case of the snare—powered by an Arduino Uno and MIDI shield. Signals are sent to the Arduino by a laptop running Ableton Live, and servo velocity can be varied to further control sound.
The MKR WAN 1310 lets you connect:
To your own LoRa network using the Arduino Pro Gateway for LoRa
To existing LoRaWAN infrastructure like The Things Network
Or even to other boards using the direct connectivity mode
The latest low-power architecture has considerably improved the battery life of the MKR WAN 1310. When properly configured, the power consumption is now as low as 104 µA! It is also possible to use the USB port to supply power (5 V) to the board; run the board with or without batteries – the choice is yours.
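For a feel of what 104 µA means in practice, here's a back-of-the-envelope battery life estimate (the 700 mAh capacity is an assumed example, and real-world draw depends heavily on how often the radio wakes up):

```python
# Idealized battery life: capacity divided by a constant current draw.
def battery_life_days(capacity_mah, current_ua):
    hours = capacity_mah * 1000 / current_ua   # mAh -> uAh, then / uA
    return hours / 24

# A hypothetical 700 mAh LiPo at a constant 104 uA draw:
print(round(battery_life_days(700, 104)))  # 280 days
```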
Based on the Microchip SAM D21 low-power processor and a Murata CMWX1ZZABZ LoRa module, the MKR WAN 1310 comes complete with an ECC508 crypto chip, a battery charger, and 2 MB of SPI Flash, as well as improved control of the board’s power consumption.
Data logging and other OTA (over-the-air) functions are now possible thanks to the inclusion of the onboard 2 MB Flash. This exciting new feature lets you transfer configuration files from the infrastructure onto the board, create your own scripting commands, or simply store data locally to send whenever the connectivity is best. Meanwhile, the MKR WAN 1310’s crypto chip adds further security by storing credentials and certificates in the embedded secure element.
These features make it the perfect IoT node and building block for low-power wide area IoT devices.
Planning to attend Maker Faire Rome later this month? We’re currently seeking volunteers to join our team during the event—staffing tables and displays, helping with demos, and providing technical assistance when necessary.
If you volunteer with us for one shift, you won’t leave empty-handed! You’ll receive a day pass; spend two days with us, and you’ll have a ticket for the entire weekend to explore the show. Water and snacks will be provided, of course, along with some Arduino goodies.
Interested? Please fill out this questionnaire and we’ll get back to you soon! If you are under the age of 18, we will need your parents’ permission.
Planning to attend Maker Faire Rome? Join the volunteer team at the Arduino booth! We’re looking for Arduino enthusiasts to help us during the event by welcoming visitors and providing technical assistance and support during the demos.
With one volunteer shift at the Arduino booth, you’ll receive a pass for the whole day; if you’re at our booth for at least two shifts, you’ll get a pass for all three days of the event. We know how valuable your time is and how essential your help is at our booth, which is why we’ll be happy to offer you lunch and a small gift (an Arduino one, of course).
Interested in helping at the Arduino booth? Please fill out this form and we’ll get back to you very soon!
If you’re under 18 you can still take part, but only with your parents’ signed consent!
When: October 18-20, 2019 (Friday, Saturday, Sunday)