Posts with «machine learning» label

AI System Drops a Dime on Noisy Neighbors

“There goes the neighborhood” isn’t a phrase to be thrown about lightly, but when they build a police station next door to your house, you know things are about to get noisy. Just how bad it’ll be is perhaps a bit subjective, with pleas for relief likely to fall on deaf ears unless you’ve got firm documentation like that provided by this automated noise detection system.

OK, let’s face it — even with objective proof there’s likely nothing [Christopher Cooper] can do about the new crop of sirens going off in his neighborhood. Emergencies require a speedy response, after all, and sirens are perhaps just the price we pay to live close to each other. That doesn’t mean there’s no reason to monitor the neighborhood noise, though, so [Christopher] got to work. The system uses an Arduino BLE Sense module to detect neighborhood noises and Edge Impulse to classify the sounds. An ESP32 does most of the heavy lifting, including running the UI on a nice little TFT touchscreen.

When a siren-like sound is detected, the sensor records the event and tries to classify the type of siren — fire, police, or ambulance. You can also manually classify sounds the system fails to understand, and export a summary of events to an SD card. If your neighborhood noise problems tend more to barking dogs or early-morning leaf blowers, no problem — you can easily train different models.
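
If you’d like to tinker with something similar on a PC before committing to hardware, here’s a rough Python sketch of the classification idea. The real build runs an Edge Impulse model on the Arduino, so the MFCC front end, labels, and tiny Keras network below are purely illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

SAMPLE_RATE = 16000                    # assume 1-second clips at 16 kHz
LABELS = ["fire", "police", "ambulance", "other"]

def mfcc_features(audio):
    """Turn a 1-D float32 waveform into a (frames, 13) MFCC matrix."""
    stft = tf.signal.stft(audio, frame_length=400, frame_step=160)
    spectrogram = tf.abs(stft)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=40,
        num_spectrogram_bins=spectrogram.shape[-1],
        sample_rate=SAMPLE_RATE)
    log_mel = tf.math.log(tf.tensordot(spectrogram, mel_matrix, 1) + 1e-6)
    return tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :13]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(98, 13)),   # 98 frames per 1 s clip
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# With a trained model, classifying a clip looks like this:
clip = np.zeros(SAMPLE_RATE, dtype=np.float32)        # placeholder audio
features = mfcc_features(tf.constant(clip))[tf.newaxis, ...]
print(LABELS[int(np.argmax(model.predict(features)))])
```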

While we can’t say that this will help keep the peace in his neighborhood, we really like the way this one came out. We’ve seen the BLE Sense and Edge Impulse team up before, too, for everything from tuning a bike suspension to calming a nervous dog.

Full Self-Driving, on a Budget

Self-driving is currently the Holy Grail of the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no user input. While no one has brought one to market yet, at least one company has promised this feature, taken customers’ money for it, and continually moved the goalposts for delivery because of how challenging the problem has turned out to be. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving on a single stretch of highway, and the system focuses only on steering, so it doesn’t handle the accelerator or brake pedals. The highway is driven normally, using a webcam to capture images of the route and an Arduino to record the steering angle. The idea is that with enough training the system could eventually steer the car. But first some math needs to happen on the training data: the steering wheel spends most of its time not turning the car, so the straight-ahead samples have to be balanced out to keep actual steering events from looking like statistical anomalies. After training, the system does a surprisingly good job of “driving” based on this data, and does it on a budget not much larger than a laptop, a microcontroller, and a webcam.
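
That balancing step is worth a closer look. Here’s a minimal Python sketch of one common way to do it (the function names and thresholds are our own, not the project’s code): keep every turning sample, but only a fraction of the straight-ahead ones.

```python
import numpy as np

def balance_dataset(frames, angles, straight_thresh=0.05, keep_ratio=0.1,
                    seed=0):
    """Keep all turning samples but only a fraction of the straight ones."""
    rng = np.random.default_rng(seed)
    is_straight = np.abs(angles) < straight_thresh
    keep = ~is_straight | (rng.random(len(angles)) < keep_ratio)
    return frames[keep], angles[keep]

# Example: 10,000 captured frames, 90% of them driving straight ahead
angles = np.where(np.random.rand(10_000) < 0.9, 0.0, 1.0)
frames = np.zeros((10_000, 64, 64))              # placeholder images
frames, angles = balance_dataset(frames, angles)
print(len(angles))   # roughly 1,900 samples survive the cut
```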

Admittedly, this project was a proof of concept to investigate machine learning, neural networks, and other statistical algorithms used in these sorts of systems, and it doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either; this one makes it happen with the enormous computing resources of a single Android smartphone.

Machine Learning Robot Runs Arduino Uno

When we think about machine learning, our minds often jump to datacenters full of sweating, overheating GPUs. However, lighter-weight hardware can also be used to these ends, as demonstrated by [Nikodem Bartnik] and his latest robot.

The robot is charged with autonomously navigating a simple racetrack delineated by cardboard barriers. It’s a two-wheeled design with tank-style steering, controlled by an Arduino Uno and using a Slamtec RPLIDAR sensor to help map out its surroundings. The microcontroller is also armed with a Bluetooth link and an SD card for storage.

The robot was first driven around the racetrack multiple times under manual control, all the while collecting LIDAR data. This data was combined with the control inputs to create a data set for training a machine learning model. Feature selection techniques were used to pare the collected data down to the points most relevant to the driving task, as sketched below. [Nikodem] explains how the model was created and then refined to drive the robot by itself around a variety of racetrack designs.
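
As a rough illustration of what feature selection can look like here (we’re assuming a scikit-learn-style workflow; [Nikodem]’s exact method may differ), each LIDAR ray is scored by how much it says about the chosen steering command, and only the highest-scoring rays are kept so the model stays small enough for an Uno:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
scans = rng.random((500, 360))           # 500 scans, one distance per degree
commands = rng.integers(0, 3, size=500)  # 0 = left, 1 = straight, 2 = right

selector = SelectKBest(f_classif, k=16)  # keep the 16 most informative rays
reduced = selector.fit_transform(scans, commands)
print(reduced.shape)                           # (500, 16)
print(np.flatnonzero(selector.get_support()))  # which angles made the cut
```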

It’s a great primer on machine learning techniques applied to a small embedded platform.

Wearable Sensor Trained to Count Coughs

There are plenty of problems that are easy for humans to solve but almost impossibly difficult for computers. Even though it seems that modern computing power should let us crack a lot of these problems, things like identifying objects in images remain fairly difficult. Similarly, identifying specific sounds within audio samples remains problematic, and as [Eivind] found, it’s holding up a lot of medical research to boot. To solve one specific problem, he created a system for counting the coughs of medical patients.

This was built with the idea of helping people with chronic obstructive pulmonary disease (COPD). Most of the existing methods for studying the disease and treating patients involve manually counting the number of coughs on an audio recording. While there are some software solutions that save some time, this device seeks to identify coughs in real time as they happen. It does this with a model trained using tinyML to identify coughs and reject cough-like sounds. Everything runs on an Arduino Nano with BLE for communication.
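
The counting side of a system like this matters as much as the classifier, since one cough smeared across several overlapping analysis windows shouldn’t be counted several times. A Python sketch of that logic (a plausible approach, not necessarily [Eivind]’s exact one) might use a refractory period:

```python
def count_coughs(window_probs, threshold=0.8, hop_s=0.25, refractory_s=1.0):
    """window_probs: per-window cough probabilities, in time order."""
    count = 0
    last_cough_t = -refractory_s
    for i, p in enumerate(window_probs):
        t = i * hop_s
        if p > threshold and t - last_cough_t >= refractory_s:
            count += 1          # a new cough event starts here
            last_cough_t = t
        # high-probability windows inside the refractory period are
        # treated as part of the same cough
    return count

# Two bursts of high probability count as exactly two coughs
print(count_coughs([0.1, 0.9, 0.95, 0.2, 0.1, 0.85, 0.9, 0.1]))  # -> 2
```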

While the only data the model has been trained on are sounds from [Eivind], the existing prototypes do seem to show promise. With more sound data this could be a powerful tool for patients with this disease. And even though this uses machine learning on a small platform, we have seen before that Arduinos are plenty capable of being effective machine learning solutions with the right tools on board.


Weather Station Predicts Air Quality

Measuring air quality at any particular location isn’t too complicated. A sensor or two and a small microcontroller are generally all that’s needed. Predicting the upcoming air quality is a little more complicated, though, since so many factors determine how safe it will be to breathe the air outside. Luckily, we don’t need to know all of these factors and their complex interactions in order to predict air quality. We can train a computer to do that for us, as [kutluhan_aktar] demonstrates with a machine-learning-capable air quality meter.

The build is based around an Arduino Nano 33 BLE connected to a small weather station outside. It specifically monitors ozone concentration as a benchmark for overall air quality, but also uses an anemometer and a BMP180 precision pressure and temperature sensor to help train the algorithm. The weather data is sent over Bluetooth to a Raspberry Pi running TensorFlow. Once the neural network had been trained, the model was sent back to the Arduino, which can now use it to make much more accurate predictions of future air quality.
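
For the curious, the train-then-deploy loop can be surprisingly compact. Here’s a hedged Python sketch of what the Raspberry Pi side might look like; the data, target, and network shape are placeholders, and only the general pipeline (Keras training, then TensorFlow Lite conversion for the microcontroller) mirrors the build:

```python
import numpy as np
import tensorflow as tf

# columns: ozone (ppb), wind speed (m/s), pressure (hPa), temperature (C)
readings = np.random.rand(1000, 4).astype(np.float32)        # placeholder
future_quality = np.random.rand(1000, 1).astype(np.float32)  # placeholder

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),          # predicted air-quality value
])
model.compile(optimizer="adam", loss="mse")
model.fit(readings, future_quality, epochs=5, verbose=0)

# Convert for on-device inference; the Arduino runs the .tflite model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("air_quality.tflite", "wb").write(converter.convert())
```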

The build goes into quite a bit of detail on setting up the models, training them, and then using them on the Arduino. It’s an impressive build capped off with a fun 3D-printed case that resembles an old windmill. Using machine learning to help predict the weather is starting to become more commonplace as well, as we have seen before with this weather station that can predict rainfall intensity.

Machine Learning Shushes Stressed Dogs

If there’s one demographic that has benefited from people being stuck at home during Covid lockdowns, it would be dogs. Having their humans around 24/7 meant more belly rubs, more table scraps, and more attention. Of course, for many dogs, especially those who found their homes during quarantine, this has led to attachment issues as their human counterparts have begun to return to work and school.

[Clairette] has had a particularly difficult time adapting to her friends leaving every day, but thankfully her human [Nathaniel Felleke] was able to come up with a clever solution. He trained a TinyML neural net to detect when she barked and used an Arduino to play a sound bite to soothe her. The sound bites in question are recordings of [Nathaniel]’s mom either praising or scolding [Clairette], and as you can see from the video below, they seem to work quite well. To train the network, [Nathaniel] worked with several datasets to avoid overfitting, including one he created himself using actual recordings of barks and ambient sounds within his own house. He used the Eon Tuner, a tool by Edge Impulse, to help find the best model and to perform the training. He uploaded the trained network to an Arduino Nano 33 BLE Sense running Mbed OS, and a second Arduino handles playing sound bites via an Adafruit Music Maker FeatherWing.
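
The multi-dataset trick is a good habit for any small audio model. Here’s a Python sketch of the idea (dataset names and sizes are hypothetical): mix the sources, then hold out a stratified validation split so overfitting to any one source shows up as a gap between training and validation accuracy.

```python
import numpy as np
from sklearn.model_selection import train_test_split

public_barks = np.random.rand(300, 13)   # placeholder feature vectors
home_barks = np.random.rand(80, 13)      # recordings from the house
ambient = np.random.rand(400, 13)        # non-bark household sounds

X = np.concatenate([public_barks, home_barks, ambient])
y = np.concatenate([np.ones(380), np.zeros(400)])  # 1 = bark, 0 = other

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(X_train.shape, X_val.shape)        # (624, 13) (156, 13)
```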

While machine learning may sound like a bit of an extreme solution to curb your dog’s barking, it’s certainly innovative, and even appears to have been successful. Paired with this web-connected treat dispenser, you could keep a dog entertained for hours.

Making Minty Fresh Music With Markov Chains: The After Eight Step Sequencer

Step sequencers are fantastic instruments, but they can be a little, well, repetitive. At its core, the step sequencer is a pretty simple device: it loops through a series of notes or phrases that are, well, sequentially ordered into steps. The operator can change the steps while the sequencer is looping, but it generally has a repetitive feel, as the musician isn’t likely to erase all of the steps and enter an entirely new set between phrases.

Enter our old friend machine learning. If we introduce a certain variability on each step of the loop, the instrument can help the musician out a bit here, making the final product a bit more interesting. Such an instrument is exactly what [Charis Cat] set out to make when she created the After Eight Step Sequencer.

The After Eight is an eight-step sequencer that allows the artist to set each note with a series of potentiometers (which are, of course, housed in an After Eight mint tin). The potentiometers are read by an Arduino, which passes MIDI information to a computer running the popular music-oriented visual programming language Max/MSP. The software uses a series of Markov chains to augment the musician’s inputted series of notes, effectively working with the artist to create music. The result is a fantastic piece of music that’s different every time it’s performed. Make sure to check out the video at the end for a fantastic overview of the project (and to hear the After Eight in action, of course)!
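
For a feel of how a Markov chain livens up a fixed loop, here’s a simplified Python sketch (the real patch lives in Max/MSP and is more sophisticated): learn note-to-note transitions from the programmed sequence, then occasionally let the chain pick the next note instead of the fixed step.

```python
import random
from collections import defaultdict

def build_transitions(sequence):
    """Map each note to the notes that follow it (the loop wraps around)."""
    table = defaultdict(list)
    for current, nxt in zip(sequence, sequence[1:] + sequence[:1]):
        table[current].append(nxt)
    return table

def next_note(step, sequence, table, previous, variability=0.3):
    """Usually play the programmed step; sometimes follow the chain."""
    if table[previous] and random.random() < variability:
        return random.choice(table[previous])
    return sequence[step % len(sequence)]

sequence = [60, 62, 64, 60, 67, 64, 62, 60]  # eight MIDI notes
table = build_transitions(sequence)
note = sequence[0]
for step in range(16):                       # two passes through the loop
    note = next_note(step, sequence, table, note)
    print(note, end=" ")
```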

[Charis Cat]’s wonderful creation reminds us of some of the work [Sara Adkins] has done, blending human performance with complex algorithms. It’s exactly the kind of thing we love to see at Hackaday: the fusion of a musician’s artistic intent with the stochastic unpredictability of a machine learning system to produce something unique.

Thanks to [Chris] for the tip!

Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it’s actually a technology that has made it into the real world. While these devices often require advanced and expensive equipment to interpret brain waves properly, with the right machine learning system it’s possible to build something like this mind-controlled flamethrower on a much smaller budget. (Video, embedded below.)

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical combining the two technologies. Instead of turning to an EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don’t forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
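
To make the control path concrete, here’s a deliberately stripped-down Python sketch of the Pi-to-Arduino link. The serial port, baud rate, command bytes, and bare attention threshold are all our own placeholders; the real build classifies patterns with a TensorFlow model rather than a simple threshold:

```python
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical port

def on_reading(attention):
    """attention: a 0-100 value parsed from the headset's data stream."""
    if attention > 80:          # stand-in for the trained classifier
        arduino.write(b"F")     # hypothetical 'fire' command
    else:
        arduino.write(b"S")     # hypothetical 'safe' command

for value in [35, 50, 88, 92, 40]:   # placeholder readings
    on_reading(value)
```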

While the build took many hours of training to gather enough data for the neural network, it works as the proof of concept he was hoping for. Still, [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now, though, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one, which converts a prosthetic arm into a mind-controlled music synthesizer.

Dr. Squiggles: An AI Rhythm Robot

Build a smart octopus drumbot that listens, learns, and plays along with you



Generate Positivity with Machine Learning

Gesture recognition and machine learning are getting a lot of air time these days, as people understand them better and develop methods to implement them on many different platforms. Of course, this gives easier access to people who want to make use of the new tools beyond strictly academic or business environments. For example, rollerblading down the streets of Atlanta with a gesture-recognizing, streaming TV that [nate.damen] wears over his head.

He’s known as [atltvhead], and the TV he wears has a functional LED screen on the front. The whole setup reminds us a little of Deep Thought. The screen can display various animations, which are controlled through Twitch chat as he streams his journeys around town. He wanted to add a little more interaction to the animations and simplify his user interface, so he set up a gesture-sensing sleeve that can augment the animations based on how he’s moving his arm. He uses an Arduino in the arm sensor and a Raspberry Pi in the backpack to tie it all together, and he goes deep into the weeds explaining how to use TensorFlow to recognize the gestures. The video linked below shows a lot of his training runs for the machine learning system as well.
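
Gesture recognition on accelerometer data usually starts with windowing the stream into fixed-length chunks that a model can classify. Here’s a minimal Python sketch of that step; the window length, hop, and sample rate are assumptions rather than details from [nate.damen]’s pipeline:

```python
import numpy as np

def window_stream(samples, window=50, hop=25):
    """samples: (N, 3) array of x/y/z readings -> (num, window, 3) batches."""
    windows = [samples[i:i + window]
               for i in range(0, len(samples) - window + 1, hop)]
    return np.stack(windows)

stream = np.random.rand(500, 3)   # ~5 s of accelerometer data at 100 Hz
batches = window_stream(stream)
print(batches.shape)              # (19, 50, 3): one gesture guess per window
```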

[nate.damen] didn’t stop at the cheerful TV head, either. He also wears a backpack that displays uplifting messages to people as he passes them on his rollerblades, not wanting to leave out those who don’t get to see him coming. We think this is a great, uplifting project, and the amount of work that went into getting the gesture-recognition machine learning algorithm right is impressive on its own. If you’re new to TensorFlow, though, we have featured some projects that can do reliable object recognition using little more than a Raspberry Pi and a camera.
