Posts with «featured» label

Arduino Uno controls a trio of singing pumpkins

Halloween is just around the corner, and to celebrate, fadecomic decided to set up a trio of singing animatronic pumpkins to belt out scary songs. 

The project uses a Raspberry Pi for high-level control and a browser interface, and sends animation commands to an Arduino Uno via USB serial.

The Uno takes this data and translates it into actual pumpkin movements coordinated with the music. Each pumpkin in the resulting trio uses its own servo to lift the top of its foam gourd like a gigantic mouth, and features PWM-driven LED eyes. A light show controlled by SSRs completes the spooky musical effect.
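The write-up doesn’t spell out the serial protocol, but the Uno-side firmware could be as simple as the sketch below, which reads single-byte commands for one pumpkin’s jaw servo and LED eyes. The command characters, baud rate, pin numbers, and servo angles are assumptions for illustration, not details from the project.

#include <Servo.h>

// Pins, angles, and command characters below are illustrative assumptions.
const int JAW_PIN = 9;       // servo that lifts the pumpkin's lid "mouth"
const int EYE_PIN = 5;       // PWM pin driving the LED eyes
const int JAW_CLOSED = 10;   // servo angles in degrees
const int JAW_OPEN = 70;

Servo jaw;

void setup() {
  Serial.begin(115200);      // USB serial link from the Raspberry Pi
  jaw.attach(JAW_PIN);
  jaw.write(JAW_CLOSED);
  pinMode(EYE_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() == 0) return;
  char cmd = Serial.read();
  switch (cmd) {
    case 'O':                            // open the mouth on a sung syllable
      jaw.write(JAW_OPEN);
      break;
    case 'C':                            // close the mouth
      jaw.write(JAW_CLOSED);
      break;
    case 'E':                            // 'E' + one byte = eye brightness
      while (Serial.available() == 0) {} // wait for the value byte
      analogWrite(EYE_PIN, Serial.read());
      break;
  }
}

The Pi-side animation player would then just stream bytes like ‘O’, ‘C’, and ‘E’ plus a brightness value down the USB serial port in time with the music.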

Build info is available here and the Arduino code can be found on GitHub.

Paper-cut light box replicates the Philadelphia skyline and its actual moon phases

Rich Nelson wanted to make a unique gift for his brother, and decided on a paper-cut light box of the skyline of Philadelphia, the city where he lives.

The resulting device is controlled by an Arduino Nano, and not only features a trio of lights and layers to represent buildings and foliage, but also a moving sun and moon display that changes depending on the actual time and date.

Timing is accomplished via an RTC module, while the sun/moon is displayed on a small TFT screen that moves across the sky using a servo motor and extension arm. The build can be seen in the video below, and code as well as CAD info is on GitHub for your perusal.
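As a rough illustration of how the RTC-driven sun/moon arm could work, here’s a minimal sketch that maps the time of day to a servo angle. It assumes a DS3231 read through Adafruit’s RTClib, a servo on pin 9, and a fixed 6:00-to-18:00 daylight window; none of these details are taken from Nelson’s actual code.

#include <Servo.h>
#include <RTClib.h>

RTC_DS3231 rtc;
Servo arm;                         // extension arm carrying the TFT sun/moon

const int DAY_START = 6 * 60;      // assumed daylight window: 6:00...
const int DAY_END = 18 * 60;       // ...to 18:00 (illustrative only)

void setup() {
  rtc.begin();
  arm.attach(9);                   // servo pin is an assumption
}

void loop() {
  DateTime now = rtc.now();
  int minutesOfDay = now.hour() * 60 + now.minute();

  // Sweep 0-180 degrees across the "sky" during the day for the sun, then
  // repeat the same arc overnight for the moon (both spans are 12 hours
  // here, which keeps the math identical for day and night).
  bool daytime = minutesOfDay >= DAY_START && minutesOfDay < DAY_END;
  int t = daytime ? minutesOfDay - DAY_START
                  : (minutesOfDay + 24 * 60 - DAY_END) % (24 * 60);
  arm.write(map(t, 0, DAY_END - DAY_START, 0, 180));

  delay(60000);                    // one update per minute is plenty
}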


FacePush adds extra realism to your VR experience

Haptic feedback is commonly used with handheld controllers and the like. In a virtual reality environment, however, it could also be delivered through the other interface attached to your body: the VR headset itself.

That’s the idea behind FacePush, which employs an Arduino Uno-powered pulley system to place tension on the straps of an HTC Vive headset. A corresponding pushing force is felt by the wearer through the headset in response to this action, creating yet another way to help immerse users in a virtual world. 
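The research paper describes the system at a higher level, but the Uno side of such a pulley setup could be as simple as mapping a force level streamed from the VR application to motor PWM. The single-digit serial protocol, pins, and driver wiring below are invented for illustration:

// Hypothetical Uno-side sketch: the VR app streams a force level 0-9 over
// serial and the Uno maps it to pulley-motor PWM to tension the straps.
const int MOTOR_PWM = 6;   // PWM to the pulley motor driver (assumed pin)
const int MOTOR_DIR = 7;   // direction: wind (tension) vs. unwind

void setup() {
  Serial.begin(115200);
  pinMode(MOTOR_PWM, OUTPUT);
  pinMode(MOTOR_DIR, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c >= '0' && c <= '9') {
      int force = c - '0';                   // 0 = slack, 9 = max push
      digitalWrite(MOTOR_DIR, force > 0 ? HIGH : LOW);
      analogWrite(MOTOR_PWM, map(force, 0, 9, 0, 255));
    }
  }
}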

Applications tried so far include a boxing game, a dive simulator, and 360-degree guidance. You can check it out in a short demo below, and read more about it in the full research paper here.

Neon skulls illuminate to the MIDI beat

LEDs, whether single-color or programmable, have enabled makers to create a wide variety of vibrant projects at a reasonable price. Neon sign projects, which require sophisticated glassmaking techniques as well as high voltage for control, aren’t as common, but they still have their adherents. Some have even experimented with making them sound-reactive.

Up until now, sound control meant using a microphone to detect audio signals and flash accordingly. David Garges, however, is using an Arduino Leonardo equipped with an Olimex MIDI shield to individually activate three neon skulls, crafted by artist Dani Bonnet. 

His setup can be programmed via MIDI directly, or can use beat analysis software to activate the proper lights depending on audio output. 

There has been much desire in the Neon Art community for clean and responsive musical interaction with high-voltage Neon Signs. Currently, the existing infrastructure uses a microphone to detect audio and flash accordingly. Unfortunately, due to this method of processing the Neon always responds with a small delay. Clapping and shouting can also disrupt the interaction when using an on-board microphone.

This project solves that problem by transmitting musical data via the MIDI protocol to a controller, which then activates the neon tubes accordingly. I have designed and built a system that takes a slightly different approach but accomplishes what the Neon Art community desires.

This project offers two performance modes: one that allows for electronic artists to perform seamlessly using MIDI instruments, and one that allows DJs to feed BPM analysis to the system to synchronize the Neon flashing with actual recorded music which enables Real-Time Audio-Controlled Neon.
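Garges’ exact firmware isn’t shown here, but the core of a MIDI-to-neon mapping on a Leonardo could look something like the sketch below. It assumes the shield delivers MIDI on the hardware UART (Serial1) and that each skull’s high-voltage supply is switched through a driver on a digital pin; the pins and note assignments are illustrative.

// Sketch of the MIDI note-to-skull mapping; pins/notes are assumptions.
const byte SKULL_PINS[3] = {2, 3, 4};       // drivers switching each skull
const byte SKULL_NOTES[3] = {60, 62, 64};   // C4, D4, E4 trigger the skulls

void setSkull(byte note, bool on) {
  for (byte i = 0; i < 3; i++)
    if (SKULL_NOTES[i] == note) digitalWrite(SKULL_PINS[i], on ? HIGH : LOW);
}

void setup() {
  Serial1.begin(31250);                     // standard MIDI baud rate
  for (byte i = 0; i < 3; i++) pinMode(SKULL_PINS[i], OUTPUT);
}

void loop() {
  static byte runningStatus = 0;
  static byte data[2];
  static byte count = 0;

  while (Serial1.available() > 0) {
    byte b = Serial1.read();
    if (b & 0x80) { runningStatus = b; count = 0; continue; } // status byte
    byte type = runningStatus & 0xF0;
    if (type != 0x90 && type != 0x80) continue;  // only note on/off matter
    data[count++] = b;                           // data[0]=note, [1]=velocity
    if (count == 2) {
      count = 0;
      if (type == 0x90 && data[1] > 0) setSkull(data[0], true);
      else setSkull(data[0], false);   // note off, or note on with velocity 0
    }
  }
}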

Be sure to check out the demo in the video below!

Announcing Arduino’s Coordinated Vulnerability Disclosure Policy

A little less than a month ago, I joined Arduino as their Chief Information Security Officer. I’ve been in touch with the team for the past couple of months and feel incredibly lucky to be part of such a talented and driven group of people.

We’re working hard on developing a robust, well-rounded security program that fits our organisation, and we’re busy improving our security posture across all departments. I am a true believer that it all starts with introducing a strong culture of security awareness, one where employees feel confident and empowered to take action against security issues.

Today, I’m thrilled to announce the first release of Arduino’s Coordinated Vulnerability Disclosure (CVD) Policy.

We used some great references when putting it together and we’d like to give them a shout-out here: HackerOne’s VDP guidelines, CEPS’ report on “Software Vulnerability Disclosure in Europe,” and the US DoJ Cyber Security unit’s VDP framework. We also took into consideration recent Senate testimony from experts in vulnerability disclosure on the role hackers can play in strengthening security, Dropbox’s announcement on protecting researchers, and 18F’s own policy. I also want to publicly thank Amit Elazari Bar On, a doctoral law candidate (J.S.D.) at UC Berkeley School of Law and a lecturer in the UC Berkeley School of Information’s Master in Cybersecurity program, for her useful advice and for providing the amazing “#legalbugbounty” standardisation project.

We’re also happy to announce that all of the text in our policy is a freely copyable template. We’ve done this because we’d like to see others take a similar approach. We’ve put some effort into this across our teams, and if you like what you see, please use it. Similarly, if you have improvements to suggest, we’d love to hear from you.

What is CVD?

Coordinated vulnerability disclosure (CVD) is a process aimed at mitigating/eradicating the potential negative impacts of vulnerabilities. It can be defined as “the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of vulnerabilities and their mitigation to various stakeholders, including the public.”

Figure 1: Relationships among actors in the CVD process. Source: “The CERT Guide to Coordinated Vulnerability Disclosure,” Software Engineering Institute, Carnegie Mellon University

Why is it important for us?

At Arduino, we consider the security of our systems and products a top priority. No technology is perfect, and Arduino believes that working with skilled security researchers across the globe is crucial in identifying weaknesses in any technology. We want security researchers to feel comfortable reporting vulnerabilities they’ve discovered, as set out in this policy, so that we can fix them and keep our information safe.

If you believe you’ve found a security issue in our products or services, we encourage you to notify us. We welcome working with you to resolve the issue promptly.

This policy describes how to send us vulnerability reports and how long we ask security researchers to wait before publicly disclosing vulnerabilities.

Where can I find it?

A copy of the policy is published on our Vulnerability Disclosure Policy page. The official document lives in GitHub. If you would like to comment or suggest a change to the policy, please open a GitHub issue.

Thank you for helping keep Arduino and our users safe!

— Gianluca Varisco

When in Rome: Join us at Europe’s Biggest Maker Faire!

We’re just days away from Maker Faire Rome — The European Edition, where we will be partnering with Microchip in Pavilion 8.  This year’s booth will be broken up into three areas:

  • Education: The Arduino Education team will be exhibiting the flagship CTC 101 program and the Engineering Kit. Starting at 11am, there will be 15-minute demos every hour that address the ways Arduino can be implemented as a learning tool from primary schools all the way up to universities.
  • Makers: We have been working on a pair of new projects to highlight the key specs and possible use cases of the Uno WiFi. Moreover, visitors will have the opportunity to meet the winner of the Arduino/Distrelec Robotics & Automation Contest.
  • Internet of Things: This section will focus on a smart greenhouse connected to the Arduino IoT Cloud, along with two demos of the MKR Vidor 4000. Finally, we will be showcasing some practical demos of how startups and companies have turned to Arduino to bring their products and services to market.

The Arduino booth will also include a special station dedicated to the Arduino Store, where we’ll be giving away 500 discount vouchers for online purchases on a first-come, first-served basis.

But that’s not all! Members of the Arduino team can be found throughout Maker Faire Rome’s program all weekend long. The schedule is as follows:

Friday, October 12th

10:30am: Opening Conference (Pavilion 10 – Room 1/Sala Alibrandi): Massimo Banzi, Arduino co-founder, will join Maker Faire’s opening conference ‘Groundbreakers: Pioneers of the Future’ with the talk Democratizing Industry 4.0. Register here.

2:30pm – 5:30pm (Room 17 SC3): Debugging with Arduino: A hands-on workshop with Microchip’s Wizard of Make, Bob Martin, and Arturo Guadalupi, Arduino Hardware Design Engineer, which will explore advanced debugging techniques for Arduino sketches. More info here.

2:30pm – 3:30pm (Pavilion 9 – Room 11): CTC: Bring Open-Source into Your Classroom: In partnership with Campus Store Academy, this informative workshop will walk you through implementing Arduino in the classroom with Arduino CTC 101. Register here.

Saturday, October 13th

11:30am – 12:30pm (Pavilion 7 – Room 7): Arduino MKR Vidor: Democratizing FPGA: Led by Martino Facchin, Arduino Senior HW Engineer, this session will discuss how the MKR Vidor combines the power and flexibility of an FPGA with the ease of use of Arduino. More info here.

11:45am – 12:45pm (Pavilion 9 – Room 11): CTC: Bring Open-Source into Your Classroom: In partnership with Campus Store Academy, this informative workshop will walk you through implementing Arduino in the classroom with Arduino CTC 101. Register here.

2:15pm – 3:15pm (Pavilion 7 – Room 7): Arduino IoT Cloud: The Internet of Things Revolution: Luca Cipriani, Arduino CIO, will focus on the potential of the Arduino IoT Cloud, the latest developments in the Arduino ecosystem, as well as how to build connected objects in a quick, easy, and secure manner. More info here.

4:15pm – 5:15pm (Pavilion 9 – Room 13): Arduino Engineering Kit: Advanced Programming and Learning Applications: In collaboration with Campus Store Academy, this workshop concentrates on helping tomorrow’s engineers approach mechatronics and automated control. Register here.

5:45pm – 6:45pm (Pavilion 9 – Room 11): STEAM with Arduino: In collaboration with Campus Store Academy, this session will introduce you to the Arduino Starter Kit Classroom Pack and how Arduino is being used as a flexible learning tool. More info here.

Sunday, October 14th

2:45pm – 3:45pm: Shape Your Future with MATLAB and the Arduino Engineering Kit: In collaboration with the MathWorks team and Jose Garcia, HW Engineer at Arduino, this talk will feature live demos of a robot designed and controlled with Arduino and MATLAB. More info here.

4:15pm – 5:45pm (Pavilion 9 – Room 11): CTC: Bring Open-Source into Your Classroom: In partnership with Campus Store Academy, this informative workshop will walk you through implementing Arduino in the classroom with Arduino CTC 101. Register here.

Want to learn more? The entire agenda and all other important information are available on Maker Faire Rome’s website. Planning to attend? Save on admission using the code: MFR18EBGMT.


CasioKeyBot plays electronic keyboard with automated fingers

Electronic keyboards have been around for many years, taking human input and translating it into a variety of sounds. In a strange twist on this technology, Igor Angst has decided to swap in a robot to push the synthesizer’s keys, using a laser-cut finger setup controlled by an Arduino Uno.

The MIDI sequence to be played is provided by a computer running ALSA (Advanced Linux Sound Architecture) and interpreted by a C program, which translates it into USB serial signals that the Uno can use. The Uno then actuates its wooden fingers, playing a pleasing tune along with what appears to be keyboard-provided accompaniment in the video below.

I really like the crappy sound of those ‘80s toy keyboards. Unfortunately, I am a lousy live keyboarder and I only have so many hands. So I thought about adding MIDI capability to my good old Casio SA-21. The simplest way to do this is obviously building a robotized hand with 8 servo motors controlled by an Arduino microcontroller, which in turn receives its commands through the serial-over-USB interface sent by a tiny C application that connects to the ALSA sequencer world of my Linux live music setup.
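For a sense of what the Uno end of that chain might look like, here’s a hedged sketch in which the host’s C program sends one byte per update, with each bit commanding one of the eight servo fingers down or up. The bitmask protocol, pins, and angles are assumptions rather than Angst’s actual code.

#include <Servo.h>

const byte NUM_FINGERS = 8;
const byte SERVO_PINS[NUM_FINGERS] = {2, 3, 4, 5, 6, 7, 8, 9}; // assumed pins
const int UP_ANGLE = 90;    // finger hovering above the key
const int DOWN_ANGLE = 60;  // finger pressing the key

Servo fingers[NUM_FINGERS];

void setup() {
  Serial.begin(115200);     // serial-over-USB link from the host C program
  for (byte i = 0; i < NUM_FINGERS; i++) {
    fingers[i].attach(SERVO_PINS[i]);
    fingers[i].write(UP_ANGLE);
  }
}

void loop() {
  if (Serial.available() > 0) {
    byte mask = Serial.read();               // bit i = state of finger i
    for (byte i = 0; i < NUM_FINGERS; i++)
      fingers[i].write(bitRead(mask, i) ? DOWN_ANGLE : UP_ANGLE);
  }
}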

Laser cutter files are available on the project’s write-up and code can be found on GitHub.

Twinky, the Arduino robot assistant

In the middle of a project, you may find that what you’re making is similar to something that’s been done before. Such was the case with Adrian Lindermann when he started constructing his “Twinky” robot and found the Jibo social bot had a similar design. 

Like any good hacker, he pressed ahead with his build, creating a small yellow companion that can respond to voice commands via a SpeakUp click module, along with pressure on its face/touchscreen.

Control is provided by an Arduino Mega, and Twinky can interact with other devices using a Bluetooth module. The robot’s head can even turn in order to point the display in the needed direction, and it’s able to play sound through an audio amplifier and speaker. 

IT CAN SPEAK! PLAY MUSIC, SET TIMERS, ALARMS, TURN ON/OFF THE LIGHTS OR OTHER APPLIANCES. IT HAS A CALCULATOR AND A WEATHER STATION! DATE & TIME, BLUETOOTH 4.0, EVERYTHING WITH VOICE COMMANDS!!! And also with a touchscreen, it has one little motor so it can turn around when one of the two microphones hear you talk or make a noise.
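That sound-localization trick, turning toward whichever of the two microphones hears the louder noise, is simple to sketch. The following is a minimal illustration assuming analog electret mics and a servo-driven head; the pins, threshold, and angles are invented:

#include <Servo.h>

const int MIC_LEFT = A0;     // assumed analog mic inputs
const int MIC_RIGHT = A1;
const int THRESHOLD = 200;   // ignore ambient noise below this reading

Servo head;

void setup() {
  head.attach(9);
  head.write(90);            // face forward
}

void loop() {
  int left = analogRead(MIC_LEFT);
  int right = analogRead(MIC_RIGHT);

  if (max(left, right) > THRESHOLD) {
    // Turn toward whichever microphone heard the louder sound.
    head.write(left > right ? 135 : 45);
    delay(2000);             // hold the pose, then return to center
    head.write(90);
  }
}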

For more on this wonderful little robot, check out the project’s write-up and build files here.

Measure noise levels in your home with the Hello Light

After realizing that asking his kids to keep the noise down was meaningless without some sort of standard, maker Jeremy S. Cook decided to construct the “Hello Light.”

This cylindrical device measures sound with an electret microphone and an Arduino Nano, then commands a set of RGBW lights to progressively light up depending on the noise level.  
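The core sound-to-light mapping is straightforward: sample the microphone, take the peak-to-peak swing as loudness, and light a proportional number of pixels. Here’s a minimal sketch of that idea, assuming an amplified electret mic on A0 and an Adafruit NeoPixel RGBW strip; the pin numbers, pixel count, and thresholds are illustrative rather than from Cook’s build.

#include <Adafruit_NeoPixel.h>

const int MIC_PIN = A0;
const int LED_PIN = 6;
const int NUM_PIXELS = 12;

Adafruit_NeoPixel strip(NUM_PIXELS, LED_PIN, NEO_GRBW + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show();
}

void loop() {
  // Sample the mic for 50 ms and take the peak-to-peak swing as "loudness".
  unsigned long start = millis();
  int lo = 1023, hi = 0;
  while (millis() - start < 50) {
    int s = analogRead(MIC_PIN);
    if (s < lo) lo = s;
    if (s > hi) hi = s;
  }
  int level = map(hi - lo, 0, 300, 0, NUM_PIXELS);  // 300 ~ shouting (guess)

  // Light one pixel per step of loudness, shifting green toward blue.
  strip.clear();
  for (int i = 0; i < min(level, NUM_PIXELS); i++)
    strip.setPixelColor(i, strip.Color(0, 255 - i * 20, i * 20, 0));
  strip.show();
}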

The Hello Light eventually ended up as more of a game to see who could trigger the flashing volume limit warning, so it wasn’t particularly effective for its intended purpose. It does, however, make a fun interactive decoration, and also features a random lighting mode and a slowly blinking white light setting.

Code for the project is available on GitHub, and the build process can be seen in the clip below.

MobiLimb is a robotic finger that plugs into your smartphone

You’re constantly poking and prodding your smartphone throughout the day, but have you ever wondered what would happen if this little computer could poke back? Well now that’s actually possible, with the MobiLimb accessory by Marc Teyssier and a team stretching across two universities in France.

The device uses an Arduino to interface with the phone or other smart device via its micro USB port, and powers a servo-actuated robotic manipulator with five degrees of freedom. The servos give the artificial finger enough power to pull the phone itself across the ground, and for other interactions like acting as a physical avatar, propping the phone up as a stand, or even taking input as a joystick apparatus.

MobiLimb is a new shape-changing component with a compact form factor that can be deployed on mobile devices. It is a small 5 DoF serial robotic manipulator that can be easily added to (or removed from) existing mobile devices (smartphone, tablet). In the spirit of human augmentation, which aims at overcoming human body limitations by using robotic devices, our approach aims at overcoming mobile device limitations (static, passive, motionless) by using a robotic limb.

This approach preserves the form factor of mobile devices and the efficiency of their I/O capabilities while introducing new ones:

  • The users can manipulate and deform the robotic device (input)
  • They can see and feel it (visual and haptic feedback), including when its shape is dynamically modified by the mobile device.
  • As a robotic manipulator, it can support additional modular elements (LED, shells, proximity sensors).
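While the team’s actual firmware isn’t reproduced here, driving a five-joint manipulator like this from a host device can be sketched as a simple framed serial protocol: a header byte followed by one angle per joint. Everything below (the framing, pins, and neutral pose) is an assumption for illustration.

#include <Servo.h>

const byte NUM_JOINTS = 5;
const byte SERVO_PINS[NUM_JOINTS] = {3, 5, 6, 9, 10};  // assumed PWM pins

Servo joints[NUM_JOINTS];

void setup() {
  Serial.begin(115200);
  for (byte i = 0; i < NUM_JOINTS; i++) {
    joints[i].attach(SERVO_PINS[i]);
    joints[i].write(90);                 // start from a neutral pose
  }
}

void loop() {
  // Wait until a full frame could have arrived, then check for the
  // 0xFF header and apply one angle byte per joint.
  if (Serial.available() >= NUM_JOINTS + 1 && Serial.read() == 0xFF) {
    for (byte i = 0; i < NUM_JOINTS; i++) {
      int angle = Serial.read();
      joints[i].write(constrain(angle, 0, 180));
    }
  }
}

A host app could then stream pose frames continuously to animate the finger, while reading sensor data back over the same link for the input modes listed above.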

More info is available in Teyssier’s write-up, and you can see it in action in the video below.