Posts with «featured» label

Arduino on GitHub Actions

This post is from Massimiliano Pippi, Senior Software Engineer at Arduino.

GitHub Actions is the name of the SDLC (Software Development Life Cycle) system introduced by GitHub about a year ago, currently in public beta and approaching general availability. When you read SDLC, think of a CI/CD system generic enough to let you define sequences of operations that aren't necessarily limited to building, testing, and deploying code: GitHub Actions can also help you automate processes in code reviews, issue triaging, and repository management.

GitHub Actions have been part of the tools we use at Arduino for a while now, helping us solve a wide range of problems, from automating workflows within our backend infrastructure to managing the release process of our open source software. In the spirit of giving back to the community, we started publishing some of the Actions we developed internally so that they can be used right away from any GitHub account that's been granted access to the public beta of GitHub Actions, and eventually from any GitHub repository.

Our Actions are available from this repository, and there's one in particular we think the Arduino community will be happy to have. It's called setup-arduino-cli, and under the hood it takes all the steps necessary to make the `arduino-cli` binary available in a Workflow. This means that in any step of your Workflow you can leverage the long list of features of the Arduino CLI, as in the following complete workflow (the Example section below explains the scenario it covers):

# This is the name of the workflow, visible on GitHub UI.
name: test
 
# Here we tell GitHub to run the workflow when a commit
# is pushed or a Pull Request is opened.
on: [push, pull_request]
 
# This is the list of jobs that will be run concurrently.
# Since we use a build matrix, the actual number of jobs
# started depends on how many configurations the matrix
# will produce.
jobs:
  # This is the name of the job - can be whatever.
  test-matrix:
 
    # Here we tell GitHub that the jobs must be determined
    # dynamically depending on a matrix configuration.
    strategy:
      matrix:
        # The matrix will produce one job for each configuration
        # parameter of type arduino-platform, in this case a
        # total of 2.
        arduino-platform: ["arduino:samd", "arduino:avr"]
        # This is usually optional but we need to statically define the
        # FQBN of the boards we want to test for each platform. In the
        # future the CLI might automatically detect and download the core
        # needed to compile against a certain FQBN, at that point the
        # following include section will be useless.
        include:
          # This works like this: when the platform is "arduino:samd", the
          # variable fqbn is set to "arduino:samd:nano_33_iot".
          - arduino-platform: "arduino:samd"
            fqbn: "arduino:samd:nano_33_iot"
          - arduino-platform: "arduino:avr"
            fqbn: "arduino:avr:uno"
 
    # This is the platform GitHub will use to run our workflow; we
    # pick Windows for no particular reason.
    runs-on: windows-latest
 
    # This is the list of steps this job will run.
    steps:
      # First of all, we clone the repo using the checkout action.
      - name: Checkout
        uses: actions/checkout@master
 
      # We use the arduino/setup-arduino-cli action to install and
      # configure the Arduino CLI on the system.
      - name: Setup Arduino CLI
        uses: arduino/setup-arduino-cli@v1.0.0
 
      # We then install the platform; which one is determined
      # dynamically by the build matrix.
      - name: Install platform
        run: |
          arduino-cli core update-index
          arduino-cli core install ${{ matrix.arduino-platform }}
 
      # Finally, we compile the sketch, using the FQBN that was set
      # in the build matrix.
      - name: Compile Sketch
        run: arduino-cli compile --fqbn ${{ matrix.fqbn }} ./blink

Example

Let’s say you keep your sketches in a GitHub repository, and you want to be sure that every time you push a git commit or merge a pull request, the sketches compile correctly on certain boards you own, for example a Nano 33 IoT and an Uno. To keep the configuration to a minimum, we use a “build matrix”, so that GitHub starts a different job for each of the platforms we list in the matrix, without the need to configure them explicitly.

You can find a working example here: https://github.com/arduino/arduino-cli-example
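
For reference, the `./blink` path in the workflow just points to a sketch folder in the repository; any sketch the CLI can compile will do, such as the classic blink example (a minimal sketch, shown here for illustration):

// blink/blink.ino - the sketch compiled by the workflow above.
// LED_BUILTIN maps to the on-board LED on both the Uno and the Nano 33 IoT.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(1000);                      // wait one second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(1000);
}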

You can find more info and docs for the Action on the GitHub Marketplace: if you like it, please leave us a star!
We're eager to hear your feedback: feel free to open an issue on the Action repository should you find any problem, or a feature request in case you want more from the Action.


Rapidly create your own capacitive multi-touch sensors with this kit

You likely use touchscreens every day when interacting with your phone — perhaps even to read this article — but prototyping your own capacitive matrix is unfortunately out of reach for most makers and electronics novices. As seen here, researchers have devised a new technique that will allow for easier prototyping of this type of interface, which can function on both flat and curved surfaces, over a variety of materials.

To accomplish this, the team developed an Arduino library, as well as one for Processing, and used OpenCV to track multiple finger positions. Interactions have been tested with an Uno, Mega and LilyPad, and would presumably work with almost any other Arduino board as needed!

We introduce Multi-Touch Kit, a low-cost do-it-yourself technique to enable interaction designers, makers, and electronics novices alike to rapidly create and experiment with high-resolution multi-touch sensors of custom sizes, geometries, and materials.

In contrast to existing solutions, the Multi-Touch Kit is the first technique that works with a commodity microcontroller (our implementation uses a standard Arduino) and does not require any specialized hardware. As a technical enabler, we contribute a modified multi-touch sensing scheme that leverages the human body as a transmission channel of MHz range signals through a capacitive near-field coupling mechanism. This leads to a clean signal that can be readily processed with the Arduino’s built-in analog-to-digital converter, resulting in a sensing accuracy comparable to industrial multi-touch controllers. Only a standard multiplexer and resistors are required alongside the Arduino to drive and read out a touch sensor matrix.

The technique is versatile and compatible with many types of multi-touch sensor matrices, including flexible sensor films on paper or PET, sensors on textiles, and sensors on 3D printed objects. Furthermore, the technique is compatible with sensors of various scale, curvature, and electrode materials (silver, copper, conductive yarn) fabricated using conductive printing, hand-drawing with a conductive pen, cutting, or stitching. 
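
The kit's own Arduino library is the right starting point, but the sensing scheme described above (a multiplexer selecting one transmit line at a time while the receive lines are read with the built-in ADC) can be sketched in a few lines. The pin assignments and matrix size below are illustrative assumptions, not the kit's actual wiring:

// Hypothetical wiring: 3 mux select pins choose the TX row,
// RX columns are wired to analog inputs A0..A2 via resistors.
const int muxSelectPins[3] = {2, 3, 4};  // assumed select lines of the multiplexer
const int rxPins[3] = {A0, A1, A2};      // assumed receive columns
int touchValues[8][3];                   // raw readings for an 8x3 matrix

void setup() {
  for (int i = 0; i < 3; i++) pinMode(muxSelectPins[i], OUTPUT);
  Serial.begin(115200);
}

void loop() {
  for (int row = 0; row < 8; row++) {
    // Route the excitation signal to one TX row via the mux.
    for (int bit = 0; bit < 3; bit++) {
      digitalWrite(muxSelectPins[bit], (row >> bit) & 1);
    }
    // Read every RX column; a finger near a crossing changes the coupling.
    for (int col = 0; col < 3; col++) {
      touchValues[row][col] = analogRead(rxPins[col]);
      Serial.print(touchValues[row][col]);
      Serial.print(col < 2 ? '\t' : '\n');
    }
  }
}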

Create a voice-controlled device with Alexa and Arduino IoT Cloud in 7 minutes

We’re excited to announce the launch of the official Arduino Amazon Alexa Skill. 

You can now securely connect Alexa to your Arduino IoT Cloud projects with no additional coding required. You could use Alexa to turn on the lights in the living room, check the temperature in the bedroom, start the coffee machine, check on your plants, find out if your dog is sleeping in the doghouse… the only limit is your imagination! 

Below are some of the features that will be available:

  • Changing the color and the luminosity of lights
  • Retrieving temperature and detecting motion activity from sensors
  • Using voice commands to trigger switches and smart plugs

Compatibility with one of the most recognized cloud-based services on the market bridges the communication gap between different applications and processes, and removes many of the tricky aspects that usually come with wireless connectivity and communication.

Using Alexa is as simple as asking a question — just ask, and Alexa will respond instantly. 

Integrating Arduino with Alexa is as quick and easy as these four simple steps:

1. Add the Arduino IoT Cloud Smart Home skill.

2. Link your Arduino Create account with Alexa.

3. Once linked, go to the device tab in the Alexa app and start searching for devices.

4. The properties you created in the Arduino IoT Cloud now appear as devices!

Boom — you can now start voice controlling your Arduino project with Alexa!

IoT – secure connections

The launch of the Arduino IoT Cloud & Alexa integration brings easy cross-platform communication, customisable user interfaces and reduced complexity when it comes to programming. These features allow many different types of users to benefit from the service, creating anything from voice-controlled light dimmers to plant waterers.

While creating IoT applications is a lot of fun, one of the main concerns regarding IoT is data security. The Arduino IoT Cloud was designed with security as a priority: our compatible boards come with an ECC508 crypto chip, ensuring that your data and connections remain secure and private to the highest standard.

The latest update to the Arduino IoT Cloud enables users with a Create Maker Plan subscription to use devices based on the popular ESP8266, such as NodeMCU and ESPduino. While these devices do not implement a crypto chip, the data transferred over SSL is still encrypted. 

Getting started with this integration

In order to get started with Alexa, you need to go through a few simple steps to make things work smoothly:

  • Setting up your Arduino IoT Cloud workspace with your Arduino Create account
  • Getting an IoT Cloud compatible board
  • Installing the Arduino Alexa Skill

Setting up the Arduino IoT Cloud workspace

Getting started with the Arduino IoT Cloud is fast and easy, and by following this tutorial you will get a detailed run-through of the different functionalities and try out some of the examples! Please note that you will need an Arduino Create account and a compatible board in order to use the Arduino IoT Cloud.

Getting an IoT Cloud compatible board

The Arduino IoT Cloud currently supports the following Arduino boards: MKR 1000, MKR WiFi 1010, MKR GSM 1400 and Nano 33 IoT. You can find and purchase these boards from our store.

The following properties in the Arduino IoT Cloud can currently be used with Alexa:

  • Light
  • Dimmable light
  • Colored light
  • Smart plug
  • Smart switch
  • Contact sensor
  • Temperature sensor
  • Motion sensor

Any of these properties can be created in the Arduino IoT Cloud platform. A sketch will be generated automatically to read and set these properties.
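
To give an idea of what that generated code looks like, here is a minimal sketch in the style of the IoT Cloud's auto-generated output for a dimmable light. The property name `lamp`, the callback name, and the LED pin are assumptions for illustration; the real `thingProperties.h` is produced for you by the cloud editor:

// Sketch in the style of the auto-generated Arduino IoT Cloud code.
// "thingProperties.h" is generated by the cloud editor; for a dimmable
// light it declares a CloudDimmedLight property (here assumed to be
// named "lamp") plus initProperties() and the connection handler.
#include "thingProperties.h"

const int LED_PIN = 5;  // assumed PWM pin driving the lamp

void setup() {
  Serial.begin(9600);
  initProperties();                                   // from thingProperties.h
  ArduinoCloud.begin(ArduinoIoTPreferredConnection);  // connect to the cloud
}

void loop() {
  ArduinoCloud.update();  // keep properties in sync with the cloud (and Alexa)
}

// Called automatically when Alexa or the dashboard changes the light.
void onLampChange() {
  if (lamp.getSwitch()) {
    // Brightness arrives as 0-100; scale it to the 8-bit PWM range.
    analogWrite(LED_PIN, map(lamp.getBrightness(), 0, 100, 0, 255));
  } else {
    analogWrite(LED_PIN, 0);
  }
}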

Installing the Arduino Alexa Skill

To install the Arduino Alexa Skill, you will need an Amazon account and the latest version of the Alexa app on a smartphone or tablet, or the Amazon Web application. You can find the link to the Amazon Alexa app here. Once you're successfully logged into the app, it's time to make the magic happen.

To integrate Alexa and Arduino IoT Cloud, you need to add the Arduino skill. Then link your Arduino Create account with Alexa. Once linked, select the device tab in the Alexa app and start discovering devices.

The smart home properties already in existence in the Arduino IoT Cloud now appear as devices, and you can start controlling them with the Alexa app or your voice!

For more information, please visit the Arduino Alexa Skill.

Step-by-step guide to connecting Arduino IoT Cloud with Alexa

A simple and complete step-by-step guide showing you how to connect the Arduino IoT Cloud with Alexa is available via this tutorial.

Share your creativity with us!

Community is everything for Arduino, so we would love to see what you create! Make sure you document and share your amazing projects for example on Arduino Project Hub and use the #ArduinoAlexa hashtag to make it discoverable by everyone! 

This New Nano Pack Has EVERYthing You Need

The Arduino Nano Every is now available in a 3-pack and a 6-pack – perfect for running a course or powering all your projects with Arduino.

For those countless creations requiring a small and easy-to-use microcontroller board, the Nano Every has the tiniest Arduino form factor out there, measuring just 45x18mm. Whether you're working on a low-cost robotics project for the entire classroom or presenting a complex prototype with many functional blocks, this pack offers exactly what you need – a batch of Nano Every boards at a great price!

This robust little board costs as little as €7.50 each ($9.30 each) in the 6-pack, saving €0.50 ($0.60) per board versus the single. It's now more affordable than ever to forecast the local weather across town by building your own little band of Gnome Weather Forecasters in your class.

If you are interested in the ARDUINO NANO EVERY – PACK, visit the Arduino online store at this link.

ElastImpact brings a bit more realism to VR

If you’ve ever used a VR system and thought that what was really missing was the feeling of being hit in the face, then a team of researchers at the National Taiwan University may hold just the solution.

ElastImpact takes the form of a head-mounted display with two impact drivers situated roughly parallel to one’s eyes for normal — straight-on — impacts, and another that rotates about the front of your face for side blows.

Each impact driver first stretches an elastic band using a gearmotor, then releases it with a micro servo when an impact is required. The system is controlled by an Arduino Mega, along with a pair of TB6612FNG motor drivers. 

Impact is a common effect in both daily life and virtual reality (VR) experiences, e.g., being punched, hit or bumped. Impact force is instantly produced, which is distinct from other force feedback, e.g., push and pull. We propose ElastImpact to provide 2.5D instant impact on a head-mounted display (HMD) for realistic and versatile VR experiences. ElastImpact consists of three impact devices, also called impactors. Each impactor blocks an elastic band with a mechanical brake using a servo motor and extending it using a DC motor to store the impact power. When releasing the brake, it provides impact instantly. Two impactors are affixed on both sides of the head and connected with the HMD to provide the normal direction impact toward the face (i.e., 0.5D in z-axis). The other impactor is connected with a proxy collider in a barrel in front of the HMD and rotated by a DC motor in the tangential plane of the face to provide 2D impact (i.e., xy-plane). By performing a just-noticeable difference (JND) study, we realize users’ impact force perception distinguishability on the heads in the normal direction and tangential plane, separately. Based on the results, we combine normal and tangential impact as 2.5D impact, and performed a VR experience study to verify that the proposed 2.5D impact significantly enhances realism.
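
As a rough illustration of the tension-and-release cycle the paper describes, a single impactor could be driven along these lines. The pin numbers, servo angles, and timings below are assumptions made for the example, not the authors' values:

#include <Servo.h>

// Assumed wiring for one impactor: a TB6612FNG channel tensions the
// elastic band, a micro servo acts as the mechanical brake.
const int MOTOR_IN1 = 7;        // TB6612FNG direction pins (assumed)
const int MOTOR_IN2 = 8;
const int MOTOR_PWM = 9;        // TB6612FNG PWM pin (assumed)
const int BRAKE_LOCKED = 10;    // servo angle holding the band (assumed)
const int BRAKE_RELEASED = 80;  // servo angle releasing it (assumed)

Servo brake;

void setup() {
  pinMode(MOTOR_IN1, OUTPUT);
  pinMode(MOTOR_IN2, OUTPUT);
  pinMode(MOTOR_PWM, OUTPUT);
  brake.attach(6);              // assumed servo pin
  brake.write(BRAKE_LOCKED);
}

// Stretch the elastic band to store impact energy, then hold it.
void tensionBand(int ms) {
  digitalWrite(MOTOR_IN1, HIGH);
  digitalWrite(MOTOR_IN2, LOW);
  analogWrite(MOTOR_PWM, 200);
  delay(ms);
  analogWrite(MOTOR_PWM, 0);
}

// Release the brake so the stored energy is delivered instantly.
void fireImpact() {
  brake.write(BRAKE_RELEASED);
  delay(100);
  brake.write(BRAKE_LOCKED);
}

void loop() {
  tensionBand(1500);  // tension for 1.5 s (assumed duration)
  delay(2000);
  fireImpact();
  delay(3000);
}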

TipText enables one-handed text entry using a fingertip keyboard

Today when you get a text, you can respond with a message via an on-screen keyboard. Looking into the future, however, how would you interact unobtrusively with a device that's integrated into eyeglasses, contacts, or perhaps even something else?

TipText is one solution envisioned by researchers at Dartmouth College, which uses an MPR121 capacitive touch sensor wrapped around one's index finger as a tiny 2×3-grid QWERTY keyboard.

The setup incorporates an Arduino to process inputs on the grid and propose a number of possible words on a wrist-mounted display that the user can select by swiping right with the thumb. A new word is automatically started when the next text entry tap is received, allowing for a typing speed of around 12-13 words per minute.
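
The paper's firmware isn't included in the post, but reading such a fingertip grid with the widely used Adafruit MPR121 library might look roughly like this; the mapping of electrodes to letter zones is an assumption for illustration:

#include <Wire.h>
#include <Adafruit_MPR121.h>

Adafruit_MPR121 cap;  // MPR121 capacitive touch controller over I2C

// Assumed split of the QWERTY layout across six electrode zones;
// the real layout in the paper may differ.
const char* zoneNames[6] = {"qwert", "yuiop", "asdfg", "hjkl", "zxcv", "bnm"};

void setup() {
  Serial.begin(115200);
  if (!cap.begin(0x5A)) {  // default I2C address of the MPR121
    Serial.println("MPR121 not found");
    while (true) {}
  }
}

void loop() {
  uint16_t touched = cap.touched();  // one bit per electrode
  for (int i = 0; i < 6; i++) {
    if (touched & (1 << i)) {
      // A tap on zone i narrows down the candidate words; the real
      // system decodes tap sequences into words statistically.
      Serial.println(zoneNames[i]);
    }
  }
  delay(50);
}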

Proxino takes your virtual circuit into the real world

While circuit simulation tools become more accessible all the time, at some point it's necessary to actually build your device and test it. Proxino, developed by researchers at Dartmouth College, takes a different approach: it enables you to virtually create a circuit, then test parts of it as needed with electronic components via physical proxies.

To accomplish this, Proxino hardware sits on an Arduino Uno as a shield, and generates the virtual circuit’s responses to inputs. This setup allows for the implementation of physical elements like buzzers, lights, and sensors to complement the simulated environment, which can even be shared by remote collaborators in different locations. 

Proxino certainly looks like it could be an excellent instructional tool, or perhaps more!

GesturePod is a clip-on smartphone interface for the visually impaired

Smartphones have become a part of our day-to-day lives, but for those with visual impairments, accessing one can be a challenge. This can be especially difficult if one is using a cane that must be put aside in order to interact with a phone.

The GesturePod offers another interface alternative that actually attaches to the cane itself. This small unit is controlled by an MKR1000 and uses an IMU to sense hand gestures applied to the cane.

If a user, for instance, taps twice on the ground, a corresponding request is sent to the phone over Bluetooth, causing it to output the time audibly. Five gestures are currently proposed, which could be expanded upon or modified for different functionality as needed.

People using white canes for navigation find it challenging to concurrently access devices such as smartphones. Building on prior research on abandonment of specialized devices, we explore a new touch free mode of interaction wherein a person with visual impairment can perform gestures on their existing white cane to trigger tasks on their smartphone. We present GesturePod, an easy-to-integrate device that clips on to any white cane, and detects gestures performed with the cane. With GesturePod, a user can perform common tasks on their smartphone without touch or even removing the phone from their pocket or bag. We discuss the challenges in building the device and our design choices. We propose a novel, efficient machine learning pipeline to train and deploy the gesture recognition model. Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster. Our in-wild study suggests that GesturePod is a promising tool to improve smartphone access for people with VI, especially in constrained outdoor scenarios.
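
GesturePod's actual recognizer is a trained machine learning model, but the underlying idea of spotting a double tap in the IMU signal can be sketched with a simple threshold detector. The MPU-6050 used here as the IMU, the threshold, the timing window, and printing over Serial instead of sending over Bluetooth are all assumptions for illustration:

#include <Wire.h>

const int MPU_ADDR = 0x68;                    // default I2C address of an MPU-6050
const int TAP_THRESHOLD = 25000;              // raw accel spike level (assumed)
const unsigned long DOUBLE_TAP_WINDOW = 400;  // ms between taps (assumed)

unsigned long lastTap = 0;

// Read the raw Z-axis acceleration from the IMU.
int16_t readAccelZ() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3F);  // ACCEL_ZOUT_H register
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 2);
  return (Wire.read() << 8) | Wire.read();
}

void setup() {
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);  // power management register
  Wire.write(0);     // wake the IMU up
  Wire.endTransmission();
  Serial.begin(115200);
}

void loop() {
  int16_t az = readAccelZ();
  if (abs(az) > TAP_THRESHOLD) {
    unsigned long now = millis();
    if (now - lastTap < DOUBLE_TAP_WINDOW) {
      // Two sharp spikes close together: treat it as the double-tap
      // gesture (the real device notifies the phone at this point).
      Serial.println("double tap -> ask phone for the time");
    }
    lastTap = now;
    delay(100);  // crude debounce
  }
}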

Meet Steve, an Arduino-powered camera chauffeur

Taking photos or recording videos by hand is great, but in many situations you'd prefer to have your camera mounted on a movable or even programmable platform. Steve — now funding on Kickstarter — aims to fill that role as a remote-controlled vehicle that can maneuver your camera into places you couldn't reach before, all while capturing cinematic shots.

The device is able to carry a payload of 20kg, and features a suspension system that allows it to traverse rough terrain, along with Mecanum wheels that let it slide in any direction. 

Best of all, it’s powered by an Arduino, meaning that when you’re ready to move on from manual RC operation, it can be customized for a wide variety of uses! 


BubBowl displays in-beverage messages with electrolysis

Ads, notifications, and other messages surround us today, and if you were overwhelmed before, researchers at Ochanomizu University in Tokyo, Japan, have figured out how to print text and images in your cup of coffee! This system, dubbed "BubBowl," uses electrolysis to dynamically generate a dot-matrix pattern of 10×10 pixels on the surface of beverages.

The Arduino-based device utilizes a series of shift registers to control matrix outputs, along with MOSFETs to handle current through the liquid as it produces tiny amounts of (non-toxic) gas. 
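
Driving a matrix of electrodes through shift registers follows the familiar `shiftOut` pattern; a minimal sketch with assumed pin numbers might latch one row pattern into a 74HC595-style register like this:

// Assumed wiring for one 74HC595-style shift register driving a row of electrodes.
const int DATA_PIN = 2;   // serial data into the register (assumed)
const int CLOCK_PIN = 3;  // shift clock (assumed)
const int LATCH_PIN = 4;  // storage/latch clock (assumed)

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
}

// Latch an 8-bit pattern onto the register outputs; each set bit
// would enable one electrode of the current row via its MOSFET.
void writeRow(byte pattern) {
  digitalWrite(LATCH_PIN, LOW);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, pattern);
  digitalWrite(LATCH_PIN, HIGH);
}

void loop() {
  writeRow(0b10101010);  // example pattern: every other electrode on
  delay(500);
  writeRow(0b01010101);
  delay(500);
}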

Resolution is good enough to display four characters at once — meaning it can show the time, or even very short messages. The drinks are still consumable after messaging, though touch-sensitive electrodes are implemented to cut off power when imbibing!
