
Automated Turntable For 3D Scanning

Those just starting out in 3D printing often believe that their next major purchase after the printer will be a 3D scanner. If you’re going to get something that can print a three dimensional model, why not get something that can create said models from real-world objects? But the reality is that only a small percentage ever follow through with buying the scanner: primarily because they are notoriously expensive, but also because the scanned models often require a lot of cleanup work to be usable anyway.

While this project by [Travis Antoniello] won’t make it any easier to utilize scanned 3D models, it definitely makes them cheaper to acquire. So at least that’s half the battle. Consisting primarily of a stepper motor, an Arduino, and an EasyDriver controller, this is a project you might be able to assemble from the parts bin. Assuming you’ve got a pretty decent camera in there, anyway…

The general idea is to place a platform on the stepper motor, and have the Arduino rotate it 10 degrees at a time in front of a camera on a tripod. The camera is triggered by an IR LED on one of the Arduino’s digital pins, so that it takes a picture each time the platform rotates. There are configurable values to give the object time to settle down after rotation, and a delay to give the camera time to take the picture and get ready for the next one.
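For the curious, the whole rotate-settle-shoot cycle fits in a few dozen lines of Arduino code. The sketch below is a minimal sketch of that loop, assuming an EasyDriver wired to STEP/DIR pins and an IR LED on a spare digital pin; the pin numbers, step counts, and delays are placeholders rather than values from [Travis]'s firmware.

// A minimal sketch of the rotate-then-shoot loop, assuming an EasyDriver on
// STEP/DIR pins and an IR LED on a digital pin. Pin numbers, step counts, and
// delays are placeholders, not values from the original project.

const int STEP_PIN = 2;               // EasyDriver STEP input (assumed wiring)
const int DIR_PIN  = 3;               // EasyDriver DIR input
const int IR_PIN   = 9;               // IR LED used to trigger the camera
const int STEPS_PER_INCREMENT = 50;   // tune so one increment equals ~10 degrees
const int INCREMENTS = 36;            // 36 x 10 degrees = one full revolution

void fireShutter() {
  // Placeholder: a real camera needs its specific IR remote protocol
  // (usually a 38 kHz modulated burst), typically handled by an IR library.
  digitalWrite(IR_PIN, HIGH);
  delay(2);
  digitalWrite(IR_PIN, LOW);
}

void rotateOneIncrement() {
  digitalWrite(DIR_PIN, HIGH);        // always turn the same direction
  for (int i = 0; i < STEPS_PER_INCREMENT; i++) {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(800);           // pulse timing sets the rotation speed
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(800);
  }
}

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  pinMode(IR_PIN, OUTPUT);
}

void loop() {
  for (int i = 0; i < INCREMENTS; i++) {
    rotateOneIncrement();
    delay(1000);                      // let the object settle after rotation
    fireShutter();
    delay(2000);                      // give the camera time to save the shot
  }
  while (true) {}                     // one full revolution done; stop here
}

The fireShutter() routine is the piece you would swap out for your camera's real remote protocol, and the two delays are the configurable settle and capture times mentioned above.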

Once all the pictures have been taken, they are loaded into special software to perform what’s known as photogrammetry. By compiling all of the images together, the software is able to generate a fairly accurate 3D model. It might not have the resolution to make a 1:1 copy of a broken part, but it can help shave some modeling time when working with complex objects.

We’ve previously covered the use of photogrammetry to design 3D printed accessories, as well as a slightly different take on an automated turntable a few years ago. The process is still not too common, but the barriers to giving it a try on your own are at least getting lower.

A Remotely Controlled Kindle Page Turner

One of the biggest advantages of an e-reader such as the Kindle is that it doesn’t weigh as much as a traditional hardcover book, much less the thousands of books it can hold in digital form. Which is especially nice if you drop the thing on your face while reading in bed. But as light and easy to use as the Kindle is, you still need to hold it in your hands and interact with it like some kind of baby’s toy.

Looking for a way to operate the Kindle without having to go through the exhausting effort of raising their hand, [abm513] designed and built a clip-on device that makes using Amazon’s e-reader even easier. At the press of a button, the device knocks on the edge of the screen, which advances the book to the next page. Going back a page will still require you to extend your meaty digit, but that’s your own fault for standing in the way of progress.

The 3D printed case holds an Arduino and RF receiver, as well as a small servo to power the karate-chop action. There’s no battery inside, meaning the device needs to stay plugged in via a micro USB connection on the back of the case. But let’s be honest: if you’re the kind of person who has a remote-controlled Kindle, you probably aren’t leaving the house anytime soon.

To fool the Kindle into thinking a human finger is tapping the screen, the page turner’s arm has a stylus tip on the end. A channel is designed into the 3D printed arm for a wire to run from the tip to the Arduino’s ground, which triggers the capacitive screen to register a touch.
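Firmware-wise, not much is needed. Something along these lines would cover the basic behavior, assuming a keyfob-style RF receiver whose data pin goes high while the remote button is held; the pin numbers and servo angles are illustrative guesses, not values from [abm513]'s build.

#include <Servo.h>

// A sketch of the basic behavior, assuming an RF receiver whose output pin
// goes HIGH when the remote button is pressed. Pins and angles are assumptions.

const int RF_PIN    = 2;    // data output of the RF receiver module
const int SERVO_PIN = 9;
const int REST_ANGLE = 20;  // arm lifted clear of the screen
const int TAP_ANGLE  = 75;  // arm pressing the stylus tip onto the screen edge

Servo arm;

void setup() {
  pinMode(RF_PIN, INPUT);
  arm.attach(SERVO_PIN);
  arm.write(REST_ANGLE);
}

void loop() {
  if (digitalRead(RF_PIN) == HIGH) {
    arm.write(TAP_ANGLE);   // "karate chop" onto the screen edge
    delay(250);             // long enough for the touch to register
    arm.write(REST_ANGLE);
    delay(500);             // crude debounce so one press equals one page turn
  }
}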

All joking aside, the idea holds promise as an assistive technology for individuals who are unable to lift an e-reader or operate its touch screen controls. With the Kindle held up in a mount, and this device clipped onto the side, anyone who can push a button (or trigger the device by whatever method they are physically capable of) can read a book on their own. A simple pleasure that can come as a huge comfort to a person who may usually be dependent on others.

In the past we’ve seen physical buttons printed for touch screens, and an Arduino used to control a touch screen device. But this particular combination of physical and electrical interaction is certainly a unique way to tackle the problem without modifying the target device.

Updating a 1999 Saab with an Arduino

Unless your car is fresh off the lot, you’ve probably had the experience of riding in a newer car and seeing some feature or function that triggered a little pang of jealousy. It probably wasn’t enough for you to run out and sign yourself up for a new car loan (which is what the manufacturer was hoping for), but it was definitely something you wished your older model vehicle had. But why get jealous when you can get even?

[Saabman] wished his 1999 Saab 9-5 had the feature where a quick tap of the turn signal lever would trigger three blinks of the indicator. Realizing this was an electronic issue, he came up with a way to retrofit this function into his Saab by adding an Arduino Pro Micro to the vehicle’s DICE module.

The DICE (which stands for Dashboard Integrated Central Electronics) module controls many of the accessories in the vehicle, such as the lighting and wipers. In the case of the blinkers, it reads the state of the signal lever switches and turns the blinkers on and off as necessary. After poking around the DICE board, [Saabman] found the 74HC151 multiplexer chip he was after: the state of the blinker switches could be read from pins 1 and 2, and he’d even be able to pull 5 V for the Arduino off of pin 16.

After prototyping the circuit on a breadboard, [Saabman] attached the Pro Micro to the top of the 74HC151 with some double sided tape and got to work on refining the software side of the project. The Arduino reads the state of the turn signal switches, and if they flick on momentarily it changes the pin from an input to an output and brings it high for three seconds. This makes the DICE module believe the driver is holding the turn lever, and will keep the blinkers going. A very elegant and unobtrusive way of solving the problem.
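In sketch form, the trick looks something like the code below. The pin numbers, polarity, and timing thresholds are assumptions for illustration; the point is the pinMode() switcheroo that lets one wire act as both sensor and actuator.

// A sketch of the input-turned-output trick described above: watch the switch
// line, and if the driver gives the stalk a momentary flick, take over the pin
// and hold it active for three seconds so the DICE module keeps blinking.
// Pin numbers, polarity, and timing are assumptions, not [Saabman]'s values.

const int LEFT_PIN  = 2;              // wired to one 74HC151 switch input
const int RIGHT_PIN = 3;              // wired to the other switch input
const unsigned long TAP_MAX_MS = 400; // released sooner than this = "momentary tap"
const unsigned long HOLD_MS = 3000;   // long enough for three blinks

void watchStalk(int pin) {
  if (digitalRead(pin) == HIGH) {             // lever moved (assumed active-high)
    unsigned long start = millis();
    while (digitalRead(pin) == HIGH) {}       // wait for the driver to let go
    if (millis() - start < TAP_MAX_MS) {      // it was just a quick tap
      pinMode(pin, OUTPUT);                   // take over the line
      digitalWrite(pin, HIGH);                // pretend the lever is still held
      delay(HOLD_MS);
      pinMode(pin, INPUT);                    // hand the line back to the switch
    }
  }
}

void setup() {
  pinMode(LEFT_PIN, INPUT);
  pinMode(RIGHT_PIN, INPUT);
}

void loop() {
  watchStalk(LEFT_PIN);
  watchStalk(RIGHT_PIN);
}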

Hackers aren’t complete strangers to the garage; from printing hard-to-find parts to grafting in their favorite features from other car manufacturers, this slick Saab modification is in good company.

Automatic Sunglasses, The Electromechanical Way

These days, photochromic lenses are old hat. Sure, it’s useful to have a pair of glasses that automatically tint in response to UV light, but what if you want something a little more complex and flashy? Enter [Ashraf Minhaj]’s SunGlass-Bot.

The build is simple, beginning with an Arduino Pro Mini for reasons of size. Connected to the analog input is a light-dependent resistor for sensing the ambient light level. This reading is then used to decide whether or not to move the servo which controls the position of the lenses. In low light, the lenses are flipped up to allow clear vision; in brighter light, the lenses flip down to protect the eyes. Power is supplied by a homebrew power bank that [Ashraf] appears to have built from an old phone battery and a small boost converter board. All the files to recreate the project are available on GitHub, too.
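The control logic is about as simple as embedded code gets: read the light level, compare it to a threshold, move the servo. A minimal sketch, with the threshold and angles as placeholder values rather than [Ashraf]'s actual numbers, might look like this:

#include <Servo.h>

// Simplified logic: flip the lenses down when it's bright, up when it's dark.
// Pin choices, threshold, and angles are illustrative guesses.

const int LDR_PIN   = A0;   // light-dependent resistor in a voltage divider
const int SERVO_PIN = 9;
const int THRESHOLD = 600;  // raw ADC value separating "bright" from "dark"
const int UP_ANGLE   = 0;   // lenses flipped up for clear vision
const int DOWN_ANGLE = 90;  // lenses flipped down over the eyes

Servo flipper;

void setup() {
  flipper.attach(SERVO_PIN);
}

void loop() {
  int light = analogRead(LDR_PIN);
  if (light > THRESHOLD) {
    flipper.write(DOWN_ANGLE);  // bright: shade the eyes
  } else {
    flipper.write(UP_ANGLE);    // dim: get the lenses out of the way
  }
  delay(200);                   // don't re-evaluate constantly, to limit servo chatter
}

A real build would probably want two thresholds for a bit of hysteresis, so the lenses don't flap up and down at dusk.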

It’s a fun build that [Ashraf] shows off in style. While this may not be as effortless as a set of Transitions lenses or as quick as a welding mask filter, it has a mechanical charm that wouldn’t be out of place in a certain sci-fi aesthetic.

Hungry for more? Check out these self-blending sunglasses we featured a while back. Video after the break.

The Precise Science Of Whacking A Wine Glass

It’s common knowledge that tapping a wine glass produces a pitch which can be altered by adjusting the level of the tipple of choice inside. By filling twelve glasses with different amounts of liquid and tuning them to the twelve notes of the scale, it’s possible to make a one-octave instrument – though the speed and polyphony are bottle-necked by the human operator. If you think it sounds like a ripe project for automation, you’re correct: [Bitluni’s lab] has done what needed to be done, and created a MIDI instrument which plays the glasses using mallets.

Electronically it’s a simple build – some 12 V solenoids driven by MOSFETs, with an Arduino in charge. For the mechanical build, a 3D printer proved very useful, as each mallet could be made identical, ensuring a consistent tone across all glasses. Rubber covers printed in flexible filament were fitted to reduce the overtones and produce a clearer sound. [Bitluni] also utilised different types of glasses for the low and high pitches, which also helped to improve the clarity of the tone.

MIDI is of course the perfect protocol for this application; simple, lightweight and incredibly widely used, it’s the hacker’s delight for projects like this. The instrument can perform pre-programmed sequences, or be played live with a MIDI controller. Both of these are shown in the video after the break – stick around for a unique rendition of Flight Of The Bumblebee. For a more compact wine-glass-based music creation solution, we recommend this nifty project, which alters pitch using a water balloon raised and lowered into the glass by a servo.
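If you wanted to build something similar, the firmware could be as simple as mapping incoming note numbers onto solenoid pins. The sketch below is one way to do it using the Arduino MIDI Library; the note range, pin assignments, and pulse length are assumptions rather than [Bitluni]'s actual code.

#include <MIDI.h>  // Arduino MIDI Library

// One way a note-to-mallet mapping might look, assuming MIDI arrives on the
// hardware serial port and twelve MOSFET gates sit on pins 2-13. Note range,
// pin assignment, and pulse length are assumptions.

MIDI_CREATE_DEFAULT_INSTANCE();

const int FIRST_PIN   = 2;    // solenoid for the lowest glass
const int NUM_GLASSES = 12;
const int BASE_NOTE   = 60;   // middle C maps to the first glass
const unsigned long PULSE_MS = 30;   // how long the mallet is driven forward

unsigned long offTime[NUM_GLASSES];  // when each solenoid should be released

void handleNoteOn(byte channel, byte note, byte velocity) {
  int glass = note - BASE_NOTE;
  if (glass < 0 || glass >= NUM_GLASSES || velocity == 0) return;
  digitalWrite(FIRST_PIN + glass, HIGH);        // fire the mallet
  offTime[glass] = millis() + PULSE_MS;         // schedule its release
}

void setup() {
  for (int i = 0; i < NUM_GLASSES; i++) {
    pinMode(FIRST_PIN + i, OUTPUT);
    digitalWrite(FIRST_PIN + i, LOW);
  }
  MIDI.setHandleNoteOn(handleNoteOn);
  MIDI.begin(MIDI_CHANNEL_OMNI);                // listen on all channels
}

void loop() {
  MIDI.read();                                  // dispatches to handleNoteOn()
  for (int i = 0; i < NUM_GLASSES; i++) {       // release any expired pulses
    if (offTime[i] != 0 && millis() >= offTime[i]) {
      digitalWrite(FIRST_PIN + i, LOW);
      offTime[i] = 0;
    }
  }
}

Keeping the pulse short matters: a solenoid left energized just holds the mallet against the glass and damps the ring.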


Light Painting Animations Directly From Blender

Light painting: there’s something that never gets old about waving lights around in a long exposure photo. Whilst most light paintings are single shots, some artists painstakingly create frame-by-frame animations. This is pretty hard to do when moving a light around by hand: it’s mostly guesswork, as it’s difficult to see the results of your efforts until after the photo has been taken. But what if you could make the patterns really precise? What if you could model them in 3D?

[Josh Sheldon] has done just that, by creating a process which allows animations formed in Blender to be traced out in 3D as light paintings. An animation is created in Blender then each frame is automatically exported and traced out by an RGB LED on a 3D gantry. This project is the culmination of a lot of software, electronic and mechanical work, all coming together under tight tolerances, and [Josh]’s skill really shines.

The first step was to export the animations out of Blender. Thanks to Blender’s open source nature, [Josh] was able to write Python add-ons that create light paths and convert them into an efficient sequence the hardware can execute. To accommodate smooth sliding camera movements during the animation, a motion controller add-on was also written.

The gantry which carried the main LED was hand-made. We’d have been tempted to buy a 3D printer and hack it for this purpose, but [Josh] did a fantastic job on the mechanical build, ending up with a solidly constructed gantry with a large range of motion. The driver electronics were also slickly executed, with custom rack-mount units created to integrate with the DragonFrame controller used for the animation.

The video ends on a call to action: due to moving out, [Josh] was unable to continue the project but has done much of the necessary legwork. We’d love to see this project continued, and it has been documented for anyone who wishes to do so. If you want to check out more of [Josh]’s work, we’ve previously written about that time he made an automatic hole puncher for music box spools.

Thanks for the tip, [Nick].

Hands-On with New Arduino FPGA Board: MKR Vidor 4000

Hackaday brought you a first look at the Arduino MKR Vidor 4000 when it was announced. Arduino sent over one of the first boards, so now we finally have our hands on one! It’s early and the documentation is still a bit sparse, but we did get it up and running to take the board through some hello world exercises. This article will go over what we’ve been able to figure out about the FPGA system so far to help get you up and running with the new hardware.

Just to refresh your memory, here’s what is on the Vidor board:

  • 8 MB SRAM
  • A 2 MB QSPI Flash chip — 1 MB allocated for user applications
  • A Micro HDMI connector
  • An MIPI camera connector
  • Wi-Fi and BLE powered by a U-BLOX NINA W10 Series device
  • MKR interface on which all pins are driven both by SAMD21 (32-bit ARM CPU) and FPGA
  • Mini PCI Express connector with up to 25 user programmable pins
  • The FPGA (an Intel/Altera Cyclone 10CL016) contains 16K Logic Elements, 504 KB of embedded RAM, and 56 18×18 bit HW multipliers

Sounds good. You can get more gory technical details over at Arduino and there’s even a schematic (.zip).

Documentation

Documentation is — so far — very hard to come by but the team is working to change that by the day. Here are the resources we’ve used so far (in addition to the schematic):

In addition, Arduino just released an example FPGA project for Quartus. I’ll explain what that means in a bit.

Get Up and Running with the Arduino Desktop IDE

Despite the getting started guide, it doesn’t appear the libraries are usable from the cloud-based IDE, so we followed the instructions to load the beta board support for the MKR 4000 into our desktop IDE. Be aware that the instructions show the “normal” SAMD board package, but you actually want the beta which says it is for the MKR 4000. If you search for SAMD in the Boards Manager dialog, you’ll find it (see the second entry in the image below).


We grabbed the libraries as ZIP files from GitHub and used the IDE’s option to install a library from a ZIP file, with no problems.

What’s the Code Look Like?

The most interesting part of this board is of course the inclusion of the FPGA which left us wondering what the code for the device would look like. Browsing the code, we were a bit dismayed at the lack of comments in all but the JTAG code. We decided to focus first on the VidorPeripherals repository and dug into the header file for some clues on how everything works.

Looking at VidorPeripherals.h, you can see that there are a few interesting I/O devices included: SPI, I2C, UART, a quadrature encoder reader, and NeoPixel support. There are also a few headers that don’t exist (and presumably won’t get the define to turn them on), so don’t get too excited by some of the header file names until you make sure they are really there.

Then we decided to try the example test code. The library provides a global FPGA object that you need to set up:

// Let's start by initializing the FPGA
if (!FPGA.begin()) {
    Serial.println("Initialization failed!");
    while (1) {}
}

// Let's discover which version we are running
int version = FPGA.version();
Serial.print("Vidor bitstream version: ");
Serial.println(version, HEX);

// Let's also ask which IPs are included in this bitstream
FPGA.printConfig();

The output of this bit of code looks like this:

Vidor bitstream version: 1020107
number of devices 9
1 01000000 MB_DEV_SF
1 02000000 MB_DEV_GPIO
4 04000000 MB_DEV_I2C
6 05000000 MB_DEV_SPI
8 06000000 MB_DEV_UART
1 08000000 MB_DEV_SDRAM
4 09000000 MB_DEV_NP
11 0A000000 MB_DEV_ENC
0 0B000000 MB_DEV_REG

In many cases, the devices provided by the FPGA are pretty transparent. For example, here’s another snip from the example code:

// Ok, so we know now that the FPGA contains the extended GPIO IP
// The GPIO pins controlled by the FPGA start from 100
// Please refer to the online documentation for the actual pin assignment
// Let's configure pin A0 to be an output, controlled by the FPGA
FPGA.pinMode(33, OUTPUT);
FPGA.digitalWrite(33, HIGH);

// The same pin can be read by the SAMD processor :)
pinMode(A0, INPUT);
Serial.print("Pin A0 is ");
Serial.println(digitalRead(A0) == LOW ? "LOW" : "HIGH");

FPGA.digitalWrite(33, LOW);
Serial.print("Pin A0 is ");
Serial.println(digitalRead(A0) == LOW ? "LOW" : "HIGH");

That’s easy enough and it is nice that the pins are usable from the CPU and FPGA. We couldn’t find the documentation mapping the pins, but we assume it is coming.

Using, say, an extra serial interface is easy, too:

SerialFPGA1.begin(115200);
while (!SerialFPGA1);
SerialFPGA1.println("test");

Bitstream

So where’s the FPGA code? As far as you can tell, this is just a new Arduino with a lot of extra devices that connect through this mysterious FPGA object. The trick is that the FPGA code is in the library. To see how it works, let’s talk a little about how an FPGA operates.

When you write a program in C, that’s not really what the computer looks at, right? The compiler converts it into a bunch of numbers that tell the CPU to do things. An FPGA is both the same and different from that. You write your program — usually in a hardware design language like Verilog or VHDL. You compile it to numbers, but those numbers don’t get executed like a CPU does.

The best analogy I’ve been able to think of is that an FPGA is like one of those old Radio Shack 100-in-1 electronic kits. There are a bunch of parts on a board and some way to connect them with wires. Put the wires one way and you have a radio. Put them another way and you have a burglar alarm. Rewire it again and you have a metal detector. The numbers correspond to wires. They make connections and configure options in the FPGA’s circuitry. Unless you’ve built a CPU, there’s nothing in there examining and acting on the numbers like there would be with a CPU.

The file of numbers that comes out of an FPGA tool is usually called a bitstream. Someone has to send that bitstream to an FPGA like the Cyclone onboard the Arduino every time it powers up. That someone is usually a memory device on the board, although the CPU can do it, too.

So that leads to two questions: Where is the bitstream? How does it get to the FPGA?

The answer to the first question is easy. If you look on Github, you’ll see in the library there is a file called VidorBase.cpp. It has the following lines:

__attribute__ ((used, section(".fpga_bitstream")))
const unsigned char bitstream[] = {
    #include "app.ttf"
};

What this means is that there is an array called bitstream that the linker will put in a specially marked section of memory. That array gets initialized with app.ttf, which is just an ASCII file full of numbers. Despite the name, it is not a TrueType font. What do the numbers mean? Hard to say, although, in theory, you could reverse engineer it just like you can disassemble binary code for a CPU. However, it is the configuration required to make all the library calls we just talked about work.

The second question about how it gets to the FPGA configuration is a bit of a mystery. As far as we can tell, the bootloader understands that data in that section should get copied over to the FPGA configuration memory and does the copying for you. It isn’t clear if there’s a copy in the main flash and a copy in the configuration flash but it seems to work transparently in any event.

There’s a checksum defined in the code but we changed it and everything still worked. Presumably, at some point, the IDE or the bootloader will complain if you have the wrong checksum, but that doesn’t appear to be the case now.

By the way, according to the Arduino forum, there are actually two bitstreams. One that loads on power-up that you would rarely (if ever) change. Then there is another that is the one included with the library. You can double-click the reset button to enter bootloader mode and we suspect that leaves the FPGA initialized with the first bitstream, but we don’t know that for sure. In bootloader mode, though, the red LED onboard has a breathing effect so you can tell the double click works.

What about my FPGA Code?

This isn’t great news if you were hoping for an easy Arduino-like way to do your own FPGA development in Verilog or VHDL. Intel will give you a copy of Quartus Prime which will generate bitstreams all day for you. We think — but we aren’t sure — that the ASCII format is just a raw conversion from binary of the bitstream files.

Very recently, Arduino provided a Quartus project that would create a bitstream. This provides a few key pieces of the puzzle, like the constraint file that lets the FPGA compiler find the different parts on the board.

However, even with that project, you still have some reverse engineering to do if you want to get started. Why? Here’s what Arduino says about loading your own FPGA code (we added the emphasis):

Quartus will produce a set of files under the output_files directory in the project folder. In order to incorporate the FPGA in the Arduino code you need to create a library and preprocess the ttf file generated by Quartus so that it contains the appropriate headers required by the software infrastructure. Details of this process will be disclosed as soon as the flow is stable.

Programming the FPGA is possible in various ways:

  • Flashing the image along with Arduino code creating a library which incorporates the ttf file
  • Programming the image in RAM through USB Blaster (this requires mounting the FPGA JTAG header). this can be done safely only when SAM D21 is in bootloader mode as in other conditions it may access JTAG and cause a contention
  • Programming the image in RAM through the emulated USB Blaster via SAM D21 (this component is pending release)

In addition, the repository itself says that some key pieces are missing until they can work out licensing or clean up the code. So this gets us closer, but you’d still need to reverse engineer the header from the examples and/or figure out how to force the processor off the JTAG bus. The good news is it sounds like this information is coming, it just isn’t here yet.

Of course, you are going to need to understand a lot more to do anything significant. We know the FPGA is set in the AS configuration mode. We also asked Arduino about the clock architecture of the board and they told us:

[The CPU] has its own clock which is used to generate a 48 MHz reference clock that is fed to the FPGA (and that can be removed at any time to “freeze” fpga). In addition to this reference clock, [the] FPGA has an internal RC oscillator which can’t be used as [a] precise timing reference for tolerance issues but can be used in case you don’t want [the CPU] to produce the reference clock.

Of course, the FPGA has a number of PLLs onboard that can take any valid clock and produce other frequencies. For example, in the vision application Arduino demonstrated, the 48 MHz clock is converted into 24 MHz, 60 MHz, 100 MHz, and 120 MHz clocks by PLLs.

Mix and Match?

One thing that is disappointing is that — at least for now — you won’t be able to mix and match different FPGA libraries. There is exactly one bitstream, and you can’t just jam several libraries together. Although FPGAs can often be partially configured, that’s a difficult technique. But we were a little surprised that the IDE didn’t understand how to take libraries with, for example, EDIF design files for IP that would all get compiled together. That way I could pick the Arduino UART and mix it with the Hackaday PWM output module along with my own Verilog or VHDL.

The way things are structured now, you will have one bitstream that is precompiled by another tool (probably Quartus for the foreseeable future). It will match up with a particular C++ library. And that’s it. It doesn’t matter how much of the FPGA is left over or how much of it you really use; you will use it all for the one library.

Of course, you can load another library but it is going to replace the first one. So you only get one set of functions at a time and someone else gets to decide what’s in that set. If you roll your own, you are going to have to roll your own all the way.

What’s Next?

It is still early for the Arduino Vidor. We are hopeful we’ll get the tools and procedures necessary to drop our own FPGA configurations in. It would be great, too, if the stock libraries were available in source format, including the Verilog HDL. The recent GitHub release shows quite a bit; although it isn’t all of the examples, it is probably enough if we get the rest of the information.

As for a more intuitive interface, we don’t know if that’s in the cards or not. We don’t see much evidence of it, although posts on the Arduino forum indicate they will eventually supply an “IP Assembler” that will let you compose different modules into one bitstream. We don’t know if that will only work with “official” modules or not. However, we know the Arduino community is very resourceful so if we don’t get a good ecosystem it will not surprise us if someone else makes it happen. Eventually.

For now, we will continue to play with the existing bitstreams that become available. There are some neat new features on the CPU, too. For example, you can map two of the unused serial modules.  There’s a hardware-based cooperative multitasking capability. As more details on the FPGA emerge, we’ll keep you posted and if you learn something, be sure to leave word in the comments so everyone can benefit.

H2gO Keeps Us from Drying Out

The scientific community cannot always agree on how much water a person needs in a day, and since we are not Fremen, we should give it more thought than we do. For many people, remembering to take a sip now and then is all we need and the H2gO is built to remind [Angeliki Beyko] when to reach for the water bottle. A kitchen timer would probably get the job done, but we can assure you, that is not how we do things around here.

A cast silicone droplet lights up to show how much water you have drunk, and pressing the center of the device means you have taken a drink. Under the hood, you find a twelve-pixel NeoPixel ring, a twelve-millimeter momentary switch, and an Arduino Pro Mini holding it all together. A GitHub repo is linked in the article where you can find Arduino code, the droplet model, and links to all the parts. I do not think we will need a device to remind us when to use the bathroom after all this water.
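The firmware for something like this is short. As a rough sketch of the idea, with pins, colors, goals, and the reminder interval as assumptions rather than values from [Angeliki]'s code:

#include <Adafruit_NeoPixel.h>

// Rough sketch: each button press logs one drink and lights one more pixel;
// if too long passes without a press, the ring blinks as a reminder.
// Pins, colors, and timing are assumptions.

const int PIXEL_PIN  = 6;
const int BUTTON_PIN = 2;                              // momentary switch to ground
const int NUM_PIXELS = 12;
const unsigned long REMIND_MS = 30UL * 60UL * 1000UL;  // nag after 30 minutes

Adafruit_NeoPixel ring(NUM_PIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

int drinks = 0;
unsigned long lastDrink = 0;

void showProgress() {
  ring.clear();
  for (int i = 0; i < drinks && i < NUM_PIXELS; i++) {
    ring.setPixelColor(i, ring.Color(0, 60, 120));     // watery blue
  }
  ring.show();
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  ring.begin();
  showProgress();
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {       // button pressed: one more drink
    drinks++;
    lastDrink = millis();
    showProgress();
    delay(300);                               // crude debounce
  }
  if (millis() - lastDrink > REMIND_MS) {     // been too long: flash the ring
    ring.fill(ring.Color(120, 0, 0));
    ring.show();
    delay(200);
    showProgress();
    delay(200);
  }
}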

Another intrepid hacker seeks to measure a person’s intake while another measures output.

Save Some Steps with this Arduino Rapid Design Board

We’re all familiar with the wide variety of Arduino development boards available these days, and we see project after project wired up on a Nano or an Uno. Not that there’s anything wrong with that, of course, but there comes a point where some hobbyists want to move beyond plugging wires into header sockets and build the microcontroller right into their project. That’s when one generally learns that development boards do a lot more than break the microcontroller lines out to headers, and that rolling your own design means including all that supporting circuitry.

To make that transition easier, [Sean Hodgins] has come up with a simple Arduino-compatible module that can be soldered right to a PCB. Dubbed the “HCC Mod” for the plated half-circle castellations that allow for easy soldering, the module is based on the Atmel SAMD21 microcontroller. With 16 GPIO lines, six ADCs, an onboard 3.3 V regulator, and a reset button, the module has everything needed to get started — just design a PCB with the right pad layout, solder it on, and surround it with your circuitry. Programming is done in the familiar Arduino IDE so you can get up and running quickly. [Sean] has a Kickstarter going for the modules, but he’s also releasing it as open source so you’re free to solder up your own like he does in the video below.

It’s certainly not the first dev module that can be directly soldered to a PCB, but we like the design and can see how it would simplify projects. [Sean] has shown us a lot of builds before, like this army of neural net robots, so he’ll no doubt put these modules to good use.

Animatronic Puppet Takes Cues From Animation Software

Lip syncing for computer-animated characters was simplified long ago. You draw a set of lip shapes for vowels and other sounds your character makes and let the computer interpolate how to go from one shape to the next. But with physical, real-world puppets, all those movements have to be done manually, frame by frame. Or do they?

Billy Whiskers: animatronic puppet

Stop motion animator and maker/hacker [James Wilkinson] is working on a project involving a real-world furry cat character called Billy Whiskers and decided that Billy’s lips would be moved one frame at a time using servo motors under computer control while [James] moves the rest of the body manually.

He toyed around with a number of approaches for making the lip mechanism before coming up with one that worked the way he wanted. The lips are shaped using guitar wire soldered to other wires going to servos further back in the head. Altogether there are four servos for the lips and one more for the jaw. There isn’t much sideways movement but it does enough and lets the brain fill in the rest.

On the software side, he borrows heavily from the tools used for lip syncing computer-drawn characters. He created virtual versions of the five servo motors in Adobe Animate and manipulated them to define the different lip shapes. Animate then does the interpolation between the different shapes, producing the servo positions needed for each frame. He uses an AS3 script to send those positions off to an Arduino. An Arduino sketch then uses the Firmata library to receive the positions and move the servos. The result is entirely convincing, as you can see in the trailer below. We’ve also included a video which summarizes the iterations he went through to get to the finished Billy Whiskers, or you can just check out his detailed website.
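Firmata deliberately keeps the microcontroller side thin, so the receiving sketch can be quite small. Below is one plausible shape for it, modeled on the standard Firmata examples: each incoming analog message is treated as a servo index and a target angle. The pin mapping and message convention are assumptions, since [James]'s actual sketch and AS3 script may pack the data differently.

#include <Servo.h>
#include <Firmata.h>

// One plausible receiving sketch, based on the standard Firmata examples:
// treat each incoming analog message as "servo number, position".
// The pin mapping and message convention are assumptions.

const int NUM_SERVOS = 5;                      // four lip servos plus the jaw
const byte SERVO_PINS[NUM_SERVOS] = {3, 5, 6, 9, 10};

Servo servos[NUM_SERVOS];

void positionCallback(byte servoIndex, int value) {
  if (servoIndex < NUM_SERVOS) {
    servos[servoIndex].write(constrain(value, 0, 180));  // value is an angle
  }
}

void setup() {
  for (int i = 0; i < NUM_SERVOS; i++) {
    servos[i].attach(SERVO_PINS[i]);
  }
  Firmata.setFirmwareVersion(2, 5);
  Firmata.attach(ANALOG_MESSAGE, positionCallback);  // host sends index + value pairs
  Firmata.begin(57600);
}

void loop() {
  while (Firmata.available()) {
    Firmata.processInput();                    // dispatches to positionCallback()
  }
}

With messages like these, the host-side script just streams one position per servo for each animation frame.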

[James]’s work shows that there are many ways to do stop motion animation, perhaps a part of what makes it so much fun. One of those ways is to 3D print a separate object for each character shape. Another is to make paper cutouts and move them around, which is what [Terry Gilliam] did for the Monty Python movies. And then there’s what many of us did when we first got our hands on a camera: move random objects around on our parents’ kitchen table and shoot them one frame at a time.