Real-Life Redstone Lamp Replica Controlled by a Raspberry Pi

Early in my Minecraft gameplay I began making redstone contraptions. For those who don’t know Minecraft: you can use resources in the game to build working circuits, much like real electronics. People have extended this feature to build entire working computers out of nothing but redstone logic.

Redstone lamp (activated)

I only used redstone to make traps and novelty machines, but the strong connection between redstone and electronics led me to imagine extending these machines into the real world. I figured the easiest thing to make was the redstone lamp, pictured to the right: a block that lights up when powered. My real-life replica does the same thing: it lights up whenever a redstone lamp ingame is lit. Here is a video of how it works:

I’ll describe how I got to a working replica in a few stages.


I am not the best at getting started with software projects, so I enlisted the help of Vince, who was hanging out a bunch at Hive76. We made a quick prototype with a Python Minecraft client called pyCraft, an Arduino, a transistor, and a papercraft redstone lamp. You can see that first success here.

While I worked on the physical stuff, Vince moved away and Kyle Yankanich stepped in to help me finalize things. pyCraft connects to any server as a simple chat client, in our case as the user LAMPBOT. Kyle wrote a plugin for pyCraft that listens for a whisper of “on” or “off” and sets pin 16 of the Raspberry Pi’s GPIO high or low, respectively. You can download my fork of pyCraft here, with Kyle’s plugin and my shell script to start the client. I set my home server to Offline mode so that I wouldn’t need to purchase another Minecraft account.
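The plugin itself isn’t reproduced here, but the core whisper-to-GPIO logic is simple enough to sketch. This is a hypothetical reconstruction, not Kyle’s actual code: `parse_whisper` and `handle_whisper` are names I made up, and RPi.GPIO (the standard Raspberry Pi GPIO library) is stubbed out so the parsing logic can run off the Pi.

```python
import re

LAMP_PIN = 16  # board-numbered pin driving the transistor base

try:
    import RPi.GPIO as GPIO  # only available on the Pi itself
except ImportError:
    GPIO = None  # lets the parsing logic run (and be tested) anywhere


def parse_whisper(message):
    """Return True for an "on" whisper, False for "off", None otherwise."""
    match = re.search(r"\b(on|off)\b", message.lower())
    if match is None:
        return None
    return match.group(1) == "on"


def handle_whisper(message):
    """Drive the lamp pin high or low based on a whispered command."""
    state = parse_whisper(message)
    if state is None or GPIO is None:
        return state
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(LAMP_PIN, GPIO.OUT)
    GPIO.output(LAMP_PIN, GPIO.HIGH if state else GPIO.LOW)
    return state
```

Anything that isn’t an “on” or “off” whisper is simply ignored, so stray chat on the server can’t toggle the lamp.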


Redstone lamp replica

For the replica, I did my best to turn pixels into straight lines. I designed a laser-cuttable box in six parts with finger joints on the edges. I used 16 finger joints because a block is 16 pixels wide. The material is MDF with a zebra wood veneer laminated on top. I laser-cut the six sides and glued all but one together. I acquired some amber cathedral glass from Warner Stained Glass, cut it, and glued it in place with silicone adhesive. The RPi is attached to an MDF board sitting diagonally in the cube. The LEDs were torn from inside a failbot and glued around the RPi to light up the inside as much as possible.

To switch the LEDs on and off, the signal from the RPi GPIO controls an NPN transistor. There is a Fritzing wiring diagram of the electronics here. On the NPN transistor, the Collector connects to the negative lead from the LEDs, the Base connects through a 100KΩ resistor to pin 16, and the Emitter goes to ground on the LED power supply.

There’s no room for a power regulator, so there are two power sources and ethernet running through a hole in the back.

Ingame Stuff

Ingame redstone

To trigger the lamp, command blocks are used ingame as you can see to the left. When a lever is thrown powering a specific redstone lamp, we also power a command block that sends the server command:
/tell LAMPBOT on
We also send the inverted signal to a different command block that outputs:
/tell LAMPBOT off
This can be used on any server with no mods. On a public server you would want a real Minecraft account for the lamp, so you don’t have to run in Offline mode and expose your server to cracked clients. The server this was designed for runs Minecraft 1.6.4 now, but in 1.7.2 the /testforblock command and a clock could also trigger the lamp.

I really hope you take what we have done here and continue to connect your Minecraft creations to the real world. Enjoy!

Experiments in Garden Hose Hydraulics

I recently decided to make a proof-of-concept for a simple hydraulics kit. Ultimately you would be able to take this kit, get some standard PVC pipe from the local hardware store, and very quickly build your own simple hydraulic devices. Use it to learn about the principles of hydraulics while staying cool on a hot summer day, or use it to power your homemade tools like simple presses, lifts, or even an articulated digging arm.

Double-acting PVC hydraulic cylinder and control valve

The pressure in your typical garden hose is nominally around 40 psi, so my first hydraulic cylinder could develop about 125 pounds of force if it had really good seals. This is a proof of concept, so I didn’t bother with o-rings or anything; it leaks like crazy and thus can’t quite develop that kind of force, although it is quite strong. Moving from a 2″ to a 3″ hydraulic cylinder would bring this up to about 282 pounds of force, not too shabby for garden hose power!
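Those force figures are just water pressure times piston area. A quick sketch of the arithmetic (the function name is mine, not part of the project, and it ignores seal friction and the area taken up by the ram):

```python
import math


def cylinder_force(pressure_psi, bore_inches):
    """Ideal force (lbf) a hydraulic cylinder develops: pressure times piston area."""
    piston_area = math.pi * (bore_inches / 2) ** 2  # square inches
    return pressure_psi * piston_area


# At a nominal 40 psi of garden-hose pressure:
#   2" bore -> about 125 lbf
#   3" bore -> about 282 lbf
```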

The hydraulic cylinder is made of standard PVC pipe (2″ for the cylinder and 1.5″ for the ram), although I had to use my lathe to turn down a 1.5″ pipe cap to fit inside the outer cylinder. The control valve is made of 1/2″ CPVC fittings and tubing, with the exception of the spool, which is a length of 1/2″ solid PVC rod. I had to turn down the spool on my lathe to the appropriate profile and drill out the valve to fit it. The fit is fairly poor, but it shows that the concept definitely works. Eventually I am hoping to have all the custom parts injection molded to get the unit cost low enough that it would make a good toy for DIY doodlers and budding engineers everywhere.

Magnetic Dip, Illustrated

I was surprised at the absence of a concise illustration of magnetic dip on the internet, so I cobbled together a short 3D animation using the excellent free software Blender. Magnetic dip is a very simple phenomenon, but one which can quickly get confusing since it deals with 3-dimensional fields through space that can be difficult to visualize. The gist of it is that the Earth’s magnetic field lines are only parallel with the ground near the equator; everywhere else the field lines actually dive downwards into the Earth by some angle, the steepest of which are found at the magnetic North and South poles. In the Philly area this angle is surprisingly steep, about 67 degrees below the horizontal: it’s actually more vertical than horizontal! This means that in areas far from the equator, tilting a compass to the East or West will result in an error, since the needle can align more closely with the magnetic field by deviating from the projection of the field lines onto the ground (which is what we normally think of as North). Tilt to the West in the Northern hemisphere, and the compass needle will tilt to the West as well.
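For a simple dipole model of the Earth’s field, the dip angle I relates to magnetic latitude λ by tan I = 2 tan λ. A quick sketch (my own illustration, not part of the animation; the real field isn’t a perfect dipole and magnetic latitude isn’t geographic latitude, which is why this underestimates the ~67 degrees actually measured around Philly):

```python
import math


def dip_angle(magnetic_latitude_deg):
    """Dip angle in degrees under the simple dipole model: tan(I) = 2 * tan(lat)."""
    lat = math.radians(magnetic_latitude_deg)
    return math.degrees(math.atan(2 * math.tan(lat)))


# At the equator the field is level (dip = 0). Near 40 degrees latitude the
# dipole model already gives about 59 degrees, steeper than 45: more vertical
# than horizontal, just as described above.
```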

It gets even more confusing when you are talking about traditional non-gyro-stabilized compasses, such as those normally found in small aircraft. The compass needle is usually weighted carefully such that it rests level with the ground under normal circumstances, but this means that when you accelerate in certain directions, that weight’s inertia keeps it lagging behind somewhat, resulting in yet more compass errors. These acceleration effects are not directly the result of magnetic dip, but they are partly the result of an incomplete attempt to deal with magnetic dip. Normally pilots are just taught to remember that this happens and vaguely what to do about it, but if you level out and stop accelerating the problem takes care of itself.

Blender allowed me to put this simple animation together in a very short time. It has a challenging learning curve, but it is a very powerful set of tools. Hopefully this animation will be useful to somebody out there other than me.

My Blade mCP X Mods

She ain't pretty, but she flies great

I’m a noob when it comes to RC helicopters. I got a Syma S107 for about $30 a year or two ago, and it is incredibly stable while being ridiculously bulletproof. I can fly it into walls, and I’ve never replaced a part. If it’s lying on its side on the floor, I can often get it to right itself by just gunning the throttle. (Do I recommend it? No. Do I do it? Sometimes.) A wire once fatigued off the board, but that was the only thing I’ve had to fix. It’s a hell of a bargain and it’s treated me great, but being so stable and easy to fly, it has some inherent performance limitations. So I decided to step up a few levels.

I heard about the Blade mCP X helicopter, the first “real” helicopter of its size that came stock with 3-axis attitude-holding stabilization electronics. It weighs maybe double what my S107 does but its performance is amazing. It can do inverted flight, flips, all sorts of crazy stuff. That is, when supplied with an appropriately-skilled pilot, which I certainly am not. But I can fly it in my backyard in 30 mph winds, and this little beast can take it — pretty impressive for something that weighs the same as a good quality 9V battery. Being such a noob, I crash constantly, but I can usually patch things up without needing to buy replacement parts. Here is the list of mods I’ve performed on my helo so far, mostly out of necessity:

  • Grommet mod – tightens up the swash, reduces vibration (not my idea). Works great
  • Tail boom from mCP X2 – comes with a more aggressive tail rotor which helps with yaw authority
  • Created a simple tool to speed up resetting the main gear after crashes – just remove the battery and push this drilled-out rod over the gear hub to click it back in place, no need to remove the canopy or landing gear. I keep it zip-tied to my transmitter since I crash a lot 🙂
  • Lengthened tail boom – added perhaps 1/2″, seems to help with yaw authority
  • Added magnetic breakaway tail boom mount – after a crash the tail boom pops off instead of breaking, can be reset by simply moving it back in place and letting the magnets lock it down. Works very well, but be careful because if you have too much slack then with the right kind of crash the tail motor wires can get wrapped around the head. I’m sure I’ll keep experimenting with this one
  • Masking tape holding my canopy together? Classy
  • Hot glue holding my landing gear together? Not perfect by any means, but it keeps me flying until I buy a spare
The aluminum piece presses into the helo frame like a stock tail boom, but has an embedded magnet. The tail boom is hot-glued to a piece of bamboo skewer that fits into the channel and has a magnet attached using CA glue and baking soda. Rubber bands or o-rings might be better.

My Little Pwnies

Some more fun with Blender. Introducing… My Little Pwnies. Enjoy.

My Little Pwnies. Sure, they LOOK cute. But they can frag you like nobody's business.

The original pony models are here (the link is no longer available, but I have copies; let me know if you want one).

Philly Tech Week Events

For Philly Tech Week, we’re opening our doors every night of the week at 8pm, extending our normal Open House format to the entire week, for this week only. We have a variety of different activities planned. Check it out.

Useless Photo
It's gonna be hot!

Monday, 25th: Open Work Night
For the first night of Tech Week, we’ll be working in the space on projects together. Stop by to say hi, lend a hand, or just jibber-jabber about your own projects. This is a little different from our normal Open Houses, where we typically curb work sessions for the night.

Tuesday, 26th: Micro-controller Show and Tell
Have an Arduino, MSP430, Propeller, or other MCU project that you want to show off? Want to learn the basics of getting started with the MSP430? Come out this night and have fun with bit-twiddling, speaker-beeping, and LED-blinking.

Wednesday, 27th: Regularly Scheduled Open House + Late Night Karaoke
Our regularly scheduled social hour. We have a hacktastic “karaoke machine” running on a Macbook that lets you queue songs through our IRC channel. We don’t usually start the Karaoke until 10pm, but if enough people are interested we’ll get it started early.

Thursday, 28th: DIY/Electronic Music
Step-tone generators, electric guitar effects pedals, sequencers, keyboards. Whether or not you’ve made your own instrument, however you want to make music tonight, come on down and jam with us.

Friday, 29th: “Bricks and Grips” – Arm Wrestling/Puzzle Game Tournament
Based on a similar concept that we are not permitted to mention due to trademark issues, this game is a standard two-player, head-to-head tetromino puzzle game, where players manipulate their pieces through an arm wrestling competition on a specially designed arm-wrestling-table-shaped controller.

Saturday, 30th: Artemis Game Session
For all you trekkies out there, Artemis Spaceship Bridge Simulator is a networked multiplayer game that simulates a spaceship’s bridge, much like what you’d see on Star Trek®.

User-Literate Technology

This is a broad-concept idea that I’ve had in my head for a while and have discussed with a few people. This post is mostly a direct adaptation of those discussions. I’ve taken to calling the idea “User-Literate Technology”, mostly because, in the same way we might say that a particular person is technology-literate, we should also be able to say that a particular technology is user-literate.

In some ways, this is similar to “user-friendly”, except that it places the burden on the technology to adapt to the user, rather than on the user to adapt to the technology. Does a technology create its own gestures and idioms while seeking to make them easy to learn, or does it capture idioms that are already common in the culture for which it is intended? If it errs toward the latter, then it is “User-Literate” more than “User-Friendly”.

Before systems can become more User-Literate, they largely need to dispense with their most prevalent interface: the keyboard and mouse. The keyboard is a text and data entry tool, but as an interface into consumer computing, it is roughly 150 keys of confusion, distraction, and indirection. For example, why do we still have a Scroll Lock key on our keyboards? Scroll Lock hasn’t been a useful feature for the last 20 years; in other words, one of the most important and significant markets for consumer computing has never lived in an era that needed a Scroll Lock. It’s like issuing every new driver a buggy whip with their driver’s license.

Mice are nice for tasks that involve precise selection of elements on a 2D plane. But the mouse was designed in an era when graphical displays were not much larger than 640×480 pixels. Nowadays, I have a laptop with a native resolution of 1600×900, and I can hook up a second monitor to double that space. We’re talking about screen real estate that is five to ten times larger than when the mouse first became popular. To give you an idea of what that means, take a look at the 640×480 highlighted area on my desktop screenshot (and yes, I paid for Photoshop).

Imagine using only the lower-left corner
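A quick back-of-the-envelope check on that “five to ten times” figure, comparing pixel counts (the function is just for illustration):

```python
def area_ratio(width, height, base_width=640, base_height=480):
    """How many 640x480 screens' worth of pixels fit in a width x height display."""
    return (width * height) / (base_width * base_height)


# A single 1600x900 panel has roughly 4.7x the pixels of a 640x480 screen;
# adding a second identical monitor brings that to roughly 9.4x.
```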

Computing’s usability has advanced more through a few huge leaps than through steady incremental improvement. Check out this screenshot of the Xerox Star GUI; I remind you that this is from 1981. Try to identify any functional elements of modern computer interfaces that are not in this image (protip: from a technical perspective, there aren’t any; they are all adaptations of concepts shown here).

Xerox Star GUI
The Graphical User Interface from the Xerox Star experimental OS, 1981

The early GUI interfaces like Star and its clones (including Macintosh and Windows) got something very right: they made functionality discoverable. They did this in two primary ways: by providing visual cues on the screen immediately in the user’s field of view, and by providing multiple access points to the same functionality to accommodate users who work in different ways. Having a menu option labeled “Help” is very inviting, but advanced users learn to ignore large portions of screen text, so it’s very important to make systems that cater to both the wide-eyed (literally) newb and the hardened veteran.

Regardless, monitors are only good if the user A) has a fully functional visual sense, and B) is able to devote their attention to the display. If the user is blind or distracted by other visual tasks (say, operating a heavy machine) then the display is a large, hot paperweight on the desk.

Luckily, we are starting to see some very basic work in this area hitting the consumer market. Between systems like the iPad and the hacktastic stuff being done with the Kinect, there is a lot going on with removing computing from its keyboard-and-mouse hegemony. Still, in many cases these systems rely on the user memorizing gestures and command sequences. If a user has to do something unnatural, even if it is detected through advanced motion sensing and image processing, then it might as well be any other button-pushing interface.

This is why I never got into the Nintendo Wii. Yes, the motion tracking of the controller was a wonderful sweet spot between price and precision. Despite that, few, if any, of the games did anything actually unique with it. Instead of waggling a joystick permanently affixed to a controller base and mashing buttons, you were… waggling a joystick in mid-air and mashing buttons. The user still had to learn new motion patterns and adapt to the system.

I think Google kind of picked up on the absurdity of most modern motion-tracking systems with this year’s April Fools prank, “Gmail Motion”. That said, I think there are some good examples of user-literate technology on the market already.

I have a Wacom tablet here that is not only pressure- but also tilt-sensitive. I’ve found that the primary training hang-up is the disconnect between moving the stylus in one location and the drawing marks showing up in another; without strong hand-eye coordination that can be difficult to adjust to. Wacom has had LCD displays for a while now with the full touch-and-tilt sensitivity built in. I can’t imagine how amazing working with them must be (and probably won’t for a while: the smallest one is only 12″ across and costs nearly $1,000, and the one I would actually want runs two kilobucks).

There is a weather service run by MIT, called JUPITER, with a natural language processor that you can call on your phone. Try as I might, I couldn’t figure out how to trip the thing up. Even with a fake Southern accent (a reasonable one, though; I’ve spent enough time in the South to know what people actually sound like) I couldn’t baffle it. Anything it faltered on, I had to admit a human would have had a hard time understanding anyway. Its best feature was context tracking: you could ask for the weather on a certain day in a certain city, receive it, then make an extremely contextual query like “what about the day after?” and it would get it right; “and the next day?” and BAM, weather forecastery in your ear. I heard about this thing over 5 years ago; why don’t we have flying cars yet? I understand the technology was based on a DARPA project used for automated logistics in battlefield scenarios. People getting shot at don’t have time to remember how to talk to a computer, so they built a computer that could understand a screaming, cussing US Marine.

My sister clued me in to a project being developed by a team of two 11th graders in Portland, OR. They are developing voice-emotion recognition technology, and they’ve already won the Siemens Science Award in the team category. You talk into a microphone and the computer judges the emotional state you were in when you spoke. They currently envision a wristwatch for autistic children who have difficulty assessing others’ emotions: the watch would flash an emoticon indicating the emotional state of the person the child is talking to.

So what is the point of all of this talk? I am organizing a symposium/exposition for User-Literate Technology. I want it to be a springboard for starting to talk about technology that adapts to and understands how people work, rather than having artificial systems that strive to be easy to learn. Hopefully, we can have it going either by the end of the year or by this time next year. I’d like it to be a multi-disciplinary event, with equal participation from industry and academics, from artists and computer scientists and engineers. If you or your organization is interested in participating, you can reach me through the gmail with the name “smcbeth”.

We haven’t seen a major innovation in human-computer interaction in over 30 years. It’s time to start working on the problem.

Open Source Rendering with Blender, LuxRender, and SmallLuxGPU

There are some outstanding new open-source add-ons for Blender, one of our favorite open-source 3D rendering/simulation/animation programs.

The first, LuxRender, is a physically based light renderer. It’s currently limited to CPU rendering, but it creates enormously realistic lighting based on physical equations that describe the behavior of light. An amazing new feature is that it stores the contribution of each light to each pixel during rendering, so you can modify the rendered image photorealistically and non-destructively without re-rendering the entire scene.
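Because the per-light contributions are stored separately, recombining them into a final image amounts to a weighted sum over per-light buffers. Here is a toy sketch of that idea, with made-up buffers and plain Python lists for clarity (LuxRender’s actual implementation and data layout will of course differ):

```python
def composite(light_layers, gains):
    """Recombine per-light contribution buffers into one image.

    light_layers: one image per light source (each a flat list of pixel
                  intensities), as accumulated during rendering.
    gains: per-light multipliers the artist can tweak after rendering.
    """
    num_pixels = len(light_layers[0])
    image = [0.0] * num_pixels
    for layer, gain in zip(light_layers, gains):
        for i, value in enumerate(layer):
            image[i] += gain * value
    return image


# Dimming the second light to 50% needs no re-render, just a re-sum:
#   composite([key_light, fill_light], [1.0, 0.5])
```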

The second, SmallLuxGPU, is even more experimental, but it can harness the full power of your GPU for unparalleled rendering speed on highly photorealistic scenes. Even better, with SLG you can interact with your scene in realtime to get just the view you want.

SmallLuxGPU v1.6 (OpenCL) from David Bucciarelli.

And here are some examples of renders we’ve done in the past few days. Keep in mind, these are entirely synthetic images. Jump over to flickr to see them at higher resolution.