Meet Morphos, the driving shape-recognising robot. Morphos can both follow shapes and recognise them, and it can be taught many different shapes. Morphos is the result of the Lego beyond Toys module, which aims to use Lego as a prototyping tool for complex intelligent products.
Morphos was realised by combining knowledge from three different fields: mechanics, feedback systems, and artificial intelligence. In small master classes on these three topics we learned how to combine these fields into a working product.
Morphos recognises shapes using an infrared tracker that follows the edges of the shape. A feedback system keeps it close to the edges without wandering off, while a second feedback system tracks its position by monitoring how far each wheel has rotated while driving. After completing the drive, it sends its sensor input to a computer running a neural network trained to recognise various shapes in the sensor data.
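The position-tracking feedback described above amounts to differential-drive odometry: from the rotation of each wheel, the robot can update an estimate of where it is and which way it is heading. The sketch below illustrates the idea in plain Java; the class, method names, and parameters are illustrative, not the actual Morphos code.

```java
// Differential-drive odometry: estimate the robot's pose from wheel rotations.
// All names and values here are illustrative, not taken from the Morphos software.
public class Odometry {
    private final double wheelRadius; // wheel radius in metres
    private final double trackWidth;  // distance between the two wheels in metres
    private double x = 0, y = 0, heading = 0; // current pose estimate

    public Odometry(double wheelRadius, double trackWidth) {
        this.wheelRadius = wheelRadius;
        this.trackWidth = trackWidth;
    }

    /** Update the pose from the rotation (in radians) each wheel made since the last call. */
    public void update(double leftRotation, double rightRotation) {
        double dLeft = leftRotation * wheelRadius;        // distance left wheel travelled
        double dRight = rightRotation * wheelRadius;      // distance right wheel travelled
        double dCentre = (dLeft + dRight) / 2.0;          // distance of the robot's centre
        double dHeading = (dRight - dLeft) / trackWidth;  // change in heading
        // Advance along the average heading during this step.
        x += dCentre * Math.cos(heading + dHeading / 2.0);
        y += dCentre * Math.sin(heading + dHeading / 2.0);
        heading += dHeading;
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeading() { return heading; }
}
```

Logging the pose after every update produces exactly the kind of trace that can later be handed to the shape-recognition step.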
Elsewhere in the report it is mentioned that Neuroph was used, which would mean the student did not really program anything himself. I do not know what the thinking behind this is.
The report briefly describes the complete process, as instructed during the final meeting. I worked with Java for the neural network learning system, which means I created custom software to make the prototype function correctly; some interesting snippets are included in the report. However, I relied on the Neuroph library to take care of the neural network calculations, as it would make no sense for me to write a neural network algorithm from scratch.
This was also demonstrated during the final presentation, and I created a short video to give an overview of the device’s functioning.
The experiment and the report showed a good understanding of the problem and of how a solution should be constructed. Several recommendations for testing the validity of the shape recognition were given during the demo, and the report shows that they were understood and implemented before the final version. However, more work on the documentation is recommended.
I agree there were still many things that could be done to improve the robot's functioning. At the same time, this is exactly the point where design turns into engineering: adapting the software to make it more efficient is an engineering task and does not contribute to my understanding of the "design" of a complex intelligent product.
In general, the master classes on mechanics and feedback theory were enlightening; they contained very useful theory that was immediately applicable within the module. The freedom we were given allowed us to go in depth into our subject. This generates a lot of knowledge (the intended goal of the module), but it also has a downside (fewer meetings). I am still having trouble defining the "social impact" of the deliverable: how is the current learning robot relevant to society? We are designers, after all.
Lego as a prototyping tool
Lego was the obvious prototyping platform, hence the name of the module. The famous plastic bricks and rods were no secret to me; I was already familiar with the capabilities of Lego, but I had never worked with the NXT bricks before. The NXT bricks proved to be the perfect addition to bring a mechanical Lego construction to life. I was already experienced with Java, which is used to program the interactive behaviour into the bricks. The leJOS firmware made it easy to write complex behaviours, but it contrasts with the "fool-proof" and easy-to-use plastic Lego parts.
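A typical "complex behaviour" of the kind mentioned above is the edge-following routine, which can be written as a simple proportional controller: the further the sensor reading drifts from the edge, the harder the robot steers back. The sketch below shows only the control law in plain Java, leaving out the leJOS-specific sensor and motor calls; all names and gains are illustrative assumptions, not the module's actual code.

```java
// Proportional edge-following: steer so the sensor reading stays on the
// boundary between the shape's edge and the background.
// Class name, target value, and gain are illustrative, not the real Morphos code.
public class EdgeFollower {
    private final double target;    // sensor reading when exactly on the edge
    private final double gain;      // proportional gain: how sharply to correct
    private final double baseSpeed; // forward speed when no correction is needed

    public EdgeFollower(double target, double gain, double baseSpeed) {
        this.target = target;
        this.gain = gain;
        this.baseSpeed = baseSpeed;
    }

    /** Returns {leftSpeed, rightSpeed} for the current sensor reading. */
    public double[] motorSpeeds(double sensorReading) {
        double error = sensorReading - target;   // how far off the edge we are
        double correction = gain * error;        // proportional steering correction
        return new double[] { baseSpeed + correction, baseSpeed - correction };
    }
}
```

On the NXT, the returned speeds would be fed to the two drive motors in a tight loop; tuning the gain is what keeps the robot on the edge without oscillating off it.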
Because of their promising nature we decided to work with these "neural networks". This proved difficult, since the available tools weren't specifically tailored to our purposes. I ended up writing specific software for training and evaluating a custom neural network. This took by far the most time, but in the end it delivered accurate results. Writing a program to read, write, send, and receive data was just one of the challenges encountered when working with neural networks.
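The "calculating" half of that custom software comes down to a feedforward pass: each layer multiplies its input by a weight matrix, adds a bias, and applies an activation function. Neuroph performs this internally; the self-contained sketch below is only an illustration of the computation, not the module's actual implementation, and all names are assumptions.

```java
// Minimal feedforward pass for a two-layer perceptron, for illustration only.
// A library such as Neuroph does the equivalent work (plus training) internally.
public class FeedForward {
    /** Sigmoid activation, as used by a typical multilayer perceptron. */
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    /** One layer: out[j] = sigmoid(bias[j] + sum_i input[i] * weights[j][i]). */
    static double[] layer(double[] input, double[][] weights, double[] bias) {
        double[] out = new double[weights.length];
        for (int j = 0; j < weights.length; j++) {
            double sum = bias[j];
            for (int i = 0; i < input.length; i++) {
                sum += input[i] * weights[j][i];
            }
            out[j] = sigmoid(sum);
        }
        return out;
    }

    /** Full forward pass: hidden layer, then output layer. */
    static double[] predict(double[] input,
                            double[][] w1, double[] b1,
                            double[][] w2, double[] b2) {
        return layer(layer(input, w1, b1), w2, b2);
    }
}
```

For shape recognition, the input array would hold the recorded sensor trace and each output neuron would score one trained shape; the highest-scoring output is the recognised shape.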