Tuesday, 12 December 2017

M1ll13 robot step 2.1

While I await a very slow delivery of a 3D printer (yes, I decided to go ahead and buy one and give it a go), I switched back to the software side of things.

I decided that I need to do some work with the XBox Kinect and ROS.

After a bit of googling around, I saw that because I've decided to use the ROS "Kinetic" version rather than the older "Indigo" version that everyone else has previously used, I'd be figuring this out for myself, and who knows, I might even help some other people out along the way.

I got a bit distracted, and it looked like I needed to set up the robot simulator software.

Apparently I need to install MoveIt! - so, time to fire up the Raspi3, drop to a Terminal and type:

$ sudo apt-get install ros-kinetic-moveit
$ source /opt/ros/kinetic/setup.bash

(and then !boom! 2hrs of power-cuts just hit my area, probably something to do with the snow, etc...)


and then I figured out that this was a misdirection.  MoveIt! wasn't going to help me with the XBox Kinect (unless I really missed something obvious here?).

A quick wander back to the ROS Wiki..... and I see a reference to needing OpenCV3?  (red herring! not needed)

...and then I realised, I've been distracted by work/work in my personal time and I've completely missed the point of what I was trying to do!

If I go back to the setup of the XBox Kinect that I originally performed here.
I notice that I actually explained it to myself previously:

TERM2: Start the node for the Kinect.  A ROS node is a bit like an "IoT module": it does its own thing, gathering/capturing data, and if things are subscribed to it, it publishes this out to whoever is listening.  For us here, that's coordinated by the roscore/master.   The topic sub/pub concept shouldn't be new; we've been doing it with message queues (MQTT) for years now....
$ roslaunch freenect_launch freenect.launch

started roslaunch server http://rpi3ubuntu:46776/
 * /camera/.....
core service [/rosout] found
[ INFO] Starting a 3s RGB and Depth stream flush.
[ INFO] Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A0033333335A'
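That MQTT comparison can be sketched in plain Python with no ROS needed at all — a tiny, hypothetical topic "bus" standing in for the roscore/master, routing messages from publishers to subscriber callbacks:

```python
# Minimal illustration of the topic publish/subscribe pattern that
# ROS nodes (and MQTT clients) use.  Plain Python, no ROS required.
from collections import defaultdict

class TopicBus:
    """Stands in for the roscore/master: routes messages by topic name."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
# A "node" subscribes to the Kinect's RGB topic...
bus.subscribe("/camera/rgb/image_color", received.append)
# ...and the camera "node" publishes a frame to whoever is listening.
bus.publish("/camera/rgb/image_color", {"width": 640, "height": 480})
print(received[0]["width"])  # 640
```

Obviously the real roscore does this across processes and machines, but the shape of it is the same.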

TERM3: Now that TERM2 should be publishing topics, we can run a command to list them, and then look at the RGB image from the Kinect with image_view:
$ rostopic list

$ rosrun image_view image_view image:=/camera/rgb/image_color
libEGL warning: DRI2: failed to authenticate
init done
[ INFO] Using transport "raw"

All I need to do is make a serverNode that starts freenect_launch, and a clientNode that subscribes to specific topics published under /camera/xxx, extracts some of those values and publishes them so the serverNode can pick them up and act accordingly.
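When the clientNode gets to the depth topic, the data will arrive as a raw byte buffer; freenect publishes depth as 16-bit values per pixel, in millimetres (the 16UC1 encoding — an assumption on my part for now, to be verified against the actual topic). Here's a sketch of pulling the centre pixel's depth out of such a buffer; the frame below is synthetic test data, not real Kinect output:

```python
# Sketch: extract the centre pixel from a depth image delivered as raw
# bytes, assuming 16-bit unsigned depth values in millimetres (16UC1).
import struct

def centre_depth_mm(data, width, height):
    """Return the depth (mm) of the centre pixel of a 16UC1 image."""
    index = (height // 2) * width + (width // 2)
    # '<H' = little-endian unsigned 16-bit; each pixel is 2 bytes.
    return struct.unpack_from("<H", data, index * 2)[0]

# Fake a tiny 4x4 depth frame, all zeros except the centre pixel.
width, height = 4, 4
pixels = [0] * (width * height)
pixels[(height // 2) * width + (width // 2)] = 1234  # i.e. 1.234 m
frame = struct.pack("<%dH" % len(pixels), *pixels)
print(centre_depth_mm(frame, width, height))  # 1234
```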

There's the first mission then.....detect an object, work out the 2D/3D distance, etc.. and then trigger the motors to track the object with the XBox Kinect to keep the object 'in vision' (does the XBox Kinect have motors inside to do that? if not, I'll rig up a platform with some micro-servos to do that)

okay, so I was being a bit over-complex/dumb.

The XBox Kinect publishes to the topics listed when you run:
$ rostopic list

Then, to see the values that are being published, you can use:
$ rostopic echo <topic you want to know more about>

$ rostopic echo /camera/rgb/camera_info

gives output like:
header:
  seq: 251
  stamp:
    secs: 1513072954
    nsecs: 801882376
  frame_id: camera_rgb_optical_frame
height: 480
width: 640
distortion_model: plumb_bob
D: [0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False

After checking this info about topics, we can find out a bit more about the structure of the above.

$ rostopic type /camera/rgb/camera_info
sensor_msgs/CameraInfo

$ rosmsg show sensor_msgs/CameraInfo
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
uint32 height
uint32 width
string distortion_model
float64[] D
float64[9] K
float64[9] R
float64[12] P
uint32 binning_x
uint32 binning_y
sensor_msgs/RegionOfInterest roi
  uint32 x_offset
  uint32 y_offset
  uint32 height
  uint32 width
  bool do_rectify
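That K field, by the way, is the 3x3 intrinsic camera matrix, stored flat in row-major order as [fx, 0, cx, 0, fy, cy, 0, 0, 1]. A quick sketch of what it's for — projecting a 3D point in the camera frame down to pixel coordinates. The fx/cx numbers below are made up (ballpark for a 640x480 camera), not my Kinect's actual calibration:

```python
def project(point, K):
    """Pinhole projection: 3D camera-frame point -> (u, v) pixel coords.
    K is the flat row-major 3x3 intrinsic matrix from CameraInfo."""
    fx, cx, fy, cy = K[0], K[2], K[4], K[5]
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Made-up intrinsics, roughly plausible for a 640x480 camera.
K = [525.0,   0.0, 319.5,
       0.0, 525.0, 239.5,
       0.0,   0.0,   1.0]
# A point 1 m ahead, dead centre, lands on the principal point.
print(project((0.0, 0.0, 1.0), K))  # (319.5, 239.5)
```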

well....that's a bit more like it!..... time to investigate further.....

Looks like I'm getting closer to what I was looking for:


(and I now see that my earlier reference to MoveIt! wasn't as crazy as I thought)

I see that if I run:
$ rosrun image_view image_view image:=/camera/rgb/image_color

I am presented with this image (moving the mouse over the colours shows the different RGB values):

If I then run:
$ rosrun image_view disparity_view image:=/camera/depth/disparity

I am presented with the depth from the XBox Kinect represented as colours.  As you can see, my hand is closer, so it shows up in red:
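The red-is-close colouring makes sense once you know that disparity is inversely proportional to depth: Z = f * B / d, where f is the focal length in pixels and B the baseline between the Kinect's IR projector and camera. A quick sketch with made-up numbers — the ~525 px focal length and 7.5 cm baseline are ballpark Kinect figures I've seen quoted, not measured values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d.  Bigger disparity => closer object."""
    if disparity_px <= 0:
        return float("inf")  # no match / infinitely far away
    return focal_px * baseline_m / disparity_px

FOCAL_PX = 525.0    # assumed focal length in pixels
BASELINE_M = 0.075  # assumed projector-to-camera baseline in metres

near = depth_from_disparity(40.0, FOCAL_PX, BASELINE_M)
far = depth_from_disparity(10.0, FOCAL_PX, BASELINE_M)
print(round(near, 3), round(far, 3))  # the hand (big disparity) is nearer
```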

okay, so now I think I can start to make progress on capturing the published topic data and doing something useful with it via code..... we'll see....

and I found a YouTube video too:

as pointed out in the video, I also need to do:
$ sudo apt install ros-kinetic-depthimage-to-laserscan

.....and then I broke it all!  (I foolishly did something in relation to OpenCV3 and gmapping, and now I just get tons of errors.  Great, unpicking time.)

I'm so glad I documented the steps back here: https://tonyisageek.blogspot.co.uk/2017/11/m1ll13-robot-step1.html - time to wipe the SD Card, re-install everything and get back to where I was before I broke everything.  Hey, it's all part and parcel of the experience, isn't it :-)

So, a few hours and a fresh (re)install onto the SD Card later, with everything set back up again and the same tests as above passing, I moved on to running the rviz software:

$ rviz
(remember to run the source command in new terminal windows first)

Then load the .rviz file downloaded from the YouTube video above and there we have it, a weird view of the XBox Kinect using LaserScan:

The order of running in 3 different terminals was important.
TERM1: $ roscore &
TERM2: $ roslaunch freenect_launch freenect.launch
TERM3: $ roslaunch depthimage_to_laserscan.launch (this is the file downloaded from YouTube video)
TERM1: $ rviz
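For reference, the launch file from the video presumably just wraps the depthimage_to_laserscan node with a couple of remaps. A hypothetical minimal version might look something like this (the topic names and parameter are my guesses for freenect_launch, not the actual downloaded file):

```xml
<launch>
  <!-- Hypothetical minimal launch file: convert the Kinect depth image
       into a LaserScan.  Topic names are guesses for freenect_launch. -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth/image_raw"/>
    <remap from="camera_info" to="/camera/depth/camera_info"/>
    <param name="output_frame_id" value="camera_depth_frame"/>
  </node>
</launch>
```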

This then shows the LaserScan (white line) where the scan is "hitting a surface"; this is good for working out obstacles in the way, etc...
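Once the LaserScan is being published, finding the nearest obstacle and its bearing is just a walk over the ranges array. A plain-Python sketch mirroring the fields of a sensor_msgs/LaserScan message (the scan values here are synthetic, not from the Kinect):

```python
import math

def nearest_obstacle(ranges, angle_min, angle_increment):
    """Return (distance_m, bearing_rad) of the closest valid reading.
    ranges/angle_min/angle_increment mirror sensor_msgs/LaserScan."""
    best_i = min(
        (i for i, r in enumerate(ranges) if math.isfinite(r)),
        key=lambda i: ranges[i],
    )
    return ranges[best_i], angle_min + best_i * angle_increment

# Synthetic 5-beam scan over -0.5..+0.5 rad; closest surface dead ahead.
ranges = [2.0, 1.5, 0.8, 1.6, 2.1]
dist, bearing = nearest_obstacle(ranges, -0.5, 0.25)
print(dist, bearing)  # 0.8 m at 0.0 rad (straight ahead)
```

From there it's a short hop to "steer away from the smallest range", which is exactly the obstacle-avoidance behaviour I'm after.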

...and this is how the Terminator M1ll13 is going to "view" the world.....

Right, that's me done for now.... time to figure out how to get that little lot all working from ROS code in Python or C++... (note to self: I did NOT need to do anything extra with OpenCV3)

Monday, 4 December 2017


Baby Driver is a good movie, but this is not what this article is about....

I extracted the bits that I thought were eye-opening from the following article:

Soul Machines wants to produce the first wave of likeable, believable virtual assistants that work as customer service agents and breathe life into hunks of plastic such as Amazon.com’s Echo and Google Inc.’s Home. https://www.soulmachines.com/

Mark Sagar’s approach on this front may be his most radical contribution to the field. Behind the exquisite faces he builds are unprecedented biological models and simulations. When BabyX smiles, it’s because her simulated brain has responded to stimuli by releasing a cocktail of virtual dopamine, endorphins, and serotonin into her system. This is part of Sagar’s larger quest, using AI to reverse-engineer how humans work. He wants to get to the roots of emotion, desire, and thought and impart the lessons to computers and robots, making them more like us.

“Since my 20s, I’ve had these thoughts of can a computer become intelligent, can it have consciousness, burning in my mind,” he says. “We want to build a system that not only learns for itself but that is motivated to learn and motivated to interact with the world. And so I set out with this crazy goal of trying to build a computational model of human consciousness.

Here’s what should really freak you out: He’s getting there a lot quicker than anybody would have thought. Since last year, BabyX has, among other things, sprouted a body and learned to play the piano. They grow up so fast.

Feeling he’d solved the riddles of the face, Sagar dreamed bigger. He’d kept an eye on advancements in AI technology and saw an opportunity to marry it with his art. In 2011 he left the film business and returned to academia to see if he could go beyond replicating emotions and expressions. He wanted to get to the heart of what caused them. He wanted to start modeling humans from the inside out.

Sagar clicked again, and the tissue of the brain and eyes vanished to reveal an intricate picture of the neurons and synapses within BabyX’s brain—a supercomplex highway of fine lines and nodules that glowed with varying degrees of intensity as BabyX did her thing. This layer of engineering owes its existence to the years Sagar’s team spent studying and synthesizing the latest research into how the brain works. The basal ganglia connect to the amygdala, which connects to the thalamus, and so on, with their respective functions (tactile processing, reward processing, memory formation) likewise laid out. In other words, the Auckland team has built what may be the most detailed map of the human brain in existence and has used it to run a remarkable set of simulations.

BabyX isn’t just an intimate picture; she’s more like a live circuit board. Virtual hits of serotonin, oxytocin, and other chemicals can be pumped into the simulation, activating virtual neuroreceptors. You can watch in real time as BabyX’s virtual brain releases virtual dopamine, lighting up certain regions and producing a smile on her facial layer. All the parts work together through an operating system called Brain Language, which Sagar and his team invented. Since we first spoke last year, his goals haven’t gotten any more modest. “We want to know what makes us tick, what drives social learning, what is the nature of free will, what gives rise to curiosity and how does it manifest itself in the world,” he says. “There are these fantastic questions about the nature of human beings that we can try and answer now because the technology has improved so much.”

AND NOW FOR THE COOL/CREEPY BIT (that I absolutely love!):
Sagar’s software allows him to place a virtual pane of glass in front of BabyX. Onto this glass, he can project anything, including an internet browser. This means Sagar can present a piano keyboard from a site such as Virtual Piano or a drawing pad from Sketch.IO in front of BabyX to see what happens. It turns out she does what any other child would: She tries to smack her hands against the keyboard or scratch out a shabby drawing.

What compels BabyX to hit the keys? Well, when one of her hands nudges against a piano key, it produces a sound that the software turns into a waveform and feeds into her biological simulation. The software then triggers a signal within BabyX’s auditory system, mimicking the hairs that would vibrate in a real baby’s cochlea. Separately, the system sets off virtual touch receptors in her fingers and releases a dose of digital dopamine in her simulated brain. “The first time this happens, it’s a huge novelty because the baby has not had this reaction before when it touched something,” Sagar says. “We are simulating the feeling of discovery. That changes the plasticity of the sensory motor neurons, which allows for learning to happen at that moment.

Does the baby get bored of the piano like your non-Mozart baby? Yes, indeed. As she bangs away at the keys, the amount of dopamine being simulated within the brain receptors decreases, and BabyX starts to ignore the keyboard.

Sagar remains sanguine about the lessons AI can learn from us and vice versa. “We’re searching for the basis of things like cooperation, which is the most powerful force in human nature,” he says. As he sees it, an intelligent robot that he’s taught cooperation will be easier for humans to work with and relate to and less likely to enslave us or harvest our bodies for energy. “If we are really going to take advantage of AI, we’re going to need to learn to cooperate with the machines,” he says. “The future is a movie. We can make it dystopian or utopian.” Let’s all pray for a heartwarming comedy.


Friday, 24 November 2017

IBM partner with MIT


"This work is undeniably promising, but it’s a simple evolution of the hardware we have today. Another, more dramatic option is the use of a quantum computer to explore the potential of an A.I. Such research is still in its earliest conceptual stages, but the enormous computational power of a large-scale universal quantum computer seems likely to inspire a major leap in our understanding.

MIT’s lab will have access to IBM Q, the company’s flagship quantum project. Recently updated to a 20-qubit processor, with an even more impressive 50-qubit version on the horizon – hardware that will surely be a real gamechanger when it’s possible to use it to its full potential. This avenue of research is set to be a two-way street. Machine learning will be used to help advance research into quantum hardware, and the results will help scientists push the boundaries of machine learning.


The MIT-IBM Watson A.I. Lab will be the setting for these discussions. It’s clear that A.I. is bursting with potential, but that brings about its own challenges. Individuals and organizations working in the field are sure to want to use their talents to break new ground. Both MIT and IBM want to facilitate that important work – but they want to make sure that it’s carried out with the proper caution."

Sunday, 12 November 2017

InMoov finger

Today, I decided to get on with putting together the InMoov 3D-printed finger.  Seeing as I've got to do all of them eventually, I wanted to see how good the 3D printing was, how one of the fingers is constructed, and how it can be controlled by a servo.

Well, there's the basics all laid out:

After a bit of digging around, I found some acetone (nail varnish remover to you & me!); why we'll need that will become clear shortly....
....as the 3D-printed finger parts are ABS, you can use the acetone with a small brush to melt the parts together - there are the finger parts on the right, melted together and with the blue joint material threaded through to hold it all in place - works pretty well:

For now, time to attach to the base unit:

So, this was the original HobbyKing servo I was going to use.  Note those 2 extra circular discs were meant to be used, but they don't fit the servo, so I opted not to use them yet (that will change).

Now it was time to thread the tendons through the finger; I repurposed an LED to help push the tendon through the last part of the finger - hey, whatever works, right :-)

...and there we have it, 50% done...
 ..and there's the other 50% threaded through:

Now to hook up the servo to digital pin 3 of the Arduino (just easier and quicker to test with the Arduino).
Quick bit of code:
Verified and compiled:

....and there we go, we can now flip the finger when we need to.  I've left off the end of the finger-tip for a good reason.  You'll see the tendons are tied off there and you then melt the finger tip over the top, but it makes it permanent...and, well, I don't want to do that just yet.

Of course there are challenges.  Wouldn't be fun if there wasn't some.

Here's a quick video:

As you can see, it kinda works, but I need a better pull/pulley mechanism (basically a dished outer edge to the white plastic circle on the servo, so that the tendon can move further back) - with a better mechanism the finger will be able to pull back straighter than it does right now.
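The maths behind that fix is just arc length: the tendon travel per degree of servo rotation is proportional to the horn radius, so a wider, dished wheel reels in more tendon for the same sweep. A quick sanity check (the radii below are illustrative guesses, not measurements of the InMoov parts):

```python
import math

def tendon_travel_mm(horn_radius_mm, sweep_deg):
    """Arc length s = r * theta: how much tendon a servo horn reels in."""
    return horn_radius_mm * math.radians(sweep_deg)

# A full 180-degree sweep on a small stock horn vs a larger dished wheel.
small = tendon_travel_mm(8.0, 180)   # ~25 mm of pull
large = tendon_travel_mm(12.0, 180)  # ~38 mm of pull
print(round(small, 1), round(large, 1))
```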

Hey, it's all learning....small steps.  Now, time to get those .stl files over to a 3d printer and get a whole hand and arm printed up ready for the next phase.  Until then, I'll switch back to the code side of things and see if I can get the Kinect hooked up for vision and get it to react by moving servos etc...