Sunday, 12 November 2017

InMoov finger

Today, I decided to get on with putting together the InMoov 3D printed finger.  I wanted to see how good the 3D printing was and how the construction is done for one of the fingers; seeing as I've got to do all of them, I wanted to see how they're put together and how they can be controlled by a servo.

Well, there's the basics all laid out:

After a bit of digging around, I found some acetone (nail varnish remover to you & me!), why we'll need that will become clear shortly....
....as the 3d printed finger parts are ABS, you can use the acetone with a small brush to melt the parts together - there's the finger parts on the right melted together and with the blue joint material threaded through to hold it all together - works pretty well:

For now, time to attach to the base unit:

So, this was the original HobbyKing servo I was going to use.  Note those 2 extra circular discs were meant to be used, but they don't fit the servo, so I opted not to use them yet (that will change)

Now it was time to thread the tendons through the finger, I mis-used an LED to help push through the last part of the finger - hey, whatever works, right :-)

...and there we have it, 50% done...
 ..and there's the other 50% threaded through:

Now to hook up the servo to digital Pin3 of the Arduino (just easier and quicker to test with the Arduino)
 Quick bit of code:
 Verified and compiled:
 Downloaded:

....and there we go, we can now flip the finger when we need to.  I've left off the end of the finger-tip for a good reason.  You'll see the tendons are tied off there and you then melt the finger tip over the top, but it makes it permanent...and, well, I don't want to do that just yet.

Of course there are challenges.  Wouldn't be fun if there wasn't some.

Here's a quick video:


As you can see, it kinda works, but I need a better pull/Robring mechanism (basically a dished outer edge to the white plastic circle on the servo, so that the tendon can move further back) - with a better mechanism the finger will be able to pull back straighter than it does right now.

Hey, it's all learning....small steps.  Now, time to get those .stl files over to a 3d printer and get a whole hand and arm printed up ready for the next phase.  Until then, I'll switch back to the code side of things and see if I can get the Kinect hooked up for vision and get it to react by moving servos etc...

M1ll13 robot step 2


As explained here:  http://www.cs.bham.ac.uk/internal/courses/int-robot/2015/notes/concepts.php

ROS is a message-passing framework which allows you to create fully-fledged robotic systems quickly and easily. Rather than writing a single, large program to control the robot, we run a number of smaller, process-specific programs (e.g. to analyse images from a camera) and run them side-by-side, passing messages between them to share data and other variables.

The core component in ROS is called a Node:

Each Node performs a particular process, such as processing sensor data or controlling a laser scanner. You can think of them as code modules. However, rather than calling a Java method to get the robot to do something, we publish messages.
Messages are published on topics, which are like separate channels.  Nodes subscribe to these topics:
Whilst one Node publishes messages on a certain topic (for example, robot movement commands), another Node may subscribe to this topic and act upon the messages it receives (you could think of it in terms of following someone on Twitter). Nodes can publish on any topic, or subscribe to any topic, or do both; and multiple nodes can each subscribe to the same topic.
Nodes can find each other using a program called roscore:

This acts a bit like a router in a network, and ensures that messages on each topic are being passed between nodes correctly.
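To make that a bit more concrete, here's a minimal rospy sketch of the pub/sub pattern (it follows the standard ROS talker/listener tutorial shape; the node and topic names are just placeholders, not anything from M1ll13 yet):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# Minimal pub/sub sketch. In practice the talker and the listener would each
# live in their own script and run as separate Nodes.
import rospy
from std_msgs.msg import String

def talker():
    # this Node publishes String messages on the 'chatter' topic
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(1)  # one message per second
    while not rospy.is_shutdown():
        pub.publish("hello from the talker node")
        rate.sleep()

def listener():
    # this Node subscribes to 'chatter' and just logs whatever arrives
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()
---------------------------------------------------------------------------------------------------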


I decided to make a Service to control the Servos.  "Hang on", I hear you say, "where did this concept of a Service come from?"....Well, let's take a look at what the ROS documentation says:

The publish / subscribe model is a very flexible communication paradigm, but its many-to-many one-way transport is not appropriate for RPC request / reply interactions, which are often required in a distributed system. 
Request / reply is done via a Service, which is defined by a pair of messages: one for the request and one for the reply. A providing ROS node offers a service under a string name, and a client calls the service by sending the request message and awaiting the reply. Client libraries usually present this interaction to the programmer as if it were a remote procedure call (RPC).
Services are defined using srv files, which are compiled into source code by a ROS client library.


A client can make a persistent connection to a service, which enables higher performance at the cost of less robustness to service provider changes.

I set about making a Service to specifically control the Servos (if this is the wrong thing to do, I'll no doubt find out shortly!).  Why did I choose to make this a service?  Well...I wanted a response to come back and tell me that the servo movement was valid and possibly what the status of the servo is.
I applied the following logic:
On the RPi3 I have Geany installed to edit/create the required python files.  As I'm still using the default folders from the initial walkthrough I'll be in the ~/catkin_ws folder for the following.
As is required for the service, I created the MoveServo.srv file in the /srv folder.  This contains a definition of the parameters to pass IN/OUT for the service.
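A .srv file is just the request fields above a line of three dashes and the response fields below it.  Mine is along these lines (the field names here are illustrative until I get the code up on github):
---------------------------------------------------------------------------------------------------
# MoveServo.srv - request fields above the '---', response fields below
uint8 servo
uint16 start_pos
uint16 end_pos
---
bool success
string status
---------------------------------------------------------------------------------------------------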

Then, within the /scripts folder, I created the servo_control_server.py file.  This is the servo service that will listen for requests on /move_servo and will, well, move the servos!
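Until the code is up on the github repo., here's a rough sketch of how it hangs together (the srv field names are the illustrative ones above, and I'm using the Adafruit_PCA9685 Python library to talk to the board over I2C - the real file may differ a little):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_control_server.py (sketch) - offers the /move_servo service and drives
# the PCA9685 board over I2C when a request comes in.
import rospy
import Adafruit_PCA9685
from beginner_tutorials.srv import MoveServo, MoveServoResponse

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(60)               # 60Hz is the usual frequency for analogue servos

def handle_move_servo(req):
    # move the requested channel to the start position, then to the end position
    pwm.set_pwm(req.servo, 0, req.start_pos)
    rospy.sleep(1.0)
    pwm.set_pwm(req.servo, 0, req.end_pos)
    return MoveServoResponse(success=True, status="moved servo %d" % req.servo)

def servo_control_server():
    rospy.init_node('servo_control_server')
    rospy.Service('move_servo', MoveServo, handle_move_servo)
    print("Ready to move servos.")
    rospy.spin()

if __name__ == "__main__":
    servo_control_server()
---------------------------------------------------------------------------------------------------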

Now, we need to create the servo_control_client.py file.  This will call /move_servo, passing a couple of parameters ([which servo], [start degree], [end degree]), and will receive a response back confirming that the action was successful, i.e. the servo moved as expected.  As you can see, the code is very simple.
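As a sketch (same caveats as the server above), it boils down to a single rospy.ServiceProxy call:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_control_client.py (sketch) - calls the /move_servo service with
# [which servo] [start] [end] taken from the command line.
import sys
import rospy
from beginner_tutorials.srv import MoveServo

def move_servo_client(servo, start_pos, end_pos):
    rospy.wait_for_service('move_servo')          # block until the server is up
    move_servo = rospy.ServiceProxy('move_servo', MoveServo)
    return move_servo(servo, start_pos, end_pos)  # send the request, wait for the reply

if __name__ == "__main__":
    servo, start_pos, end_pos = [int(a) for a in sys.argv[1:4]]
    resp = move_servo_client(servo, start_pos, end_pos)
    print("success=%s status=%s" % (resp.success, resp.status))
---------------------------------------------------------------------------------------------------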

(Yes, I'll shortly upload the code to a github repo. so you can see the wonder for yourself)

Now that the python server/client code has been written, how do we make ROS aware of it and test it?  One thing to note already is that I've hooked up the RPi3 to an Adafruit PCA9685, that allows me to control 16 servos via the RPi3 I2C interface (here's an image of it being used with an Arduino, which may possibly also happen in the future)

As for servos, I have a couple of H-King 15298 servos, as they are needed for the variant of the InMoov robot that we'll be making.

Here's the initial setup.  As you can see, I'm using an external power source for the PCA9685 (4xAAA batteries), as I read that the power draw from the servos would trip the RPi3; it's a good idea to power servos from their own power source anyway (as I've previously done with ArcTuRus)
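Before wiring anything into ROS, it's worth a quick standalone check that the RPi3 can actually drive the board over I2C.  This is essentially the Adafruit simpletest example (channel 0 and the 150/600 pulse values are just the usual defaults, nothing specific to my setup):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# Quick PCA9685 sanity check: sweep channel 0 between its two end positions.
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()  # default I2C address 0x40
pwm.set_pwm_freq(60)

servo_min = 150   # min pulse length (out of 4096 ticks)
servo_max = 600   # max pulse length (out of 4096 ticks)

while True:
    pwm.set_pwm(0, 0, servo_min)
    time.sleep(1)
    pwm.set_pwm(0, 0, servo_max)
    time.sleep(1)
---------------------------------------------------------------------------------------------------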

Now that we have the hardware ready, let's get ROS all set up so we can test this out.
$ cd ~/catkin_ws
$ catkin_make
# this builds the workspace and generates the files ROS needs
# (including the Python bindings for MoveServo.srv)
# as mentioned earlier, as I built on the tutorial code, I've already done the
# modifications to the CMakeLists.txt and package.xml files, so I'm not going to define it again here
$ source ./devel/setup.bash

Now, we need 3 Terminal windows to be opened:

TERM1: $ roscore

TERM2: $ rosrun beginner_tutorials servo_control_server.py
Ready to move servos.

TERM3: $ rosrun beginner_tutorials servo_control_client.py 0 150 600

The client will call /move_servo, passing 0 150 600, which triggers the code in the Service node to drive servo[0], moving it between pulse values of 150 and 600 (these are PCA9685 PWM ticks rather than degrees - roughly the servo's two end positions).

(If you have to change anything in the .py files, just remember that you need to re-run $ catkin_make)

Of course, we have a little video of that doing its thing (albeit, this video is from me writing a test app, before I merged it into the servo_control_server.py):


Yay!  Well, there we have a server Service that controls the servos and a client app that sends a request to move specific servos.  "Hang on", you say (btw - you say that a lot), "I could have just written a single app to do that.  I didn't need to use ROS.  Why do I need to add complexity for no perceived value?".  I am so glad you say that......


Let's now go one step further.  I happen to have a Sharp GP2Y0A21YK sensor knocking about.  I wire it up to the RPi3 and write a little bit of code that just does the detection of something breaking the IR beam.  You are meant to use the IR sensor to calculate the distance from the object by reviewing the voltage value, but for now, all we really care about is: "Did we detect something in the way?  If so, we need to trigger a reaction on a servo.....".  That sounds like a more realistic scenario.

As you can see below, this time we are going to use the pub/sub node concept:
We have a topic, /servo_chatter, which is listened to and handled by the servo_listener.py node.
We have a servo_talker.py client app that monitors the IR sensor; if an object is detected, it publishes a message to the /servo_chatter topic.
From the command line, we can also run a command to manually publish to the /servo_chatter topic (demonstrating the many-to-many concept) - we could have multiple external sensors publishing to the same topic, just passing different parameters.  As we do not care about a response, this setup is valid.
Within the servo_listener.py code, we can then call the /move_servo Service.
As shown previously in the bottom right of this image:
We now have servo_listener.py calling the /move_servo Service, which then performs an action on the servos (both scripts are sketched below).  Cool, huh!  Now you start to see why this setup makes sense.
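Here's a rough sketch of those two scripts (the GPIO pin and the message contents are illustrative - strictly speaking the GP2Y0A21YK gives an analogue voltage, so for now I'm just treating its output as a simple "something there / nothing there" signal):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# Two sketches in one listing; in reality each lives in its own file under /scripts.
import rospy
from std_msgs.msg import String
from beginner_tutorials.srv import MoveServo
import RPi.GPIO as GPIO

# --- servo_talker.py: watch the IR sensor, publish to /servo_chatter when something appears
IR_PIN = 17   # illustrative: whichever GPIO the sensor's (thresholded) output is wired to

def talker():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(IR_PIN, GPIO.IN)
    pub = rospy.Publisher('servo_chatter', String, queue_size=10)
    rospy.init_node('servo_talker')
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        if GPIO.input(IR_PIN):             # something broke the IR beam
            pub.publish("object_detected")
        rate.sleep()

# --- servo_listener.py: subscribe to /servo_chatter and call the /move_servo service
def on_detection(msg):
    rospy.wait_for_service('move_servo')
    move_servo = rospy.ServiceProxy('move_servo', MoveServo)
    move_servo(0, 150, 600)                # react: flick servo 0 between its end positions

def listener():
    rospy.init_node('servo_listener')
    rospy.Subscriber('servo_chatter', String, on_detection)
    rospy.spin()
---------------------------------------------------------------------------------------------------
The manual publish from the command line is then just something like:

$ rostopic pub -1 /servo_chatter std_msgs/String "data: 'object_detected'"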


Now, let's see all that in action:


Now that we have the basics worked out, we can use this as the foundation framework to start building upwards from.  Obviously, there are ROS packages that we can re-use to do further things; for instance, I am going to look into whether I can use this package to perform the REST API calls to the IBM Cloud Watson services.....for Conversation/Speech-to-Text/Text-to-Speech/Visual Recognition and Language Translation....



Right, now it's time to start making the InMoov finger....it's a start!

Saturday, 11 November 2017

Saturday status

Making some great progress with ROS, RPi3, PCA9685 and a couple of HK 15298s and some Python code...
(and yes, Jelena is just "resting" in the background)

I'll write-up the notes from the center of the desk a little later.  Small steps, but certainly moving in the right direction!

Friday, 10 November 2017

Aging isn't a disease

A great article on what I see as the "Next step....."

https://sdtimes.com/ibm-expands-ai-research-support-aging-population/


“For the first time ever, there are more people over 65 than under 5,” Keohane said. “There is essentially a shortage of care providers. If you look at this demographic shift across the globe, this is why a company like IBM is interested in looking at aging. What does the aging demographic shift mean for our clients? We’re in every industry and every one will be impacted.”

By combining IBM’s IoT and health care research with their Watson machine learning, Keohane says that the effect this shift will have can be broken down in such a way that it will benefit everyone from the elderly in need of improved care to businesses, now more aware of their customer base.


“Aging isn’t a disease,” Keohane said. “We’re all doing it. But it does have impact on health. So could we surround ourselves with emerging technology in the home, while assuring the privacy and security that comes with health care and design something that will help someone understand how well they’re aging in place?”


It's a very interesting read and totally an area that I see receiving more and more focus between now and 2020 and onwards.  (Also, a great big nod towards why I'm figuring out how to build M1ll13, my own personal care robot of the future)

Tuesday, 7 November 2017

Digital Music tangent

Sometimes we just consume.  Sometimes we create.  Sometimes we listen.

Sometimes we combine all of the above.


If you've ever listened to Faithless 2.0 and thought, "Hey, I can do that" (no, it's not as easy as it looks/sounds), and if you have time & insomnia, then yes, you can do something creative, fun and potentially get the party jumping.... maybe you're gifted like my mate Leigh, and can just do it:


For us mere mortals you need some tools:



and you might need a dose of Omnisphere (my mate Andy swears by it!)

Get creating.....

Some of these scenes look familiar:

M1ll13 robot step 1

After a couple of long nights, a lack of appetite and a fair amount of headaches, I finally have something meaningful setup as a framework/platform to build on for the new robot called M1ll13 - it's a new (Millennial) species, so it needs a new type of naming structure.

(this is not M1ll13 - just a stock photo)

I originally went the RPi3 route of installing Raspbian/Jessie and then attempting to build upwards from there to install the ROS, as explained here:
http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Kinetic%20on%20the%20Raspberry%20Pi

All would seem to work okay, until it got to the rosbag install and it would fail, with no sensible way of recovering....trust me, I tried.

I even went backwards a version to the Indigo, but still had no joy:
http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi
....that just gave me issues at another point further down the line.

Not being one to admit defeat, I adopted my usual approach (to work & life) and that was to find another way to the goal...just imagine running water.  Running water will always find a way to get around what it needs to in order to keep moving.

With that in mind, I wondered if I could do what I had just done on my Mac.  I setup a VMWare Fusion VM and installed Ubuntu linux and installed ROS (as defined here).  That went off without a hitch, so I was thinking...I wonder if I could install Ubuntu onto a Raspberry Pi 3?  Not something I've done before....but time to give it a go.

After a bit of searching, I found this site: http://phillw.net/isos/pi2/
and more specifically, I downloaded this image:
http://phillw.net/isos/pi2/ubuntu-mate-16.04-desktop-armhf-raspberry-pi.img.xz

After a quick session with Etcher and a 32Gb SD card....the image was burnt and ready to go into the Raspberry Pi 3.  It booted.  Why was I surprised?  After a quick setup and a couple of reboots, I was ready to see if I could get something working.  After the usual 'sudo apt-get update/upgrade' it was time to get on with the ROS stuff.
From a Terminal session, it was time to enter:
---------------------------------------------------------------------------------------------------
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116
sudo apt-get update
sudo apt-get install -y ros-kinetic-desktop-full
sudo rosdep init
rosdep update
# Create ROS workspace
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
source /opt/ros/kinetic/setup.bash
sudo apt-get install -y python-rosinstall
sudo apt-get install -y python-roslaunch
---------------------------------------------------------------------------------------------------

Initially the ros-kinetic-desktop package above did not have the word "full" on the end, but after having the last command ("python-roslaunch") fail, I found some suggestions to do it like the above.  The "python-roslaunch" install still fails with unmet dependency errors, but it didn't seem to be an issue at this point.

Now it was time to plug the XBox 360 Kinect into the RPi3.  I noticed that the Kinect has an odd cable end....not USB.  Typical.  I looked online, Amazon could get me one by the weekend, so could eBay, but that was, like, 4 days away!  After hunting around the house for 30mins, I found the original charger for the Kinect that included a USB connector - result!  It was a bit picky about which of the usb sockets I put it into on the RPi3, but I found the top left to work okay - the Kinect green light came on!  (also running lsusb told me it was detected: Microsoft Corp. Xbox NUI Camera/Motor/Audio)

Back to that open Terminal from earlier, we have to download the software to get ROS to work with the Kinect:
---------------------------------------------------------------------------------------------------
# search for packages
sudo apt-cache search ros-kinetic | grep "search term"
# Install kinect packages
sudo apt-get install -y freenect
sudo apt-get install -y ros-kinetic-freenect-camera ros-kinetic-freenect-launch
sudo apt-get install -y ros-kinetic-freenect-stack ros-kinetic-libfreenect
---------------------------------------------------------------------------------------------------


Now it was time to open up multiple Terminal windows.....

TERM1: This is the "master"; think of it like the main brain that has to be running to process all the events that are happening (a bit like a web server):
---------------------------------------------------------------------------------------------------
$ roscore
....
started roslaunch server http://rpi3ubuntu:37735/
ros_comm version 1.12.7
summary
parameters
 * /rosdistro: kinetic
 * /rosversion: 1.12.7
nodes
auto-starting new master
process[master]: started with pid [2592]
ROS_MASTER_URI=http://rpi3ubuntu:11311/

setting /run_id to c2dbad48-c3d7-11e7-9e7d-b827eb8fbf84
process[rosout-1]: started with pid [2605]
started core service [/rosout]
---------------------------------------------------------------------------------------------------

TERM2: Start the Node for the Kinect.  A ROS node is a bit like an "IoT module": it'll do its own thing, gathering/capturing data, and if things are subscribed to it, it'll publish this out to whoever is listening.  For us here, it is the roscore/master.   The /topic sub/pub concept shouldn't be a new one; we've been doing it with Message Queues (MQTT) for years now....
---------------------------------------------------------------------------------------------------
$ roslaunch freenect_launch freenect.launch

started roslaunch server http://rpi3ubuntu:46776/
summary
parameters
 * /camera/.....
nodes
  /camera/
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
....
[ INFO] Starting a 3s RGB and Depth stream flush.
[ INFO] Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A0033333335A'
---------------------------------------------------------------------------------------------------

TERM3: Now that TERM2 should be publishing topics, we can run a command to list them.  We can then look at the RGB image from the Kinect with image_view:
---------------------------------------------------------------------------------------------------
$ rostopic list
/camera/depth/camera_info
....
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_color
....

$ rosrun image_view image_view image:=/camera/rgb/image_color
libEGL warning: DRI2: failed to authenticate
init done
[ INFO] Using transport "raw"
---------------------------------------------------------------------------------------------------
This will pop up a window showing you what the Xbox 360 Kinect is currently "seeing".
As shown here (and the purpose of ALL THIS WRITING WAS JUST TO SHOW THIS PHOTO!)


Yes, that is the re-purposed iMac G5 running Ubuntu with the Arduino IDE and an Arduino UNO plugged in, ready for some servo action later in the week.  Some random Sharp IR sensors are waiting to be used... along with a 15 servo HAT for the RPi too.

You know it's going to be a fun couple of evenings when you can see a soldering iron sitting on the corner of the desk.

So, there's the RPi3 on the bottom left, plugged into the Xbox Kinect (on top of the iMac G5), and the big monitor on the right is plugged into the RPi3, showing the 3 terminals described earlier and the output image of the Xbox Kinect, with yours truly trying to take a decent photo.... (and yes, that is a Watson T-shirt I'm wearing).

If anyone is interested, I ran up "System Monitor" and the 4 CPUs are running at 35-50% and memory is at 365Mb out of 1Gb.  I am streaming the image quite large though - something to keep an eye on.

Phew, that was step 1..... now it's time to do some further basic ROS testing and to create some new nodes and make sure that ROS is working correctly.

What ROS nodes will I be creating?.....well, obviously there will be some STT/TTS/NLU and now I have the Xbox Kinect working a Visual Recognition node too.

And for the true eagle eyed, yes that is an AIY Projects - Voice Kit box on the top right - I haven't gotten around to making it yet, maybe by the end of the week, but in essence it's the same thing as what I'm going to be doing anyway, just without pushing a button or using Google....

UPDATE: To get I2C to work, you need to modify /boot/config.txt; because we're using Ubuntu, we can't use raspi-config.
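For reference, the usual manual approach is to add the I2C overlay line to /boot/config.txt and reboot; i2c-tools is handy for checking that the PCA9685 actually shows up on the bus:
---------------------------------------------------------------------------------------------------
# add this line to /boot/config.txt, then reboot
dtparam=i2c_arm=on

# optional check: the PCA9685 should appear at address 0x40 on I2C bus 1
sudo apt-get install -y i2c-tools
sudo i2cdetect -y 1
---------------------------------------------------------------------------------------------------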

Saturday, 4 November 2017

InMoov open source robot

Whilst I like the Poppy robot, the cost of £4,500 just for the actuators blows it out of the realms of affordability (hey, I have a custom car that eats all my money!).  Now, if I didn't have the car I would probably have bought this kit already.... yeah, that'll set you back about £8,500.  Whilst that is about a 1/3rd of the cost of a commercial robot, I'm not looking to invest that much just yet...

My aim was to not spend time on the "small" £500 robots that you can get from the likes of EZ Robot,
which are pretty cool in their own right; I like the fact this robot is modular and allows you to change and extend it as you want.  But I want to make something that is about child size, 4 foot or so.  That's why I was liking the Poppy.  Legs can be a problem (like on the Poppy), and whilst I'm not too fussed about spending a lot of time getting leg/weight/balance co-ordination right, I would rather have a bottom half that is more Pepper-like.

(Next time I'm in IBM Hursley, I'll take a photo of the Pepper that they have in the ETS office)

This has led me to do some more research.....and a decision...the robot has to be bigger/life size and allow us to build it piece by piece.

Then I found the InMoov robot, which looks like it will allow that to happen:

I just need to purchase a 3D Printer (now I have a genuine excuse to get one!); I have enough servos, Arduinos and Raspberry Pis knocking about...I even have an Xbox 360 Kinect that I can re-use.
Software-wise, I don't believe there will be any problems, local coding+Cloud API calls = Cylon!

Of course, I'll be hooking it up to use all the IBM Cloud Watson API services, so expect Speech Recognition, Language translation, Visual Recognition and tapping into the Machine Learning services too....

The first step....build a finger and work our way upwards/outwards from there.  I'm looking forward to this journey and where it will go.  Who knows, it could end up looking after me when I get old(er).

(I'll document the journey on YouTube, a bit like this guy)

and this guy has come up with some great ideas about using ROS.org too.

If we really need to, we can have legs....but that won't be on my initial build plan:




An interesting article that I need to bookmark for later....


Poppy Humanoid v1.0

Sunday, 29 October 2017

95 Theses about Technology

Coming on October 31st......    "95 Theses"

INTRODUCTION



THESES

"This website is part of a wider project in public engagement. My long-term academic interest has been in public understanding of technology, and in particular popular understanding (and misunderstanding) of the Internet and digital technology generally.
I like the idea of using a ‘thesis’ as a way of starting off a discussion. Long ago, this was a standard way of conducting scholarly debates. I thought it would be interesting to try it again as a way of sparking public interest in what digital technology is doing to us.
We’re in the early stages of a radical transformation of our information environment. It’s happening on a scale that has only been matched once before in history — when Johannes Gutenberg invented printing (or at any rate re-invented it in Europe). The print revolution transformed Western society and shaped the world in which — at least until recently — most of us grew up. The digital revolution is going to be just as far-reaching. So it’s worth trying to trigger some serious discussions about what’s happening and what may lie ahead."



Looks like these will be an interesting read for those of us who started our journey back in the early/mid 1990s with eyes of wonder and amazement......

Saturday, 28 October 2017

Something is brewing.....

.......My creative brain has been going overtime the past couple of months (okay, I'll admit since 2001, but formulating since 2010....) and soon, very soon, it's going to over-spill into something awesome.......

I'll share, when mind transforms into matter and reality.... until then here are some things to distract the mind

....................................................................................




....................................................................................

Thursday, 12 October 2017

Don't bin that old PowerPC G5 Mac.....

I do dislike hardware technology waste.  There seems to be a mindset trend of "well, that's 2 years old, it's outdated, chuck it away and get a new one".  Apple does this a lot.  Okay, okay, there are probably a lot of places that will recycle your "old" iPhone/iMac for you and you'll take the £xxx financial hit for the privilege of using the latest hardware version, but it still irks me that hardware that is still pretty decent gets thrown away when it can still be really useful.

Read about the journey of keeping a PowerPC G5 iMac alive by installing Ubuntu Linux onto it - all, so I can use the machine to help build my new AI robots....

http://tonyisageek.blogspot.co.uk/p/dont-bin-that-powerpc-g5-mac.html



Tuesday, 10 October 2017

Nvidia built a Holodeck

https://blogs.nvidia.com/blog/2017/10/10/holodeck-design-lab-of-the-future/


'Nvidia Holodeck' is a real product – and it's almost what you're expecting, though not intended strictly for Starship crew use:



Oh, and they've also built AI into it too....

"Holodeck is also AI-ready, meaning that you can train agents and deploy them in the virtual space to test your designs against anticipated real-world conditions, including virtual operators and incidental personnel and staff who might interact with any machinery or other objects being prototyped before they’re built."

https://blogs.nvidia.com/blog/2017/10/10/holodeck-design-lab-of-the-future/

https://blogs.nvidia.com/blog/2017/05/10/holodeck/

https://www.nvidia.com/en-us/design-visualization/technologies/holodeck/


Wednesday, 20 September 2017

Making an Android Mobile device app that uses the IBM Watson Conversation service (as a chat-bot)

A few posts back I mentioned:

Oh, I've also been making a Java Android Mobile application that uses Speech to Text, then calls the APIs of the IBM Watson Conversation Service, then translates that Text back to Speech and enables you to have a full-blown conversation in voice instead of having to type and read messages (it also does this in 3 different languages at the moment) - I'll create an article specifically to show you how to do this quickly and easily using the Watson SDK.  Then, "if" my arm gets twisted enough, I might look at porting it to run on iOS (but as I still don't have an iOS device, it might be tricky!)


Well, here it is....






Naturally, I'll be extending and growing this example of using Watson APIs beyond just the Conversation Service.  I think I might interface with the Watson Visual Recognition service for v2.0 - and hook in some of the Augmented Reality stuff that I did a while back....


(It feels good to finally share some techie stuff again)

I also published this to the FORMAL IBM DEVELOPERWORKS website as well.





Artificial Intelligence & Transhumanist Takeover

Just like buses, you wait for ages and none come....then they all start to come at once.  Same with these posts!

Right, this is something very close to my heart (and brain - pun intended!).  I want/need to sign up to this as soon as I can be a donor.


Here is a TED talk on "ems" - machines that emulate human brains and can think, feel and work just like the brains they're copied from.

Yes, copy of your brain.  This is awesome (well, probably scary for a lot of people thinking of the negatives), but I want to be Clone#7.  I digress.

Economist and social scientist Robin Hanson describes a very possible (near) future when ems take over the global economy, running on superfast computers and copying themselves to multitask, leaving humans with only one choice: to retire, forever!

Come and glimpse a strange future as Hanson describes what could happen if robots ruled the earth.




As I say, where can I sign up to trials for this?  I want, no, I need this and I need it soon.


Monday, 18 September 2017

Quantum leadership

So...it has been noticed that I have been rather quiet on the technical front for quite some time... have I been doing exciting things? have I been doing boring things? have I been doing both and neither of those two things?  I've been doing all of them....and more.


"That makes no sense!", I hear you say.  Well, as you may or may not know I've had a bit more than a passing interest in Mysticism, Cosmology, the Occult and Magical teaching of times gone by.  Yes, I do actually have a larger library than John Dee had back in the day.  Did I go off and become the next Gandalf the grey or Dr.Strange?  No.  Well, okay, maybe a little bit...... I was attempting to learn the techniques and learning practices of working with the mind, soul and the perception of reality of the world (as you do) and looking at ways that these teachings and practices can be applied in a Technology orientated world.  I like a challenge ;-)

As you can imagine, this sort of thing requires time, spare brain processing power and effort and a reduction of distractions.  Therefore, this year I chose to work on a cutting edge first of a kind project doing a vast amount of foundation work to make it successful and consuming a lot of my time and attention to details - why? well....as you'll find out (and I'll share in the near future), when you observe, you create.  Therefore you plan, imagine, virtually construct in your imagination - then "look the other way" (!observe) and kick off the create and then you look (observe) to make it happen.  Okay, that sounded like gibberish didn't it.  Welcome to attempting to understand the Quantum world and apply the concepts to the current world around us :-D



I'll do a follow up article (in depth) to go through the findings and applications of Quantum to Corporate business and technology and how it is going to be the NEXT transition shift/wave.


Oh, I've also been making a Java Android Mobile application that uses Speech to Text, then calls the APIs of the IBM Watson Conversation Service, then translates that Text back to Speech and enables you to have a full-blown conversation in voice instead of having to type and read messages (it also does this in 3 different languages at the moment) - I'll create an article specifically to show you how to do this quickly and easily using the Watson SDK.  Then, "if" my arm gets twisted enough, I might look at porting it to run on iOS (but as I still don't have an iOS device, it might be tricky!)

Monday, 17 July 2017

Google Blocks

Okay, so, if like me, you've had your interest piqued by the prospect of Virtual Reality (VR) and Augmented Reality (AR) over the past year and a half and have even invested in an Oculus Rift and/or a Samsung Gear headset (*other VR units are available) and you've got yourself a copy of Unity or Unreal Engine and had great expectations about making this wonderfully great new VR world with magical spinning things and whooshy (yes, that's a word I just made up) swirls of rainbow goodness, only to then have to fire up Udemy or Coursera to find a course that teaches you how to make 3D models.

Several weeks later, you've made a potato.  A bad looking potato.  It is kind of 3D, but has lumps in the wrong places.  It does NOT look like the prancing unicorn that you envisaged for your main character representation in your snazzy new VR world.  Come to think of it, the rocks, buildings, cars, <insert any other shape or object that you wanted to have in your landscape> now all represent variants of your potato.  Just in different colours and with lumps in different places.

You have to admit it.  You are no 3D studio max / Blender 3D modeller.  Some people are (they get out less than you/I do, there's a reason for that) and I bow and curtsey to them with much honour and respect.  I do not have 5-10 years to learn everything about those tools to make a "thing".
I want to spend 1 week figuring out how to make my "thing" and then focus on making a world where my "thing" can prance around in and then spend some time to write small snippets of C# code that does "stuff" when my "thing" gets noticed by the collision detection event and then the "thing" does another "thing" and "whooooa! you weren't expecting that" happens.

Oh and I want to be able to make my "things" in the VR world.  I mean, what is the point of having a VR hat/helmet/visor thing if I only spend a brief period of time with it on my head?  I might as well stare at my laptop monitor and forget all about the VR world.

Oh, thanks Google!

Sometimes I like Google....they saw a gap here and thought, yeah, we'll fill that and get to market first.  I respect them for that.



Google may have answered my dreams....





Oh, darn it!  It doesn't support the Samsung gear (yet?).  I do have an Oculus, but I don't have a powerful enough laptop (Mac) to use it.  Grrrhh...grrhhhh...grrrhhhh.... why am I so cheap, my dreams are thwarted again, by me being a cheapskate...perhaps I shouldn't spend all my money on that damned custom car!


Friday, 7 July 2017

Roomba Inventor Joe Jones on His New Weed-Killing Robot, and What's So Hard About Consumer Robotics

Roomba Inventor Joe Jones on His New Weed-Killing Robot, and What's So Hard About Consumer Robotics


I've been toying with the idea of a GardenBot, for a few years now..... and many people have asked, "Why haven't you done it yet?".

Well.....I'll let the "expert" explain why.  It's not as simple as it sounds.

CHECK IT OUT HERE



Although I am still in the process of making K9-RPi3_bot...so maybe that will evolve into my GardenBot at some point, we'll see...


Friday, 19 May 2017

Baby's first Computer (Quantum)



Totally blatant work focused sales pitch material alert.  But, c'mon, you've got to admit, this is pretty bl00dy awesome!!!  The one below, not the one above.


IBM Q is an industry-first initiative to build commercially available universal quantum computers for business and science. While technologies like AI can find patterns buried in vast amounts of existing data, quantum computers will deliver solutions to important problems where patterns cannot be found and the number of possibilities that you need to explore to get to the answer are too enormous ever to be processed by classical computers. We invite you join us in exploring what might be possible with this new and vastly different approach to computing.



IBM Q has successfully built and tested two of its most powerful universal quantum computing processors. The first has 16 qubits and is for public use by developers, researchers, and programmers via the IBM Cloud at no cost. The second is the first prototype commercial processor. With 17 qubits, and incorporating materials, devices, and architecture innovations, this processor is the most powerful built by IBM to date. All of this sophisticated engineering makes it at least twice as powerful as the free version in the cloud. 




I know, the cynic in you is asking, "yeah, sounds great.  But, what can I actually DO with it?"

Good question.  Check out some ways of applying it here: https://www.research.ibm.com/ibm-q/learn/quantum-computing-applications/


update:
oh, and here is the MANUAL for writing code...yes, YOU can also write code and run it on this actual machine.  for free.  pause.   think about that for a minute.   now, go and read the manual and have some fun....

...and here's me "writing" my first app.  There's a visual front-end, but you can manually write the QASM code too: