Wednesday, 21 March 2018

IBM's Watson-based voice assistant is coming to cars and smart homes

This all sounds very familiar :-D


One of IBM's first partners, Harman, will demonstrate Watson Assistant at the event through a digital cockpit aboard a Maserati GranCabrio, though the companies didn't elaborate on what it can do. In fact, IBM already released a Watson-powered voice assistant for cybersecurity early last year. You'll be able to access Watson Assistant via text or voice, depending on the device and how IBM's partner decides to incorporate it. So, you'll definitely be using voice if it's a smart speaker, but you might be able to text commands to a home device.

Speaking of commands, it wasn't just designed to follow them -- it was designed to learn from your actions and remember your preferences. If you allow the Watson-powered applications you use to access each other's data through IBM Cloud, which delivers and enables Assistant's capabilities, they can learn more about you and deliver life-like conversations with context.

In IBM's sample scenario, Watson Assistant can automatically check you into your hotel and make sure your rental car (so long as it has a Watson-powered console) is ready as soon as you walk out of the airport. The car's console can suggest locations to visit en route to your hotel, as well. If the hotel uses a smart assistant powered by IBM's AI, then it can automatically tweak your room's temperature and lighting based on your preferences and even start playing music you like when you're almost there. The hotel room's (Watson-powered) wall dashboard can also display your schedule and emails before you even walk in using the electronic key that was automatically sent to your phone.


Tuesday, 20 March 2018

Automated harvesting by agricultural robots

This is something close to my own heart and something I'd like to get more involved with in the future.

I like the idea that the robot harvesters can use Visual Object detection and then Recognition to examine the fruit and determine if it needs picking - how it does that picking raises an eyebrow, but mixed with a robotic arm, pressure-sensitive 'finger' tips and some good coding it'd be feasible.

Then, in the grocery shop as a customer, you can whip out your phone, scan the fruit on the shelf and it can "highlight" which fruit will ripen when, allowing you to get the best choice of fruit to meet your needs.  i.e. do I want to eat it today? Actually, I want to eat 3 of these in 3 days' time - which are the best 3 to select?

sounds all very cyberpunk....

Friday, 9 March 2018

Program in C

Frank, I doff my cap to you Sir, awesome find and very relevant!

Friday, 2 March 2018

Elon, you send a car....IBM will send a disembodied head...


Called CIMON (Crew Interactive Mobile Companion), the new crew member is about the size of a medicine ball and will work alongside human astronauts in space. The “floating brain” is equipped with IBM’s Watson artificial intelligence technology and is expected to assist astronauts during the European Space Agency’s Horizons mission in June.

T1ll13 robot step 2.2

very minor update, but great fun along the way....

As previously shared, I set up an RPi3 with ROS and was about to move into coding the ROS nodes.  Whilst that involved me having to use Python, and whilst I can see it's really useful, it's not my native tongue.  That is JavaScript...I've been using it client & server-side since about 1998, so it's my comfortable go-to first choice.

So rather than attempting to port everything in my head into Python and then into ROS nodes, I thought I would do what I needed to do to get it to work, and then look at porting it over.

What did that require?  Well, installing NodeJS and npm on the RPi3 for one.  Dead easy and simple to do.

Oh, I forgot to say what it is/was I wanted to actually achieve!  Okay, I want to be able to SHOUT at T1ll13 and for her to always be listening via a microphone (no trigger "Alexa" or "Google" words for me), to convert that Speech to Text, then send that Text to a Chat-bot, receive a response from the Chat-bot as text, then convert that Text to Speech and SPEAK the response back to me via the speakers.

Sounds simple enough......

Of course, I have an IBM Bluemix Cloud account that provides me with Watson STT and TTS services - extremely simple and easy to set up.  Once created, all you need are the service credentials.
There is also an SDK available for most languages.
I chose the JavaScript SDK (for NodeJS usage).  I say I chose it, but don't think you have to download the SDK from GitHub and do something with it.  All you need to do is include a reference to it in the package.json file of the NodeJS app and it'll get pulled down and placed under the node_modules folder.  All you really need to know is which API calls to make.
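Just to illustrate the point, a minimal package.json might look something like the below - the app name is made up and the SDK package name/version range are from memory of the 2018-era Watson Node SDK, so double-check them before relying on it:

    {
      "name": "t1ll13-voice",
      "version": "0.0.1",
      "dependencies": {
        "watson-developer-cloud": "3.x"
      }
    }

Run 'npm install' in the app folder and the SDK gets pulled down under node_modules, ready to require().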

I did the usual and set off on a DuckDuckGo quest searching for other people's code/attempts at doing this.  That didn't really reveal anything all that useful...until I found:

Which seemed great initially....and it even worked too.  But, and this is crucial, as it states, it uses the OLD REST API calls, which is fine and I can say it does still work, but I was really looking for something forward-looking.  Darn this "free code" for not doing exactly what I want it to... :-)

It was good exposure though to the mechanism of using 'arecord' to capture the speech (as the RPi3 is running Ubuntu MATE Linux it's already installed) and how to use the .pipe() command to stream the speech to the STT service rather than recording it to a file locally and then pushing that to the STT service.
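To show the shape of that mechanism, here's a rough sketch (not the actual code I found) of spawning 'arecord' and piping its output straight into a Watson STT recognise stream. The createRecognizeStream() call is from the 2018-era watson-developer-cloud SDK and has been renamed in later versions, so treat the exact names and parameters as assumptions:

    // Sketch: stream mic audio straight to Watson STT, no temporary file
    const { spawn } = require('child_process');
    const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

    const speechToText = new SpeechToTextV1({
      username: '<service username>',   // from the Bluemix service credentials
      password: '<service password>'
    });

    // arecord: raw 16-bit, 16kHz, mono PCM from the default capture device
    const mic = spawn('arecord', ['-f', 'S16_LE', '-r', '16000', '-c', '1', '-t', 'raw']);

    const recognizeStream = speechToText.createRecognizeStream({
      content_type: 'audio/l16; rate=16000'
    });
    recognizeStream.setEncoding('utf8');
    recognizeStream.on('data', (text) => console.log('Heard:', text));

    // .pipe() pushes the audio to the STT service as it is being captured
    mic.stdout.pipe(recognizeStream);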

I then attempted to mash things around and "upgrade" the code to use the new REST API calls....then, as is usual, I got distracted by work-work.

I then had some free time one evening, so I did a bit more DuckDuckGo searching; I was actually looking for something else and eventually stumbled over what I needed!

I confess, I lifted most of what they were originally doing - but, I did follow through and understood every step of what was happening, which pleasantly surprised me.  I'll write a different article to go through the code in-depth as it is quite smart.

The one major change I made was to use the "dotenv" npm library to allow the credential values for the STT and TTS services to be stored in a .env file.  Oh, and it uses SoX now instead of 'arecord'.
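For anyone who hasn't used it, the dotenv pattern is tiny - the variable names below are just examples, not the real ones from the code:

    // .env file (kept out of git):
    //   STT_USERNAME=xxxx
    //   STT_PASSWORD=xxxx
    //   TTS_USERNAME=xxxx
    //   TTS_PASSWORD=xxxx
    require('dotenv').config();   // loads .env into process.env at startup

    const sttCreds = { username: process.env.STT_USERNAME, password: process.env.STT_PASSWORD };
    const ttsCreds = { username: process.env.TTS_USERNAME, password: process.env.TTS_PASSWORD };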

I also upgraded the usage of CleverBot to use the latest npm library and API, which includes an API key with 5,000 free API calls.

In fact, the JavaScript code itself is a really good illustration of adding callback()s to your functions and passing back responses via the callback().
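If you've not seen the style before, it's the classic error-first callback pattern - a throwaway example with made-up names, not the project code:

    // The function does its (async) work, then hands the result back via callback(err, result)
    function speechToText(audioStream, callback) {
      // ...send audioStream to the STT service (omitted)...
      callback(null, 'hello robot');        // null = no error
    }

    const someAudio = null;                 // placeholder
    speechToText(someAudio, (err, text) => {
      if (err) return console.error('STT failed:', err);
      console.log('Transcript:', text);
    });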

So, anyway, yes, after a bit of tweaking of the code, it would do exactly what I wanted it to do:
[listen to Mic]-->STT-->CleverBot-->TTS-->[Speaker output]
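In pseudo-JavaScript terms, that flow chains the callbacks together roughly like the below - all of the function names are illustrative stubs I've made up here, not the actual code:

    // Stub implementations so the chain can be run end-to-end as a demo
    function listenToMic(cb)      { cb(null, '<raw audio>'); }     // arecord/SoX capture
    function speechToText(a, cb)  { cb(null, 'hello T1ll13'); }    // Watson STT
    function askChatBot(t, cb)    { cb(null, 'hello human'); }     // CleverBot
    function textToSpeech(r, cb)  { cb(null, '<wav bytes>'); }     // Watson TTS
    function playAudio(w, cb)     { console.log('speaker:', w); cb(null); }  // aplay/SoX play

    listenToMic((err, audio) => {
      if (err) return console.error(err);
      speechToText(audio, (err, text) => {
        if (err) return console.error(err);
        askChatBot(text, (err, reply) => {
          if (err) return console.error(err);
          textToSpeech(reply, (err, wav) => {
            if (err) return console.error(err);
            playAudio(wav, () => { /* then go back to listening */ });
          });
        });
      });
    });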

I had a few experiments attempting to re-use the little microphones that are still attached to the USB webcams in the robot's eyes, but they just weren't good enough to pick much up.  I also had a small USB microphone that plugged directly into the RPi3, but again, that proved to only work if you were about 3cm away from it.  Not ideal.

Then I remembered that ages ago I purchased a USB connector that allows a mic & speaker to be connected to it - it also means I don't have to use the 3.5mm jack on the RPi3 anymore.

After a quick bit of setting up, this works great.

Okay, it's not perfect (yet), but at least it hears "most" of the words and I can hear it talking back to me as well as see the debug output.

Oh, yes, as mentioned in the pumpkin article, I am investigating 'forever' as an option too.... even if not for this project.

Here's a little video of it in action:

Of course, now that I've got the concept working, I'll switch out CleverBot for a Watson Conversation Service (WCS) as I was only using it to get something working and it does respond with a fair amount of gibberish.

I also now need to hook up the GPIO pins to the servo to open/close the mouth to match the "talking".
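When I get to it, one possible route (my guess at this stage, not a decision) is the pigpio npm library, which can drive a hobby servo by pulse width - the pin number and pulse widths below are placeholders to be calibrated:

    // Requires the pigpio C library on the Pi and running node with sudo
    const Gpio = require('pigpio').Gpio;

    const mouthServo = new Gpio(17, { mode: Gpio.OUTPUT });  // BCM pin 17 - placeholder

    // servoWrite() takes a pulse width in microseconds (roughly 500-2500)
    function setMouth(open) {
      mouthServo.servoWrite(open ? 2000 : 1000);   // open/closed positions - placeholders
    }

    // crude "talking" flap while the TTS audio is playing
    let open = false;
    const flap = setInterval(() => { open = !open; setMouth(open); }, 200);
    // clearInterval(flap) when playback finishes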

...and then have a "trigger word" to take a photo via the USB web camera eyes, send that image to the Watson Visual Recognition service to determine what the robot thinks it saw, and then trigger a conversation about that....

Monday, 12 February 2018

PWAs...hang on, did time stand still and then loop back on itself?

Back in the day, I purchased one of these phones.  (Wow! Was it really in 2013?...I suppose it must have been)
The cool thing about it at the time was that it wasn't Android, Apple or Microsoft.  It had its own OS and had the potential to investigate "other options" for apps.  As it was backed by Firefox / Mozilla, it made sense that it was driven by the Web and, more specifically, Web apps that run in web browsers.  Hey, that suited me fine, s'what I've been mostly doing since, well, for far too long.

Whilst I have no issue, when the need requires it, with coding a native Android Java app, or flipping my head the other way and coding in Swift for an iOS device, it did kind of irk me that I had to start following the code religion camps again... it also bothered me that I wasn't focused on just coding for the device itself, but also having to code the server-side APIs and usually a web app that offered the same functionality.

Now, maybe it's my issue and not other people's (most likely), but when I want to make something pretty fast, I fall back onto the tools / coding languages that can help with that, and to me that's good old HTML, possibly some JavaScript framework and some UI framework too...

Back to the ZTE phone - to make an app for this, you basically built a web app that you could host, like any other web app, on a web server; you just needed to create a manifest file that would get downloaded, and the phone would then create an "app icon" on the home screen.  When you pressed it, it would load the web app just as you would from the web browser.  There were some little tweaks you could do to make the web app more "mobile friendly", and it showed some great creative thinking..... it also handled local caching and the concept of "offline" usage too.  (Which raised my eyebrows back to the good old AvantGo days)
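Jumping ahead for a second: the modern PWA descendant of that idea is the W3C web app manifest - a sketch with made-up values (the old Firefox OS manifest.webapp had its own, slightly different schema):

    {
      "name": "My Web App",
      "short_name": "MyApp",
      "start_url": "/index.html",
      "display": "standalone",
      "background_color": "#ffffff",
      "theme_color": "#2196f3",
      "icons": [
        { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" }
      ]
    }

Link it from the page with <link rel="manifest" href="/manifest.json"> and the browser can offer to add it to the home screen.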

...then the Firefox OS and ZTE phones went into decline and so did this novel, progressive approach to web app creation for mobile devices.  The Android and iOS app stores then got flooded with millions of native apps and the common consensus was that that was the way to go.
Tooling like Cordova was mocked by the purists on each platform (yawn), and no-one really wanted to work outside the "in crowd" (okay, maybe there was myself and quite a few others, but you get the idea - most people just follow the crowd of least resistance).

Hey, I even built native Android Java apps for very high profile clients in San Francisco and native Swift iOS phone/tablet apps for major car manufacturers in Europe.  Sometimes, you just do what is needed, and for both those occasions native was what was defined.  The amusing thing being, both projects then evolved to require a server-side NodeJS REST API component (JavaScript) and a matching web application (AngularJS & Bootstrap).  Now, those skills are complementary - HTML/JavaScript/CSS and JavaScript again.... so we'd need more people involved and you know what happens when you add more people to the mix.... ;-)

Now, the above was back in mid-2016, early-2017.... (I haven't done much/any mobile apps since; okay, I did a couple of examples to demonstrate Watson APIs, but nothing major, just some simple 3-hour coding apps).

Now, zoom forward to February 2018...and lo and behold I see this article:  What are Progressive Web Apps
and on the same day, I then get an invite to an Ionic 3 PWA YouTube live-stream...whoa?! what?...

Okay, having a bit of a read through, it seems like the early days have 'progressed' rather well and it looks like some sense is coming back to the world.

Google is even working on (and I seem to remember this from the "back in the day" era) WebAPK, which converts your PWA to a native APK file to install as an app.

Apple is also allowing PWA features into Safari, so it looks like the planets might finally be starting to align.

I also note that my favourite mobile / web app UI tool, Ionic, is going the way of the PWA:

As I was saying earlier, you had the mobile app developers who had to learn and become experts in a specific language and platform and all of its oddities/quirks, and they were able to make re-usable things which helped with speed of development and deployment..... but, and here's the boring business-headed view of things.... what happens after that?
Well, you have the "now I must submit my app for review to the App Store" cycle, where you have to wait for "them" to assess, review and possibly reject your app.  Then it can appear on the "App Store".
Then you do some more work and want to release an update (big or small); you have to go through the same cycle, with time, cost & effort being munched up....
Then you realise that you need an urgent fix, and you have to repeat the above.  There is NO guarantee that the end users will actually download your latest update and install it - I know that I quite often swipe "that way" and ignore the "you have 23 updates to install" on my phone.
You now have to support multiple versions of your app on multiple different devices on the same platform, and if you have decided to make apps for both platforms you need to manage all of the above for both...hmmm... starting to get rather expensive, all this, and for not a lot of return on benefit.

Whereas, back in the good old days when you had yourself a static web site, you could just FTP a set of new files and job done.  Then you morphed to a web app that had some static but mostly dynamic content; you could update the content and boom, it was there, it was live.  If there was a structural change, you could release that on its own and it was up to you to approve it.

Maybe, by going forward so quickly, there is a realisation that you are now silo'd into 2 companies controlling the apps that get to the end users, and you are now paying quite a lot of money to manage and maintain your apps through these 2 companies to get out to your user base.  Did you really realise the lock-in that you were walking into?  Probably not.
Also, if you were to launch a new app today - you'd get buried amongst millions of other apps and your target audience might never get to your app because of all the noise.  They might know your URL though (after all, wasn't that also the big thing before: "get your web domain, so everyone will know your brand and how to reach you" - that seems to have been thrown out with the bath water).
Back "in the day", you could just make your web site/app, host it someplace, get traffic to it via different marketing methods and people just used didn't need the overhead of "app stores"...  Revolution time, well, small/tiny revolt time...

As Ionic quite nicely put it, visit here:

It seems to be the hipster thing to be doing....but like all hipster things, it's just the youngsters doing the thing that us oldsters were doing previously, when we were told we were uncool - until now, when it is suddenly cool....(oh, I don't know!?)  All I do know is that I'm sticking with PWAs for 2018!

Yes, I'll probably do an article soon on making a PWA that calls Watson REST APIs from both desktop web browsers and mobile browsers, and see how to cater for both from different devices.  In fact, I'll probably start doing that in the evenings this week, as I have a requirement for it.....

Until then, HERE's a walkthrough of taking an existing Ionic app and making it a PWA.  (did someone say service-worker.js?! more on that later!)

oh, and for the people who remember AvantGo....I find this little nod to the invention of AvantGo amusing:

Thursday, 8 February 2018

Self parking slippers from Nissan

original article HERE.

At first glance, the ProPILOT Park Ryokan looks like any other traditional Japanese inn, or ryokan. Slippers are neatly lined up at the foyer, where guests remove their shoes. Tatami rooms are furnished with low tables and floor cushions for sitting. What sets this ryokan apart is that the slippers, tables and cushions are rigged with a special version of Nissan's ProPILOT Park autonomous parking technology. When not in use, they automatically return to their designated spots at the push of a button.

For its primary application, Nissan's ProPILOT Park system uses an array of four cameras and twelve sonar sensors to wedge its host vehicle into even the smallest of parking spaces—whether it's nose-in parking, butt-in parking, or trickiest of all, parallel parking. It seems unlikely that the slippers use quite the same technology, although Nissan does suggest that the technology is at least similar, which would mean that the slippers are operating autonomously rather than relying on someone off-camera with a remote control. If you'd like to investigate further, Nissan is offering a free night for a pair of travelers at this particular ryokan, which located in Hakone, Japan—a lovely place that you should consider visiting even if self-parking slippers aren't on the amenities list.