• Activ Right Brain
  • About Dean
  • Designing The Future
  • Speaker
  • Keynotes
  • Blog
  • Art
  • Contact

activrightbrain


The Subtle Art of Adfluence

There are genuine markets for all our current social broadcast ephemera but brands and audiences alike have little idea what experience and depth sits behind these. Do they care? Should they care?

Adfluence.jpg

In a world where anyone can jump on a live stream, become an overnight YouTube sensation or deliver an Instagram account to rival the exploits of Ernest Hemingway, how do we make sense of what we’re presented with, and what sticks?

I recently spent three days in London in the company of Chinese tech giant Huawei at their ‘Global Mobile Broadband Forum 2017’, which sounds tediously dull to my regular audience. It wasn’t.

The main conference introduced the great and the good of various networks (BT, Vodafone, BELL, Viacom, Telefonica) and a collection of speakers telling us how amazing 5G is going to be. It will be, when it finally arrives, but Huawei are at least in the driving seat of the future of connectivity.

Consumers have little interest and even less belief in 5G when most of us still struggle to connect on 3G or 4G. Although we’ll be able to download every episode of Game of Thrones in less than a second with the new technology, in the real world we’d probably do this over WiFi before we left the house.

One of the greatest hurdles ahead for anyone hoping to sell the wonderful world of connectivity is to successfully tell the story of where it practically sits in our daily lives, how it invisibly weaves its magic by empowering the things we already love.

And this is where the most interesting part of the Forum kicked in – the expo. Here, Huawei were demonstrating the practical applications for 5G, such as streamed VR and AR content, connected cars, cows (yes, cows), robots, phones, watches and the world’s first full-size passenger drone – which I just managed to cram my 6-foot frame into. Next step: testing this future mobility platform in skies where current legislation actually allows for it.

There’s a taster of the expo action in my summary video below.

.@Huawei Global Mobile Broadband Forum = #5G + #VR + #IoT + #Cars + #Drones + #AutonomousDriving + #Wearables + #Robots + #DoctorWho!! #HWMBBF pic.twitter.com/1fZuWV1AgR

— Dean Johnson (@activrightbrain) November 16, 2017

The main reason I find myself spending quality time with Huawei is as a Key Opinion Leader (KOL): I’ve built up enough social significance and driven enough public opinion through conference speaking and broadcast platforms to demonstrate relevance. My third day with Huawei was all about me – I mean us – well, the future of ‘Influence’ anyway.

There weren’t any YouTube sensations or Snapchat superstars – this was about how influence will develop beyond the mere title, and how we can genuinely shape opinion rather than merely grab a bunch of likes.

I’ve seen some really poorly targeted influencer campaigns recently, including one global auto manufacturer letting a bunch of the aforementioned YouTube/Snapchat stars loose across Europe in its newest hero model – a car they’re never realistically going to buy. Not because they couldn’t afford to (they’ve made enough from their Instagram posts, or have rich enough parents, to grab anything they want) but because the car was clearly aimed at an entirely different demographic.

I’m sure they delivered thousands of likes and views for the brand – but not from anyone who would part with their cash. The Social/Marketing team probably thought they had a massive success on their hands, though, as the initial results would have seemed positive. Let’s see how many cars they shift as a direct result…

So, what’s the future of Influence? Well, most agreed that the platforms will be similar – even if new concepts appear and we access them on different devices, text, image and video will still be relevant, with voice becoming increasingly popular, especially with the adoption of more AI-driven content and interaction.

There was a general consensus that ‘likes’ won’t be relevant in the future, but I disagree – it’s usually something said by people who don’t receive enough likes. Many people use a like as a way of bookmarking or personally expressing agreement. So unless we all plan to remember everything or agree with nothing, the ability to like is not going away in a hurry.

Also, seeking out your audience will become increasingly important as broadcast volume continues to grow – you can’t expect everyone to find you.

The best quote of the day came from Tamara McCleary: “Relevance is the intersection between your opinion and theirs.” Make yourself relevant, but not by simply posting exactly what you think your audience wants to hear, as that adds little or no personality.

Here's my personal approach to social content that genuinely influences:

  • Have an opinion – even if it causes controversy by conflicting with your audience because that generates conversation
  • Make something – don’t just repost everybody else’s content or you become a researcher rather than an individual
  • If you want to become an opinion leader, then lead by example rather than generate white noise in the continual pursuit of likes
  • Don’t be afraid to hijack a conversation – play the hashtag game and tag your posts to amplify yourself by having the right opinion at the right time
  • Remember, you've had no influence if everything remains the same

I’ve put the above into practice over the past couple of weeks, so here are a few examples. These links are to Tweets but I also posted supporting tailored content across Instagram, Facebook and LinkedIn where relevant:

Lamborghini (I helped the Italian supercar brand to trend globally by Tweeting the launch of their latest concept car revealed at MIT).

Oh, great work @Lamborghini (and @MIT ) - say hello to the #TerzoMillennio, the #EV Superfuture! #EmTechMIT #cars #future #tech #design pic.twitter.com/tzIxzCN4dN

— Dean Johnson (@activrightbrain) November 6, 2017

Hattie meets Google meets John Lewis meets Moz (Google sent Hattie the cuddly Moz toy featured in their Christmas commercial. They also included the accompanying book which interacts with Google Home and Google Home Mini. My video of Hattie was then picked up by John Lewis, Google and the publisher, Nosy Crow).

When Hattie met @Google and @johnlewisretail and #MozTheMonster (and @sallyephillips ) . A lot of love for these brands right now. Thanks @GoogleUK - Hattie loves Moz! Great work from @NosyCrow on the book and innovative #publishing! #googlehome #IoT pic.twitter.com/6M9h1H1U5j

— Dean Johnson (@activrightbrain) November 19, 2017

CUBED (I filmed a promo video for my Keynote at CUBE Tech Fair in Berlin next year. Promoted by the event).

We’re pumped to have @activrightbrain at #CUBETechFair - watch him as he spills some time-tested secrets!

Get a head start on Tech Fair and register now https://t.co/IKac3EVvwx pic.twitter.com/LPpBRGhJUC

— CUBE Global (@CUBEConnects) November 21, 2017

Go forth and Adfluence!

tags: Influence, Influencer, Social, Social media, Huawei, technology, 5G, telco, Lamborghini, Google, Google Home, IoT, John Lewis, CUBE Tech Fair, conference
categories: Conference, Connected World, Social
Sunday 11.26.17
Posted by Dean Johnson
 

Connected Development

Start a conversation about Artificial Intelligence and you evoke Hollywood’s vision of the future, full of killer robots, time-travelling cyborgs and sentient machines. That’s a fun, if apocalyptic, view but we’re closer than you think. To the AI, not end of days.

Thanks to our increasingly rapid efforts to connect the world around us, your car will soon drive you to work whilst teaching you Swahili, ordering milk and cheese for your fridge, reminding your significant other you’ll be home late because you’re having an affair with a robot shop assistant, taking a DNA sample from the steering wheel and poking you in the buttocks during your virtual porn session.

CES is once again revealing a selection of crazy devices many of us will never need but the message is the same – they should all talk to each other.

The machines don’t get to hog all the conversation as we’re already used to talking to the digital partners in our lives: Siri, Google, Alexa, Cortana, our cars. The idea works in a home or personal space where we’re all comfortable with a bit of digital banter, but it comes unstuck when we’re expected to talk to our devices in a social situation – wandering down the street, on a train, in the supermarket. This interactive Tourette’s didn’t help Google with its Glassholes image and I’m not sure it will change with Glass 2.0. It’s all still a bit weird – and noisy if we’re all doing it at the same time.

Speaking of cars (or to them) the tech and automotive worlds really have collided at CES this year. The convergence has been happening over the last decade but there’s never been as much infrastructure in place to genuinely make one relevant to the other as there is this year.

Ford announced its partnership with Amazon to connect their cars to Alexa, operating IoT devices such as lights, heating, A/C and garage doors. In the home, the same tech offers status updates from your car via Echo. Ford also displayed their new, smaller autonomous car sensors and announced plans to move beyond being an auto manufacturer in 2016 to become a mobility company.
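
To illustrate the shape of that kind of voice integration (the intent names, device registry and replies below are invented for illustration – this is not Ford’s or Amazon’s actual API), a skill handler is essentially a thin dispatch from a spoken intent to device state:

```python
# Hypothetical sketch of a voice-skill handler bridging car and home IoT.
# Intent and device names are invented; not Ford's or Amazon's real API.

def make_handler(devices, car_status):
    """Return a handler mapping a spoken intent to an action and a reply."""
    def handle(intent, slot=None):
        if intent == "OpenGarageDoor":
            devices["garage_door"] = "open"   # command flows car -> home
            return "Opening the garage door."
        if intent == "SetCabinTemperature" and slot is not None:
            devices["climate"] = int(slot)
            return f"Setting the cabin to {slot} degrees."
        if intent == "GetCarStatus":
            # Status flows the other way: car state read out at home via Echo.
            return (f"Fuel at {car_status['fuel_pct']} percent, "
                    f"doors {car_status['doors']}.")
        return "Sorry, I can't do that yet."
    return handle
```

Wiring the reply to a real garage door is left to the IoT platform; the point is simply that the voice layer is a small dispatch over shared device state.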

Major announcements have been timed to coincide with CES by big players such as BMW and Ford but it’s a new arrival that has grabbed some of the brightest headlines – Faraday Future. This new kid on the block plans to set up local operations right here in Vegas, with design and production of their first electric vehicle planned to start in 2017.

They revealed their FFZERO1 concept last night and it’s a truly stunning piece of design, not just from a physical product perspective, but also the well considered internal digital design and augmented reality and how this and the experience will translate to our personal devices. They're a young team that prides itself on rapid turnaround and they've designed a connected car from the ground up.

The concept of the connected car doesn’t just refer to a phone and a dashboard, it’s also the communication with the surrounding environment and how this awareness will eventually deliver the first credible autonomous vehicles. You’ll know of my love of cars and an unhealthy fascination with technology so you’d be forgiven for thinking I’m all for self-driving cars. I’ll sit on the central reservation here as I’ll happily hand over the controls on a motorway commute as long as I get the wheel back for the twists and turns of a challenging country road.

For me it’s an unwillingness to hand over the whole experience because I still love driving. For many others, it will be a trust issue as they’ll expect it all to go horribly wrong, or have all their travel data sold to the highest bidder (Google is building a car after all).

Our intelligent connected world holds great promise for things like interactive storytelling (I hinted at this in Mark Piesing’s recent article for Publishing Perspectives) or making life simpler when travelling the globe. We’ll also have to wade through a pile of connected crap on the way as manufacturers and designers still seem hell-bent on adding ‘smart’ to everything they make.

It’s only a matter of time before a tabloid headline exposes a man caught having virtual sex with his SmartFridge.

“Siri, write another article about CES”.

tags: Artificial Intelligence, AI, Connected Car, connected home, smart home, autonomous driving, cars, CES, CES 2016, porn, Google, Google Glass, Glassholes, Google Glass 2.0, Google Glass 2, Faraday Future, EV
categories: Apps, Automotive, cars, Conference, Connected World, Futurology, Gadget, Innovation, Mobile technology, Virtual Reality, Wearable Technology
Tuesday 01.05.16
Posted by Dean Johnson
 

Rise of the Machines

Robots: Who doesn't love the idea of a humanoid personal assistant with artificial intelligence, laser eyes and the ability to preempt your every move? No, wait…

2015 is the year of wearable technology, right? No, that was last year. This year wearable tech beds in and gets on with the real job in hand. The 2015 buzz surrounds robots of all shapes and sizes.

Yesterday, I caught up with Aldebaran at CES, a French company making big waves in the android marketplace. That’s ‘android’ with a small ‘a’. I first met NAO in Poland last month and loved the playful approach Aldebaran had taken to developing their first consumer robot.

Only it isn’t available to consumers yet. Ten years of refinement means developers have been the first to get their hands on the cute little chap to hone interaction and push boundaries for commercial markets. He currently responds to a set list of commands, but this is then extended by the user, with AI kicking in as he learns new and exciting ways to respond.

The limb articulation is particularly impressive. If you push NAO over, he struggles back to his feet in a truly human fashion. This is where it all begins to get a little weird. The mere act of pushing him over makes you catch your breath, as if you were actually bullying a small child. You feel bad about having performed this action merely to see how he’ll react. And so begins the human/humanoid relationship.

Aldebaran have some exciting products in the pipeline, including Romeo, a personal assistant and companion for the medical and care industry. Humanoid robots with character can play a vital role in this area, with NAO already used in hospitals to aid rehabilitation and put children at ease in an intimidating environment.

I spoke with a number of other fascinating robotic manufacturers at CES but the tiniest was Ozobot, a cute little droid that (in their words) “teaches robotics and coding through fun, creative and social games”. I love the use of physical and digital inputs – in its simplest form, providing a pen or pixel line for Ozobot to follow across paper and tablet. The manufacturers picked up a raft of awards at CES – deservedly so.
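
The decision logic behind that kind of line-following game can be captured in a few lines. This is a toy sketch under invented assumptions (a two-sensor model with a made-up threshold – not Ozobot’s actual firmware), where each sensor reads 0.0 over white paper or pixels and 1.0 over a black line:

```python
# Toy sketch of two-sensor line-follower steering logic.
# Sensor model and threshold are invented for illustration.

def steer(left_sensor, right_sensor, threshold=0.5):
    """Return a steering command that keeps the line between the sensors."""
    left_on = left_sensor >= threshold    # line drifting under the left sensor
    right_on = right_sensor >= threshold  # line drifting under the right sensor
    if left_on and right_on:
        return "stop"         # solid bar across both: an intersection or end
    if left_on:
        return "turn_left"    # steer back towards the escaping line
    if right_on:
        return "turn_right"
    return "forward"          # line still centred between the sensors
```

Drawing a thicker pen line under both sensors then reads as a “stop” marker – which is roughly how physical inputs on paper can become commands.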

This market isn’t just for startups: Google went on a buying spree in 2014 and its ownership of robotics companies is now in double figures. Automotive manufacturers have been at this game for years. Honda and Toyota are particularly hot in this area, as Mercedes and Audi up their game at CES with autonomous cars – that kinda makes the whole car a robot!

Amazon is another big player entering the arena. They’re offering us the chance to purchase our own personal assistant, Echo, useful and worrying in equal measures – especially with its ‘always listening’ approach.

Window cleaning robots, digital sandwich board cyborgs and tiny printer droids – Vegas was wall-to-wall automatons.

Of course, robots come in all shapes and sizes, with drones also falling into this category. Now everyone wants a drone before they get legislated out of existence so grab one while you can!

If 2015 wasn’t kicking off with a big enough buzz around robots, just wait for the season climax with the release of Star Wars: The Force Awakens in December. These are the droids you are looking for.

tags: Robots, robot, cyborg, automaton, android, humanoid, Aldebaran, CES, CES 2015, #CES2015, Romeo, Ozobot, Google, Honda, Toyota, Mercedes, Mercedes F015, F015, Audi, autonomous driving, autonomous cars, Amazon Echo, Amazon, Echo, drones, Star Wars, Star Wars: The Force Awakens, Star Wars The Force Awakens, Star Wars Ep VII, Star Wars Episode VII, NAO
categories: Automotive, cars, Conference, Design, Futurology, Gadget, Innovation, Star Wars
Friday 01.09.15
Posted by Dean Johnson
 

Glass Half Empty

Hands up who thinks I look an idiot (comments restricted to the head gear please). Raise your hands if you think I’m spying on you. And finally, who wants a go on my new Google Glass?

DJ_Glass.jpg

I think we’ve already established my fondness for cars, design and technology. This new toy sits firmly in the third category and stretches my propensity for rabid gadget adoption to new limits. I love the thought of taking information away from a screen and delivering it as close to the brain as possible, however I’m not a fan of looking daft.

It’s this healthy skepticism that helped me through the door at Google’s New York office last week when I collected my shiny new Glass headset. Or should I say attempted to collect, more on that later.

If you don’t already know what Google Glass is (and I keep forgetting there are some of you who don’t), this is the Mountain View tech giant’s first foray into wearable technology.

Essentially, a metal bar runs around the user’s brow line from ear to ear, balancing on their nose. An additional arm reaches round the right side where a prism projects digital content directly onto the retina of the eye so both foreground and distant images remain in focus and the multitasking begins. This grey, black, white, blue or red arm also contains the speaker where the audio is transmitted directly into the wearer’s skull through bone-conduction. It’s not as freaky as it sounds and effectively leaves you with both ears to function normally without the ambient audio barrier headphones create.

That’s the tech, so what’s the experience? The screen image is almost unnaturally sharp and it comes as a shock when both the digital information and the world around you are in focus. The headset is very light and to someone who doesn’t usually wear glasses such as myself, it feels pretty unobtrusive.

When the headset is asleep, the screen remains completely clear and only when the touch-sensitive panel on the screen arm is tapped does the content spring into life. Next steps require the wearer to utter the magic words “OK Glass” to activate voice control or swipe back, down or forwards with one or two-fingered gestures on the screen arm. Still with me?

You can also nod your head up and down to deactivate the headset or go directly to the camera mode by pressing a physical button near the screen.

All this takes some getting used to but becomes increasingly addictive as you discover more features, such as the ability to ask Glass for a local restaurant, then receive on-screen navigation direct to the door. Or how about a live Google hangout with your screen view transmitted to your invited circles? Maybe flight information beamed directly to your screen, with Glass having drawn the information from your airline’s email confirmation?
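
The interaction flow described above is essentially a small state machine. Here’s a hypothetical sketch of it (the state names and methods are invented for illustration – this is not Google’s actual Glass software): asleep until the touch panel is tapped, “OK Glass” opens voice control, and a head gesture puts the headset back to sleep.

```python
# Hypothetical state-machine sketch of the Glass interaction model.
# States and behaviour are invented for illustration, not Google's API.

class GlassSketch:
    def __init__(self):
        self.state = "asleep"

    def tap(self):
        """Tapping the touch-sensitive screen arm wakes the display."""
        if self.state == "asleep":
            self.state = "awake"

    def say(self, phrase):
        """The hotword activates voice control; later phrases run as commands."""
        if self.state == "awake" and phrase.lower() == "ok glass":
            self.state = "listening"
            return None
        if self.state == "listening":
            return f"running: {phrase}"
        return None

    def nod(self):
        """A head gesture deactivates the headset."""
        self.state = "asleep"
```

The value of modelling it this way is that ignored inputs fall out naturally: speaking to a sleeping headset, or tapping one that is already awake, simply does nothing.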

That last point is actually impressive and unnerving in equal measures. The level of ‘data sharing’ hits home when you realise HAL has started reading your private correspondence.

At this juncture I’d like to explain why I’m not wearing a conventional pair of Google Glass in the photo above. I’m English.

It seems that’s the only reason. I was selected for the #ifihadglass programme via a Twitter conversation instigated by Google, asking how we’d use Glass in interesting and innovative ways. Only a few of us were fortunate enough to receive the single acknowledging Tweet saying we’d been selected to participate in this groundbreaking research into the future of wearable technology (and that it would cost us $1,500 each – a small price to pay in the grand scheme of things).

A wall of silence followed, during which time the first ‘I/O 2012’ conference attendees received their headsets (more commonly known as Glass Explorers, or the unofficial title, Glassholes). The press got to review this kit by spending time with these early adopters, who subsequently became tech/geek superheroes.

The silence persisted and my anxiety grew, so I began to harass everyone I knew at Google (or anyone in any way connected to the organisation) to see if I could learn the schedule for issuing Glass. I apologise now to all those nudged, but the radio silence led to doubts that I had actually been selected.

I needn’t have worried, as a message finally appeared on my iPhone to tell me that @projectglass had followed me, and a DM arrived – “Your Glass is now ready! Please purchase within 14 days.” – directing me to a designated website to book my dedicated one-to-one introduction to Glass. This was it: I could organise my US trip to collect my headset!

With my US address supplied (Brandwidth’s New York office) and $1,500 paid in full (via my colleague’s US-registered credit card – thanks Dacia!) I was finally in the system with my designated time and destination set. Google New York here I come!

On the basis of this momentous occasion, I arranged a coast-to-coast US tour, taking in meetings in San Francisco, Cupertino, Mountain View, LA, Washington and New York – all in one fast-paced week, culminating in my appointment at the funky Google New York office.

Upon arrival I was escorted upstairs by a Glass-wearing brand ambassador, my first human contact throughout the whole process. I presented my passport as proof of ID at the reception desk and was issued with my Glass pass and whisked into the inner sanctum.

The one-to-one session began with the opportunity to try all Glass colour (or color) options, with and without the sunglass attachment. This was a pleasant surprise as I had assumed my selection would be set in stone the moment I had completed my online purchase. Both blue and red suggested I was trying too hard and looked more than a little like Timmy Mallet or Jonathan King – not a good look. The black looked a little heavy when not accompanied by the sunglasses. The white looks great but too clinical. I stuck with the neutral grey I had chosen on my original order.

I was logged into a Google Chromebook (the first time I had seen one in the wild), my headset registered and synced with G+ and GMail accounts and I was in the system! A two hour induction and training session followed during which I explained I intended to make a charity wing-walk wearing my new Glass, amongst other death-defying feats to entertain, reward and probably annoy my digital audience in equal measures.

Then my Glass world came crashing down as another cheerful Google employee asked if I was a resident of the United States. “No,” I replied, “but I do spend a lot of my time here, test and review emerging technology, build content for these devices and make recommendations to global clients. And I was chosen by Google to be here.”

“Oh, but you’re English”

“Yep”

“I’m sorry but we can’t let you leave as this is only open to US residents”

“But that doesn’t make any sense for a beta programme looking to obtain as much feedback as possible from a varied global audience”

“I’m sorry but we can’t let you leave as this is only open to US residents”

“Can I not try to run out with them? I’ve paid for them and still own them?”

“I’m sorry but we can’t let you leave as this is only open to US residents”

I left without shouting at the well-intentioned Google staff and walked 66 blocks in 30° heat to get the frustrating situation out of my system.

Had I made a run for it and actually made it as far as the sidewalk, my first public outing wearing my Glass headset would probably have felt akin to wearing only an oversized pair of Dame Edna specs and my pants (that’s underwear for the Americans, not trousers). These anxiety levels may well have proven unfounded, though, as US crowds are more accepting of the technology – especially in cosmopolitan New York. It might have been quite different on the streets of London.

I’m still intrigued to see how Glass evolves and how the human element is represented. There certainly wasn’t much human interaction in the run up to my Google visit!

Although much of the focus surrounding Glass is on generating images and video footage that gives a first-person perspective, I’m concerned we’re beginning to see a removal of character rather than adding personality. Transmitting or recording ‘my viewpoint’ never actually features the wearer so we’re actually one step removed rather than making a connection. Either way, this still generates interesting content but we shouldn’t lose sight of the individual.

Am I disappointed? You bet your ass I am. Even if the number of individuals lucky enough to be accepted onto the beta programme (as I was) is very small, I’m willing to bet I’m in a category of one when it comes to those who have owned Glass only to be parted from it for being a little too English.

 

tags: Google, Google Glass, Glass, Glass Explorer, #ifihadglass, New York, Wearable tech
categories: Futurology, Gadget, Innovation, Mobile technology
Monday 06.17.13
Posted by Dean Johnson
Comments: 1
 

Designing the Future