Watching someone do something can make you experience it as if you are doing it yourself… hard to believe?

Sounds far-fetched! But believe me, it’s not a figment of science fiction; it is grounded in neuroscience research…


I recently came across a reference to mirror neurons in neuroscience studies, and the more I read about them, the more intrigued I became…

A simple explanation suggests that there are specialised neurons (named mirror neurons) that fire both when a person performs an action and when the person observes the same action performed by another – thus mirroring the behaviour of the actor, as though the observer were performing the action himself.

If this is true, then it’s almost as if the mirror neuron is performing a virtual reality simulation of the other person’s action… just think about the possibilities – it could start to explain both simple behavioural and complex social responses… I have often wondered why most of us get so engaged and emotionally charged when we watch our favourite sports… it’s almost as if we are playing ourselves! Could it be the mirror neurons at play?

As with any new discovery, it’s a subject of speculation and intense debate, and while it’s premature for us to draw conclusions, I am personally biased by my passion for understanding how our brain adapts and using that to simplify everyday activities.

The potential of the discovery is in itself enough motivation (for me) to delve deeper into the subject, and the initial opinions that I have found have not yet disappointed me. (Ref. A good introduction to the subject is a TED talk by neuroscientist Vilayanur Ramachandran, where he describes his research on mirror neurons.)

Most of the discussions talk about the potential role and importance of mirror neurons in two different areas – from understanding the actions of other people, or empathy (where we could literally experience what others are experiencing and adopt the other’s point of view), to learning new skills by imitation (where the mirror systems simulate observed actions). The experiments show that while we can empathise with and imitate another person’s actions, we are eventually able to distinguish that the action is not our own, since we do not receive the same feedback from the sensory receptors in the skin (touch, pain, etc.).

The importance of empathy and imitation is not hard to imagine in any context – from broad social and cultural contexts to dynamic business environments. As our environments become more global and we work in geographically distributed teams, our primary business interactions are centered on email, conference calls, social and collaboration tools, etc. As the opportunity to watch someone in action has notably gone down, it has inadvertently restricted the use of our natural abilities of imitation and empathy in everyday interactions.

It is believed that video can fill this void – it provides an opportunity for people to observe and watch others as they speak and act… it is becoming increasingly apparent that embedding video in our interactions and work-flows in product design not just drives simplicity of action, but influences user behaviour through an increased ability to understand and empathise with others and to relate more effectively by imitating behaviour and skills.

I genuinely believe that by understanding what makes people act the way they do, we can design more intuitive and engaging products and interactions that match their natural way…

ps. Of course, I cannot deny that my excitement extends beyond everyday social and behavioural applications; I am equally fascinated by the possibility of a scientific explanation for the Indian philosophy that I have grown up with, which is based on the belief that there is no real independent self and that we are all part of the same consciousness… after all, who knows – maybe we are all connected by neurons and just need to dissolve the barrier of the physical self to communicate and interact far more effectively than on the current digital plane of the internet!

This article was first published on LinkedIn on May 07, 2016.



When products are designed to fall apart…?


A couple of days back, the home button of my iPhone stopped responding… this is the second iPhone I have owned that has ended up in this state in the last 3 years… and it got me thinking…

As I reflected on it, I became more and more convinced that this is by design – a deliberate strategy to restrict the lifespan of a product and drive the replacement cycle.

But what intrigued me the most is that this is not a new radical approach conceived by Apple, but has been successfully deployed by product manufacturers and producers for decades.

I came across an interesting story from the 1920s where it is said that Henry Ford started to buy back scrapped Ford cars and asked his engineering team to disassemble them. Almost everyone believed that the goal was to find the parts that had failed and identify ways of making them better. On the contrary, Henry Ford asked the team to identify the parts that were still working and explore ways of re-designing these parts to cut down their life and have them fail at the same time as the others – a smart business intent to cut down the cost of design and manufacturing and avoid over-designing!

It’s an out-of-the-box way of looking at things… and it seems to make perfect strategic sense. Introducing the product lifespan as a product parameter adds flexibility to the product development cycle by opening up options for exploring other constraints – not just time, cost or quality, but also technology selection, material properties, user experience, performance, processes, regulations etc.

I got so fascinated with the idea that I continued to look further and found the term planned obsolescence, which has indeed been used in the context of product design and economics… it describes the approach of designing a product with an artificially limited useful life, such that it becomes obsolete or no longer functional after a certain period of time, where the driver is primarily to reduce the repeat-purchase time interval, i.e. shorten the replacement cycle. It appears that the light bulb was an early target for planned obsolescence, when companies standardised the life of a light bulb at 1,000 hours and even went to the extent of fining producers if the light bulbs lasted longer! The strategy has found support from governments in the past, and it has been used to stimulate consumption and fuel the economy… but over the years it has resulted in divided camps, and in recent times there have been movements against the strategy, with some countries now requiring manufacturers to declare intended product lifespans.

As I thought about it further, it dawned on me that I was practically guilty of following the same strategy… and hence had lost the moral right to be judgemental… I realised that it can easily be argued that we (software providers) are no different and have forced users to upgrade to new products by stopping support for older technologies, using incompatible interfaces, restricting hardware or OS support and building vendor lock-in… intellectual production has fallen prey to the same pattern (as industrial and consumer production) of generating constant (renewed) demand for its products… creating a society that lives under the illusion of the perpetually new.

In this state of mixed emotions, my view got biased by my own experience and actions… while many people argue that the belief that products are designed to fall apart is a fallacy, I (albeit reluctantly) have to disagree.

My experience of product design and development has taught me that every product design cycle involves a complex interplay between many business, technology and operational factors – from time-to-market, price points and product positioning to technology readiness, user experience, performance or resources, processes etc… and it is a reality that I have designed products with a clear view of a restricted life-span – simply using them as first generation products for early adoption and then replacing them (over time) with new product releases… which is an example in itself of designing products to fall apart (after a time)… or maybe it begins to sound more reasonable when we rephrase it and say that products are designed to work successfully for the defined lifespan and specified business goals…

Of course, the answer is not what I wanted to hear, as it means that I have to start looking for a new phone – even though I had no need for any new functionality… but then maybe I do not know what I am missing and may be pleasantly surprised by the ‘new’ product…

Arti is the co-founder of humanLearning – a fast growing UK-based technology startup – set up with an earnest desire to make the life of busy professionals simpler and more effective. hL is disrupting business workflows thru WinSight – a mobile-video based platform – that is changing the way businesses drive innovation and quality in sales and service. Arti can be reached at

[This article was first published on @LinkedIn on April 16, 2016]


If only my interaction with a machine could be Collaborative…?

I realised that most Error Messages frustrate me, some even scare me and only a very few actually guide me through the error scenario…


Just last week, I was exploring a new software service – and, incredible as it sounds, despite being a native user (and developer) of software and having survived generations of software applications, I still panicked at the sight of a ‘big red X’ (a typical ‘ERROR’ alert) – so much so that, before I knew it, I had instinctively killed the application!

That got me thinking. A simple response by a system to one of my ‘natural’ actions managed to induce a feeling of helplessness and despair, even a degree of frustration and anger – enough for me to give up. And, if I am honest, I know that I have no intention of retrying anytime soon.

I am sure that I am not an exception – many of you will have a similar experience to share, at least at some point in your interactions with one system or application or another.

I looked further and found research suggesting that there is a tendency in all of us to blame ourselves for our failures with everyday objects and systems. Surprised? Well, I certainly was. Isn’t it a contradiction of our natural inclination to blame everything that goes wrong in our lives on others or our environment?

But the underlying question that continued to bother me is simple – if I reflect on my ‘natural’ action, it was nothing but typical ‘user behaviour’. How can user behaviours be ‘Errors’? So what are we missing?

I guess it comes down to the product design and user experience. I know that when I design a product, no matter how I expect users to use it, there will always be a few users who find different and unexpected ways to use it. And a good design can neither ignore that (and hence needs to handle the unexpected), nor be so restrictive as to force all users to comply with a single flow (which would inherently conflict with their natural behaviour).

I started to look at the error messages that we issue under different error scenarios in our own mobile video application. We had invested a lot of time in humanising all our application messages and notifications, and were even inspired by NLP (Neuro-Linguistic Programming) – so while most had a human touch (and hence were not as scary as the big X), I realised that they were still fairly limited in guiding the user through to the next stage.

I started to look at error messages from different applications under different scenarios. I noticed that even when actual errors encountered were similar, my experience (and hence response) as a user was very different – and the difference was in the small details of how the message was communicated.

And then I remembered an old reference from the book ‘The Design of Everyday Things’, where Don Norman interestingly uses a standard interaction between two people as an example to demonstrate that effective interactions are mostly built on collaboration – each person tries to understand and respond to the other, and when something is not understood or is inappropriate, it is seamlessly questioned and clarified, and the collaboration continues naturally.

I guess, as a user, I am tuned to expect my interactions to be collaborative – and hence I struggle when my interactions are with a machine (and not another person)… of course they inevitably fall short! Any expectation that user behaviour will (or should) change when interacting with a machine is suspect – we all know how un-adaptable we are as a species! There is no doubt that the goal for us – as product designers – should be to build intelligence into the machine interaction and aspire to develop a collaborative interaction between the user and the machine: when the user does something wrong, the machine should respond by providing clarification, helping the user understand, and guiding the user through to the next stage – ensuring that the communication illustrates how to rectify the inappropriate action and recommends the next actions.

I know it sounds onerous. But it is not. Technology is a powerful tool, and we now have enough capability and building blocks to easily build simple collaborations and design good feedback and interaction models.
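As a rough sketch of what such a guiding error could look like in code (the names, structure and message text here are hypothetical illustrations, not from any real application), the idea is simply that every error carries clarification, a fix and next steps rather than just a failure flag:

```python
from dataclasses import dataclass, field

@dataclass
class GuidedError:
    """A hypothetical error object that guides rather than alarms."""
    what_happened: str   # plain-language description of the problem
    why: str             # clarification of the likely cause
    how_to_fix: str      # a concrete step the user can take now
    next_actions: list = field(default_factory=list)  # suggested follow-ups

    def render(self) -> str:
        # Compose the message in the order a collaborator would speak it:
        # what went wrong, why, how to recover, and what else is possible.
        lines = [self.what_happened, self.why, self.how_to_fix]
        lines += [f"You could also: {action}" for action in self.next_actions]
        return "\n".join(lines)

# Illustrative example: an upload that failed because the file is too large
err = GuidedError(
    what_happened="We couldn't upload your video.",
    why="The file is larger than the upload limit for this plan.",
    how_to_fix="Try trimming the video to under 60 seconds and uploading again.",
    next_actions=["compress the file", "contact support for larger uploads"],
)
print(err.render())
```

The point of the structure is that the ‘scary’ part (the failure) is only one of four fields; the other three keep the collaboration going.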

I am fascinated with this new challenge. Our goal will now be to eliminate all error messages and instead replace them with messages that aid and continually guide the user…

This article was first published on LinkedIn on January 23, 2016.

When I paid the price for forgetting that the first 5 minutes of user journey is more important than all the cool features…


I love technology and of course I thrive on building cool features – nothing compares to the excitement of implementing highly complex algorithms or finding new ways of using technology to solve a problem.

But many hard experiences have taught me that technology (alone) does not sell, and I have over the years learnt to focus first on user experience and always keep the technology hidden.

So, when I started to write down the product specifications for my new startup idea, I did everything right – there was not a single word on technology and I captured the product definition through 5 stages of user journey.

[5 Minutes] captured the 1st Experience: AWARENESS

[5 Hours] focused on the 1st Use: ORIENTATION

[5 Days] looked at making it work for the user, leading to Participation: PERSONALISATION

[5 Weeks] looked at making it work for the larger group and added elements of Induction: INFLUENCE

[Ongoing] introduced capabilities to sustain the momentum: CULTURE

It was the right way to do it. We ran the user journeys with many clients and fed the inputs and preferences back. After 4 months of market validation we decided we were ready to start development.

And that’s when the problem inadvertently occurred (of course, I never realised it at the time). As we drew out the release plan we ran through the normal cycle of chopping features and prioritised to define the feature-set for the first beta version. I believe that is when my latent love for technology overtook my experience and I selected the base feature set to include functions that demonstrated the algorithms (justifying it all by saying that these complex contextualisation algorithms were our differentiator and critical for illustrating the value to the user). Nothing comes for free – to balance time and resources we decided to take a shortcut for our initial sign-up and login process. At that stage it seemed the perfect thing to do – after all, it’s something a user does only once or at best a few times – and even if it’s a few extra steps or a little painful, it will still work!

Of course it worked. But only for those true early adopters who had the motivation to take that extra initiative and accept a few painful interactions. As our user base grew, many users attempted to sign up but never managed to get onboard. It’s amazing how often it happened – and believe me, the interaction wasn’t anything demanding – it simply required them to copy an access code (sent separately) and input it as part of the sign-up. In those days, we lost a few users, and for many others our client engagement teams had to invest time and run after them (and often hand-hold them) to complete the process. If only we had stuck to our original definition of keeping the first 5-minute interaction simple and seamless, we would not just have got a lot more people onboarded, but also ensured that their first touch point was fail-safe.

In hindsight, it seems a blunder – how could we have ignored the fact that users had to onboard first before they could experience the cool contextual and personalisation features? There was no technical complexity to the desired sign-up process and I do not even know if we really saved that many development hours and resources. It’s more like we were avoiding working on a task that had virtually no challenges for us to fix…

What hurts me even more is that it’s something that I, as a Product Manager, always knew and even apportioned the right value to at the time of conceptualisation. And yet, somewhere, I still lost control on the road from concept to delivery.

This article was first published on Medium on October 27, 2015 and LinkedIn on November 21, 2015.

Reducing attention span – how I started to exploit it in today’s asynchronous world…


I can no longer focus on a thought for long – can you? Is a short attention span really as bad as often suggested…? I don’t think so. Maybe working professionals can adapt to exploit this emerging behaviour pattern?

My attention span has fallen over the last few years. It’s a fact. Earlier I could focus and concentrate at length; now I struggle. With so much happening in today’s multi-media, always-on world, if something fails to catch my attention in the first few seconds, I just drop it and jump to the next – and even then I end up with so much that I still can’t find time to look at it all. Some research puts the blame on our growing use of smartphones and connected networks, other research on the never-ending information overload. Whatever the cause, the effect can’t be ignored.

An article some months back created excitement by announcing that a goldfish (at 9 seconds) has a longer attention span than a human…

While I have found little medical validation to support this claim, and hence decided that it’s premature to give up on my digitally-connected lifestyle, it certainly opened my mind to accept that I am now doing many things differently. And since the jury is still out on whether it’s good or bad, my actions are contradictory. At one extreme, I am using yoga and meditation to build concentration; at the other, I am developing new operating patterns that fit better with a shorter attention span.

One important change that I am exploring is to drive a culture where information is broken into bite-sized chunks. Obvious as it may be, believe me, it is indeed a step change. We all suffer overflowing inboxes, overly-descriptive documents and hours of conference calls. To break away from this overload and start communicating in bite-sized chunks makes us edgy… Will data be missed? Can all the facts be captured succinctly? With more pages/slides comes more extensive preparation. And so on…

But, if we stop for a moment – we all know (deep down) that hardly anyone reads big communications diligently and much of what is said is often ignored… We also know that if the key message is clear then we can ‘capture’ it simply. It’s only when we lack clarity that we waffle… and we have all seen that structuring information into multiple, easier-to-digest, pieces helps present the big picture much better.

Let’s say we succeed in getting the information broken down. Smaller pieces work well with our reduced attention spans. We don’t have to wait to free-up time; instead we can pick up chunks to fill in time-windows. We can cover a lot more in a shorter time. We can start by focusing on the key highlights – and only delve into details for areas that really deserve attention. And we can always create time for that. We can become better at filtering and prioritising – and hence act more effectively.

The shift is to move from quantity to QUALITY.

The shift is to move from activity to RESULTS.

The shift is to move from management to OWNERSHIP.

Through our innovative use of structured mobile-video, we at humanLearning have pioneered this change. Our platform ‘WinSight’ enables professionals to craft clear messages – quickly & easily – in short (30-60 sec. max.), segmented, templated videos. We can now exploit small time-windows – we have captured our messages while walking out of a client meeting to the parked car, sometimes while waiting for our turn in a queue, sometimes while travelling on the train or the underground, but quite often while walking the busy streets. All our communication has become near real-time and yet is available asynchronously… it gives us all the independence of un-interrupted work-flows and flexibility from time-zones but still keeps us connected – more than we have ever been before!

Our belief in ‘Less is More’ may seem counter-intuitive in the age of BigData but it can evolve into the natural way of future communication – a more human way to interface & interact.

It will take time to move all the interactions to this new mantra. However, we should get started now. After all, it’s not just about survival anymore but an opportunity for working professionals to simplify their work-life by creating new, easier, quicker, more effective – asynchronous – ways of working.

This article was first published on LinkedIn on October 18, 2015 [ref.] and GrowthHackers on November 2, 2015 [ref.]

I recently concluded that I learn more from Success than Failure… and yet, isn’t it ironic that we are still obsessed with learning from failure?


It is popular belief – especially in the startup eco-system – that failure is a stepping-stone to success. I cannot deny that this gave me a lot of confidence (and comfort) when I co-founded a technology startup, as I believed that the worst outcome (for me) would be all the great learning that I would acquire, even if we faltered on the way.

Now, after many years of living the startup journey, I have lots of learning – both good and bad. But, being true to the spirit of learning from failure, I always diligently record everything that doesn’t work. I even look at it often, analyse it sometimes, and consciously try not to follow the same approach again. But then everything changed one day…

It was just one of those days when I was flustered – I was looking for answers, and I was getting irritated as I realised that for every previous effort that had failed, I only knew what did not work. I still had no clue what would work. I asked myself – how effective is that learning, if I still have to go back to the drawing board and continue the search for answers on how to make it work? I was not very upbeat, as I had gone through the process once and failed to find the answer – and what was the guarantee that the second search would be any more fruitful?

In that state of exasperation, I happened to come across interesting neuroscience research suggesting that brain cells only learn from experience when we do something right, and not when we fail. I was intrigued.

I wondered if I could correlate it with my own personal experience – so I tried to test the theory on the problem at hand. Our mobile-video based service for sharing experiences, stories and insights is deployed across 25+ countries in Europe. Most groups are very actively engaged, but a few still require constant nudges. All our discussion around driving adoption in the low-activity groups had always focused on what wasn’t working for those groups. That day we changed our outlook – we instead discussed everything that was working for the high-activity groups. We uncovered simple observations and found interesting patterns. We realised that we had simply never bothered to re-apply this successful learning back into the groups that required external stimuli.

That was the day I realised that my obsession with learning from failure meant that I was simply taking for granted everything that was working for us. Here was an opportunity to focus on success and build upon it – I knew what worked and I could make it happen again, maybe even do it much better. And yet I was spending more of my time learning from failures. Why? It made no sense.

I am now a convert. I now track our successes as much as (if not more than) the failed attempts. Of course, I know that I need to be cautious and ensure that I am not blinded by success. More importantly, I am cognisant that I need to continuously strive to do better than the last success. And of course it also does not mean that I overlook failures – but I now look at them in the right context.

Learning from success is my new mantra! I realise that the need is not to glorify success, but to recognise core strengths and convert them into strategic assets. Just as it is important to manage our weaknesses, we also need to diligently work on developing our strengths. And believe me – it is harder to focus on strengths; it is far easier to lapse into failures, regrets and emotions.

This article was first published on LinkedIn on September 13, 2015 [ref.]

Arti is the co-founder of humanLearning – a fast growing UK-based technology startup – set up with an earnest desire to make the life of busy professionals simpler and more effective. humanLearning is disrupting business work-flows thru WinSight – a mobile-video based platform that empowers ‘every’ professional to benefit from each other’s experiences & insights in the easiest, fastest and most impactful way.

A new National Highway: Virtual Connectivity to override Physical Infrastructure in India


Can Aadhaar evolve into a virtual connectivity infrastructure that drives a seamlessly connected society? What if Aadhaar gears up to be India’s answer to its painstakingly slow progress in building physical highways and infrastructure?

Aadhaar is a unique identification initiative launched by the Government of India under its Planning Commission. It is an ambitious project using basic IT technology (databases, computing) and connectivity (fixed or mobile) to create a dynamic online identity system. The integration of biometric technology has provided an advanced and secure capability for authentication. This has further been extended by integrating payment platforms and providing a unified system of real-time identity, authorization and payment transaction support.

The vision outlined by the government lays emphasis on social & financial inclusion. As a first step, authorization and payment services are being used to drive the delivery of distribution and transaction based services. Initial pilots have focused on social and welfare schemes such as Public Distribution Systems, LPG distribution & subsidy management, old-age pension distribution etc. In the next phase, applications could extend usage from authorization to access control or location/presence, and drive services ranging from something as simple as attendance to the more dynamic deployment of resources based on users’ current locations. The scenarios are limited only by our imagination.

However, for true momentum to build up, the initiative has to garner industry attention and evolve to provide value that encourages adoption by businesses and enterprises. This will not just lead to a massive build-up of Aadhaar-enabled services but also provide the impetus to propel it out of its current orbit to the next level of growth.

This evolution will need to be centered around 3 core areas – (1) extending its application beyond social welfare into businesses; (2) introducing support for analytics, which could be used to convert raw data into value-added user/service context or applied to intelligence-driven operations; (3) inter-linkages with other databases and systems for seamless connectivity.

Data has been touted as the new oil of the connected world. However, our experience has taught us that data has no value unless it is acted on and converted into meaningful actions. It’s only when the monetization potential is realized that it will drive social change.

The question for all of us – can this infrastructure be exploited to compensate for the lag in physical infrastructure investments? A nation that has been recognized for its extensive reserves of IT resources should not falter in playing to its strength in IT – we should be investing in creating an unprecedented scale of connected applications that cut across both social and industrial sectors and use virtual connectivity to open up reach as well as delivery. This could be the one area where we outpace every other nation & challenge the perceived dominance of other emerging nations like China.

Explosion of Data: Can it be monetized? (Part 1)


Effective data pricing is not about simply rolling out new pricing plans – it requires a re-think of strategies: implementation of new capabilities like policy control & traffic management; innovations in self-care, loyalty programs and cross-marketing; and integration of all these dimensions into real-time charging, notification & payment solutions.

The discussion on the ideal model to monetize the explosion of data is live again! The classic one-size-fits-all approach does not make sense any more.

Last year saw most of the major operators eliminate unlimited data plans and move to tiered pricing. There is very little support for the operators from the community, as everyone sees it as an inhibitor to the connected world. Questions are afloat on whether it will impede the growth of video-centric applications (still in their infancy) – be it multi-player gaming or video calling, media streaming or the many anticipated new applications. But it is being recognized that growth in data traffic is impacted by multiple drivers – as was seen in India over the last year after the introduction of 3G services, where data growth was impeded due to high tariffs, inadequate 3G coverage across the regions and a lack of seamless interoperability of many services (e.g., video conferencing) across operators. It is clear that monetization of higher-bandwidth networks cannot be taken as a given – a more holistic approach is needed to facilitate the adoption of data services and thereafter manage the explosion of data.

It is a reality that carriers need to gain control of growing bandwidth consumption and make consumers pay for what they use, while provisioning an adequate level of quality of service. While the initial efforts started with ways to curtail data-hogging activities, it is slowly being recognized that there may be better alternatives that address the fundamental problem: re-defining the delivery of services, managing the way bandwidth is utilized between voice and data services (as well as within multiple data sessions), and introducing new revenue streams. This also enables the telecom providers to differentiate their role in the value-chain and demand a share of the revenue thru premium services.

This requires a re-think of the existing architecture to target fair play, offer flexibility thru tiered services, facilitate monetization thru dynamic policy control & upgrade options, recognize customer loyalty, and integrate partners/sponsors into service models.

Architecture: Manage Inter-linkages

Some of the dimensions involved:

  1. Implement a data architecture which is able to distinguish & differentiate various types of data services in granular form, so that differentiated policies can be implemented based on service types, usage and subscription profiles.
  2. Implement policy control solutions in the networks to exploit the variation in data usage and apportion the usage of the network to maximize revenue. It has been seen that data usage varies widely depending on the end-device, end-user and other parameters – half of a typical operator’s data traffic is driven by only approximately 5% of subscribers, and the top 20% of subscribers by usage consume 80% of available capacity. The capacity available during off-peak hours can be monetized by deploying non-user-based services (e.g., utility metering, telematics, M2M, etc.).
  3. Extend policy control to create monetization opportunities, i.e. move beyond a denial or restriction of service to real-time notifications & engagement with the user, presenting options to upgrade to the desired service levels. These could be offered for additional payment or linked with the operator’s other loyalty & payment modules.
  4. The key is to provide a seamless experience to the user that integrates policy control with real time charging, self-care, payment & notification systems.
  5. Develop network intelligence by consolidating data from multiple sources – monitoring usage patterns, behaviour and service experience; collecting data from network nodes; and integrating with operational & service assurance, CRM, and care systems for real-time analytics – to develop profiles and characteristics that can drive usage-based pricing strategies aligned with user behavior.
  6. Implement an intelligent “offload mechanism” to selectively detour certain types of data traffic (e.g., bandwidth-hogging video traffic) to alternate bypass routes at the network edge, to ensure consistent quality of service.
  7. Properly dimension networks (access and core) to support the heavy-tailed nature of data traffic.
  8. Enhance loyalty packages (discounts, loyalty rewards, bonus points, promotions, etc.) based on collected intelligence to define targeted segment promotions and innovative pricing capsules. Integrate with the payment systems to interlink with service upgrades.
  9. Integrate new channels for notification, communication and self-care.
  10. Introduce new pricing models – bundles, service premiums, partner/B2B models, etc. Other options such as dynamic pricing could also be added.
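The skew described in point 2 (a small share of subscribers consuming most of the capacity) can be checked directly against usage records with a simple concentration calculation. The subscriber figures below are synthetic, chosen only to mimic a heavy-tailed profile:

```python
# Sketch: measure how concentrated data usage is across subscribers.
# The per-subscriber usage numbers are synthetic and purely illustrative.

def top_share(usage_gb, fraction):
    """Fraction of total usage consumed by the top `fraction` of subscribers."""
    ranked = sorted(usage_gb, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

usage = [400, 250, 40, 30, 20, 10, 8, 6, 4, 2]  # GB per subscriber, synthetic
print(f"top 20% of subscribers use {top_share(usage, 0.2):.0%} of capacity")
```

Running the same calculation over real per-subscriber records is one way to validate whether the 80/20 concentration holds on a given network before designing apportioning policies around it.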

Next we will look at the opportunities created by the new architecture to introduce new services, products & offers for the changing connected society.

This article first appeared on Aricent Connect on February 14, 2012.

Great User Experiences start at the Back End


Successful consumer experiences are as much about behind-the-scenes business operations and processes as they are about easy-to-use products and cool designs. 

There is no doubt anymore that most companies recognize the true power of experience—thanks in large part to Apple, which has successfully emphasized user experience as an important element of success. And yet, it’s amazing to see that there is no simple definition of what “experience” really means or entails. Is it the overall interaction with a cool or smart design, or is it confined to the graphical user interface (GUI)? I would say both. But I would also add that the complete user experience must entail the service flows around the various experience touch points.

The fact is, experience is the art of taking all those behind-the-scenes business processes and operating complexities (which we always tend to overlook), rationalizing them into streamlined functions, and then hiding them from the user by creating easy-to-use touch points and cool designs. So far, this is what has set Apple ahead of the competition, even as the competition floods the market with a surfeit of Apple look-alike products.

I’ve been associated with customers who go through an innovation cycle to replicate Apple’s success. Most of these initiatives ended with marginal success, and none created any industry-changing paradigms. What they all had in common was a single dimensional approach of hurrying the service functionality to the market. Existing operations and business processes were given short shrift and overlaid with ad-hoc upgrades in order to address service impact requirements. The result was that while the functionality excited technology innovators, it failed to generate any momentum because the experience wasn’t seamless enough.

These days, as the world becomes more connected than ever, dependency on the entire business eco-system has increased significantly. Touch points of customer experience now extend into multiple and diverse back end systems and processes (e.g. authentication and user identity/profile management, discovery and recommendations, download and upgrades, optimization for bandwidth and performance, campaigns and advertising, billing and revenue assurance, business analytics and service assurance, inventory and fulfillment, and third party eco-system management across CRM, billing, BI, OSS and SDP systems). Initially, it’s fairly common to overlook the complexity and impact of creating a comprehensive user experience, and when the challenge surfaces during the later phases of deployment cycles, the impending launch dates leave no room for innovation in that area.

In my experience, the most well-defined experiences get created when the impact on operations is envisioned at the same time as the product or service itself. This allows for all dependencies to be built in upfront into planning, while ensuring that a separate focus is created to commit to a seamless end-to-end operation. This may be possible through simple rationalization or through the evolution of existing systems, but in some cases it may require a full transformation. As the world becomes more connected and more people and products come online, the biggest challenge to service delivery models and business processes will be to sustain dynamic ever-changing real-time user and business parameters.

This is, in fact, the difference: Apple designs for the end-to-end service—not just for a product. What this means is that companies must plan for service integration as part of their innovation strategy. At least, they must if they want Apple-like success.

This article first appeared on Aricent Connect on June 18, 2011.

The Opportunity is not about Big Data but Intelligence-Driven Operations


Everyone today is talking of Big Data. The discussion is divided straight down the line – most people pushing the case present it as the panacea for all known & unknown problems; interestingly, the opposing group does not deny its relevance or value but makes every effort to categorize it as no different from the investments in data handling & management made painstakingly over the years. Of course, both sides are right and both miss the real point in the debate.

The real opportunity is untapped and often overlooked – it goes far beyond just data or its derived intelligence. The potential is in the real-time application of the data (intelligence) into operations by developing an active closed feedback loop. The ability to seamlessly integrate the results of analysis (post data aggregation) into active operating and business work-flows changes the landscape. Information today has a shelf life of a few hours (data even less!), and the achievement is when it actually gets acted on during that life to make an impact. This possibility is what makes the whole discussion and investment around big data so appealing. It has the potential to enhance not just the customer experience but also to have a tangible, measurable impact on network costs, support services and self-care, and even to facilitate the launch of differentiated services in new businesses.
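The closed feedback loop argued for here reduces to a minimal pattern: measure, analyze, and act before the insight goes stale. The sketch below is purely illustrative; the congestion threshold, staleness window and actions are invented, not drawn from any real operator workflow.

```python
import time

# Minimal sketch of an active closed feedback loop: a raw measurement is
# analyzed and, only while the resulting insight is still fresh, converted
# into an operational action. All thresholds and actions are illustrative.

STALENESS_SECONDS = 3600  # insight shelf life: act within the hour or discard

def analyze(cell_load_pct):
    """Turn a raw measurement into a timestamped insight."""
    return {"congested": cell_load_pct > 85, "at": time.time()}

def act(insight):
    """Feed the insight back into operations, but only while it is fresh."""
    if time.time() - insight["at"] > STALENESS_SECONDS:
        return "discarded: insight went stale"
    if insight["congested"]:
        return "action: offload traffic / notify capacity planning"
    return "no action needed"

print(act(analyze(92)))  # congested cell -> immediate operational action
```

The staleness check is the whole point: analysis whose result arrives after the shelf life expires is explicitly dropped rather than acted on, which is what distinguishes a closed loop from after-the-fact reporting.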

Data has been collected, managed and applied for many years now. The shift that is emerging is driven by a complex multi-dimensional change in underlying network technologies, early introduction of automated workflows (e.g. policy control & enforcement, self-optimizing networks, etc) and customer behavior fundamentals.

  • Volume of data has increased many-fold. Existing systems and solutions may not be able to handle the ever-increasing scale of data (terabytes, records, transactions, tables/fields) and it may require a re-look at data architectures.
  • New sources and types of data are getting added, ranging from unstructured data from social platforms, application marketplaces and mobile devices to semi-structured data from M2M.
  • Sources of data are spread across different organizational functions and many different systems. Data needs to be culled from large networks, internal corporate operations and users of all types.
  • Data needs to be extracted in real-time from network nodes and user devices. Siloed transactional measurements are no longer sufficient and continuous management is imperative. Re-use of existing tools and systems is a must for practical implementations.
  • The same data is relevant for multiple businesses and functions. Operating efficiency is achieved by building “collect once, use many times” architectures.
  • There is a growing need to combine in-service and out-of-service data to develop dynamic correlations, i.e. associating real-time & stream data with static data – customer data from CRM/BI systems, operations data (fault & performance tools, policy control frameworks) or network data (HLR, EIR, etc.) – to derive actual use-cases.
  • Managing data goes beyond data aggregation or even an analysis of usage patterns. It requires seamless integration with existing operating & business systems. And most importantly, there is a need for tight integration with the different businesses to ensure insights are actually used! Metrics & KPIs, use-cases and business applications need to be driven & owned by the business teams for full integration into the product life-cycle and pre-emptive & predictive operations.
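The dynamic-correlation point above is essentially a join between a real-time event stream and static reference data. A minimal version of that enrichment step might look like the following; all records, field names and systems are invented for illustration:

```python
# Sketch: correlate real-time usage events with static customer data to
# produce an enriched record for downstream applications. All data is synthetic.

crm = {  # static customer data, e.g. as exported from a CRM/BI system
    "sub-001": {"segment": "enterprise", "plan": "premium"},
    "sub-002": {"segment": "consumer",   "plan": "basic"},
}

def enrich(event):
    """Attach static profile attributes to a streaming usage event."""
    profile = crm.get(event["subscriber"], {"segment": "unknown", "plan": "unknown"})
    return {**event, **profile}

stream = [
    {"subscriber": "sub-001", "service": "video", "mb": 512},
    {"subscriber": "sub-002", "service": "m2m",   "mb": 1},
]
enriched = [enrich(e) for e in stream]
print(enriched[0]["segment"])  # "enterprise"
```

In a production setting the lookup table would be kept in sync with the source systems and the join performed in the stream-processing layer, but the "collect once, enrich, use many times" shape stays the same.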

These shifts can no longer be ignored; they are no longer a future vision but a hard-core reality for survival. But the key point to note is that the shifts have far-reaching impact and go beyond big data to trigger a change in all operator functions (IT, Networks, Planning, Operations, Care, Marketing, etc.). This is where the challenge lies. An effective strategy is needed to define a holistic implementation approach that goes beyond organizational boundaries. We see the need to factor in four goals to cover all critical success factors.

Goal 1: Affordable Management of data at extreme scale

  • Evaluate existing systems/tools against big data platforms on parameters such as volume, velocity, variety & variability, and parallel processing/distributed architectures
  • Consider new requirements like distributed compute-first (as opposed to storage-first) architectures, distributed file-systems, parallel architectures, complex event processing, and high-performance query architectures based on in-memory designs for analytics-driven cloud operations
  • Design hybrid data architectures built on multiple data platforms and technologies to support the different needs of different business applications
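Of the Goal 1 requirements, complex event processing is perhaps the easiest to illustrate: a pattern is detected over a sliding window of events rather than by inspecting single records. A toy version, with an invented window size and threshold:

```python
from collections import deque

# Toy complex-event-processing sketch: flag a subscriber when failures
# exceed a threshold within a sliding window of recent events. The window
# size and threshold are illustrative.

WINDOW = 5      # examine the last 5 events
THRESHOLD = 3   # flag when 3 or more failures fall within the window

def make_detector():
    recent = deque(maxlen=WINDOW)  # old events fall out automatically
    def on_event(ok):
        recent.append(ok)
        failures = sum(1 for e in recent if not e)
        return failures >= THRESHOLD
    return on_event

detect = make_detector()
alerts = [detect(ok) for ok in [True, False, False, True, False, False]]
print(alerts)  # [False, False, False, False, True, True]
```

Real CEP engines add event time, out-of-order handling and declarative pattern languages, but the windowed-state-plus-rule core is the same idea scaled up.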

Goal 2: Optimal re-use strategy without compromising on architecture expansion

  • Audit data & systems to define the re-use approach
  • Reuse peripheral infrastructure (tools/systems) but evolve the core data architecture
  • Define data sources & types to avoid duplication, and rationalize systems & tools across functions
  • Drive the definition of KPIs, metrics and analytics from the business
  • Identify high-impact business applications to define prioritized use-cases and accelerate analytical applications

Goal 3: Seamless Integration into Operations

  • Automation & feedback cycle into operating & business systems and process flows to ensure tangible business value
  • Convert insight into real-time actions for preemptive & predictive actions by alignment with businesses

Goal 4: Continuous Operation as a managed service

  • Handling the complexity & diversity of multiple services and managing multiple organizational function interfaces
  • Visualization through customized dashboards and reports

These goals overlap at many points, and the implementation priority and planning can be determined by the business approach and by the decision to adopt a disruptive or a simpler adaptive strategy – driven by the initial risk & investment appetite. While the disruptive approach will deliver high market impact, it also requires greater organizational alignment and upfront investment: it establishes the full blueprint and implementation plan and results in end-to-end innovation through a fully integrated approach built over a big data solution and automated action loops. The alternative is the adaptive approach, which has an edge due to lower initial risk and provides early feedback that can be fed into the fully integrated approach; it would develop a high-level blueprint, identify 2-3 pilots in selected functional areas, and expand to other areas later. The cost of execution will largely depend on the approach preferred, the implementation timelines established and the business goals defined.

Most initial implementations will fall under the adaptive approach and slowly evolve to put in place an effective roadmap for Intelligence-driven Operations.

However, the next big challenge is to identify the right business use-cases that actually have an impact on the business and the user. The need of the day is to go beyond the commonly discussed use-cases (customer segmentation, application analytics, content analytics, network optimization, performance, predictive & preemptive technical problems, etc.) and come up with something innovative (such as bridging the consumption gap, turning customer service data into a new revenue stream or…) – the possibilities are only limited by our imagination.

This article was published on Aricent Connect on 27 September 2012.