Sunday, January 29, 2017

Understanding Artificial Intelligence and the Shades of Difference Among its Subsets

In the era of the fourth industrial revolution, or the second machine age as some term it, Artificial Intelligence, or AI as it is commonly referred to, has emerged as the disruptive technology that will have a major impact on business models and our day-to-day lives. Some have rebranded AI as 'cognitive computing' or 'machine intelligence', while others have incorrectly used the terms AI and 'machine learning' interchangeably. Let us delve a little deeper into some of these terms and the branches of AI which are likely to have a significant impact on businesses, and even economies, in the coming decade.

Artificial Intelligence (AI) has been around for a long time. Very early European computers were conceived as 'logical machines', and by reproducing capabilities such as basic arithmetic and memory, engineers at the time saw their jobs as attempting to create 'mechanical brains'. However, as technology and our understanding of the workings of the human mind have progressed rapidly, the concept of what constitutes AI has changed. Rather than performing increasingly complex calculations, the ultimate goal of AI has become one of building machines and systems capable of performing tasks and cognitive functions that were previously only within the scope of human intelligence. To get there, machines and systems must be able to learn these capabilities automatically instead of having each of them programmed explicitly, end-to-end. AI has thus become a broad field, involving many disciplines ranging from robotics to machine learning and deep learning.

Thus, AI may be broadly defined as machines developing the capability to carry out tasks in a way that humans would consider 'smart'. Machine learning is a subset of AI that gives computing systems access to large volumes of data from which they can 'learn' to carry out tasks without being explicitly programmed, end-to-end. The emergence of the internet, and the consequent huge amounts of digital information that can be stored, accessed and analyzed, has provided a major fillip to the domain of machine learning. The internet has also given rise to the development of data lakes. A data lake may be visualized as a storage repository holding a vast amount of raw data in its native format, including structured, semi-structured and unstructured data, which may be accessed depending on the need. Unlike a data warehouse, which holds data in predefined hierarchical formats in files and folders and is expensive to maintain, a data lake uses a flat architecture to store data.

One of the early approaches to machine learning was the development of neural networks. Neural networks are inspired by our understanding of the biology of the human brain and the interconnections between all those neurons. But unlike the human brain, where any neuron can connect to any other neuron within a certain physical distance, computer-based neural networks have discrete layers, connections and directions of data propagation. Neural networks work on a system of probabilities: based on the data fed to it, a network is able to make statements, decisions or predictions with a certain degree of certainty. The addition of a 'feedback loop' enables the learning part. By sensing or being told whether its decisions are right or wrong, the network modifies the approach it takes in the future.
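The feedback-loop idea above can be sketched in a few lines of code. The following is a minimal, illustrative single-neuron example (the function names and training data are assumptions made for this sketch, not any real system): it adjusts its weights whenever a prediction turns out wrong, and learns the logical AND function from four labelled examples.

```python
# A minimal sketch of one artificial neuron with a feedback loop.
# All names and the training data are illustrative assumptions.

def step(x):
    return 1 if x >= 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron-style training: adjust weights whenever the
    prediction is wrong (the 'feedback loop')."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            prediction = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - prediction          # feedback: right or wrong?
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error                   # modify future behaviour
    return weights, bias

# Learn the logical AND function from four labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
predictions = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data]
```

After training, the neuron reproduces AND on all four inputs, purely by having been told whether each earlier guess was right or wrong.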

Machine Learning applications currently can peruse text and figure out the sentiment of the person who wrote that piece of text. They can listen to a piece of music, decide whether it is likely to make people happy or sad and find other pieces of music which match the mood. Natural Language Processing (NLP) which has gained pre-eminence in the last couple of years or so, is heavily dependent on Machine Learning (ML). NLP applications attempt to understand natural human communications, either written or spoken, and attempt to communicate back using similar natural language. ML is used here to understand the vast nuances of human language and learn to respond in a way that the given target audience can comprehend.
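As a toy illustration of how such text sentiment analysis can be learned from examples, here is a minimal naive Bayes classifier. The four-sentence 'corpus' and every word in it are invented for the sketch; real systems train on millions of documents.

```python
# Toy sentiment learning: count word frequencies per class, then
# classify new text by summed log-probabilities (Laplace smoothing).
import math
from collections import Counter

train = [
    ("what a wonderful happy day", "pos"),
    ("i love this great song", "pos"),
    ("a sad and terrible mess", "neg"),
    ("i hate this awful noise", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    """Naive Bayes over the toy vocabulary; higher score wins."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.split() if w in vocab
        )
    return max(scores, key=scores.get)
```

With this tiny corpus, `classify("a happy great day")` leans positive because those words only ever appeared in positively labelled sentences.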

'Deep Learning', another subset of AI, is finding increasing use these days in a number of areas. Essentially, 'Deep Learning' starts off with the neural networks we have mentioned before and then makes them huge, increasing the layers and the neurons manifold while running massive amounts of data through them to 'train' them. The 'deep' in 'deep learning' describes all the layers and their interconnections in such a neural network. Today, image recognition by machines trained through 'deep learning' is, in several instances, even better than that of humans. Google's AlphaGo learned the game and trained for its Go match by tuning its neural network through 'deep learning' methods, which involved playing against itself over and over again.
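What the 'deep' amounts to can be sketched as data flowing through stacked layers of the neuron idea. The weights and inputs below are arbitrary illustrative numbers, not a trained network; the point is only the layer-after-layer structure.

```python
# A sketch of 'depth': several fully connected layers chained together.
# Weights are arbitrary illustrative values, not trained parameters.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: every output neuron sees every input."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs, layers):
    """Propagate data through each layer in turn -- this is the 'depth'."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

network = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0]),   # layer 1: 2 inputs -> 2 neurons
    ([[1.0, -1.0]], [0.0]),                     # layer 2: 2 inputs -> 1 neuron
]
out = forward([1.0, 2.0], network)
```

Deep learning systems stack dozens or hundreds of such layers, with millions of weights tuned by training rather than written by hand.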

Some of the key applications of 'Deep Learning' today are in the following areas:

Autonomous vehicles: Using a variety of sensors and onboard analytics, together with massive existing datasets, 'deep learning' systems are constantly learning to react appropriately to a variety of obstacles and road conditions in real time.

Recolouring black and white images: By teaching computers to recognize objects and what they should look like to humans, the right colours can be restored to various images and videos.

Predicting the outcome of legal proceedings: When fed massive amounts of data about a case, including similar cases historically, the system is able to predict the court's decision fairly accurately.

Precision medicine: Using 'Deep Learning' methodology, medicines genetically tailored to an individual's genome are being developed.

-- Raja Mitra

Wednesday, December 28, 2016

If AI Takes Away Many More Jobs Than it Creates, Could Universal Basic Income be the Answer?

That AI is going to be one of the disruptive technologies embraced in a big way by 21st century businesses is no longer open to speculation. Simply put, it is happening already and will gather speed during the coming months and years. There are very few domains or functional areas which will not be affected significantly by the adoption of machine learning and artificial intelligence (AI).

In one of the early studies of how humans will complement machines during the second machine age, Carl Benedikt Frey and Michael Osborne examined the possibility of computerisation for 702 different occupations and found that 47% of workers ran a high risk of their jobs being taken away as automation and AI became increasingly mainstream. The domains where they saw this mostly happening included transportation and logistics, office services, sales and support. Subsequent studies put the equivalent percentages at 35% for Britain (more people there are engaged in creative activities, compared to the U.S.A.) and 49% for Japan.

MIT economist David Autor, in a 2015 paper, concluded that dramatic declines have occurred in certain occupations over the past 100 years. For example, the percentage of workers engaged in agriculture in the U.S. has declined from 41% in 1900 to 2% currently. Cars have drastically reduced the requirement for blacksmiths and stable hands, machines have replaced many manual jobs in construction and factories, and computers have replaced many record-keeping and office positions. Yet, despite the automation of so many human jobs over the last couple of centuries, the number of occupations which still remain is surprising to many an observer of this change. As Prof. Autor succinctly explains: "tasks that cannot be substituted by automation are generally complemented by it." The fact is that while automation does substitute for labour, it also complements labour in several ways, leading to higher economic output which, in turn, generates new demand for workers.

Many economists are already talking of 'job polarization', a scenario where middle-skill jobs, like those in manufacturing or office support, are steadily on the decline while both low-skill and high-skill jobs keep expanding. Automation and AI today are blind to the colour of the worker's collar and are steadily taking away a number of both blue-collar and white-collar jobs. During previous industrial revolutions, large numbers of workers had the option to move, over a period of time, from one routine job to another of a different nature. The fourth industrial revolution that we are currently witnessing, however, enables companies to use the same 'Big Data' techniques that they may use to improve their sales, marketing and customer-service operations to also train machine-learning algorithms, which then become capable of taking over several other types of jobs and executing them efficiently. For example, 'E-Discovery' software can search mountains of legal documents much more efficiently than human clerks or paralegals can, while, in the arena of journalism, certain tasks like writing market reports or sports summaries are steadily being automated.

In a scenario where 'technological unemployment' becomes widespread and alternative job generation simply fails to keep pace with the number of jobs disappearing owing to AI and automation, a system needs to be evolved so that the basic needs of the growing population of unemployed humans are at least taken care of. Failure to do so could result in a high degree of social tension, leading to social upheavals whose results can be quite unpredictable. This is where the idea of a 'Universal Basic Income' comes in.

The 'Universal Basic Income' idea banks heavily on certain assumptions. One of the key assumptions is that, owing to advancements in technology, the cost of living will go down significantly in the years to come. In the two most populous countries, China and India, for example, most expenditure happens in the areas of housing, food, transportation, education, healthcare and entertainment. One of the scenarios envisaged is that emerging disruptive and cutting-edge technologies will bring about a steady fall in the cost of housing, transportation, food, healthcare, entertainment, clothing and education, among others, and, over a period of time, the cost of some of these could even approach zero.

Universal Basic Income (UBI) is a policy in which all citizens of a country regularly and unconditionally receive a certain sum of money, either from the government or from some other public institution, in addition to any income that they may have through other means. UBI's core idea is to keep social tensions and 'ills' under control by unconditionally giving people a certain amount of 'free money'. Several countries, including Finland, the Netherlands and India, have carried out limited experiments in UBI. While the implementation of UBI at scale is still in its very early stages anywhere in the world, the results of the limited experiments have mostly been encouraging. For example, the Indian experiment showed that UBI led to more labour and work, not less, contrary to what sceptics expected. A shift was observed from casual wage labour to more self-employed farming and other business activities, and a resultant drop in distress-driven migration from the region was also observed. Measurements of the average weight-for-age of young children showed improvements in nutrition levels.

Raja Mitra

Thursday, December 8, 2016

The Impact of A.I. on Management and the C-Suite during the Second Machine Age

The Industrial Revolution was when humans first overcame the limitations of muscle power. During this period, often referred to as the First Machine Age, humans were largely complements to the machines. The Second Machine Age, which we are currently in, is mainly about amplifying our mental faculties many times over using digital technologies. It is not yet clear, though, whether humans will complement machines during this era or will be replaced altogether. Examples of both can be seen.

Two defining moments when machines got the better of humans occurred during 2011. In one of these instances, IBM's Watson defeated two leading players of the game Jeopardy!. In the other instance, during a machine learning competition, contestants were asked to design an algorithm that could recognize street signs which were rather indistinct and dark. Humans managed to correctly identify these 98.5% of the time. The winning algorithm did better, managing to identify them 99.4% of the time.

Since then, the question of whether smart algorithms and software can replace managers has gained primacy. In several instances, particularly when it comes to problem solving or finding answers to complex queries, machines have been doing consistently better than humans. Knowing whether to assert your own expertise or simply to step aside and let the software do the job is becoming a critical skill for executives in organisations which have been steadily adopting and integrating disruptive technologies into their operations. Yet, senior managers are far from becoming obsolete. As AI makes rapid strides, senior managers will be called upon to fashion the innovative new organizational forms needed to crowdsource the array of talent coming online all around the globe. They will also be called upon to emphasize their creative skills, strategic thinking and leadership abilities.

One of the biggest pitfalls for managers is underestimating the impact that data, combined with AI and analytics, can have on both their organizations and on society. One of the reasons for this is that the capabilities of disruptive technologies like AI and Predictive Analytics have been growing at an exponential rate and human brains often are unable to conceive of this. Two examples would help to illustrate this.

Google recently announced that it had completed the task of mapping the exact location of every business, every household and every street number across France. In the normal course, you would think that it would require Google to send out a team of 100 people, every day of the week, with GPS, to do this over possibly a couple of years. In fact it took Google just about an hour to get this done. What they actually did was to take their street-view database, containing hundreds of millions of images, and have someone go through them manually and circle the street numbers in a few hundred of these images. Then these were fed to a machine-learning algorithm which was told to figure out what was unique to the circled portions in the few hundreds of images, find them in the hundreds of millions of other street-view images and read the numbers it could find in those. That is what took about an hour. As would be evident, the increase in productivity and scalability by so many orders of magnitude, totally changes the nature of the challenges that the organization faces.

Let us look at another instance of how this same company, Google, does HR. It has a unit called the Human Performance Analytics group which takes data about the performance of all employees: what interview questions they were asked, where their office is located, how that fits in with the organization structure, and several other associated parameters. It then runs data analytics to figure out which interview questions work best and which career paths are the most successful. Earlier, HR could never quite arrive at some of these results and relied on anecdotal data and intuition for many of the answers.
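A toy version of this kind of people analytics, using entirely invented data and question names, might look like the following sketch; real systems obviously work over far richer records and guard against confounders.

```python
# Toy people analytics: which interview question was associated with
# the strongest later performance? All data here is invented.
from collections import defaultdict

# (questions asked in the interview, performance rating 1-5)
hires = [
    ({"q_algorithms", "q_culture"}, 5),
    ({"q_algorithms", "q_trivia"}, 4),
    ({"q_trivia", "q_culture"}, 2),
    ({"q_trivia"}, 2),
]

scores = defaultdict(list)
for questions, rating in hires:
    for q in questions:
        scores[q].append(rating)

# Average the ratings of hires who were asked each question.
avg = {q: sum(r) / len(r) for q, r in scores.items()}
best_question = max(avg, key=avg.get)
```

Here the analytics surfaces that hires who were asked the (invented) `q_algorithms` question went on to perform best on average, the kind of pattern intuition alone would struggle to find at scale.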

Several proponents of disruptive technologies talk about how these can work seamlessly with humans rather than harm them. Some of these areas are:

1. Faster Onboarding
Typically new hires in an organization spend several weeks meeting new colleagues, going through documents and files and understanding and navigating internal processes. A knowledge graph, full of information gathered before the new employee is hired, can help cut down drastically on the time spent by the new employee in going through and understanding these processes. The knowledge graph can help answer questions like:
a) Who do I need to work with on the assigned task?
b) What were the meetings where this has already been discussed and will be discussed soon?
c) When is the next status meeting coming up and what information and updates do I need to be ready with for this meeting?
While these may sound simple enough, a knowledge graph will save the new hire plenty of time and help him or her navigate all the initial hurdles much more quickly and easily.
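The onboarding questions above boil down to simple lookups over a graph of facts. Here is a minimal sketch in which the graph is just a list of subject-relation-object triples; the project, names and relations are invented for illustration.

```python
# A tiny knowledge graph as subject-relation-object triples.
# Project, people and relation names are invented for the sketch.

triples = [
    ("ProjectX", "assigned_to", "new_hire"),
    ("ProjectX", "assigned_to", "alice"),
    ("ProjectX", "discussed_in", "kickoff_meeting"),
    ("ProjectX", "discussed_in", "sprint_review"),
    ("status_meeting", "covers", "ProjectX"),
]

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

# a) Who do I need to work with on the assigned task?
colleagues = [p for p in query("ProjectX", "assigned_to") if p != "new_hire"]
# b) Where has this already been discussed?
meetings = query("ProjectX", "discussed_in")
```

Real knowledge graphs hold millions of such facts, harvested automatically from documents, calendars and communication, but the query pattern is the same.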

2. More Efficient Workflows 
Executives and managers can forget emailing meeting recaps and minutes. A.I. will help process decisions and assignments made in the conference room and add them to the knowledge graph appropriately. The technology for pattern matching and classification already exists, as is evident from search-box autocomplete. In addition to linking decisions with assignments and the steps needed to complete a project, A.I. has the potential to measure less tangible workplace influences and comb all communication to assess internal sentiment. Knowing what issues are being discussed most frequently, what concerns are being analyzed, and how financial and emotional capital are being used would provide valuable layers of insight to managers. The Director of MIT's Media Lab uses the term 'extended intelligence' for this process of treating intelligence as a network phenomenon and using A.I. to enhance, rather than replace, human intelligence.

3. Objective Performance Reviews
A recent Deloitte study found that only 8% of organizations said that annual performance reviews were worth the effort required. While performance in some jobs can be measured fairly objectively, what counts as success in most jobs is more subjective. Workplace politics, differing opinions and unconscious bias often take their toll on performance reviews. A knowledge graph could capture every tiny detail of who proposed an idea in a past meeting and who managed the tasks to make it happen. While the need for people skills at work and human judgement in performance reviews will still remain, A.I. can help managers identify patterns in workers' strengths and weaknesses. This can, in turn, help managers make better assignment decisions.

-- Raja Mitra

Thursday, November 24, 2016

The Impact of Artificial Intelligence on Management Processes and Practices

The year 1995 is often heralded as the beginning of the 'new economy' triggered by digital communications that was set to upend markets. Today, over two decades later, the net impact of the digital era, in economic terms, can be said to be a reduction in the cost of search, communications and a variety of transactions. The cost reduction in turn led to a boost in the volume of searches, communications and certain transactions.

The buzz being heard presently about cognitive technologies (aka Artificial Intelligence) is in many ways similar to the earlier buzz about digital communications technology. All technological revolutions involve, in economic terms, the cost of some activity becoming cheap. Artificial Intelligence is, in essence, centred around prediction technology, so its most significant economic impact will be a drop in the cost of prediction.

When the cost of any input falls significantly, two things usually happen: the input gets applied more widely, and the value of its complements rises. Prediction is currently an input to a host of activities, including transportation, healthcare, agriculture and retail, among others. With the drop in costs, predictive technologies will be increasingly applied to domains where they weren't being used earlier. Together with this increased application, the value of the other things that complement prediction will rise.

As an example of these new domains, consider navigation. Till recently, autonomous navigation was restricted to highly controlled environments like warehouses and factories, where programmers could anticipate scenarios and build 'if-then-else' decision algorithms accordingly (e.g. 'if an approaching object is sensed, then slow down, else continue at the set speed'). Once the cost of prediction fell, innovators simply reframed driving as a prediction problem. Rather than program endless 'if-then-else' decision rules, AI was asked to predict what a human driver would do in a huge number of different scenarios. Vehicles were outfitted with a variety of sensors, like cameras, lidar and radar, and went around collecting millions of miles of human driving data. By linking the environmental data collected from outside via the installed sensors with the driving decisions made by the human inside the car, AI learned to predict how humans would react (braking, accelerating, steering, stopping) to the large variety of environmental conditions prevailing at any instant outside. Thus, prediction technology has now become a major component of the solution to a problem which was earlier not considered a prediction problem at all.
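The reframing described above, predicting the human's action from logged sensor data instead of hand-coding rules, can be sketched with a toy nearest-neighbour predictor. The features, records and actions here are invented for illustration; real systems use deep networks over camera, lidar and radar streams.

```python
# Toy 'driving as prediction': do what the human driver did in the
# most similar logged situation. All data below is invented.

# Each record: (distance_to_obstacle_m, speed_kmh) -> action the human took
logged = [
    ((5.0, 30.0), "brake"),
    ((8.0, 60.0), "brake"),
    ((50.0, 50.0), "continue"),
    ((80.0, 40.0), "continue"),
    ((20.0, 70.0), "slow_down"),
]

def predict_action(sensors):
    """1-nearest-neighbour over the logged (sensors, action) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(logged, key=lambda rec: dist(rec[0], sensors))
    return nearest[1]

action = predict_action((6.0, 45.0))   # close obstacle, moderate speed
```

No 'if-then-else' rule about obstacles was written; the behaviour emerges entirely from the recorded human decisions, which is the essence of the reframing.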

As AI becomes mainstream, the value of human prediction skills will decrease, simply because machine prediction will provide a cheaper and more accurate substitute. However, such a scenario may not spell the doom for human jobs that several expert projections seem to indicate. That is because human judgement is a complement to machine prediction and hence, when the cost of machine prediction goes down, the value of human judgement can be expected to rise. Management would thus seek out more human judgement in such a scenario.

A survey of nearly 1800 managers from 14 countries together with inputs from 37 executives who are in charge of digital transformations in their organizations, helped identify five practices that managers need to master as disruptive technologies make major inroads in their workplaces. Briefly put, these are:

1. Leave Administration to AI 
Managers have been found to spend over 50% of their time on administrative control and coordination activities. AI will automate most of these tasks and relieve managers of the responsibility of carrying them out.

2. Focus on Judgement Work
Many decisions require insights beyond what AI can arrive at based on data and prediction algorithms alone. The application of experience and expertise to critical business decisions and practices is the essence of human judgement. As stated earlier, the value of human judgement will rise as the cost of machine-driven prediction goes down.

3. Develop Social Skills and Networks
In a world where AI carries out many of the administrative and analytical tasks that managers perform currently, the social skills critical to networking, collaborating and coaching will help managers to add value and stand out.

4. Work Like A Designer
As AI takes over more and more of the administrative and analytical tasks, manager-designers need to bring together their own creative ideas and harness the creativity of others to come up with integrated, workable and appealing solutions to problems.

5. Treat Intelligent Machines as 'Colleagues'
There is no need to race against 'intelligent machines' or treat them as competitors. Managers must recognize that AI machines can greatly help in decision support systems and data-driven simulations, as well as in search and discovery activities. They should learn to value the advice of AI machines while making decisions based on their own judgement.

-- Raja Mitra

Saturday, October 29, 2016

While the Internet is Getting to be Ubiquitous, it is Also Edging Closer to Being Taken Down

As sites, pages and the servers hosting them multiply exponentially with the growth and spread of the world wide web, the Domain Name System (DNS) and the sites which host DNS infrastructure become key to the availability of the Internet. A sustained DDoS attack against a handful of major DNS hosts could thus render inaccessible a whole host of sites and pages which regularly register huge page views, leading to what could be described in common parlance as large parts of the Internet being down and unavailable.

Before we go any further, let us understand how the Domain Name System works. The DNS is essentially the internet's phonebook. Name servers take the user's request to go to a certain site or webpage and make sure that the user reaches the page he or she is looking for. Put another way, the DNS is a large distributed database that, among other things, converts a human-readable domain name into the numeric IP address from which data can be retrieved. Taking down a DNS server means that the user's browser can't use it to resolve which IP address it should contact to get the files for a particular webpage.
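The 'phonebook' lookup, and what its failure looks like, can be modelled in a few lines. The records below are purely illustrative; real resolution walks a hierarchy of root, top-level-domain and authoritative name servers, with caching at every step.

```python
# A toy model of DNS as a phonebook: name in, IP address out.
# The records are illustrative, not a real zone file.

dns_records = {
    "example.com": "93.184.216.34",
    "blog.example.com": "93.184.216.35",
}

def resolve(domain):
    """Translate a human-readable name to an IP address, failing the
    way a browser does when no name server can answer."""
    ip = dns_records.get(domain)
    if ip is None:
        raise LookupError(f"cannot resolve {domain}: no record reachable")
    return ip

ip = resolve("example.com")
```

Taking down the name server is equivalent to deleting the dictionary: the sites behind it still exist at their IP addresses, but browsers no longer know where to find them.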

DNS hosts may be visualized as sites which host several such 'phonebooks'. If a DNS host or provider is targeted by a sustained DDoS attack, it will be unable to direct users to the large number of sites whose 'phonebooks' it hosts, thus making them inaccessible and 'down' to users. This is what a DDoS attack during the second half of October 2016 did by targeting Dyn, a major DNS host, leading to a large number of sites, including Twitter, Reddit, GitHub, the New York Times, the BBC, Fox News, Time, SoundCloud, Pinterest, Spotify, Netflix and PayPal, to name some, becoming inaccessible to users in certain geographical locations for long periods.

Next, let us get into a little background about DDoS. If you want to take a network off the Internet, the easiest way to do it is to mount a DDoS attack. A Distributed Denial of Service (DDoS) attack is an attempt to make an online service unavailable by overwhelming it with traffic from multiple sources. Shorn of the nuances and complexities, a DDoS attack essentially means blasting so much data at a site that it is simply overwhelmed. DDoS attacks are not new and have been used by hackers to target sites that they don't like, and even by criminals bent on extortion. While there is an entire industry, with a whole arsenal of weapons, devoted to defence against DDoS attacks, it largely boils down to an issue of bandwidth: if the attacker can throw a bigger fire hose of data at the site than the defender can absorb, the attacker wins out, for a while at least.

Recently, some of the major companies that make the internet work (like Dyn and Verisign) have seen a significant increase in DDoS attacks against them. These attacks are becoming longer and more sophisticated, and are happening at various levels, giving the impression that someone is carefully probing.

There are many different ways to launch a DDoS attack. The more attack vectors the attacker uses simultaneously, the more defenses the defender has to use to try and ward off the attack. Some of the recent attacks have been employing three or four different vectors, forcing the companies to use every weapon they have in their arsenal to defend themselves, thus opening up to the attacker the absolute limit of their defence capabilities.

Bruce Schneier, CTO of Resilient (an IBM company), also mentions probing attacks, in addition to DDoS attacks, which test the ability to manipulate Internet addresses and routes and check how long it takes the defenders to respond. Clearly, some people out there are testing the core defensive capabilities of the companies tasked with providing critical Internet services. It is unlikely that activists, researchers or criminals would be doing such probing, which points to the possibility of certain countries doing this, for reasons which are unclear.

The good news is that the DNS, by definition, is a distributed database, which means that copies of the same information can be found across the Internet. This makes it fairly robust. However, it still takes time for DNS servers to recover from these attacks and, if several servers can be taken down at once, the resultant outage could be both widespread and prolonged.

What could be disrupted in the case of a prolonged Internet outage? The possibilities are many, and a cross-section of these is listed below.

1. Failure of the electrical grid supplying power.
2. Phone and cellular services becoming unavailable.
3. Unavailability of basic information even, like weather and news.
4. Unavailability of all email and messaging services.
5. Most financial transactions, particularly across different locations and countries, getting disrupted.
6. Logistic networks and services getting majorly disrupted, leading to unavailability of products at retail outlets.
7. E-commerce and online retailing coming to a halt.


Thursday, July 7, 2016

How Messaging Apps and Bots are Dramatically Changing the Social Media and Apps Scenario

For the last several years, messaging apps have been experiencing meteoric growth, both in the number of users and in the average time spent by a user within his or her favourite messaging app. Already, there is a steady move away from aspiring to stand out publicly in the news feeds and streams of the major social media apps towards engaging privately, as social activity increasingly transitions to communities, groups and, particularly, messaging apps. As interactions and engagements move away from news feeds and timelines to one-to-many or one-to-one messaging, the rise of 'dark social' will challenge a lot of what we have learned about social media in the last decade. This transformation will open up major challenges and opportunities for individuals, marketers and brands, and will have major implications for apps (including messaging apps), bots and AI-driven chatbots. We will briefly look at some of these scenarios in this post.

Smartphone software is currently in a state of flux. Download numbers are still growing, but the app economy is rapidly approaching maturity. The twenty most successful apps account for over half of the total revenues in Apple's app store. As users find downloading apps and navigating between them a hassle, their enthusiasm for doing so is clearly waning: a quarter of all downloaded apps are abandoned after a single use. Only instant messaging bucks this trend, with over 2.5 billion people having at least one messaging app installed on their smartphone. Facebook Messenger and WhatsApp currently lead the pack. Activate estimates that within a couple of years this number will reach about 3.6 billion, or about half of humanity. Growing numbers of teenagers are now spending more time on messaging apps, sending messages rather than posting or perusing content on social media networks. When it comes to sharing, private messaging already dominates, with over 70% of all referrals coming from dark social. Dark social channels typically include messaging apps, email and private browsing.

Let us now cast a glance at chatbots, and start by defining what they are. Simply put, chatbots are computer programs that allow businesses to build automated response systems capable of interacting with potential customers on a one-on-one basis, using the current advances in Artificial Intelligence (AI) and Deep Learning. Chatbots are changing the ways users interact with the internet, creating an all-inclusive environment, often within messaging apps. For example, while going on a trip, users will no longer have to download multiple apps to perform different activities. Typically, they should be able to book a flight, hail a cab and book a table at a restaurant all within one messaging platform. Put another way, chatbots have the potential to replace individual apps altogether. Messaging apps, together with their bots, will provide the environment for direct, instant and multi-pronged interactions with potential customers. From customer service to purchasing products, the entire ecosystem will become easier and more seamless, as there will no longer be a need to download a separate app, sign up, and create a separate account for a one-time transaction. Thus, by simplifying the entire process, brands will make it easier for potential customers to engage with them and buy their products or services.
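As a minimal sketch of such an automated response system, consider a rule-based bot that routes each message to a canned intent by keyword overlap. The intents and replies here are invented; production chatbots layer ML and NLP models on top of this kind of intent routing.

```python
# A toy rule-based chatbot: match the message to the intent whose
# keywords overlap most. Intents and replies are invented examples.

intents = {
    ("book", "flight"): "Sure -- which city are you flying to?",
    ("book", "table"): "Which restaurant and what time?",
    ("hail", "cab"): "A cab is on its way. Where should it pick you up?",
}

def reply(message):
    """Route the message to the best-matching intent by keyword overlap."""
    words = set(message.lower().split())
    best = max(intents, key=lambda keys: len(words & set(keys)))
    if not words & set(best):
        return "Sorry, I didn't understand. A human agent will follow up."
    return intents[best]
```

Booking a flight, hailing a cab and reserving a table can thus all be handled within one conversation thread, which is exactly the 'one messaging platform' experience described above.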

Over a period of time, bots will come to be known as the new breed of 'invisible apps'. Typically, installation will take mere seconds and switching between bots will not involve tapping on yet another app icon. In quite a few cases, talking to a bot may be more appealing than interacting with the customer service agent of a bank or an airline and, as messaging apps keep growing in popularity, engaging with the bots on their platforms will become commonplace. Much will depend, of course, on 'killer bots': hugely popular services that work best in the form of bots. Businesses, over a period of time, won't just have phone numbers and web pages but their own bots too. In the healthcare segment, bots could deal with routine ailments and direct only the difficult ones to a doctor. Similarly, restaurants could take orders via instant messaging, as some already do in China. Of course, bots will need a lot of experimentation to find their rightful place and this, in turn, will depend on how well providers manage their platforms.

Among the several brands taking the dive into dark social is Adidas, whose senior director of global brand communications for Adidas Football had this to say:
"As long as the dark social platforms continue to innovate, we'll find new ways to use the technology. In the past you could only send text but now, you can send video and image, which has opened doors. Using a mix of content, we can reward advocacy with personalized approaches like inviting customers to a dialogue with Adidas stars, offering live coverage from events or simply handling customer service queries. There is excitement about the opportunity dark social presents and how it can help Adidas become the most personal brand, so we expect it to play an increasingly important role in our strategy."
The piece on this in The Drum can be read here.

AI in social media networks is primarily being used as an efficient way to sort through large clusters of user-generated information. The term 'Deep Learning', often used in this context, signifies high-level knowledge arrived at by analyzing and establishing patterns in large data sets. For social media, this means that AI can help with anything from personalized product suggestions based on previous engagements, to image and voice recognition, to deep sentiment analysis.
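As a toy illustration of sentiment analysis, here is a simple lexicon-based scorer in Python. The word lists are illustrative assumptions; real systems learn such signals from large data sets with deep learning rather than relying on hand-picked words.

```python
# A toy lexicon-based sentiment scorer. The word lists are illustrative
# assumptions; production systems learn sentiment with deep learning.
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text):
    """Score text in [-1, 1]: +1 all positive, -1 all negative,
    0 if no sentiment-bearing words are found."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Even this crude score, applied across thousands of posts and comments, hints at how AI can surface an aggregate mood around a brand or product.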

AI will also impact the social media analytics business in a major way. It is going to affect a number of areas, ranging from the analysis itself to certain recommendations that can be offered to users. For example, one can predict that a particular tweet from an influencer is likely to be further amplified with a certain promotional spend, and then recommend that, if promoted, it will likely be amplified by, say, 10 times its current reach. Recommendations of this kind can create a major impact.
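The amplification recommendation described above can be sketched very simply: learn an average promoted-to-organic multiplier from hypothetical historical tweet pairs, then use it to project a new tweet's promoted reach. A real system would, of course, use a far richer model than this illustrative one.

```python
# A hedged sketch of the amplification estimate: derive an average
# promoted/organic reach multiplier from (hypothetical) historical
# tweet pairs, then project a new tweet's promoted reach.
def fit_multiplier(history):
    """history: list of (organic_reach, promoted_reach) pairs."""
    ratios = [promoted / organic for organic, promoted in history]
    return sum(ratios) / len(ratios)

def predict_promoted_reach(organic_reach, multiplier):
    """Project reach if the tweet were promoted."""
    return organic_reach * multiplier
```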

AI can also help ingest proprietary corporate data, like chat logs, more efficiently, and then sort through that data, make sense of it, and surface new insights through analysis.

Sunday, May 22, 2016

The Evolution of the World's Largest Social Network and the Resultant Effects on Digital Tribes

Facebook started out as a 'closed social network', one essentially meant for friends, associates and colleagues who know each other, to share personal information, photos and events that they may have been involved in.

As the number of active users grew and the need to look after the bottom line became imperative, the initial objectives started receding. The quest to increase MAUs (monthly active users) by leaps and bounds, and to increase content which in turn could drive contextual advertising, resulted in products, services and celebrities with huge fan followings getting precedence over what the guy next door desired to share. Driven by herd instinct, a lot of people got into the race to expand their networks to overtake those of their associates and friends.

Dunbar's number (the suggestion that humans can comfortably maintain only about 150 stable social relationships) was thus given short shrift by many individuals. More about Dunbar's number can be read here.

As the resultant noise, fluff and spam in the News Feed (which had replaced the more intimate 'Wall' Facebook had started out with) became off-putting and distracting for many, the filters to curate and edit the News Feed kicked in. Since the amount of time spent by the average user in a day or a month became integral to the health and growth of Facebook's revenue model, algorithms got busy dissecting and analyzing what each user 'liked' seeing and reacting to. The ultimate echo chamber and comfort bubble started taking shape.

Also, while Facebook vowed to remain free for all, given its business model, it decided that users couldn't have their entire network, steadily burgeoning in most cases, viewing their content for free. In any case, algorithms and filters had already established that many in their network didn't want to look at quite a bit of their content anyway. Since most individual users weren't willing to pay to 'promote' their content regularly and make it visible to a wider audience, products, services, publications and 'news agencies' became increasingly important in Facebook's scheme of things. The individual content creator was thus pegged back to having somewhere between 5% and 7% of his or her network see any content posted, for several of the reasons already mentioned above. The transformation of the 'closed' social network into a 'walled' social medium, thriving mainly on impersonal news feeds, was complete.

The resultant fallouts have been manifold. One of them, generally referred to as 'context collapse', is the phenomenon whereby users share less and less personal content on Facebook, opting instead to cross-post content from others on their network or from other media platforms. A detailed dive into 'context collapse' and its implications can be taken here.

As 'trending news and events' assumed growing importance for advertisers and many users, certain sections started accusing Facebook of a distinct bias, possibly in the filters and algorithms it deployed, to slant its trends away from certain categories of news and events. While this is still being looked into and there is a school of thought that attributes this phenomenon more to individual judgements than to filters and algorithms, the trust deficit continues to grow.

Facebook reacted to this recently by telling advertisers and news agencies that 'status updates' from friends and associates will hereafter take precedence over their posts in the News Feed. Check out the embedded post just below for an excellent elucidation on this.

Zuckerberg's vision is not to connect people in distant lands all over the globe but rather to bring them all to one big island and keep them there. At the end of the day, while Facebook may advocate bridges, it has succeeded in building a whole bunch of 'comfort bubbles' and divided the world more than ever before.

Monday, March 21, 2016

The Web of Devices is Now Commonly Known as The Internet of Things

The Industrial Internet or the Web of Devices, now most commonly known as The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items— embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data among each other, mostly without human intervention.

One of the fastest growing segments among emerging Disruptive Technologies of the 21st century, IoT is projected to be a USD 1.7 trillion market by 2020 (IDC projection) with over 30 billion interconnected devices globally. Most of this projected amount will be spent by corporations and institutions on endpoint devices, infrastructure support, connectivity and companion IT services.

Given the widespread availability of broadband, more and more devices will be made with Wi-Fi capabilities and sensors built in. Together with this, smartphone penetration is already quite high in most countries and will keep skyrocketing. Consequently, more and more devices with an on/off switch and a sensor will keep getting connected to each other, to the internet and, on occasion, to people. The list of such things will include everyday devices like cellphones, coffee makers, washing machines, headphones, lamps and various wearable devices, as well as components of machines, like the jet engine of an aircraft or the drill of an oil rig.

IoT will essentially be built around cloud computing, networks of data-gathering sensors and mobile, virtual and instantaneous communication. Cloud-based applications to interpret and transmit the leveraged data from the host of sensors would be key to the success of IoT. The endpoint devices deployed will be broadly of two types viz., sensors which would be primarily gathering data and ‘kinetic’ devices which would be capable of executing specific actions. The latter category of devices would consist of alarms, locks and valve actuators among others.

Big Data and Predictive Analytics will play a key role in interpreting and analyzing the petabytes of data being generated. Real-time analysis of all this data and, more importantly, decision-making based on it cannot always be manual and will need to be automated in several cases. Machine learning and Artificial Intelligence (AI) applications will thus play a key role in automating the process and making the 'right decisions' dynamically.
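As a minimal sketch of such automated decision-making, the Python snippet below flags a sensor reading that deviates sharply from a rolling mean, the kind of rule that could trigger a kinetic device such as a valve actuator or an alarm. The window and tolerance values are illustrative assumptions; a real deployment would rely on trained models rather than a fixed rule.

```python
# Flags a sensor reading that deviates from the rolling mean by more
# than `tolerance` (expressed as a fraction of the mean). The rule and
# its parameters are illustrative; real systems use trained models.
from collections import deque

class AnomalyMonitor:
    def __init__(self, window=5, tolerance=0.5):
        self.readings = deque(maxlen=window)  # rolling window of readings
        self.tolerance = tolerance

    def check(self, value):
        """Return True if the new reading looks anomalous; always
        record the reading so the window keeps moving."""
        anomalous = False
        if self.readings:
            mean = sum(self.readings) / len(self.readings)
            anomalous = abs(value - mean) > self.tolerance * mean
        self.readings.append(value)
        return anomalous
```

In practice, a `True` result would be routed to an actuator or an operator dashboard rather than merely returned.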

Several last-mile issues need to be worked out before the Internet of Things becomes virtually ubiquitous in a connected future. These would include, among others, standards for interfaces and a universally accepted system of gateways so that all these devices can talk to each other and to applications fairly seamlessly. Uninterrupted power and energy supply for devices operating on a 24x7 basis would also become an important consideration. Many of the devices today have security vulnerabilities and/or poor privacy controls. Several devices still have weak web interfaces and do not have encrypted transmission built in, making them vulnerable to hacking. Inadequate software protection in several cases is another vulnerability which hackers can exploit to download and install malware.

Smart cities is an oft-heard buzzword used these days, globally. By itself, the phrase does not mean much unless the realistically attainable goals for a given city or geographical location are defined beforehand. The way these goals can be realized would be through the capture of relevant data and the use of analytics and AI for the interpretation, analysis and initiation of prompt actions based on the data being generated. An earlier piece in this blog touches on several use cases for smart cities.

Some of the most common applications of IoT are illustrated in the Infographic just below. As can be seen, smart homes, wearables, smart cities and smart grids occupy the top four slots in terms of popularity.

Sunday, January 17, 2016

Predictive Analytics Use Cases

As some of you may be aware, predictive analytics is the branch of advanced analytics that uses a host of techniques & processes from data mining, statistics, modelling, machine learning and artificial intelligence to analyze internal and external data in order to make future projections which ideally would have a fair degree of accuracy.

This piece covers a number of use cases about the adoption of predictive analytics in various functional areas. Let's get started by diving into these right away.

The U.S. special forces use predictive analytics to assess and decide on candidates for an open position. After identifying a host of factors, the question that needs to be answered is: how much does each factor matter? A good target variable is success itself: take the pool of people who have stayed and succeeded, and identify the more important factors based on that pool. Acceptable trade-offs can be found with a model that gives appropriate weightage to the factors most vital to the role, say intelligence and training, assuming those are among the key factors for the position being filled.
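A hedged sketch of such a weighted-factor screen is shown below. The factor names and weights are illustrative assumptions, not the actual model used by the special forces.

```python
# Illustrative weighted-factor screening: factor names and weights are
# assumptions for the sketch, not a real selection model.
WEIGHTS = {"intelligence": 0.4, "training": 0.35, "fitness": 0.25}

def candidate_score(factors):
    """Weighted sum of normalized factor scores (each in [0, 1])."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def rank_candidates(candidates):
    """candidates: dict of name -> factor scores; best candidate first."""
    return sorted(candidates,
                  key=lambda n: candidate_score(candidates[n]),
                  reverse=True)
```

In a real deployment the weights would be fitted against the historical pool of successful candidates rather than set by hand.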

Big retailers are always looking to minimize the churn factor. Churn is the phenomenon where businesses lose existing customers to competition and have to compensate by acquiring new customers to make up for the loss. Since the cost associated with acquiring new customers is much higher than that for retaining existing customers, several large retailers use predictive analytics to prevent churn as far as possible by first identifying signs of dissatisfaction among their customers and then identifying those customers or customer segments that have the highest risk of leaving. Using this information, they then proceed to make the necessary changes that would minimize the risk of these customers or customer segments leaving, thus minimizing churn.
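A minimal churn-risk sketch along these lines might score each customer on simple dissatisfaction signals and flag the riskiest ones. The signal names, weights and cutoff below are illustrative assumptions; retailers fit such models to their own historical churn data.

```python
# Illustrative churn-risk scoring: signals, weights and cutoff are
# assumptions for the sketch, not a fitted model.
def churn_risk(customer):
    """customer: dict with days_since_purchase, complaints, returns.
    Returns a score in [0, 1]; 0 = loyal, 1 = highest risk."""
    score = 0.0
    score += min(customer["days_since_purchase"] / 365, 1.0) * 0.5
    score += min(customer["complaints"] / 5, 1.0) * 0.3
    score += min(customer["returns"] / 5, 1.0) * 0.2
    return score

def at_risk(customers, cutoff=0.6):
    """Return the IDs of customers whose risk meets the cutoff."""
    return [cid for cid, c in customers.items() if churn_risk(c) >= cutoff]
```

The flagged segment is then the target of the retention offers and changes the paragraph above describes.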

It is very difficult for service providers to be present everywhere, all the time, more so in the online world. Likewise, capturing and reviewing everything that is said about your product and organization is a near-impossible task. A workable alternative can be found by combining web search and web crawling tools with customer feedback, posts and comments on various social media networks, and using predictive analytics to do sentiment analysis. This would give the organization a fair idea of its reputation and standing, and that of its product, in key markets and across major demographic segments. It would also help in coming up with proactive recommendations about what the organization needs to do to enhance that reputation and improve its standing.

Predictive analytics is used by several customer support organizations, particularly call centres and helpdesk operations. By analyzing data pertaining to caller ID history, time of day, call volumes, products owned, churn risk, LTV (customer lifetime value) and several other parameters, they work out operational aspects like the following:

  • Call routing (determining wait times & optimizing them at various times during the day).
  • Message optimization (putting the right data on the operator's screen).
  • Volume forecasting (predicting call volumes for the purpose of staff rostering).
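The volume forecasting item above can be sketched with a simple moving average, converted into a headcount under an assumed calls-per-agent capacity. Real rostering models (Erlang C, for instance) are considerably more sophisticated; this is only an illustration.

```python
# Illustrative volume forecasting for staff rostering: a moving-average
# forecast plus an assumed per-agent call capacity.
import math

def forecast_volume(history, window=3):
    """Moving average of the last `window` periods of call volumes."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def agents_needed(expected_calls, calls_per_agent=40):
    """Round up: any partial agent's worth of calls still needs an agent."""
    return math.ceil(expected_calls / calls_per_agent)
```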
Some more use cases of Predictive & Big Data Analytics can be viewed and heard in this Hangout we conducted a while ago.

Predictive & Big Data Analytics is being used increasingly across various domains and functions in a host of organizations eager to regain, maintain or sharpen their competitive edge. With the recent Forrester Research findings that companies using Predictive Analytics are making more money than those that aren't doing so yet, adoption of Predictive Analytics is increasingly becoming a necessity rather than an option for twenty-first century businesses.

Friday, October 2, 2015

Disruptive 21st Century Technologies for Businesses - Virtual and Augmented Reality

To start with, it is necessary to understand the fundamentals of Virtual Reality and Augmented Reality while correcting some popular misconceptions about both VR & AR.

Augmented Reality may be defined as a live, direct or indirect view of a scenario whose elements are augmented (supplemented) by computer-generated sensory inputs like sound, graphics, video or GPS data. Augmented Reality is closer to a more general concept known as Mediated Reality, in which a view of reality is modified, possibly even diminished, by a computer. Technology thus functions as an enhancer of contemporary, live reality.

Virtual reality, on the other hand, often referred to as 'computer simulation', is the creation of a scenario or a series of scenarios which are in no way linked to reality or superimposed on it, but which depict a planned, imagined scenario so realistically that the viewer temporarily suspends disbelief and accepts it as the real environment at that point in time. It primarily acts through two of the five senses, viz., sight and sound.

The simplest form of virtual reality is a 3D image that can be viewed interactively on a computer screen and manipulated, mostly using the keyboard or the mouse, so that the image can be moved on different planes and in different directions and also zoomed in or out. More sophisticated approaches could involve wrap-around display screens, rooms augmented with wearable computers or haptic devices that let a viewer 'feel' the image on the screen.
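The basic manipulations described above, moving a 3D image on different planes and zooming in or out, boil down to rotation and scaling. Here is a minimal Python sketch of rotating a point about the vertical axis and zooming by scaling; a real viewer applies the same transforms to every vertex of the model.

```python
# Minimal 3D manipulation sketch: rotation about the vertical (y) axis
# and zoom by uniform scaling, the two operations behind dragging and
# zooming a model on screen.
import math

def rotate_y(point, degrees):
    """Rotate (x, y, z) about the y-axis by the given angle."""
    x, y, z = point
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def zoom(point, factor):
    """Scale a point uniformly toward or away from the origin."""
    return tuple(c * factor for c in point)
```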

(View this clip at the highest resolution you can set and use your mouse to drag and move the picture around, up, down or sideways.)

The principal usages of Virtual Reality (VR) can be divided into two major areas. These are:

  1. The development of an imagined environment for a game or an interactive story.
  2. The simulation of a real environment for training or education.
Augmented Reality (AR) is the integration of digital information or images with live video or the user's actual environment in real time. AR starts with an existing image or a series of images and transposes digital information or additional images onto it. Sophisticated AR programs used for training may include machine vision, object recognition or gesture recognition technologies. Google Glass is reputed to incorporate a number of AR features.

Currently, AR appears to be ahead of VR as several products are already out in the market. While AR hardware devices are expected soon from Google with the launch of Google Glass, Microsoft is also expected to launch something along similar lines. VR is meanwhile just about stepping up to the plate but, with Facebook's acquisition of Oculus Rift, the first major launch could be expected sometime during the first half of 2016. Magic Leap, with major investments from Google, Qualcomm, Andreessen Horowitz & KPCB, has a lot of people excited about the possibilities that AR could afford. Here is a sneak peek at some of the stuff that they showcased recently.

Some of the current and planned uses of AR could include:

  • The changing maps, with visual depictions of certain weather conditions, that one can see behind weather reporters on TV.
  • Navigational heads-up displays embedded in the windshield of a car.
  • Displaying historical and pertinent information about a tourist attraction when a smartphone's camera is pointed at it.
  • Virtually trying on clothes through a webcam while shopping online.
  • Mobile marketing, involving product highlights and relevant information, displayed over that product or its location.
  • Layar, the free app available on Android & iOS, is a leader in the domain of 'Interactive Print' and allows users to view digital content within a variety of sources such as posters, magazine pages, advertisements and product QR codes.
VR enables one to escape one's present environment and the surrounding reality into a world of make-believe and fantasy. AR, on the other hand, enhances and enriches the real world and the surrounding reality through pertinent and relevant information and images. The many possibilities of both these technologies will become increasingly visible and clearer over the next year or two, and hold out many exciting prospects for 21st century businesses.