How to assess AI's risk to a particular job

Hello everyone,

As AI continually advances in capability, so does its threat to existing jobs. The WEF has predicted that humans may make up only 48% of the workforce by the end of this decade. In light of this, it becomes important to find a simple way for current and future professionals to assess whether their job is at risk in the near future, so that we can prepare in advance and make better decisions.

To help with this in easy-to-understand terms, I have attempted to devise a simple framework that we can build on further:

Step 1: Think about the tasks that are essential to a role. We do this because most AI solutions today are built to handle specific tasks, not entire job profiles.

Step 2: Check whether these tasks:

1. Are rule-based: This means the tasks can be executed by following specific, pre-determined rules

2. Deal with predictable situations

If the answer is 'Yes' to one or both of these questions, the job is at fair risk from AI. The level of risk depends on how many tasks meet these two criteria, and to what extent.

Step 3: Check whether these tasks:

1. Involve critical decision making: Decisions critical to an organization, society or the environment cannot be made without human involvement.

2. Involve the use of instinct/intuition/sentiment: These are primarily human strengths that AI can increasingly mimic, but such solutions will take time to be adopted. For example, think about tasks where you have to let go of an employee or represent the company in public.

If the answer is 'Yes' to one or both of these questions, the job is at lower risk from AI. The level of risk depends on how many tasks fail to meet these two criteria, and to what extent.
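To make the framework concrete, the three steps above can be sketched as a small scoring routine. This is a minimal illustration only: the task names, the +1/-1 scoring scheme, and the aggregation thresholds are all my own assumptions, not part of the framework as stated.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One essential task of a role (Step 1), rated on the four questions."""
    name: str
    rule_based: bool          # Step 2.1: follows pre-determined rules?
    predictable: bool         # Step 2.2: deals with predictable situations?
    critical_decisions: bool  # Step 3.1: involves critical decision making?
    needs_intuition: bool     # Step 3.2: relies on instinct/intuition/sentiment?

def task_risk(t: Task) -> int:
    # Hypothetical scoring: +1 per automation-friendly trait (Step 2),
    # -1 per human-strength trait (Step 3).
    return (t.rule_based + t.predictable) - (t.critical_decisions + t.needs_intuition)

def job_risk(tasks: list[Task]) -> str:
    # Aggregate per-task scores into a rough overall label for the job.
    score = sum(task_risk(t) for t in tasks)
    if score > 0:
        return "higher risk"
    if score < 0:
        return "lower risk"
    return "mixed"

# Illustrative usage with two made-up tasks:
entry = Task("routine data entry", True, True, False, False)
layoff = Task("let an employee go", False, False, True, True)
print(job_risk([entry, layoff]))  # a role with both kinds of tasks is "mixed"
```

In practice you would weight tasks by how much of the role they occupy ("to what extent"), rather than counting them equally as this sketch does.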

The thesis behind this methodology, how it works, and what it looks like in practice is a longer discussion that I recently covered at the Reach 2020 conference. In case you missed it, there is a similar session focused on the field of Marketing that I will cover as the closing speaker at the European Digital Week on Sept 26. You can find it here:

Please feel free to ping me in case you'd like to discuss it more here or at



Posted by Dietmar Koering, Tue, 22/09/2020 - 11:39

Dear Malay,

thank you for sharing your interesting thoughts. You are probably aware of the paper by Frey and Osborne from 2013?


Nevertheless, yes - specific jobs will become obsolete, but others will also be generated. It is an important debate. "Critical decision making" is quite general, and yes, it should involve humans, but we need humans with expertise in the field. AI might assist with this - hence we need to talk about a new coexistence with algorithms, an algorithmic-governmentality.



In reply to Dietmar Koering

Posted by Malay Upadhyay, Fri, 12/02/2021 - 17:12

Thanks, Dietmar, for sharing the paper. I wasn't aware of it, and it's a wonderful read. To your point on "critical decision making" being generic, you are right. As I mentioned to Juan, biases and criticality in human society are always contextual. While they can have a clear definition at any one point in time, they can also evolve over time. I'm very curious to hear more about your views on algorithmic-governmentality, to see if it can dynamically address this.

Posted by Juan Marcos Mervi, Thu, 14/01/2021 - 08:28

Interesting post from Malay and interesting reply from Dietmar (with whom I agree). As humans we are biased, and the products of our thinking (algorithms) are biased too, so we still have a problem to fix. We have to resolve a lot of problems behind the scenes!

In reply to Juan Marcos Mervi

Posted by Malay Upadhyay, Fri, 12/02/2021 - 17:08

Thanks, Juan, and yes, it needs a fix. However, I find that many biases are contextual and time-bound, and fixing them may still lead to a solution that is deemed biased in the future. So, what we also need are systems that can dynamically clean up evolving biases relevant to a particular point in time in human society. By the way, I invite you to look at IBM's data bias dictionary, as they are pooling together as many biases as they can, and they are pretty revealing.