AI will not take away jobs, will just change their nature, says Eric Loeb of Salesforce

While Salesforce’s top leadership recognises the transformative power of generative AI (artificial intelligence), it believes the technology cannot succeed without trust and ethics at its core. ET Online caught up with the company’s executive vice-president, Global Government Affairs, Eric Loeb in New Delhi on the sidelines of Carnegie India’s Global Technology Summit this month. Loeb leads Salesforce’s public policy efforts in the AI domain. In an exclusive chat, he speaks on data and AI, Indian tech policy, and the geopolitics of AI technology. Edited excerpts:

What will be the impact of AI on India’s policy environment?

AI is poised to make a significant impact across various domains. It will also have a transformative effect on the job market. We recognise our responsibility to skill and upskill the workforce, which is why at Salesforce we are offering free online AI skilling initiatives. Salesforce’s free learning platform, Trailhead, and other skilling programmes are training a significant number of individuals in the skills of the future. In technology and product development, AI tools are revolutionising coding processes: teams, even those with extensive workforces, report that tasks that once took months are now accomplished in days. From a societal perspective, AI presents an opportunity to uplift several sectors. In education, for instance, personalised AI tutors could support students who lack access to tutoring.

The AI revolution holds tremendous potential, particularly for the youth in India, providing opportunities for skill development and upward social mobility. Businesses should take a comprehensive approach, addressing risks while embracing the transformative potential of AI, reflecting a balanced perspective.

Do you see jobs getting impacted because of AI? Is there some analysis around it?

India possesses remarkable assets and substantial growth potential. The country is a globally interconnected job market and technology market for many companies. AI will not take jobs; just as with any industrial revolution, it will change the nature of jobs. Imagine, for instance, the transformative impact of AI on customer service: agents can become exceptionally efficient because AI can handle calls and streamline processes. Customer service agents can also evolve into combined service and sales roles, leveraging AI prompts to identify opportunities during interactions. Some of our global customers have already experienced success integrating generative AI into their service centres, enhancing problem resolution and creating cross-selling opportunities. Rather than displacing roles, AI has the potential to reshape them.

The most important task is to understand which areas crucial to the Indian workforce are likely to be impacted. That means harnessing the capabilities of AI to actively consider how roles can evolve and to explore the new opportunities that emerge.

Can you share the importance of prioritising responsible AI adoption in a rapidly evolving tech landscape? How are companies looking at it?

AI is a continuum, from predictive AI to generative AI to personal assistants and ultimately to Artificial General Intelligence. These are just waves in an evolution. Emphasising responsibility and safety from day zero is important for success, irrespective of the risk level associated with a use case. Responsibility today is what leads you to a safe future.

Now, there is something particular about where Salesforce sits in the AI ecosystem, which means we are focused on this not only because of our values but also because of exactly what our different customers demand. From a values standpoint, the number one value of Salesforce is trust. Since the beginning, we have been committed to developing and deploying trusted AI. We have also built the Einstein Trust Layer, which safeguards sensitive data within generative AI apps and workflows, preventing it from leaving customers’ trust boundaries. The Einstein Trust Layer establishes a new industry standard for secure generative AI while addressing critical data privacy, security, residency, and compliance goals. To ensure responsible AI usage, we support risk-based AI regulation that differentiates contexts and uses of the technology and assigns responsibilities accordingly.

I’ve been meeting with top AI policymakers all over the world, and I show them what we have created. Based on feedback from our customers, we were already addressing most of the regulatory concerns now being raised, even before we launched the service.

How is the Indian government looking at AI?

India is very engaged in global discussions, which I think is of tremendous importance and absolutely pivotal at this point in time. There has been a lot of really significant policy leadership coming out of the G7 countries, the EU, the US, and the UK. It has been one of the most encouraging examples of agile policy coordination that I’ve seen. And all of those countries very much want India to be deeply involved.

What is Salesforce doing to shape policy and practices to ensure responsible innovation?

At Salesforce, we support risk-based AI regulation, but our framework is such that we have always implemented checks on the AI services we provide rather than waiting for regulations, to ensure that we are doing the right thing and mitigating potential risks and ethical concerns. While some organisations are new to this, we have been around for more than 24 years and are very clear on our values, which centre around trust. As with all our innovations, we embed ethical guardrails and guidance across our products to help customers innovate responsibly.

What sort of policy support are you looking for from India that you are already getting in the US? What are the policy loopholes or areas for improvement in the policy space?

Policy-making around artificial intelligence is still relatively new, which is why we need to be actively engaged in the process of implementing principles and regulations. In India, the recent development concerning the data privacy law is moving in a good direction. As the details are worked out, Salesforce has every intention of being involved in the process.

Regarding the policy-making process, with new technologies like AI and cloud, it is important to review laws periodically rather than simply stapling old rules onto new ones. For example, a procedure like a biannual review in the US, where all the laws are assessed, can help determine whether a particular regulation put in place decades ago is still relevant, whether we need to eliminate or add rules, and whether a rule applies to the current technology landscape.

Is Salesforce doing some specific investments in India for AI?

Globally, we have a very significant Salesforce Ventures Generative AI Fund, which we recently expanded from USD 250 million to USD 500 million. It is an active portfolio, and our investment thesis has always centred on organisations that are, in some manner, part of the broader enterprise software ecosystem. We have a great model for identifying companies with promise. We not only provide seed investment; we also pair one of our executives with the startup’s leadership as a mentor, to bolster the startup ecosystem and spark the development of responsible generative AI.

What sort of geopolitics or geo-economic impact of AI can we expect?

Recently, we had the AI Safety Summit in the UK at Bletchley Park. The summit achieved several important milestones. First, it set a date on the calendar that served as a forcing function, emphasising the urgency and significance of the matter. Similar events have since been planned for South Korea in six months and for France six months after that, indicating a continued commitment to addressing AI safety concerns globally. The day before the summit, the G7 finalised its AI principles and the US administration issued its executive order on AI, further showcasing the event’s impact as a forcing function. The Bletchley Park Declaration, signed by 28 countries, outlines principles for the responsible and safe use of AI. Countries like China, Kenya, and the Philippines were among the signatories.

On the second day, the UK went into detail on safety and security, including national security, with a smaller group of countries. I do think there is some degree of inevitability in geopolitics and geo-technology, which suggests there are going to be some areas of more detailed work.

What is your view of countries like China in the AI space, post the Bletchley summit?

I think there’s a pretty good understanding that there are different models of governance and economies in the world. Thus, there will be different models of technology development that stem from these differences.
