Transformative Trends in Interactive Streaming: Ranga Jagannath, Senior Director, Agora


Punam Singh

In a recent interview, Ranga Jagannath, Senior Director at Agora, a video, voice, and live interactive streaming platform, spoke about the importance of data privacy, ethics, and collective effort in the industry. The discussion ranged from current AI trends to the challenges and opportunities in the evolving AI space. He highlighted concerns around data privacy, ethical use, and interoperability, and emphasized the need for collective efforts, including regulation and self-governance, to overcome the challenges facing the industry.

The interview provided a balanced exploration of Agora’s role in the AI landscape, showcasing current trends, challenges, and the platform's strategic positioning for future developments.

Excerpts

DQ: What were the key AI trends you observed in 2023, and what do you foresee for 2024?

Ranga Jagannath: The first one that comes to mind is deep learning and neural networks. A great deal of research and effort has gone, and continues to go, into deep learning and neural networks.

The second trend is probably not making as many waves, but there is a lot of research and work going on in healthcare; we can get into the details of that separately. That is another trend I have seen around AI in healthcare in 2023. And if I were to speak specifically about the domain where I have a little more expertise, I would point to things like AI for software-based noise suppression.

There are NLP models for real-time translation and transcription of conversations. AI can also be used to generate content within live conversations, to make human-machine interactions a lot more natural. And of course, in some domains there has been a lot of work on using AI for spatial audio, because audio lends itself well to this. So these are some of the trends I have seen, and these are the positives. If I were to touch upon something that may not be the best outcome for AI, it is obviously deepfakes, but we can park that for a later part of the discussion. These are some of the trends I have seen in 2023.

You also asked what I foresee in 2024. From our perspective, we have started using a lot of AI in our products and services to make live interactive conversations extremely rich and high quality, so AI helps us a great deal there. There is also a lot happening in human and workplace collaboration; we have seen a little of that from what Microsoft and others have done, and that is a trend we think will pick up in 2024.

Coming back specifically to where we see ourselves, I think a lot of the effort will be on AI helping us in entertainment, media, and of course healthcare.

DQ: We are seeing a significant rise in generative AI, particularly with the popularity of tools such as ChatGPT and Bard. How do you see it influencing industries, and how does Agora perceive the impact of generative AI on various sectors?

Ranga Jagannath: As with any technology, new technologies go through two or three phases: first denial, then acceptance, and then normalization of the technology in our day-to-day lives. Enough has been said and written about what generative AI can take over from humans and what will remain.

So I won't go too deep into that, because the impact, the effect, and the benefits are already out there for most people to see. Where we see a lot of use for generative AI in our context, I can give you an example. In a conversation between educators and students, there could be a need for the transcription to be made available in real time, so that engines can understand who is participating, what they are saying, and what the tone and tonality of the conversation is.

How effective is the teacher's conversation? That is one example. The other is e-commerce: if somebody is trying to sell something live, is the audience engaged or not? Is the person selling the product able to convince people or not? A lot of conversations happen, but how do you make sense of them?

These are some of the areas where we see AI playing a role, apart from what we have seen recently with the World Cup, as well as the IPL earlier this year, where experts were holographically transported from remote locations to the studios or the stadium.

These are some of the applications adjacent to what we do, and we are extremely excited about the possibilities with AI.

DQ: What challenges do you foresee for various sectors in India when adopting generative AI, and how can these challenges be overcome?

Ranga Jagannath: As with the advantages, there are a few challenges and disadvantages. The first that comes to mind is data privacy, because AI by its very nature requires very large sets of data, and some of it could be personal information. So how do you protect this data?

What are the policies that you have as organisations, governments, or individuals around data privacy? That is one challenge the industry will have to work on collectively. The other is around ethical use; deepfakes are a great example. How do you prevent that kind of use, which could be harmful to people and their reputations?

That is the second one. The third is interoperability. I might develop a platform that is AI-based, but how will it operate with a different platform that could benefit from it? Will a medical device be seamlessly interoperable? Will an IoT device be seamlessly interoperable?

Will software platforms be seamlessly interoperable? These are some of the challenges I see as the segment grows. I don't have an answer or a magic wand to say what will solve them. I think it is going to be a combination of regulation, self-governance, and checks and balances that organizations themselves put in place. And of course, it will also be a combination of how people are exposed to these tools and what tools are made available for them to use. As we speak, it is an evolving space, very fluid, and only time will tell which of these will get handled, how, and how soon.

DQ: How is AI transforming customer services and business operations? What value can users expect from these innovations?

Ranga Jagannath: Start with something as simple as being able to have a seamless audio and video conversation over poor or sluggish networks and devices; how you handle that is itself a big challenge.

Agora has invested extremely heavily in making sure that the network is robust enough to handle these situations. We call it a software-defined real-time network, or SD-RTN. Various components are part of the SD-RTN. One of them is AI-based noise suppression: not everyone may have access to very high-end devices that can suppress noise, but should that be a roadblock to seamless experiences? We feel it should not.

That is why we have AI-based noise suppression, which is software-driven and can cut out a majority of the common noises and sounds we hear in our environments every day. That is one example of AI-driven innovation from Agora.
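For illustration only, here is a minimal sketch of software-based noise suppression using simple spectral gating. Agora's production feature relies on trained AI models rather than a heuristic like this; the function name and parameters below are hypothetical and only show the general principle of attenuating a stationary noise floor.

```python
import numpy as np

def spectral_gate(signal, frame_len=512, hop=256, noise_frames=10, threshold=1.5):
    """Attenuate stationary background noise by gating STFT magnitude bins."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)
    mags = np.abs(spectra)
    # Estimate the noise floor from the first few frames, assumed to be speech-free.
    noise_floor = mags[:noise_frames].mean(axis=0)
    # Keep only bins that rise clearly above the estimated noise floor.
    mask = mags > threshold * noise_floor
    cleaned = np.fft.irfft(spectra * mask, n=frame_len, axis=1)
    # Overlap-add the cleaned frames back into a single waveform.
    out = np.zeros(len(signal))
    for i, frame in enumerate(cleaned):
        out[i * hop:i * hop + frame_len] += frame
    return out
```

A learned model replaces the fixed threshold with a per-bin mask predicted by a neural network, which is what lets it handle non-stationary noise such as keyboard clicks or traffic.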

Another one is real-time transcription and translation, where we can transcribe and translate between languages in real time. Another is 3D spatial audio, for gaming and metaverse kinds of experiences.

These technologies are extremely useful for figuring out who is at what point in a given situation, a room, or a virtual environment. Then, of course, there is the quality of experience and service that I briefly spoke about, and the question of how you manage this at scale.
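As a rough illustration of the spatial idea, a renderer can weight a speaker's audio per ear based on where that speaker sits in the virtual room. The sketch below shows only basic constant-power stereo panning; it is not Agora's spatial audio implementation, and the function name is hypothetical.

```python
import numpy as np

def pan_stereo(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Constant-power stereo pan: -90 deg is hard left, +90 deg is hard right."""
    # Map the azimuth onto [0, pi/2] and split the energy between the two ears.
    theta = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * (np.pi / 2.0)
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)   # shape: (samples, 2)
```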

Currently, we power more than 60 billion minutes of live interactive voice and video worldwide. All of this cannot be done or monitored manually, so there is a lot of AI under the hood that takes care of these optimizations. These are some of the things Agora does behind the scenes, but as I said, it drives significant business outcomes for our customers and prospects.

DQ: How does Agora, through its software-defined real-time network (SD-RTN), leverage AI to manage voice and video and provide an interactive streaming experience to its users?

Ranga Jagannath: A platform like Agora needs to be able to adjust itself to varying hardware and network conditions. For example, if I'm on a poor network, you should still be able to hear me until the network dies completely.

How do you optimize for that? That is where one of the components of the SD-RTN comes in: it optimizes the audio and video packets so that we automatically control, through AI, packet loss, jitter, and the other variables that are part of the whole audio-video pipeline.

We take care of it using different AI-based technologies that we have implemented in the network itself. The other one I mentioned is noise suppression, because not everyone may have access to high-end devices that can suppress noise. That is another area in which we have invested a lot of time, effort, and money.

And as I said, how do you allocate bandwidth smartly to the devices and networks that require it? If there is a surge of traffic in one particular geography, how do you auto-scale your resources in that geography and limit your resources in another? All of these are AI-based decisions that the network takes care of.
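As a loose illustration of this kind of decision, here is a toy rate-control rule that backs off when packet loss or jitter rises and probes upward otherwise. This is not the SD-RTN's algorithm; production systems use far more sophisticated, often learned, controllers, and all names and thresholds here are assumptions.

```python
def target_bitrate_kbps(current_kbps: float, packet_loss: float,
                        jitter_ms: float) -> float:
    """Toy congestion response: back off under loss/jitter, otherwise probe upward."""
    if packet_loss > 0.05 or jitter_ms > 80:      # clearly congested
        proposed = current_kbps * 0.7
    elif packet_loss > 0.02 or jitter_ms > 40:    # mildly degraded
        proposed = current_kbps * 0.9
    else:                                         # healthy link: try for more quality
        proposed = current_kbps * 1.05
    return min(2500.0, max(150.0, proposed))      # clamp to sensible video bitrates
```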

DQ: Looking ahead to 2024, we anticipate transformational changes in the AI landscape. How is Agora positioned to stay ahead and at the core of these developments?

Ranga Jagannath: We have very recently launched a pilot SDK that can generate content using AI. For example, say you are in a classroom and have a question you would like to ask the teacher, but the teacher is not available to answer in real time. The same question can be passed to the AI engine, which will generate content and communicate back with the student.

It will be seamless to the student whether the teacher, a teaching assistant, or an artificial-intelligence agent has responded; it looks seamless, real-time, and lifelike to the participants. That is one example. These are some of the areas where we are putting a lot of effort into seeing how we can integrate AI-based content generation tools to make experiences better.
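A hypothetical sketch of that fallback flow: if no human reply arrives within a time budget, the question is handed to an AI engine. None of these names come from Agora's pilot SDK; the generator is passed in as a plain callable so no specific AI API is assumed.

```python
import queue

def answer_student(question: str, teacher_replies: queue.Queue,
                   ai_generate, timeout_s: float = 5.0) -> str:
    """Prefer a human reply; fall back to AI-generated content after a timeout."""
    try:
        # Wait briefly for the teacher (or a teaching assistant) to respond.
        return teacher_replies.get(timeout=timeout_s)
    except queue.Empty:
        # No human reply in time: hand the question to the AI engine instead.
        return ai_generate(question)
```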

To give you another example, there are a lot of handsets that may not be very high-end; however, the experience a customer or end user expects is very high. In this context, we have technology that takes a low-quality stream and converts it into a higher-quality stream, on the device or on the network itself.

And it does this in a manner that does not consume any more hardware resources than required. So even on low-end devices and in poor network conditions, we can give an extremely high-quality experience to our customers and their end users. These are just some examples; there is a lot more we are working on, some of which is probably a little too early to speak about.
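For intuition only, the snippet below upscales a decoded frame with plain nearest-neighbour interpolation. Real low-to-high quality conversion of a video stream would typically use a learned super-resolution model rather than this; the function is purely illustrative.

```python
import numpy as np

def upscale_frame(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Nearest-neighbour upscale of an H x W x C video frame."""
    # Repeat each pixel `scale` times along height and width.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)
```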

These are some of the things I can share. Another is the ability to gamify certain experiences: if you are having a conversation in a social group, how can you gamify that experience by putting AI-driven elements into the conversation?

Those are things we are also working towards, and we already have some examples. The last one I can think of, at least from this perspective, is AI-based proctoring in education, where you have to deliver a test or examination to a lot of participants and ensure that no malpractice is happening.

AI comes in there through the proctoring capability, and that is another area where we help. Then there is telehealth, where a lot of things are structured around virtual healthcare. Many of these components require AI-based decisions to be made in certain situations, and that is where we come in.
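As a very rough sketch of one proctoring-style check, the code below flags video frames in which no face is detected, using OpenCV's stock Haar cascade. This is illustrative only and not Agora's proctoring system; real proctoring combines many signals such as gaze, multiple faces, audio, and screen activity.

```python
import cv2

# Stock OpenCV Haar cascade for frontal faces (ships with opencv-python).
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_visible(frame_bgr) -> bool:
    """Return True if at least one face is detected in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```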
