Are You a Modern Professional?

Here are six suggested best practices—in alphabetical order—to hire and retain the modern tech professional.

DQI Bureau

Max was a rising star in the corporate world. He had all the gear: dual monitors, noise-canceling headphones, a smartwatch that tracked everything from his heart rate to his caffeine intake, and a standing desk that adjusted via app. He was the poster child of the modern professional.


The CEO had given him one task: present the quarterly corporate results to the leadership team. That Monday morning, Max strutted into the office and proudly showed off his new smartwatch. “It’s got AI that tells me when I’m stressed,” he announced. “It even buzzes when I need to breathe.”

Max adjusted his tie and announced he had synced the Q2 report “to the cloud.” The CEO nodded. Minutes into his presentation, the screen went blank. “No worries,” Max said, tapping his watch. “I uploaded it last night… to my fridge.” It turned out he had emailed the report to his smart fridge’s internal storage. The team spent an hour trying to retrieve the data from a kitchen appliance while his smartwatch kept buzzing: “Elevated stress detected.”

If that anecdote made you wonder, these statistics should make you ponder: Worldwide end-user spending on GenAI models is set to reach USD 14.2 billion this year, while end-user spending on specialized GenAI models, which include DSLMs (domain-specific language models), will cross USD 1.1 billion, says Gartner Inc.


Specialized GenAI models are trained or fine-tuned on industry- or business-process-specific data. By 2027, more than 50% of GenAI models used by enterprises will be domain-specific, meaning specific to an industry or business function, up from 1% in 2024.
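
To make that concrete, here is a minimal, purely illustrative sketch of what “fine-tuning on domain data” can look like, using the open-source Hugging Face transformers and datasets libraries. The base model, corpus file and hyperparameters below are hypothetical placeholders, not anything Gartner or the enterprises cited actually use.

# Illustrative only: fine-tune a small open model on domain-specific text,
# the basic recipe behind a DSLM. Paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"  # stands in for any small foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Domain corpus, e.g. anonymized audit notes or tax rulings, one record per line
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dslm-demo", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting checkpoint is a (toy) domain-specific model

Real DSLM programs wrap this basic step in data governance, evaluation and guardrails; the snippet only shows the core idea of adapting a foundation model to a single domain.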

Experience & Expertise

Foundation GenAI models such as LLMs are trained on vast amounts of data and used for many different tasks. “These are the first models supporting GenAI and will continue to represent the largest area of spending by organizations in the coming years,” says Arunasree Cheparthi, a Gartner senior analyst. “Organizations are also turning to more domain-specific or vertical GenAI models because they offer improved performance, cost, reliability and relevance in specific enterprise use cases compared to foundation models.”

Why does it matter? Because DSLMs can usurp jobs now performed by human “experts”, those with qualifications and experience. “GenAI will transform the legal, risk, compliance, tax, accounting, and audit professions, along with global trade over the next three years,” notes Steve Hasker, president and CEO of Thomson Reuters. “Already, professionals report increased efficiency, productivity and cost savings as the most significant benefits, making AI a crucial tool for organizations navigating a rapidly evolving set of business challenges.”

Early this year, Thomson Reuters polled 2,275 professionals worldwide to compile its Future of Professionals 2025 report. What makes a modern professional, whether tech or otherwise? “The skills needed to thrive in many service professions have dramatically changed over the years and will very likely continue to change,” the study reports. “A new modern professional is emerging. While the core abilities remain the same, the modern professional uses tech and AI to augment their abilities. More than 50% of professionals said they had experienced significant changes in their day-to-day role over the past 12 months or are expecting to within the next 12 months.”

The biggest barriers? An overreliance on tech that could crimp professional development and lead to job losses, as well as holding AI to higher ROI and accuracy expectations. “More than 90% of professionals said they believe computers should be held to higher standards of accuracy than humans,” the report notes. “About 40% said AI outputs would need to be 100% accurate before they could be used without human review, meaning that it’s still critical that humans continue to review AI-generated outputs.”

Talent & Trouble

GenAI creeping into human resources scares most modern professionals. Only 26% of job candidates trust AI will fairly evaluate them, and 52% believe AI screens their application information. A Gartner poll of 2,918 job applicants found that 32% were concerned about AI potentially failing their applications, and 25% said they would trust employers less if they used AI to evaluate their information. Only 50% of candidates thought that the jobs they were applying for were genuine.

“It’s getting harder for employers to evaluate candidates’ true abilities, and in some cases, their identities,” says Jamie Kohn, a Gartner senior research director. “Employers are increasingly concerned about candidate fraud. Candidate fraud creates cybersecurity risks that can be far more serious than making a bad hire.”

Indeed, another Gartner survey of 3,000 job candidates found 6% admitted to participating in interview fraud—either posing as someone else or having someone else pose as them in an interview. Gartner predicts that by 2028, one in four candidate profiles worldwide will be fake.

On June 30, I was part of a panel at the Oxford Future of Professionals Roundtable where we discussed three gaps: “First, the responsibility gap stems from the distributed accountability among model developers, application builders, and end users—raising the question of who is liable for accuracy and potential harm,” says Dr Mari Sako, professor of management studies at the University of Oxford’s Saïd Business School. “Second, the principles-to-practice gap reflects the challenge of translating broad ethical guidelines into actionable, domain-specific steps. For instance, transparency demands disclosure—but of what, and to whom? Third, the goals gap arises when stakeholders lack alignment on AI’s purpose: is it for efficiency, innovation, cost reduction, or superhuman accuracy? With competing priorities like privacy, safety and sustainability, companies must navigate trade-offs to define what optimal means.”

Practice & Practicality

Professionals are involved across the AI landscape—as developers, providers, deployers and users—as defined by the EU AI Act. “While this provides opportunities, it also exposes professionals to risks at every stage—from biases, hallucinations, dependencies, misuse and more,” notes Dr Florence G’Sell, professor of private law at the Cyber Policy Center at Stanford University. “Opacity complicates the situation, as it makes assessing model performance difficult. To mitigate these risks, organizations could seek independent external assessment. But developers are reluctant to provide auditors access to data sources, model weights and code. This limits the ability to evaluate and ensure compliance with responsible AI principles.”

For example, cross-examination is core to the judicial process. “But how might a lawyer interrogate an AI model, especially when it is proprietary? Or how can a judge assess AI enhancements to video evidence, when they can’t validate that the AI wasn’t manipulating the footage?” asks Michael Buenger, executive vice president at the US National Center for State Courts. “I would advocate for a public repository that tracks how AI tools perform across the legal system, and stronger ethics rules (such as sanctions for citing hallucinated cases). Such safeguards could widen access to justice while protecting the integrity of evidence and truth.”

Regulatory uncertainty is already taking a toll on professionals, with more than 60% of enterprises in the Asia-Pacific experiencing moderate to significant disruption to their IT operations. “Some countries, like South Korea, Singapore and Japan, are aligning closely with EU GDPR-style frameworks,” reports IDC (International Data Corp). “Others, such as India, Indonesia and Vietnam, are advancing sovereignty-driven mandates tailored to domestic priorities.” For example:

  • India: The DPDP (Digital Personal Data Protection) Act 2023 establishes consent-first data governance, introduces data protection officer requirements, and imposes localization in sensitive sectors.
  • Singapore: PDPA (Personal Data Protection Act) 2024 and Cybersecurity Act 2018 expand breach notification and third-party oversight while piloting AI governance frameworks.
  • Australia: Privacy Act reforms (2025) focus on algorithmic accountability and data portability, and levy steep penalties of up to AUD 50 million for breaches.
  • China: AI and cybersecurity regulations enforce AI explainability, content controls and mandatory data localization for industries covered under China’s CII (critical information infrastructure) categorization.

Why should India bother about this? Because software that runs most corporate applications has to factor in regulatory issues. Also, because India accounts for 20.5% of the Indo-Pacific region’s (excluding Japan and China) software market. India’s software revenues will reach USD 18.4 billion by end-2025, up from USD 15.2 billion at end-2024, which was itself up 22% over 2023, says IDC. By 2029, the market will be worth USD 36 billion.

Here are six suggested best practices—in alphabetical order—to hire and retain the modern professional:

  • Automate: Set a long-term automation vision across operations, logistics, quality control, inspections, assembly, material handling and human resources, to identify roles and responsibilities for professionals working in collaboration with AI.
  • Build: Partnerships and ecosystems. This may mean partnering with robotics startups, joining standards-setting groups, or investing in modular infrastructure in factories or other workplaces that can easily be shifted to accommodate different robot types, advises McKinsey.
  • Clarify: Expectations. Communicate hiring standards. Explain to candidates how the organization defines acceptable use of AI, and emphasize its fraud-detection efforts, including the legal consequences if fraudulent behavior is detected.
  • Data: Invest in data for foundation models and data infrastructure. Document what information is used so that professionals can replicate their methodology when creating new robotic or application platforms.
  • Engage: People to interview people. Gartner says more than 60% of candidates are more likely to apply to a position if the organization requires or conducts in-person interviews.
  • Fraud: Prevention must extend beyond the initial hiring phase. Focus on system-level validation rather than individual surveillance—tightening background checks, using risk-based data monitoring and embedding detection tools like identity verification and anomaly alerts in recruiting systems.

Since we started with a joke on professionals, let’s end with another: Pat, the tech-savvy project lead, had his home fully automated—smart lights, smart locks, and a baby monitor that streamed in HD. During a big client pitch on Zoom, he activated a “professional” virtual background and shared his screen. A few seconds later, a tiny voice interrupted: “Daaa-deee, I pooped.”

Pat had accidentally shared the baby monitor feed instead of the slide deck. The client watched in stunned silence as Pat’s toddler waddled onscreen, proudly holding up a soiled diaper as a trophy. Pat’s smartwatch buzzed: “High stress detected.” The client deadpanned: “Well, that’s one way to present your bottom line.”

Raju Chellam is a former Editor of Dataquest and is currently based in Singapore, where he is the Editor-in-Chief of the AI Ethics & Governance Body of Knowledge, and Chair of Cloud & Data Standards.

maildqindia@cybermedia.co.in