Shrish Ashtaputre, Senior Technical Director Engineering, Calsoft
Shrish Ashtaputre, Senior Technical Director, Engineering, Calsoft, takes us around this new software ‘proving ground’, showing how AI is influencing shift-left testing, AI-assisted quality engineering, preventive testing mindsets, security-integrated validation, intent validation, synthetic monitoring, API-level load simulations and edge-case testing. He also confronts the dangers of blind automation, the risks of jailbreaking and prompt manipulation, and the challenges of skeuomorphism.
What has changed about QA in the last four to five years, and how proactive, shift-left, AI-ready, AI-driven and security-sharp is this frontier becoming?
QA used to come in at the end, checking whether things worked. Now it is part of how things are built in the first place, driving quality decisions instead of just verifying them. We have seen the rise of shift-left testing, AI-assisted quality engineering, and security-integrated validation. Test engineers are now expected to understand pipelines, cloud-native architectures, and even prompt engineering for AI tools. The mindset has become more preventive than detective. AI has become part of QA’s toolkit, helping predict weak spots and optimise testing. At the same time, QA must validate the integrity and fairness of AI systems, making it both a user and a guardian of AI.
Are QA and testing blurring their boundaries with development, especially in the early phases of application builds? What is changing here with DevOps and SecOps?
Those boundaries are dissolving, and that’s a good thing. With DevOps, QA became embedded into the pipeline — automated test execution, environment provisioning, and feedback loops are all part of CI/CD now. With SecOps, we’re adding security scans and penetration checks earlier, creating a DevTestSecOps model. QA is no longer a separate stage. It’s a mindset that exists throughout the lifecycle — from requirements to observability in production.
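The gate sequence described above can be sketched in a few lines. This is a minimal, hypothetical model of a DevTestSecOps pipeline, not any specific CI/CD tool: every stage name and check below is an assumption standing in for real tooling such as a test suite or a security scanner.

```python
# Illustrative DevTestSecOps gate sequence: each stage must pass before
# the pipeline advances, mirroring how automated tests and security
# scans act as quality gates inside CI/CD.
# All stage names and checks are hypothetical stand-ins.

def run_gates(stages):
    """Run gates in order; stop at the first failing stage."""
    for name, check in stages:
        if not check():
            return f"pipeline blocked at: {name}"
    return "pipeline passed"

stages = [
    ("unit-tests", lambda: True),             # e.g. a unit-test suite
    ("static-security-scan", lambda: True),   # e.g. a SAST tool
    ("dependency-audit", lambda: False),      # e.g. a CVE check failing
    ("deploy", lambda: True),
]

print(run_gates(stages))  # pipeline blocked at: dependency-audit
```

The point of the sketch is the ordering: security and test gates sit inline in the flow, so a failure stops the release rather than surfacing as a late-stage audit finding.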
Is automation a good change in QA? What about jailbreaking concerns here?
Automation is both a necessity and a responsibility. It accelerates validation cycles, eliminates repetitive work, and allows engineers to focus on exploratory and edge-case testing. However, blind automation is dangerous. With AI-based testing, ‘jailbreaking’ or prompt manipulation risks can creep in — especially when validating generative systems. The right approach is controlled automation — where humans design intent, and automation executes within guardrails. In QA, automation should assist judgment, not replace it.
What is/should be changing with red teams, blue teams and purple teams in the software lifecycle?
These team models, traditionally from cybersecurity, are now influencing mainstream QA. Red teams (attackers) simulate adversarial behavior; blue teams (defenders) improve detection; purple teams bridge the two. We’re seeing QA functions adopt a similar pattern: offensive testing, defensive hardening, and collaborative resilience. Especially in cloud-native environments, QA engineers must think like adversaries to ensure security, scalability, and performance from day one.
Vibe coding, low-code and no-code tools, AI tools: do they help QA, or slow and complicate it? Anything you can share about edge cases and QA?
These tools are democratising development — but they also complicate QA in interesting ways. Low-code and AI-assisted environments create abstraction layers that obscure the underlying logic. QA teams need new testing models that can handle intent validation instead of line-by-line verification. Edge cases become trickier. For instance, an AI tool might auto-generate UI flows that behave inconsistently across browsers or devices. The key is adapting QA strategies to validate behavior, not just code.
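Validating intent rather than line-by-line code can be sketched as a behavioural check applied uniformly to different implementations. The two implementations below are hypothetical stand-ins for hand-written and auto-generated code; the "intent" is expressed as properties of the output, not of the source.

```python
# Sketch of behaviour-level ("intent") validation: the same behavioural
# spec is run against two different implementations, the way QA might
# validate auto-generated code without inspecting it line by line.
# Both implementations are hypothetical stand-ins.

def sort_impl_a(items):        # e.g. hand-written code
    return sorted(items)

def sort_impl_b(items):        # e.g. an AI-generated equivalent
    out = list(items)
    out.sort()
    return out

def validates_intent(impl, data):
    """Intent: output is in order and contains exactly the inputs."""
    result = impl(data)
    in_order = all(a <= b for a, b in zip(result, result[1:]))
    same_items = sorted(result) == sorted(data)
    return in_order and same_items

data = [3, 1, 2]
print(all(validates_intent(f, data) for f in (sort_impl_a, sort_impl_b)))
```

Because the check depends only on observable behaviour, it survives regeneration of the underlying code, which is exactly the property needed when an abstraction layer hides the logic.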
What new implications have been observed in areas like regression testing, SDET (Software Development Engineer in Test), API (Application Programming Interface) testing, etc.?
Regression testing has become AI-augmented and data-driven. Instead of re-running all test cases, systems now prioritise based on change impact analysis. The SDET role is also evolving — they now bridge coding, observability, and automation frameworks, often owning quality gates within CI/CD. In API testing, the focus has moved beyond correctness to contract validation, resilience testing, and chaos validation — especially with distributed and microservices-based systems.
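Prioritising by change impact, as described above, reduces at its core to intersecting a change set with a test-to-module coverage map. The sketch below uses entirely hypothetical test and module names; real systems would derive the coverage map from instrumentation data.

```python
# Sketch of change-impact-based regression selection: instead of
# re-running every test, select only tests whose covered modules
# intersect the modules touched by a change.
# Test and module names are hypothetical.

TEST_COVERAGE = {
    "test_login": {"auth", "session"},
    "test_checkout": {"cart", "payments"},
    "test_profile": {"auth", "profile"},
}

def select_tests(changed_modules):
    """Return tests whose covered modules overlap the change set."""
    return sorted(
        test for test, modules in TEST_COVERAGE.items()
        if modules & changed_modules
    )

print(select_tests({"auth"}))  # ['test_login', 'test_profile']
```

In practice the coverage map is the hard part; AI augmentation typically goes into predicting impact for changes the map does not cover directly.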
Has skeuomorphism changed QA goals also?
Interesting question. Skeuomorphism — the design principle of mimicking real-world elements — may seem like a UI concern, but it indirectly impacts QA goals. Testing now needs to validate user perception and trust, not just functional accuracy. When software behaves in familiar yet abstract ways, QA ensures that cognitive expectations match real outcomes. So yes, skeuomorphism has subtly shifted QA from “it works” to “it feels right”.
Have security and performance/SDLC time in software come closer or are they still conflicting goals?
They used to conflict, but modern architectures and tooling have brought them closer. Security checks are now embedded as automated gates within pipelines. Performance testing, too, is moving earlier — with synthetic monitoring and API-level load simulations. In effect, security and speed can coexist, provided teams integrate validation rather than treat it as an afterthought.
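An API-level load simulation of the kind mentioned above can be sketched with nothing more than a thread pool: fire concurrent calls and summarise latency. The endpoint here is simulated locally with a sleep so the example is self-contained; a real run would issue HTTP requests instead.

```python
# Minimal sketch of an API-level load simulation: issue concurrent
# calls against an endpoint and report latency. The endpoint is a
# local stand-in (a fixed sleep), not a real service.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_api():
    """Stand-in for one HTTP request to the API under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service latency
    return time.perf_counter() - start

# 50 calls, at most 10 in flight at once.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(lambda _: call_api(), range(50)))

print(f"p50={statistics.median(latencies) * 1000:.1f}ms "
      f"max={max(latencies) * 1000:.1f}ms")
```

Running a small simulation like this inside the pipeline, rather than only before release, is what lets performance regressions surface as early as functional ones.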
What has been evolving at your company and its solutions after the advent of AI? Have customer expectations and investments changed?
AI has definitely shifted how we look at quality engineering. At Calsoft, a lot of our recent work has been about moving from traditional automation to intelligent automation, especially with platforms like CalTIA, our AI-assisted testing accelerator. It helps teams analyse change impact, prioritise the right test paths, and reduce the time spent on repetitive regression cycles. What we’re seeing on the customer side is a clear shift in expectations. Earlier, the ask was faster releases. Now the conversation is around continuous validation, risk prediction, and how QA can keep up with highly dynamic, AI-enabled systems. Customers are also investing more in tooling that can adapt to frequent changes — not just execute scripts. So yes, expectations and investments have evolved. AI is pushing QA to become more analytical, more data-driven, and much closer to development and operations than before.
pratimah@cybermedia.co.in