Steering AI: The Critical Role of Quality Engineering [Testμ 2024]

LambdaTest

Posted On: August 22, 2024


As AI continues to transform industries, quality engineering’s role in ensuring its success and safety becomes crucial. In this session, the panel explores how quality assurance, ethical considerations, and robust testing frameworks are vital for managing risks and ensuring the reliability of AI systems.

The experts further discuss best practices, methodologies, and challenges, providing insights on how organizations can navigate the complexities of AI technologies effectively.

Let’s dive into the session and learn from the featured industry experts:

Chris Manuel – SVP, Global Head of Quality Engineering, Coforge, focusing on AI-driven innovations and quality transformation.

Kiran Rayachoti – Vice President at Publicis Sapient, with 20 years of engineering experience and a pioneer of GenAI integration in quality engineering.

Manish Potdar – Head of Strategy, LTIMindtree, specializes in AI assurance and next-gen skills.

Richa Agrawal – Digital QA Head for APAC at GlobalLogic, with 20+ years in IT, specializing in AI & ML. She’s a career coach and advocate for women in QA.

Subba Ramaswamy – Managing Director at Accenture, drives AI-powered quality engineering in North America and champions diversity and talent development.

Vikul Gupta – Head of NextGen CoE at Qualitest, with over 20 years of experience in quality engineering, focusing on AI/ML, DevOps, and digital transformation.

If you couldn’t catch all the sessions live, don’t worry! You can access the recordings at your convenience by visiting the LambdaTest YouTube Channel.

Adapting Traditional Security Engineering Practices to AI Systems

Sudhir Joshi, the session moderator, began the discussion by addressing Chris with a fundamental question: How do traditional security engineering practices adapt to the unique challenges presented by AI systems?

Answering the question, Chris discussed the need for both evolution and revolution in adapting security engineering to AI. He highlighted that AI introduces unique challenges that require new skills, ethical considerations, and a focus on fairness beyond the traditional emphasis on speed and effectiveness. Chris stressed the importance of evolving traditional security practices to address AI’s complex vulnerabilities and the critical role of data quality and security in AI systems.

Adding to the discussion, Vikul highlighted two critical intersections in AI and quality assurance: testing AI-infused applications (QA for AI) and leveraging AI to improve testing processes (AI for QA). He explained that AI systems present non-deterministic outputs, requiring testers to move beyond traditional rules-based methods.

He introduced a three-level pyramid for QA in AI, focusing on data assurance, model accuracy, and ensuring that AI meets business objectives. Vikul also highlighted how advancements in Generative AI are revolutionizing the testing landscape, enabling more efficient test creation and prioritization, thereby leveling the playing field for QA professionals.
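
To make Vikul’s point about non-deterministic outputs concrete, here is a minimal sketch (not from the session) of what moving beyond exact-match, rules-based assertions can look like: a hypothetical generate_answer model call is validated against a similarity threshold rather than a fixed string. Real pipelines often use embedding-based semantic similarity; the standard-library SequenceMatcher below just keeps the example self-contained.

```python
# Minimal sketch: replacing an exact-match check with a similarity
# threshold for a non-deterministic AI response.
from difflib import SequenceMatcher

def generate_answer(prompt: str) -> str:
    # Placeholder for a real model call; actual output varies per run.
    return "Refunds are typically processed within 5 business days."

def test_refund_answer():
    expected = "Refunds are processed within 5 business days."
    answer = generate_answer("How long does a refund take?")
    similarity = SequenceMatcher(None, answer.lower(), expected.lower()).ratio()
    # A rules-based test would assert equality; here any sufficiently
    # close paraphrase passes.
    assert similarity > 0.7, f"Answer drifted too far: {answer!r}"

test_refund_answer()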

Manish also expanded on the discussion by comparing traditional and AI-driven testing approaches. He pointed out that traditional testing relied on standardized, deterministic data, whereas AI models require handling a broader range of data variations, necessitating new strategies. He also addressed the shift in validating business processes, noting that AI introduces complexities like fairness, accuracy, and potential hallucinations that must be accounted for.

Manish further discussed non-functional testing, emphasizing the importance of evaluating AI models’ performance and sustainability. He concluded by stressing the need to adapt testing strategies to meet evolving regulatory requirements for AI systems.

The Role of Transparency in AI Quality Assurance

Moving on, Sudhir raised a question specifically for Subba and Richa: What role does transparency play in the quality assurance processes for AI, and how can it be effectively implemented?

Subba kicked off the discussion by reflecting on the concept of responsible AI, sharing an anecdote from 2016 when her team was pioneering ideas around AI accountability, even before the term “responsible AI” gained traction. She emphasized that transparency is a foundational element in AI, particularly in quality assurance processes. Subba described how, back then, her team developed a simple Excel workbook to assess the responsibility of AI systems, focusing on their fairness and effectiveness.

Now, years later, the industry is catching up, and transparency is more critical than ever. She highlighted the importance of checking not only the functional and non-functional aspects of AI systems but also their operability and auditability.

Subba stressed the necessity of algorithmic transparency, noting that in the past, the correctness of AI outputs was the primary concern, without much attention to the algorithms used. Today, the focus has shifted to understanding and assessing these algorithms for transparency, bias, and fairness. She pointed out that traditional pass-fail metrics are insufficient, and there’s a need for scoring systems that can continuously evaluate and tweak AI models to minimize bias. She concluded by emphasizing that while achieving 100% unbiased AI is impossible, continuous improvement and transparency are essential in the QA processes.
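
As an illustration of the kind of scoring Subba described, the sketch below computes one common fairness measure, the demographic parity difference, and flags a model version when it exceeds a threshold instead of issuing a binary pass-fail. The metric choice, data, and threshold are assumptions for the example, not details from the session.

```python
# Illustrative fairness score (metric and threshold are assumptions):
# demographic parity difference between two groups' positive-prediction rates.
def demographic_parity_difference(preds, groups, group_a, group_b):
    def positive_rate(g):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(selected) / len(selected)
    return abs(positive_rate(group_a) - positive_rate(group_b))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # model decisions on a test slice
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

score = demographic_parity_difference(preds, groups, "a", "b")  # 0.50
THRESHOLD = 0.10
# Track the score across model versions; flag the model for mitigation
# or retraining when it exceeds the agreed threshold.
print("bias score:", score, "| needs review:", score > THRESHOLD)
```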

Richa expanded on Subba’s points by underscoring the role of transparency in building trust, identifying biases, and ensuring ethical standards in AI systems. She emphasized that transparency allows for better interpretability of AI models, making it easier to identify errors and biases, especially in critical fields like healthcare. Richa highlighted the importance of clear documentation and thorough audits to maintain consistency and compliance, particularly in regulated industries like finance. She cited Tesla’s transparent approach to autonomous driving AI as a successful example of how openness can build public trust in new technologies.

Richa also discussed practical ways to implement transparency, such as adopting explainable AI tools like LIME and SHAP, which help make AI decisions more understandable. She stressed the importance of regular audits, including compliance with regulations like GDPR, to ensure data privacy and transparency. Finally, Richa advocated for fostering a culture of openness where challenges in AI are discussed openly, promoting transparency as essential for managing risks, ensuring compliance, and building trust in AI systems.
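
Richa named LIME and SHAP explicitly; the snippet below is a brief, hedged sketch of a typical SHAP workflow on a tabular model. The model and dataset are stand-ins, and exact return shapes vary between shap versions, so treat this as a pattern rather than a recipe.

```python
# Sketch of a typical SHAP audit step; model and data are stand-ins.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # suited to tree ensembles
shap_values = explainer.shap_values(X.iloc[:5])  # per-feature attributions
# In a QA audit, unusually large attributions on a sensitive feature
# would be flagged for review and recorded for compliance documentation.
```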

Richa Agrawal’s insightful points about transparency in AI quality assurance led the panel to a pivotal discussion about the critical role of quality engineering in steering the AI ecosystem. Emphasizing the importance of responsible AI, Richa highlighted how quality engineering serves as a guiding force, ensuring that AI systems are compliant, ethical, and reliable. Without this rigorous oversight, the AI industry risks repeating the mistakes of the past, where unchecked systems could lead to biased or inaccurate outcomes.

The Critical Role of Quality Engineering (QE) in Shaping AI Ecosystems

The discussion then moved to a crucial question: How can organizations integrate continuous quality improvement into their AI development lifecycle?

Kiran answered this question by discussing the ongoing transformation within the software development life cycle (SDLC), emphasizing how testing has evolved over the years. He pointed out that testers were once considered mere analysts, but now their role has expanded significantly. Kiran stressed the importance of integrating testing into the development process rather than treating it as an isolated task.

He emphasized that quality engineers must collaborate closely with developers, especially in the AI era, where traditional testing methods may no longer suffice. He advocated for a shift in mindset, encouraging teams to embrace new tools and techniques rather than relying solely on familiar ones like Selenium. Kiran also introduced the concept of “unlearning,” suggesting that adapting to new technologies requires letting go of outdated practices and learning fresh approaches.

Manish built upon Kiran’s points by discussing the parallels between the AI model development lifecycle and the traditional SDLC. He outlined a four-stage AI development process: data ingestion, feature definition, model training, and integration into business processes. Manish emphasized the need for continuous quality monitoring throughout these stages, drawing attention to aspects like performance, scalability, fairness, accuracy, and regulatory compliance.

He suggested that continuous quality improvement can be achieved by monitoring these elements in real-time and feeding the insights back into the development process. By doing so, organizations can ensure that their AI models remain effective, compliant, and free from issues like bias or inaccuracy, ultimately leading to better decision-making and more reliable AI systems.
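
One way to picture the feedback loop Manish described is a set of automated quality gates evaluated on every candidate model, with failures routed back into development. The gate names and thresholds below are illustrative assumptions, not figures from the panel.

```python
# Illustrative quality gates for a candidate model; names and thresholds
# are assumptions for the sketch.
QUALITY_GATES = {
    "accuracy":       lambda v: v >= 0.90,
    "fairness_gap":   lambda v: v <= 0.05,
    "p95_latency_ms": lambda v: v <= 300,
}

def evaluate_release(metrics):
    """Return the gates a candidate model fails, to feed back to dev."""
    return [name for name, passes in QUALITY_GATES.items()
            if name in metrics and not passes(metrics[name])]

nightly = {"accuracy": 0.93, "fairness_gap": 0.08, "p95_latency_ms": 240}
failures = evaluate_release(nightly)
if failures:
    print("Blocking release; failed gates:", failures)  # ['fairness_gap']
```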

It’s clear that the integration of AI tools is pivotal for enhancing testing processes. AI-driven test assistants such as KaneAI by LambdaTest are a prime example of this integration.

KaneAI is an AI-powered test assistant for end-to-end software testing, offering advanced AI-driven capabilities for test creation, debugging, and management. By leveraging KaneAI, quality engineering teams can streamline their workflows, reduce manual efforts, and focus on refining their strategies, ultimately driving more effective and efficient testing outcomes.

Addressing Responsible AI: The Role of Quality Engineering in Mitigating Risks

Moving ahead, the focus of the panel shifted to the intersection of responsible AI and quality engineering. Subba Ramaswamy’s insights on responsible AI set the stage, emphasizing the importance of transparency and accountability in AI systems. This gave rise to another question: How can quality engineering help mitigate the risks associated with AI, and how does it contribute to ensuring responsible AI practices?

Chris emphasized the growing complexity and risks associated with AI systems, highlighting that traditional considerations such as performance, accessibility, and security are now amplified by new dimensions like bias and data privacy. He pointed out that quality engineering must evolve to address these enhanced challenges. This involves not only adapting to new types of vulnerabilities and edge cases but also integrating robust security testing and data privacy measures into the quality assurance process.

Chris also stressed the importance of factoring in ethical reviews and regulatory compliance, noting that these aspects are becoming critical in evaluating AI systems and will need to be closely monitored in future developments.

Subba discussed the critical role of quality engineering in mitigating the risks associated with AI, particularly through the lens of responsible AI practices. She highlighted that quality engineering must not only focus on traditional testing but also incorporate considerations for ethical implications and regulatory compliance.

This includes ensuring that AI systems adhere to relevant laws and guidelines and addressing potential biases and inaccuracies in the models. Subba’s perspective underscored the need for quality engineering to adapt continuously and address the evolving challenges posed by AI technologies, ensuring that they are both robust and aligned with ethical standards.

Approaches to Quality Engineering Across Industries: Lessons Learned

Following Subba’s insights and the earlier discussion of Tesla’s example, Sudhir asked Richa to share her insights on a common question: How do you see different industries approaching quality engineering for AI, and what can they learn from each other?

Richa brought a dynamic perspective to the discussion by exploring how different industries approach quality engineering for AI and the valuable lessons they can share. She emphasized that while each industry faces unique challenges, there are significant takeaways that can benefit others. In healthcare, the focus is on rigorous validation and strict regulatory compliance, crucial for ensuring AI systems’ reliability and safety when lives are at stake. This approach underscores the importance of thorough testing and adherence to regulations across all sectors.

In finance, Richa highlighted the emphasis on risk management and the need for AI models to be interpretable and explainable. This sector’s approach to stress testing and transparency in decision-making provides a model for how to build trust with regulators and customers. Richa also noted the automotive industry’s focus on real-time decision-making and safety-critical testing, particularly in autonomous vehicles. The lesson here is the value of extensive simulations and real-world testing in dynamic environments.

Lastly, she pointed out the retail industry’s focus on scalability and agility, emphasizing the need for continuous improvement and flexibility to meet evolving customer needs. By learning from these diverse approaches, industries can enhance AI quality engineering and build more robust and reliable systems.

Unique Perspectives on Enhancing Test Automation with AI

The panel discussion transitioned to exploring how AI is transforming traditional test automation practices. The conversation opened with a reflection on the dynamic nature of the retail sector, where personalization and rapid changes are critical. With this, Sudhir asked his next question to Vikul and Kiran: How is AI complementing or enhancing your traditional test automation use cases?

Leading the conversation, Vikul highlighted two significant ways AI is transforming traditional test automation. First, AI aids in optimizing automation by identifying reusable components and consolidating test cases. For example, in a project for a large healthcare client, AI helped streamline the design process by pinpointing reusable elements and reducing redundancy in test cases. This approach not only modularized the design but also optimized automation efforts.

He also discussed AI’s role in managing testing priorities amidst delays. AI tools can analyze test cases to identify those with the highest defect yield and adjust priorities accordingly. This capability allows teams to focus on critical tests, even when facing tight deadlines. Moving forward, generative AI further enhances automation by creating test scenarios and scripts from user stories, increasing productivity without replacing human testers. Instead, it enables testers to focus on more strategic tasks while automating routine processes.
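
A simplified sketch of the defect-yield prioritization Vikul mentioned: each test case is ranked by its historical failure rate so that, under a tight deadline, the riskiest tests run first. The test names and counts here are invented for the example.

```python
# Hypothetical defect-yield prioritization: rank test cases by their
# historical failure rate so the riskiest run first under deadline.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    runs: int
    defects_found: int

    @property
    def defect_yield(self) -> float:
        return self.defects_found / self.runs if self.runs else 0.0

suite = [
    TestCase("checkout_flow", runs=200, defects_found=18),
    TestCase("login_page",    runs=200, defects_found=2),
    TestCase("search_filter", runs=150, defects_found=9),
]

# Run high-yield tests first; truncate the tail if the window is tight.
prioritized = sorted(suite, key=lambda t: t.defect_yield, reverse=True)
for t in prioritized:
    print(f"{t.name}: {t.defect_yield:.1%}")
```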

Kiran shared insights on the practical applications of AI in test automation, particularly during framework migrations and test case management. He noted how AI has streamlined the migration process between frameworks, making it more efficient and less time-consuming. This transformation allows for quicker adaptations and reduces the manual effort involved in updating test cases.

Further, he also emphasized the shift in quality assurance practices, advocating for a balanced approach that includes both “shift left” (testing early in development) and “shift right” (testing post-deployment). AI facilitates this by integrating feedback from production systems and customer experiences, enhancing overall usability and ethical considerations in testing. This evolving landscape requires continuous adaptation and openness to new technologies to maintain quality and efficiency in test automation.

Final Words…

As it was time to wrap up the panel discussion, Sudhir asked each panelist to share their views on: With the growing emphasis on AI and its applications, what steps are being taken to develop the talent pool to meet this demand? Additionally, how are you ensuring that AI is used ethically and responsibly?

Going in alphabetical order, Chris shared his views first, emphasizing the importance of continuous learning and highlighting Coforge’s approach to talent development. The company is focused on offering not just basic AI and ML training but also specialized certification programs in AI ethics and data.

Chris pointed out that Coforge is bringing together professionals from diverse backgrounds, including statistics and mathematics, to enrich their engineering expertise. The goal is to cultivate a comprehensive skill set that goes beyond traditional engineering, ensuring that their talent pool is well-rounded and prepared for the future.

Kiran shared that his organization is also heavily invested in talent development, particularly through innovative programs like prompt engineering for testing. He stressed the need for a paradigm shift away from traditional, rule-based testing to more dynamic approaches. Additionally, Kiran emphasized the importance of unlearning outdated methods, suggesting that organizations should create spaces for professionals to refresh their perspectives and adapt to the evolving landscape of AI-driven testing.

Manish reflected on the evolution from functional to automation testing, emphasizing the need for a sharper technological edge in today’s testing environment. He believes that success in this space requires a blend of engineering skills, creativity, social intelligence, and interdisciplinary knowledge. Manish highlighted the importance of strengthening business acumen alongside technical expertise, as this combination is critical for thriving in an AI-dominated industry.

Richa painted a vivid picture of what it takes to be a quality engineer in a high-tech AI environment. She outlined a clear pathway of continuous learning, starting from foundational AI and ML knowledge, advancing through data science, and mastering AI testing tools. Richa underscored the importance of not only learning but also unlearning to stay agile and effective in managing AI systems. Her perspective highlighted the need for adaptability and ongoing skill development to keep pace with the rapidly changing AI landscape.

Moving on, Subba Ramaswamy boiled down the essence of testing AI systems into three actionable steps: mastering prompting, pursuing courses in AI, ML, and data, and focusing on responsible AI. She emphasized that these are key areas for anyone involved in AI testing and stressed the importance of leadership in ensuring that AI systems are developed and deployed ethically. Subba’s insights underscored the critical role of continuous education and ethical considerations in AI.

Lastly, Vikul took a broad perspective, emphasizing that AI is now as essential as Java once was, permeating every industry. He highlighted his organization’s efforts to integrate AI education into their core training, establishing a clear career path that includes roles like Data Science Engineer in Test (DSET).

Vikul also shared their hands-on approach to learning by providing Docker containers with local LLMs, allowing testers to experiment and learn practically. He emphasized that AI education isn’t limited to engineers; leaders at all levels are actively learning and setting an example for their teams.
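
As a flavor of that hands-on setup, here is a hedged sketch of querying a locally hosted model. It assumes an Ollama container is running with its default REST API on port 11434 and a model pulled as "llama3"; Vikul did not specify the exact stack, so treat these details as assumptions.

```python
# Hedged sketch: querying a local LLM, assuming an Ollama container
# exposes its default REST API (e.g. started with
# `docker run -p 11434:11434 ollama/ollama`).
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(ask_local_llm("Suggest three edge cases for a date-picker widget."))
```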

The key takeaway from this panel is the vital need for continuous learning and adaptation in the AI era. As AI becomes central across industries, quality engineers and testers must acquire new skills in AI, ML, and data science while unlearning outdated practices. Ethical considerations, hands-on training, and a blend of technical and creative skills are essential to succeed in an AI-driven future.

If this panel discussion didn’t answer all your questions, feel free to drop them on the LambdaTest Community.

Author’s Profile

LambdaTest

LambdaTest is a continuous quality testing cloud platform that helps developers and testers ship code faster.
