Enhancing engineering productivity

10 questions about real-world application of advanced AI at Ascendion

Ascendion is already engineering and deploying advanced AI systems, and has been for several years. The Ascendion AVA (A.AVA) engineering platform uses advanced AI to shape how we practice software engineering and the impact we deliver to clients. We see generative AI as a way to enhance the enterprise, boost productivity, and embrace innovation in a way that fits the business.

In the near term, we are getting loads of questions about what artificial intelligence means for the business of software engineering. This is where the fun is, and it’s where we work every day.

Generative AI isn’t just about implementing new technology; it’s about weaving it into the very fabric of the business.


As a company configured for our post-pandemic digital economy, our strategy is tightly coupled with the use of new AI tools to improve engineering productivity, security, client transparency, and software quality.

Our enabling platform for software product engineering is something we call A.AVA. The platform offers radical transparency to serve clients and the engineering ecosystem. It can be deployed across integrated teams, including clients, our own Ascender employees, and other ecosystem partners serving clients.

This helps build trust, solve problems early, and elevate engineering quality. We have already created a highly granular view of our engineering processes and embedded it into our A.AVA platform. Artificial intelligence was then built into our enabling engineering platform to improve how we deliver experience engineering, platform engineering and operations, data engineering, and quality engineering.

Our strategy, simply, is to continue riding the growing wave of transformational change offered by our next generation of machine learning (ML)-based platforms and tools that enhance the productivity of our engineers.

We have found that A.AVA — our “system of intelligence” for software engineering — delivers 50% to 60% higher program efficiency and dramatically improved transparency along the journey. And we are just getting started.

With A.AVA as our foundation, we are exploring and experimenting to find where next-generation systems (e.g., ChatGPT, Bard, and others) can be injected to improve productivity, security, quality, and transparency at the specific engineering task level.

The Ascendion AVA platform comprises multiple models that are already in use. Here are some of the primary elements:

  • OneView drives engineering excellence through best-practices-driven software delivery (100+ encoded practices) and benchmarking for continuous improvement, with engineering performance encoded as roughly 600 KPIs. OneView delivers near real-time performance information on all stages of software delivery, aggregating data from existing SDLC and STLC tools to provide role-based visibility, benchmarking for leaders, and multiple dashboards.
  • Intelligent Test Automation (ITA) supports Ascendion’s Quality Engineering solutions through requirement analysis, test design, test planning, and test execution. We apply natural language processing, ML algorithms, and process orchestration to automate testing across the end-to-end engineering process.
  • Intelligent Accessibility Framework (IAF) is an auto-remediation system that improves content access for software users with disabilities. IAF uses AI and ML to improve user experiences, help businesses meet legislative requirements and standards (WCAG 2.1 (A, AA) and ADA), and achieve compliance certification in experience engineering solutions.
  • MLOps streamlines machine learning operations and governance processes to shorten development cycles. This capability helps implement an automated MLOps framework for on-premise, cloud, and hybrid ML projects.
  • DataSwitch is an insight-driven intelligent engineering platform that accelerates data modernization and management through no-code/low-code data engineering.
  • Intelligent Cloud Economics (ICE) is an intelligent digital engineering platform that maximizes cloud-related value as we accelerate the pace of change across business, IT, and operations. ICE uses data and AI to provide a single-point dashboard of cloud economics with real-time views of cloud spend, plus recommendations, anomaly detection, compliance, cost and utilization visibility, unit economics, governance and controls, and cloud cost optimization (usage and rates).
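To make the OneView-style roll-up concrete, here is a minimal sketch of aggregating per-tool KPI readings against benchmarks. The KPI names, values, and thresholds below are hypothetical illustrations, not Ascendion's actual schema (which spans roughly 600 KPIs):

```python
from statistics import mean

# Hypothetical per-tool KPI readings pulled from SDLC/STLC systems.
readings = {
    "build_success_rate": [0.92, 0.88, 0.95],   # e.g., from a CI tool
    "test_coverage": [0.71, 0.78, 0.74],        # e.g., from a test tool
    "defect_escape_rate": [0.04, 0.06, 0.05],   # e.g., from a tracker
}

# Hypothetical benchmarks: (target, higher_is_better).
benchmarks = {
    "build_success_rate": (0.90, True),
    "test_coverage": (0.80, True),
    "defect_escape_rate": (0.05, False),
}

def dashboard(readings, benchmarks):
    """Roll each KPI up to an average and a pass/fail vs. its benchmark."""
    rows = {}
    for kpi, values in readings.items():
        avg = mean(values)
        target, higher_is_better = benchmarks[kpi]
        ok = avg >= target if higher_is_better else avg <= target
        rows[kpi] = {"avg": round(avg, 3), "target": target, "meets": ok}
    return rows

view = dashboard(readings, benchmarks)
```

A real platform would stream these rows into role-based dashboards; the sketch only shows the benchmark comparison at the core of that view.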

We’ve engineered A.AVA modules while leveraging an entire ecosystem of available tools we commonly deploy to manage different engineering work processes in line with metrics and best practices.

As generative AI-based tools continue to develop rapidly, we keep testing new tools (e.g., GitHub Copilot) to find specific ways they can enhance our engineers’ work. This is, of course, a rapidly developing opportunity in which we are actively engaged.

We measure everything—currently around 600 available Key Performance Indicators (KPIs)—and have embedded the logic of 300 best practices into A.AVA to ensure engineering quality, transparency, security, and productivity across the software development lifecycle.

For example, A.AVA helped deliver outcomes to clients while improving engineer efficiency with data modernization. In the quality engineering space, A.AVA drives significant automation along the lifecycle.

The A.AVA platform is already deploying certain generative AI/ML capabilities to enhance developer productivity and improve overall practice and experience. For example, we are leveraging OpenAI’s Curie model for our coding standard validator. We are actively exploring, every day, ideal use cases for ChatGPT in our engineering work (including digital talent orchestration). We are leveraging ChatGPT and generative AI more broadly for the following, to name a few:

  • Creating stories from epics
  • Generating documentation from code
  • Converting code from one language to another
  • Creating code from scratch
  • Testing
  • Generating test data
  • Knowledge management

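As an illustration of the first use case above, an epic can be turned into a prompt for a generative model. The prompt wording and the helper below are hypothetical sketches, not A.AVA's actual implementation, and the model call itself is omitted:

```python
def build_story_prompt(epic: str, max_stories: int = 5) -> str:
    """Assemble a prompt asking a generative model to split an epic
    into user stories with acceptance criteria."""
    return (
        f"Break the following epic into at most {max_stories} user stories.\n"
        "Use the format: 'As a <role>, I want <goal>, so that <benefit>.'\n"
        "Add 2-3 acceptance criteria per story.\n\n"
        f"Epic: {epic}"
    )

# In practice this text would be sent to a model endpoint
# (e.g., the OpenAI API); here we only build the prompt.
prompt = build_story_prompt(
    "Customers can manage saved payment methods in their profile."
)
```

The value of templates like this is consistency: every epic is decomposed with the same story format and acceptance-criteria expectations, which makes the generated output easier to review.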
Some of these AI tools are fresh from the foundry and still being figured out. Aren’t clients concerned about security, privacy, and compliance?

We are seeing significant interest from clients in how we leverage applied intelligence in the A.AVA platform. Enterprise technology leaders, quite naturally, have concerns about every emerging technology, including generative AI systems. This is a good sign and we welcome the scrutiny needed to keep enterprise clients running smoothly and safely.

Client concerns are unsurprisingly related to data privacy and security. In the near term, we anticipate a period of pilots and exploration as we, together with clients, explore the optimal use cases for specific generative AI systems like those provided by OpenAI. Ascendion is committed to ensuring that safe computing practices regarding security, privacy, and compliance are firmly maintained regardless of the enabling systems being deployed. We also fully expect rapid development driven by tool providers — e.g., OpenAI, Microsoft, etc. — who have a keen interest in ensuring that their enabling systems meet enterprise requirements.

We fully expect to do the disrupting over the next five years, and applied AI systems will be a critical element of our ability to do this.

Yes! We deploy a talent model that we call Ascendion Circles. A Circle is a formalized, well-organized group of people with similar interests in becoming masters of an engineering craft (e.g., cloud, data, quality engineering, AI). Circles are designed to:

  • Enable learning, share knowledge, support each other, and expand the capabilities and careers of members
  • Inject new engineering practices, such as the use of enhancing AI systems, into our engineering practice
  • Create mastery of craft with mentorship, intellectual property (IP) development, events, experiential practice, celebration, and more

In addition to these communities of practice, we offer instructor-led training. We sponsor training for everyone in Ascendion on how to leverage these kinds of systems to work more productively. In the near future, we will continue hiring engineers and will deliver training and mentorship to ensure they can use software to build better software. For us, this is a continuation of what we have already started.

We are in the early days of adopting our AI/ML-based toolsets. We are leveraging these for coding, documentation, and testing, among other areas of software engineering. Based on early insights, we are currently seeing a 10% to 15% improvement in coding productivity overall. We will continue to use these generative AI tools across engagements and expect to have comprehensive data within a couple of quarters.

Code quality, IP protection, and encoded biases are real challenges to manage with AI/ML coding assistants. What we keep in mind is that these challenges apply equally to human-only generated code and to enhanced-human generated code. It is true that coding assistants can introduce risk, but, as we are finding, they can also decrease risk if properly and wisely deployed and managed.

For example, A.AVA OneView helps us understand, at a very granular level, code quality and engineer productivity. A.AVA DataSwitch helps ensure effective data migration, also lowering business risk.

That said, the material new risk introduced by these “bots” is generally related to unintended bias. We know the bots can, for example, inadvertently and unpredictably introduce biases into text and imagery. Here again, our model of enhancement vs. pure automation is a risk-mitigation strategy. Our delivery model includes a “human in the loop” to ensure we are decreasing risks related to unintended consequences.
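A human-in-the-loop delivery model can be sketched as a simple gate: no AI-generated change ships without explicit reviewer approval, and higher-risk changes get extra scrutiny. The risk signal and routing rules below are hypothetical illustrations, not Ascendion's actual process:

```python
from dataclasses import dataclass

@dataclass
class GeneratedChange:
    """An AI-assisted code or content change awaiting review."""
    description: str
    touches_user_data: bool   # hypothetical risk signal
    reviewer_approved: bool = False

def review_gate(change: GeneratedChange) -> str:
    """Route every AI-generated change through a human decision.
    Nothing ships without explicit reviewer approval."""
    if not change.reviewer_approved:
        return "blocked: awaiting human review"
    if change.touches_user_data:
        return "approved: second reviewer required before release"
    return "approved: ready to merge"

status = review_gate(
    GeneratedChange("regenerate docs from code", touches_user_data=False,
                    reviewer_approved=True)
)
```

The design point is that approval is opt-in, never default: the gate fails closed, so a change the reviewer has not seen can never reach production.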

Anyone asserting they’ve figured out how to mitigate all risks related to deploying AI/ML coding assistants is perhaps a bit exuberant. Our belief is that we can ensure the optimal software and business impact by focusing on the generally “known” risks related to engineering great software — quality, security, IP protection, etc. — with conventional risk management tools and tactics, coupled with keeping a human involved to decrease negative unintended consequences.