Maximising Efficiency: The Power Of Generative AI Code Assistants for Growth

David Sugden, Head of Engineering at Axiologik, discusses how adopting Generative AI code assistants can lead to effective business growth.

Many organisations have established Developer Experience (DX) teams, tasked with understanding what drives developer satisfaction and productivity within their company and taking a deep dive into several dimensions spanning organisational culture, tools, and processes.

And with good reason.

Mature organisations are reaping the benefits from optimising the ‘how’ of software delivery and driving ‘fast flow’, focussed on outcomes over output.

Back in 2021, the Good Day Project identified metrics that helped their engineers define ‘flowing days’ and ‘disrupted days’ – an engineer losing their state of flow saw that day’s productivity drop to just 14%, and their quality of work also suffered. The study identified multiple factors that drive flow state, including the need to balance ‘good’ and ‘bad’ interruptions.

Over a similar timeframe, Generative AI developer and code assistants have become integral to many organisations’ developer toolkits and, in the right conditions, can reduce time-to-market, increase quality and security, and help sustain team morale.

Let’s discuss some of the benefits of adopting Generative AI code assistants, and how to determine which tool offers the most effective proposition and value for money.

Benefits of Generative AI Coding Assistants

Time Efficiency

As we know, the complexity of development work varies wildly. Step forward tools that can reduce the repetitive work, and in so doing create more time and mental bandwidth for creative, higher-value, and more complex tasks. Rapid development based on natural language documentation, code block autocompletion, and boilerplate implementations allows developers to focus their time and effort on the ‘last mile’.

Improves Code Quality

Engineers also report improvements in documentation, which aids understandability and readability in the longer term. Tools achieve this both retrospectively and proactively. For legacy code, they help explain selected modules, classes, or code blocks and can generate natural language descriptions of their functionality and purpose. For new code, as part of the code generation cycle, developers write a comment block that describes the intended functionality in natural language, and the coding assistant autogenerates the code – thus promoting ongoing code documentation as part of all new feature development.

Being able to identify, highlight, and resolve common coding errors will improve the code quality from both individual contributions and more generally across the codebase. A reduction in bugs and coding errors also ensures that issues are detected before they can become a problem, thus saving time to debug later – the sooner issues are found the cheaper they are to fix, so engineering teams are constantly striving to shift left and reduce feedback loops.

And where a bug has already leaked, these tools can help identify the root cause and will propose fixes.

Automatic Testing

These generative AI tools can also be adept at generating complete and complex test cases, suggesting input parameters and expected output values based on the method signature, code, context, and documented functional intent. This includes edge cases, boundary conditions, null checks, and other conditions that might be difficult to identify manually.
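As a sketch of what this looks like in practice (a hypothetical function and assistant-style tests, not output from any particular tool), an assistant prompted with a signature and docstring might propose cases covering the typical path, boundaries, and an inverted-input edge case:

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    # Typical case
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    # Boundary conditions
    def test_at_lower_bound(self):
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(99, 0, 10), 10)

    # Edge case that is easy to miss manually
    def test_inverted_bounds_raise(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)
```

The value is less in any individual case than in the breadth: boundary and error-path cases are exactly the ones developers tend to skip when writing tests by hand.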

Identifying Security Flaws

Finally, developers can use a Generative AI coding assistant to identify and suggest fixes for security flaws in code blocks. The tools achieve this by making recommendations based on training-set examples that did not exhibit the same flaw, and by iteratively pushing code through SAST scanning engines.
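A common example of the kind of flaw such tools flag (an illustrative sketch using the standard library’s sqlite3 module, not a transcript of any specific assistant):

```python
# Illustrative only: a common injection flaw an assistant might flag,
# and the parameterised fix it might suggest.
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged: string interpolation allows SQL injection
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safe(conn, username):
    # Suggested fix: parameterised query, so input is treated as data
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()
```

With a malicious input such as `' OR '1'='1`, the unsafe version matches every row, while the parameterised version correctly returns nothing.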

What Should We Be Concerned About?

What might you need to be worried about when adopting Generative AI code tools? Firstly, let’s dispel the myth they are replacements for humans.

The first area to highlight is security risks; specifically, whether generated code can expose sensitive information or introduce vulnerabilities. You should always review and scan (SAST) the generated code thoroughly. Security vendors provide IDE extensions that will identify security flaws and suggest fixes.

Secondly, consider whether the code that trained the model was allowed to be used for such purposes. There is possible legal exposure around licences, as well as possible matches with public code or code that was in the training set. You should always take the same precautions as you would with any code that you did not independently produce, including precautions that ensure its suitability.

Finally, suggestions may appear to be valid but may not actually be semantically or syntactically correct (aka ‘hallucinations’). Additionally, code may compile but not accurately reflect the intentions of the developer. You should carefully review and test the generated code, particularly when dealing with critical or sensitive applications, and ensure that the code adheres to your best practices, design patterns, architecture, and styles.

Determining Your Code Assistant

While not exhaustive, this is a set of core factors for organisations to focus on when determining which coding assistant tool to adopt.

Supported IDEs – It is important that the tool integrates into common development environments (IDEs) such as VS Code. The extent to which the integration is seamless will impact workflow efficiency, with less context switching and ultimately a more productive coding experience for the developer.

Supported Languages & Frameworks – While there are several dedicated tools for niche languages, organisations are more likely to favour tools that support a wide range of programming languages and frameworks, rating the importance of versatility. Tools that support developers with multi-language projects are likely to be more powerful and valuable, especially with the prevalence of diverse development scenarios, multi-stack services, and monorepos.

Core Features, Accuracy & Relevance – Given that these tools are primarily used to suggest code snippets and solve problems, the accuracy and relevance of the suggestions is paramount. The tools should be able to understand the problem and provide relevant and syntactically correct solutions – this includes minimising ‘hallucinations’ and avoiding suggestions in the wrong language.

Security & Privacy – Code suggestions can potentially expose sensitive information or introduce vulnerabilities – some tools provide security scanning capabilities for suggestions, and organisations should assess how highly they rate these additional features. Equally, and especially in a corporate environment, the tool must handle code and data securely, ensuring that code does not leave your environment and that proprietary data is not leaked back into training the model.

Legal & Compliance – While tools generate new code in a probabilistic way, a suggestion may match code in the training set. Models that are trained on permissive open-source repositories will minimise the risk of legal and license concerns, as will checking suggestions against public and open-source repositories for matches. Finally, the capability to filter out suggestions that resemble open-source code will provide additional reassurance.

Cost & Licensing – The cost of any tools may be a deciding factor for many organisations, especially those with small teams, and start-ups. While many tools offer free options for individual developers, most are priced on a per-user/seat basis.

Final Thoughts

In addition to the themes listed above, organisations should also consider response time – that is, the speed at which the tool returns suggestions – documentation and/or community knowledge, how frequently new releases and bug fixes are rolled out, and support agreements.

Even with just this small taster of the benefits, it’s little wonder that developers using Generative AI-based tools in their workplace have been found to be twice as likely to report overall happiness, fulfilment, and a state of flow.