AutoDev provides two types of assistance functions:
- auto-completion, i.e. inferring completions from the surrounding context while editing,
- assistance functions that act on code snippets, which use an instruction-following model to reason about a code snippet and either present an analysis of it or act directly on the code in question.
Integration into JetBrains IDEs is provided for both types of functions via an IDE plugin that interacts with an inference service. An overview of the main components and their interactions is shown in Figure 1.
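To make the plugin/service interaction concrete, the sketch below builds the kind of JSON payload an IDE plugin might send to the inference service for a completion request. The field names (`prefix`, `suffix`, `language`, `max_tokens`) are illustrative assumptions, not AutoDev's actual wire format.

```python
import json


def build_completion_request(prefix: str, suffix: str, language: str) -> str:
    """Assemble a hypothetical completion request body for the inference service."""
    payload = {
        "prefix": prefix,        # code before the cursor
        "suffix": suffix,        # code after the cursor (for fill-in-the-middle)
        "language": language,    # lets the service pick a suitable model/tokeniser
        "max_tokens": 64,        # cap on the length of the generated completion
    }
    return json.dumps(payload)


request_body = build_completion_request("def add(a, b):\n    return ", "", "python")
print(request_body)
```

In a real deployment the plugin would POST such a body to the service endpoint and insert the returned completion at the cursor.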
For auto-completion models, AutoDev supports
- fine-tuning your own model (optionally using LoRA),
- quantitative and qualitative analysis of completion quality,
- optimisation for inference as well as performance benchmarking.
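The LoRA option mentioned above rests on a simple idea: rather than updating a full weight matrix during fine-tuning, one learns a low-rank additive update. A minimal NumPy illustration of that update (dimensions, rank, and scaling factor are arbitrary choices for the example, not AutoDev defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden size d, LoRA rank r (r << d)

W = rng.standard_normal((d, d))          # frozen pretrained weight matrix
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # B starts at zero: the update is initially a no-op
alpha = 4.0                              # scaling hyperparameter

delta = (alpha / r) * (B @ A)            # low-rank update learned during fine-tuning
W_adapted = W + delta

# Only A and B are trained: 2*d*r parameters instead of d*d.
print(W.size, A.size + B.size)           # prints: 64 32
```

Because `B` is initialised to zero, the adapted model starts out identical to the base model, and training only ever touches the small factors `A` and `B`, which is what makes LoRA fine-tuning cheap.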
Auto-completion models are always hosted locally, within the inference service.
For the other assistance functions, which are built on instruction-following models, you can use either a (fine-tuned) open-source model or a proprietary model such as OpenAI’s ChatGPT.
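One common way to support both backends behind a single interface is a small abstraction like the following. This is a generic sketch of the pattern, not AutoDev's actual API; all class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod


class InstructionModel(ABC):
    """Common interface so open-source and proprietary backends are interchangeable."""

    @abstractmethod
    def complete(self, instruction: str, code: str) -> str:
        """Apply an instruction (e.g. 'explain', 'refactor') to a code snippet."""


class LocalModel(InstructionModel):
    # In practice this would wrap a locally hosted (fine-tuned) open-source model.
    def complete(self, instruction: str, code: str) -> str:
        return f"[local] {instruction}: {code[:20]}"


class OpenAIModel(InstructionModel):
    # In practice this would call a proprietary API such as OpenAI's.
    def complete(self, instruction: str, code: str) -> str:
        return f"[openai] {instruction}: {code[:20]}"


def get_model(use_proprietary: bool) -> InstructionModel:
    """Select a backend; the caller only ever sees the InstructionModel interface."""
    return OpenAIModel() if use_proprietary else LocalModel()


model = get_model(use_proprietary=False)
print(model.complete("Explain", "def f(): pass"))
```

The benefit of this design is that the rest of the system (plugin, inference service) is unaware of which backend is in use, so swapping models is a configuration change rather than a code change.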
AutoDev was developed as part of an endeavour to investigate the potential of LLMs for the development of custom coding assistants. A blog post provides further details and summarises some of our findings.