This repository contains the replication package for the paper:
Exploring Direct Instruction and Summary-Mediated Prompting in LLM-Assisted Code Modification
VL/HCC 2025 — IEEE Symposium on Visual Languages and Human-Centric Computing
The arXiv preprint is available here.
The package is organized as follows:
- `pasta-plugin/`: Source code and resources for the IntelliJ plugin PASTA used in the study. The plugin is also available on the JetBrains Marketplace.
- `qual-analysis/`: Materials for qualitative analysis.
  - `codebook.pdf`: Codebook for interview analysis.
  - `coded_quotes.csv`: Anonymized, coded interview segments.
- `quant-analysis/`: Materials for quantitative analysis.
  - `requirements.txt`: Python dependencies. After installing them, all Jupyter notebooks can be run successfully.
  - `analysis/`: Jupyter notebooks, data files, and result figures for quantitative analysis.
    - `data/`: Task outcomes, prompt strategy selections, and questionnaire responses (e.g., NASA-TLX, perceived utility, and self-reported experience).
    - `figures/`: All result figures presented in the paper, generated by the notebooks.
    - `analysis.ipynb`: Main analysis notebook for all quantitative results reported in the paper.
    - `utility.ipynb`: Focused analysis of Likert-scale utility ratings.
  - `interactions/`: JSON files of all participants' interaction logs.
  - `transcription/`: Audio transcription scripts.
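The quantitative notebooks can be reproduced after installing the dependencies; a minimal setup sketch, assuming a standard Python 3 toolchain with `venv` and `pip` (a virtual environment is optional but keeps the dependencies isolated):

```shell
# Sketch of setting up the quantitative analysis environment
# (assumes Python 3 with the venv module and pip are available).
cd quant-analysis
python3 -m venv .venv                # create an isolated environment
. .venv/bin/activate
pip install -r requirements.txt     # install the listed dependencies
jupyter notebook                    # then open the analysis notebooks
```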
- `study-protocol/`: Study protocol documents in PDF format, including all questionnaires and the study procedure used by the facilitator (e.g., the introduction to PASTA and scripts for the semi-structured interviews).
- `study-tasks/`: Programming tasks used in the study.
  - `buggy-code/`: Initial code given to participants.
  - `ground-truth/`: Reference solutions.
  - `task-descriptions/`: Task instructions and related images.
If you use or reference this package, please cite our paper:
```bibtex
@inproceedings{tang2025exploring,
  title={Exploring Direct Instruction and Summary-Mediated Prompting in LLM-Assisted Code Modification},
  author={Tang, Ningzhi and Smith, Emory and Huang, Yu and McMillan, Collin and Li, Toby Jia-Jun},
  booktitle={Proceedings of the 2025 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)},
  year={2025},
  organization={IEEE}
}
```

For questions or collaboration inquiries, please contact Ningzhi Tang at [email protected] or [email protected].
This research was supported in part by an AnalytiXIN Faculty Fellowship, an NVIDIA Academic Hardware Grant, a Google Cloud Research Credit Award, a Google Research Scholar Award, and NSF grants CCF-2211428, CCF-2315887, and CCF-2100035. Any opinions, findings, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.