A GitHub Action that lets you prompt AI models directly in your workflows.
```yaml
- uses: FidelusAleksander/prompt-action@v1
  with:
    prompt: 'What is the meaning of life?'
```
```yaml
- uses: FidelusAleksander/prompt-action@v1
  with:
    system-prompt: 'You are Gilfoyle from Silicon Valley.'
    prompt: 'Tell me about your latest project.'
```
You can ensure the model returns data in a specific format by providing a JSON Schema.
```yaml
- uses: FidelusAleksander/prompt-action@v1
  id: prompt
  with:
    prompt: |
      Will humanity reach Mars by 2035?
    response-schema-file: path/to/your-schema.json

- name: Use the output
  run: |
    echo "Response: ${{ fromJSON(steps.prompt.outputs.text).response }}"
    echo "Confidence: ${{ fromJSON(steps.prompt.outputs.text).confidence }}"
    echo "Tags: ${{ fromJSON(steps.prompt.outputs.text).tags }}"
```
Example schema:

```json
{
  "type": "object",
  "properties": {
    "response": {
      "type": "string",
      "description": "The main response text"
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Confidence level from 0 to 1"
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "description": "Relevant tags or categories"
    }
  },
  "required": ["response", "confidence", "tags"],
  "additionalProperties": false
}
```
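For illustration only, a model response conforming to this schema might look like the following (the values are invented):

```json
{
  "response": "Crewed Mars missions by 2035 are plausible but far from certain.",
  "confidence": 0.6,
  "tags": ["space", "prediction"]
}
```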
You can create dynamic prompts using `{{ variable }}` syntax together with the `vars` parameter.
```yaml
- uses: FidelusAleksander/prompt-action@v1
  with:
    system-prompt: |
      You are a {{ language }} expert translator.
      You will be provided with text to translate to {{ language }}.
      Respond with nothing but the translated text.
    prompt-file: README.md
    vars: |
      language: Spanish
```
For more advanced templating features like loops, conditionals, and filters, see the Nunjucks templating documentation.
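As an illustrative sketch (the file name, variable names, and list values below are hypothetical, and it assumes `vars` accepts YAML lists), a reusable prompt file could loop over a list with Nunjucks:

```
Summarize each of the following files:
{% for file in files %}
- {{ file }}
{% endfor %}
```

with a corresponding step such as:

```yaml
- uses: FidelusAleksander/prompt-action@v1
  with:
    prompt-file: prompt.md
    vars: |
      files:
        - src/main.js
        - src/utils.js
```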
> [!TIP]
> Variable templating makes the most sense when using the `prompt-file` or `system-prompt-file` inputs, as it allows you to maintain reusable prompt templates with dynamic content.
This action requires, at minimum, the following permissions:
```yaml
permissions:
  contents: read
  models: read
```
| Input | Description | Required | Default |
|---|---|---|---|
| `prompt` | Text that will be used as the user prompt | No* | - |
| `prompt-file` | Path to a file containing the user prompt | No* | - |
| `token` | Personal access token | No | `${{ github.token }}` |
| `model` | The AI model to use. See available models | No | `gpt-4o` |
| `system-prompt` | Text that will be used as the system prompt | No | `"You are a helpful assistant."` |
| `system-prompt-file` | Path to a file containing the system prompt | No | - |
| `response-schema-file` | Path to a file containing the response JSON Schema for structured outputs | No | - |
| `vars` | YAML-formatted variables for Nunjucks variable substitution in prompts | No | - |

\* Either `prompt` or `prompt-file` must be provided.
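Putting several inputs together, a step might look like the following sketch (the step `id`, model choice, and file path are illustrative, not prescribed by the action):

```yaml
- uses: FidelusAleksander/prompt-action@v1
  id: review
  with:
    model: gpt-4o-mini
    system-prompt-file: .github/prompts/reviewer.md
    prompt: |
      Review the latest changes for style issues.
```

The response is then available to later steps as `${{ steps.review.outputs.text }}`.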
| Output | Description |
|---|---|
| `text` | The AI's response to your prompt |