
Local Environment Setup for BFD Development


BFD Development Setup Guide

Developing and running BFD locally requires a number of pieces of software. This guide walks through installing each of them so you can quickly get up to speed on contributing to BFD.

Mac Instructions

Most developers who contribute to BFD use macOS, so this setup guide focuses on Mac-centric setup steps. If you need Windows-specific help contributing to BFD, please contact the BFD team for support.

Homebrew

Homebrew is a command-line tool for easily downloading and installing scripts, software, and other artifacts on macOS.

Install it by running the following in your terminal:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Homebrew makes installation very easy with simple commands like brew install X, where X is the name of the software/package you want to download. Homebrew will be used for many of the installation instructions in this guide, since it simplifies the installation process.

Java

BFD currently uses Java 21. There are many ways to get a Java JDK installed on your machine. BFD recommends the Amazon Corretto JDK, to align with our deployed JDK versions: https://docs.aws.amazon.com/corretto/latest/corretto-21-ug/macos-install.html

Another option is to use OpenJDK via brew:

brew install openjdk@21

Either JDK should be functionally equivalent, but Corretto may include minor bugfixes and performance improvements over OpenJDK.

After installing Java, set your JAVA_HOME environment variable in the terminal:

export JAVA_HOME="$(/usr/libexec/java_home)"
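
To avoid re-exporting this in every new terminal, you can persist it in your shell profile (a minimal sketch, assuming the default zsh shell and Java 21):

# append to ~/.zshrc so every new shell picks it up
echo 'export JAVA_HOME="$(/usr/libexec/java_home -v 21)"' >> ~/.zshrc
source ~/.zshrc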

Maven

Maven is used to manage building the application and artifacts/dependencies. Installing it is easy:

brew install maven

Once installed, you can test it with mvn -version; the output should look something like this (illustrative only; your versions and paths will differ):
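
Apache Maven 3.9.6
Maven home: /opt/homebrew/Cellar/maven/3.9.6/libexec
Java version: 21.0.2, vendor: Amazon.com Inc., runtime: /Library/Java/JavaVirtualMachines/amazon-corretto-21.jdk/Contents/Home
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "14.2", arch: "aarch64", family: "mac"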

You'll need to configure your Maven toolchain to point to your recently installed Java JDK. Configure your ~/.m2/toolchains.xml file to look like the following (change the jdkHome and version as needed to match your setup):

<toolchains>
  <!-- JDK toolchains -->
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>21</version>
      <vendor>sun</vendor>
    </provides>
    <configuration>
      <jdkHome>/Library/Java/JavaVirtualMachines/amazon-corretto-21.jdk/Contents/Home</jdkHome>
    </configuration>
  </toolchain>
</toolchains>

gRPC

If you're on a Mac with Apple Silicon, you may run into some build issues with gRPC.

At the time of writing, grpc-java doesn't publish native binaries for aarch64. As a result, the x64 binaries will be downloaded even though the executable name may say aarch64. To ensure you can run these, make sure you have Rosetta installed. You can install Rosetta via the CLI like this:

softwareupdate --install-rosetta
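
To sanity-check that Rosetta is working afterwards, you can force an x86_64 execution (a sketch; any Intel binary would do):

# forces the x86_64 slice to run; succeeds only if Rosetta can translate it
arch -x86_64 /usr/bin/true && echo "Rosetta is working"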

Python

Python is a programming language used in ancillary BFD scripting and tool installations.

Your Mac may already ship with Python 3; if not, it's another easy brew install:

brew install python
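
You can verify which Python 3 you're getting afterwards:

python3 --version   # should print Python 3.x
which python3       # Homebrew installs typically resolve under /opt/homebrew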

🔧 Git/Github

You'll need Git installed locally as well before downloading or working on any code.

brew install git

After installing Git, log into your GitHub account (or create one if you don't have one). It's a good idea at this point to set your real name in your GitHub profile if it isn't already.
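
While you're at it, setting your Git identity locally keeps your commits attributed correctly (substitute your own name and GitHub email):

git config --global user.name "Your Real Name"
git config --global user.email "you@example.com"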

BFD Repo

Once you have Git installed, you'll need to clone the BFD repository.

First, create a directory to store your Git projects:

mkdir -p ~/git/

Once you have this directory, clone the BFD repository into it:

git clone git@github.com:CMSgov/beneficiary-fhir-data.git ~/git/beneficiary-fhir-data

Once you've cloned the repo, you'll need to do a few important setup steps:

  • Install the pre-commit and pre-push hooks by building the repo from the apps folder (we can skip tests for now). These ensure clean, valid commits before pushing:
cd ~/git/beneficiary-fhir-data/apps
mvn clean install -DskipITs -DskipTests=true -Dmaven.build.cache.enabled=false

Note: if you ever delete and re-clone the repo, this step will need to be run again, as these hooks are dropped in the repo as a local file.

  • Change to the apps/ directory and run mvn clean install to run all the tests and ensure your Docker environment is set up correctly and everything builds cleanly from Maven.

    • If tests fail (particularly E2E tests), ensure Docker is running and that no containers are already up. The tests create their own containers when needed and clean them up afterwards; pre-existing containers can sometimes cause port conflicts, and if Docker isn't running the end-to-end tests will fail, since they use containers to run the servers.
  • It may also be wise to run an E2E test in your IDE to ensure that everything works from within the IDE too.
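
To confirm the hooks from the first step were installed, you can list the repo's local hooks directory (assuming the standard .git/hooks location):

# pre-commit and pre-push should appear here after the build
ls ~/git/beneficiary-fhir-data/.git/hooks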

Although rarely needed, you may wish to test that you can start up a local server/db; see Testing Your Local Environment Setup for instructions on how to do so.

👥 Contributing to the Repo

You'll need to set up a few things to connect your environment to the CMS repository on GitHub:

SSH — This key is used to encrypt the data between your computer and github. Adding a new SSH key to your GitHub account

GPG - This key tells GitHub you are who you say you are when you make a commit. See the GPG section below for installing GPG and generating a key. Adding a new GPG key to your GitHub account

PAT (Personal Access Token) - This must be used in place of a password when authenticating your push actions in the CMS repo. Creating a personal access token

Two-factor authentication is required by the CMS organization for access. Google Authenticator (phone app) is recommended for this and any other 2FA you use at Ad Hoc. Configuring two-factor authentication

✅ Committing

Committing to the repository requires the following tools:

brew install gitleaks
brew install shellcheck

These will be used by the pre-commit hooks to catch secret leaks and shell script issues.

🔐 GPG

You’ll need GPG to generate a key to sign commits with.

brew install gpg
gpg --full-generate-key

Hit return/enter to accept default values. When prompted for an email, use the email associated with your GitHub account.
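
Once the key is generated, you can look up its ID and tell Git to sign your commits with it (standard git/gpg commands; replace the key ID with your own):

# the key ID is the hex string after the key type (e.g. rsa4096/<ID>)
gpg --list-secret-keys --keyid-format=long

# configure git to sign every commit with that key
git config --global user.signingkey <YOUR_KEY_ID>
git config --global commit.gpgsign true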

🔐 Accesses

To get access to push to the CMS GitHub repo:

Once added to the organization, navigate to the team link above and click the "Request Team Access" button at the upper right corner of the roster.

Once the organization is added to your account, you're included in the above group, and GitHub is set up with the appropriate keys, you should be able to push to the main repo.

AWS

Amazon Web Services (AWS) is heavily used in the project. The command line tool (CLI) lets you communicate with AWS through simple terminal commands and is occasionally useful for transferring files to and from your local machine during development.

Follow the instructions here to install the CLI: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

You should just need to download/run the PKG, but you can also install via the command line if you’d like.

Initially, you won't have an AWS account to connect to. Get an account by following the instructions under "VPN and AWS" here: https://confluence.cms.gov/pages/viewpage.action?pageId=302580951

Once you have an AWS account, you can log into the AWS console site, which is used to manage the databases, load files, and other aspects of the deployed application environments.

You'll want to properly set up your access key and 2FA in the AWS console as well:

  1. Log into AWS (and proceed to security credentials) via https://console.aws.amazon.com/iam/home?#/security_credentials
  2. Set up your two-factor authentication by following the steps here: Enabling a virtual multi-factor authentication (MFA) device (console) - AWS Identity and Access Management
     • It is recommended to use Google Authenticator for this.
     • Once set up, your device should appear at the bottom of the security credentials page with a unique device ID.
  3. Click "Create Access Key"
  4. Once clicked, it will create an access key and allow you to view the secret value once. Leave this window open (don't complete the setup of the key yet).

AWS uses a temporary credentialing system that restricts access via a token with a short expiry. The long-term credential is tied to your account details, while the other profile (your "short-term" profile) gets its credentials from a call made using the long-term credentials, which can only generate a short-term access token. Let's set up the files that will control these credentials locally.

Next, we'll set up aws-vault to store our AWS credentials and profiles:

  1. Install aws-vault: brew install --cask aws-vault
  2. Initialize the AWS profile files and folders: aws configure (fill in anything; it will soon be replaced, we're just getting the .aws directory set up)
  3. Run the first two scripts (sample generated aws-vault configuration and bundle installation) on the IAM database auth page to set up your profiles with aws-vault.
  4. Set your new default profile as an environment variable: export AWS_PROFILE=bfd
     This tells AWS which profile to use, which aligns with the profiles the script created.
  5. Run aws-vault add bfd to create a new keychain entry for your credentials.

Now that everything is set up, you can get a temporary token (and test things worked) by entering:

aws-vault exec bfd -- aws sts get-caller-identity

This will generate the short-term token under your cms profile in the credentials file. Note that this is, by design, a short-term token; after it expires (in about a day) you'll need to run the command again to generate a new one.

You will likely be prompted for your local machine password, and 2-factor authenticator value. If it worked, you should see output that prints your userId, Account, and arn value. From here you should be able to perform aws cli commands and the token generated will be automatically used until it eventually times out.
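
Any AWS CLI command can be run the same way while the token is valid; for example, to list the S3 buckets visible to your role:

aws-vault exec bfd -- aws s3 ls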

Note: You'll need to rotate your AWS credentials every 90 days. You can do this at any time by running aws-vault rotate bfd

KION Programmatic & AWS Access (Vault-Alternative and mandatory for greenfield)

Programmatic Access

Note: Your cloud access role must have access to generate API keys. This is granted by the KION Support team and you can request this via Jira ticket using similar steps found later in this document.

CMS Kion Documentation - Programmatic Access Setup

Programmatic access for KION has a few distinct steps. For programmatic access to work, you must already have a Cloud Access Role; the process for receiving one is covered above.

Install kion-cli

brew install kionsoftware/tap/kion-cli

Configure Kion-CLI at ~/.kion.yml

Configure kion in the file located in your home directory, ~/.kion.yml as below:

---
kion:
  url: https://cloudtamer.cms.gov
  username: ABCD # <- Your EUA
  idms_id: 2 # `idms_id` 2 maps to "CMS Cloud Services" in cloudtamer.cms.gov for EUA authentication (reportedly)

# Set the following to use firefox (because you really want to use multi-account containers)
browser:
  firefox_containers: true

# Favorites for use in e.g. a notional `foo-bar` favorite accessible on the command line with `kion f foo-bar`
favorites:
  - name: foo-bar # Just a friendly name for your own use
    account: <account id> # Use `kion stak` if you're unclear which account number you'd like
    region: us-east-1
    cloud_access_role: <Cloud Access role name> # Again, use interactive `kion stak` command to find the name of the access role,
                                                # e.g. for "Blue Button Application Admin (1005)", use "Blue Button Application Admin" sans "(1005)".
    browser: firefox # Use firefox

Once this is in place, try running kion s (or kion stak); you will be prompted for your EUA password. After authenticating, follow the TUI/CLI prompts to select your Kion Project, Account, and Cloud Access Role.

Configure your AWS CLI at ~/.aws/config (optional)

  • This isn't strictly necessary if you wrap your aws commands in a kion stak or kion fav foo-bar session...
  • This is just an example for your foo-bar favorite
[profile foo-bar]
region=us-east-1
cli_pager=
credential_process=/opt/homebrew/bin/kion favorite --credential-process foo-bar

Do familiarize yourself with options like --credential-process, which produces a JSON object for use in various applications outside of the AWS CLI.

Multi-Account Containers in Firefox

If you choose to use Firefox multi-account containers...

Install Firefox

brew install --cask firefox

Install Necessary Firefox Extensions

Install the Firefox Multi-Account Containers extension from the Firefox add-ons store.

Run AWS Commands!

Example Kion Commands:

# open the AWS console for the favorite defined in the config above
kion fav foo-bar
 
# generate and print keys for an AWS account
kion stak --print --account 121212121212 --car Admin
 
# start a sub-shell authenticated into an account
kion stak --account 121212121212 --car Admin
 
# start a sub-shell authenticated into an account via an account alias
# NOTE: account aliases are only supported on Kion versions 3.9.9 and 3.10.2 and up
kion stak --alias Prod --car Admin
 
# start a sub-shell using a wizard to select a target account and Cloud Rule
kion stak
 
# federate into a web console using a wizard to select a target account and Cloud Rule
# NOTE: Firefox users will have to approve pop-ups on the first run
kion console
 
# federate into a web console using an alias
# NOTE: Firefox users will have to approve pop-ups on the first run
# NOTE: account aliases are only supported on Kion versions 3.9.9 and 3.10.2 and up
kion console --alias Prod --car Admin
 
# federate into a web console using an account number
kion console --account 111122223333 --car Admin

Generating and Managing API Keys

Note: Your cloud access role must have access to generate API keys. This is granted by the KION Support team and you can request this via Jira ticket using similar steps found later in this document.

Note: You must be on Zscaler in order to login to Kion

  1. Login to Kion
  2. Find your profile and click on it. Locate the "App API Key" option. If this option is not available, none of your current cloud access roles have access to generate API keys.
  3. Click "App API Keys"
  4. Click "Add"
  5. Name your API key something unique to what it is used for.

API Key Renewal and Expiration:

API keys automatically expire every 7 days, so your local configuration will work for one week before the API key must be refreshed. You can automate this process using the script provided below. We recommend running it on a cron schedule, ideally on the 6th day, since the API key is removed from your user on the 7th day.

  1. Download and inspect the python script below.
# cloudtamer.io app API key rotator
#
# This script creates a new API key within cloudtamer.io, and then deletes the
# previous one.
#
# It outputs the new key and its ID to a local file named ct_apikey_data.json
# The format in this file is exactly what is returned from cloudtamer.io.
# Here is a sample:
# {"id": 38, "key": ""}
#
# You can then use jq to parse the API key out of this file, for example:
# jq .key ct_apikey_data.json | tr -d "\""
#
# To run this script the first time, you'll need to create an initial API key and enter it
# as the value to the initial_apikey variable below.
#
# You may also want to edit the api_key_name_prefix variable to be more descriptive for your application.

# Example usage: python3 app_api_key_rotator.py --ct-url $CT_URL --initial-key $OLD --name-prefix $CT_PREFIX --user-id $USER_ID 
# NEW="$(jq .key ct_apikey_data.json | tr -d "\"")" 
# echo $NEW


import sys
import json
import argparse
import requests
import datetime

PARSER = argparse.ArgumentParser(description='Rotate cloudtamer app API keys')
PARSER.add_argument('--ct-url', type=str, required=True, help='URL to cloudtamer, without trailing slash.')
PARSER.add_argument('--initial-key', type=str, help='The initial key from which to start rotation.')
PARSER.add_argument('--name-prefix', type=str, help='Prefix added to the API key name.')
PARSER.add_argument('--user-id', type=str, help='User ID for the user.')
ARGS = PARSER.parse_args()

def main():

    # init URLs
    ct_url = ARGS.ct_url
    create_url = "%s/api/v3/app-api-key" % ct_url
    existing_keys_url = "%s/api/v3/app-api-key/user/%s" % (ct_url, ARGS.user_id)

    # assume this is the initial run if --initial-key was provided
    initial_run = ARGS.initial_key is not None
    initial_apikey = ARGS.initial_key

    # file for storing key data moving forward
    datafile = 'ct_apikey_data.json'
    api_data = {}

    # prefix for the created api key names
    api_key_name_prefix = ARGS.name_prefix

    # If datafile isn't found, assume this is the initial run
    try:
        # load current key data
        with open(datafile) as data:
            api_data = json.load(data)
    except (IOError, json.JSONDecodeError):
        if not initial_apikey:
            sys.exit("No %s found and no --initial-key provided" % datafile)
        initial_run = True
        api_data['key'] = initial_apikey
        print("Initial run")

    # init headers now that the API key has been set
    headers = {"accept": "application/json", "Authorization": "Bearer " + api_data['key']}

    # need to get the id of this initial key
    # so that we can delete it
    if initial_run:
        keys = []

        response = requests.get(url=existing_keys_url, headers=headers).json()

        if response['status'] == 200:
            keys = response['data']
            # remove all keys without the api_key_name_prefix so we only consider a
            # subset for the rotation; allows users to have keys for multiple services
            keys = list(filter(lambda key: key['name'].startswith(api_key_name_prefix), keys))

            if len(keys) == 1:
                api_data['id'] = keys[0]['id']
            else:
                # found more than 1 key, should be no more than 2
                # delete the oldest one. the remaining one should be
                # the one we currently are using
                print("found multiple keys. will delete the oldest one")
                # set original oldest date to today
                # will compare creation dates of the keys to this
                oldest_date = datetime.datetime.utcnow()
                print("now %s" % oldest_date)
                oldest_id = ''

                for key in keys:
                    date = key['created_at']
                    year = int(date[0:4])
                    month = int(date[5:7])
                    day = int(date[8:10])
                    hour = int(date[11:13])
                    minute = int(date[14:16])
                    sec = int(date[17:19])
                    creation_date = datetime.datetime(year, month, day, hour, minute, sec)
                    print("key %s created at %s" % (key['id'], creation_date))
                    if creation_date < oldest_date:
                        print("key %s is current oldest - created on %s" % (key['id'], creation_date))
                        oldest_date = creation_date
                        oldest_id = key['id']

                delete_url = "%s/api/v3/app-api-key/%s" % (ct_url, oldest_id)
                print("will delete oldest key id %s" % oldest_id)

                response = requests.delete(url=delete_url, headers=headers).json()
                if response['status'] == 200:
                    print("successfully deleted oldest key with id %s" % oldest_id)
                else:
                    print('failed deleting oldest key. the rest of the script may fail')
                    print(response)

                # now set the key ID that we will rotate later on
                # its the one we didn't just delete
                for key in keys:
                    if key['id'] != oldest_id:
                        api_data['id'] = key['id']
                        print("key to rotate later is %s" % api_data['id'])

        else:
            print("Failed getting API keys from cloudtamer")
            sys.exit(response)

    # set the delete_url after finding the ID of the initial key
    # or pulling it out of datafile
    delete_url = "%s/api/v3/app-api-key/%s" % (ct_url, api_data['id'])

    # now create a new key
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    new_key_name = "%s_%s" % (api_key_name_prefix, timestamp)
    data = {'name': new_key_name}
    response = requests.post(url=create_url, headers=headers, json=data).json()

    if response['status'] == 201:
        print("Generated new API key")

        # save to file for storage
        with open(datafile, 'w') as outfile:
            json.dump(response['data'], outfile)

        # now delete the old one using the new one
        headers = {"accept": "application/json", "Authorization": "Bearer " + response['data']['key']}
        requests.delete(url=delete_url, headers=headers).json()
        print("Deleted old key")
        print("Finished key rotation")
    else:
        print("Error creating new API key")
        sys.exit(response)

if __name__ == "__main__":
    main()
  2. Run this python script locally, providing the required arguments.

     Example usage:

     python3 kion_app_api_key_rotator.py --ct-url https://cloudtamer.cms.gov --initial-key (old-api-key) --name-prefix "" --user-id ""

Note: The name prefix needs to match the existing key name in your account; if you have a key named "test-api-key", the name prefix needs to be "test".

Note: The user-id field is a number associated with your KION user account. You can locate this by navigating to your profile and looking at the URL on your logged in kion session.

  3. This will create a file named ct_apikey_data.json in the directory where you ran the script. You will need to take this value and put it into your .kion.yml to update the configuration.

  4. Put the new API key into a variable by running the following command:

    NEW="$(jq .key ct_apikey_data.json | tr -d "\"")"

  5. Replace the value in your ~/.kion.yml by running:

    sed -i '' "s/api_key: .*/api_key: $NEW/" ~/.kion.yml
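
To automate the weekly refresh on the cron schedule recommended above, a rough sketch (the directory, name prefix, and user ID are placeholders, and */6 in the day-of-month field resets at month boundaries, so adjust as needed):

# open your crontab for editing
crontab -e

# then add a line like this: run the rotator at 9am on every 6th day of the
# month and splice the new key into ~/.kion.yml
0 9 */6 * * cd $HOME/kion && python3 kion_app_api_key_rotator.py --ct-url https://cloudtamer.cms.gov --name-prefix "my-prefix" --user-id "1234" && NEW="$(jq -r .key ct_apikey_data.json)" && sed -i '' "s/api_key: .*/api_key: $NEW/" $HOME/.kion.yml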

Postgres

Postgres (PostgreSQL) is the database used by BFD. We can easily install it with Homebrew:

brew install postgresql
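
Once installed, you can start Postgres as a background service and sanity-check the connection (a quick sketch; the formula name matches the brew install above):

brew services start postgresql        # launches Postgres now and on login
psql postgres -c 'select version();'  # should print the server version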

Podman

Podman is used to create containers that run applications in sandboxes. This can be used initially to run your code locally by emulating some of the AWS pieces, and is used by the end-to-end tests to spin up mock servers.

You can alternatively use another container runtime if you'd prefer, but do not install Docker Desktop, as we do not have a license for it.

Installation

Install Podman using the instructions provided. We recommend installing from homebrew as that will ensure the package will be kept up-to-date. Installing Podman Desktop is optional if you prefer to use GUI tools.

Setup

By default, Podman will run in rootless mode. Unfortunately, this doesn't seem to be compatible with our setup.

Disable rootless mode by running podman machine set --rootful.

You will also need to enable Docker compatibility mode. Install podman-mac-helper and follow the instructions listed.

You also need to create a script so that docker commands will be correctly routed to podman. Create a script called docker with the following contents and make sure it's on your PATH:

#!/usr/bin/env bash

podman "$@"

Let's set up the database container, which is helpful for local development.

We can set up a Postgres database with the create-bfd-db script located in the BFD repo under apps/utils/scripts, then apply the latest schema with the run-db-migrator script. This isn't needed to run tests or do day-to-day local development; it's only needed if you want a local server for specific manual testing. Our end-to-end tests automatically create their own DB container when they run.

localstack

You will need to have localstack running if you wish to run the application completely locally with no connections into AWS. See installation instructions.

If you're using Podman, the normal localstack start command won't quite work properly. Create a script with the following contents and save it somewhere on your PATH:

#!/usr/bin/env bash

podman run --rm --name localstack_main -p 4566:4566 -p 4571:4571 \
        -e DEBUG=1 \
        -e TEST_AWS_ACCOUNT_ID=000000000000 \
        -e DOCKER_HOST=unix:///var/run/docker.sock \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --privileged \
        localstack/localstack

If you want to easily use the AWS CLI with localstack, you may want to install awslocal. This is a simple wrapper around the AWS CLI that will route requests to localstack. Instead of aws <command>, just run awslocal <command>.
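
As a quick sketch of using it (awslocal ships in the awscli-local pip package; the bucket name is arbitrary):

pip3 install awscli-local

# with localstack running, create a bucket and list it back
awslocal s3 mb s3://my-test-bucket
awslocal s3 ls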

Ansible

This tool is used for encrypting and decrypting sensitive files. First, let's install it:

brew install ansible

Once installed, you can use ansible to decrypt files like so:

ansible-vault decrypt --ask-vault-pass ~/path/to/file/to/decrypt.ex

And encrypt files similarly:

ansible-vault encrypt --ask-vault-pass ~/path/to/file/to/encrypt.ex

Running these commands will ask for a password. This password can be found in Box and is rotated on a schedule; use it when encrypting and decrypting files.

Terraform

Terraform is used to automate the provisioning of cloud infrastructure. It drives many of our deployment flows and automates the creation of permissions and AWS resources so they can easily be recreated if needed.

While you can install terraform directly, the helpful tool tfenv makes it easy to work with the various terraform versions you may need.

brew install tfenv

Once this is installed, you can install the latest terraform with

tfenv install latest

or if you need a specific version:

tfenv install <version>
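
For example, to see what's installed and switch versions (the version number here is illustrative):

tfenv list             # show installed versions; * marks the active one
tfenv use 1.5.7        # switch the active terraform version
terraform -version     # confirm the switch took effect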

Further information can be found at the tfenv docs: https://spacelift.io/blog/tfenv

SSH Access

You will need to generate an SSH key to SSH into running server instances. These keys are stored in AWS.

First, generate a new RSA key locally (we'll need this later):

ssh-keygen -t rsa -b 4096 -C <emailaddress> -f ~/.ssh/bfd_aws_id_rsa

Follow the steps in How To Setup User SSH Key to add your SSH key to the appropriate location. (If you don't have permissions/access, ask for help on this step.)
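
Once your key is in place, connecting to an instance looks roughly like this (user and host are placeholders; actual values depend on the environment):

ssh -i ~/.ssh/bfd_aws_id_rsa <user>@<instance-host>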

SonarQube

SonarQube is our static analysis tool for the BFD project. See the SonarQube Setup wiki page for setup instructions.

Initially, you only need to get an ITOPs ticket created so you can be added as an admin on the project and view it at the hosted SonarQube instance at https://sonarqube.cloud.cms.gov/.

Other Useful Resources

Here are some other links/resources you can check out that will help understand the system.

FHIR Specification - FHIR is an industry-standard format for medical data and is the return format for data from BFD. Responses are JSON conforming to the FHIR spec.

Note that only some fields in the spec are used in BFD, as many are optional and/or conditional.

Useful Tools/Installs

Oh-my-zsh

Adds functionality/customization to the default zsh terminal. Its default plugins include showing the current directory and the git branch (if the current path is a git repo).

Iterm2

A great functional terminal replacement that adds some nice features like split-screen and better tabs

IntelliJ

Modern IDE for Java development; most of the team uses this IDE

Run Scripts

The apps/utils/scripts directory contains bash scripts that simplify running the various components of BFD on a UNIX-like system. Refer to the README files in that directory for detailed information about the scripts.

IntelliJ Tips

Project Setup

It's recommended to import the BFD project as a Maven project, importing from the "apps" directory to make sure all the modules link their classes correctly.

Checkstyle Plugin

BFD uses Checkstyle to keep documentation consistent, and builds automatically verify that documentation is valid. To see Checkstyle violations in the IDE instead of having to wait for a build, you can install the Checkstyle plugin.

Once installed, point it at the BFD checkstyle file, which can be found at the root level of the apps folder. The Checkstyle plugin settings are under Settings > Tools > Checkstyle; add the file with the + under "Configuration File".

Java Formatting

The main IntelliJ styling matches the codebase pretty well, but a couple adjustments will help avoid some common pitfalls:

Ensure tabs and indents are set to NOT use tabs, with a tab size/indent of 4.

Set imports such that the IDE does not automatically use wildcard imports; this can be effectively disabled by setting "Class count to use import with '*'" to 99.

The rest of the standard Code Style default in IntelliJ should be ok.
