atomikspace edited this page Feb 2, 2026 · 35 revisions

Welcome to the TARS-AI Wiki! This wiki provides detailed information about the TARS-AI project, a recreation of the TARS robot from Interstellar, featuring AI capabilities and servo-driven movement.

Introduction

TARS-AI is a project aimed at recreating the TARS robot from the movie Interstellar. This project includes AI functionalities and servo-driven movements, making it a fascinating intersection of robotics and artificial intelligence.

Getting Started

To get started with the TARS-AI project, refer to the pages for the version you want to build. Each version represents a major overhaul of the hardware and software, and versions are not compatible with each other. Use the branch with the same name as your version to get the correct code.

It is recommended to use the current version to avoid potential issues.

V3 Updates over V2

πŸ”§ Hardware

  • Added an extra servo in the torso, allowing each leg to lift independently. This enables more movement possibilities and provides more lifting force, which is important since the arms version is heavier.
  • No soldering required (unless you make your own USB cable or use an INA260).
  • Smaller battery, but still around 3 hours of continuous runtime with the screen on.
  • Added support for a dual-screen option, using lower-resolution displays for a cheaper overall cost.
  • Switched to a USB sound card (no drivers required). Microphones now point directly outside the case, resulting in much better audio capture.
  • Redesigned upper legs β€” routing wires to the legs is now much easier.
  • Only one screw type is required now (M3). All other screws are included with the components and used directly in the project.
  • Print-in-place TARS logo, plus a cutout version for those who want to add accents to the build. (More versions coming: CASE, KIPP, PLEX, etc.)
  • New electronics support that holds all electronics except the PCA9685 servo controller. This modular design allows you to remove all electronics by undoing just 4 screws, without disassembling the entire robot.
  • Footpads now simply slide onto the legs.
  • Larger speakers provide significantly improved audio output.
  • Improved ventilation.
  • Parts redesigned to use the servo horns included with the servos, making each joint stronger.
  • Suspension system removed entirely β€” less hardware to buy, cut, or modify.
  • Designed with assembly in mind: easier wiring, better cable management, and more internal space.
  • USB, HDMI, and SD card ports remain accessible β€” no need to disassemble the robot for access.
  • Added a base so you can work on the robot without it tipping over (very useful when adjusting servos).
  • Thanks to the modular design, it's easy to swap components β€” for example, replacing the Raspberry Pi with an ESP32.
  • Overall project cost has been reduced.

πŸ’» Software

  • Completely redesigned UI β€” uses less CPU and looks much closer to the interfaces seen in the movie.
    • New waveform visualization for microphone input
    • Scrollable terminal
    • Conversation history stored in memory
    • Battery level indicator
    • Animated background that reacts to the conversation
    • Shutdown button (exit the program or shut down the Raspberry Pi)
    • Optional camera display
  • Redesigned OpenAI integration β€” reduced from two API calls to a single call that handles both replies and function execution.
  • TARS can now navigate to any webpage and open it directly in a built-in browser.
  • Optional INA260 battery sensor support: if battery voltage gets too low, the Raspberry Pi shuts down automatically to protect both the Pi and the battery, improving battery lifespan.
  • Added a fully in-house custom wake-word system β€” you can train and use any wake word you want.
  • Two movement modes: fast and slow.
  • Automatic airflow protection β€” if the CPU gets too hot, TARS will reposition itself to improve cooling.
  • Improved function system, allowing complex AI responses to trigger multiple actions in a single call.
  • Multi-language support (using OpenAI for STT).
  • Fully automated installation β€” only a few terminal commands are needed to install everything.
  • New servo installation and calibration tool, with both terminal and GUI interfaces. Servos are now nearly plug-and-play, with an easy offset system for precise tuning.
  • New camera detection framework β€” currently supports face detection, with more features planned.
  • WebUI now includes separate control sections for Legs and Arms. Arm controls respect mechanical limits, preventing impossible or damaging movements.
  • Supports Spotify Connect, allowing TARS to function as a wireless speaker.
  • Improved Persona control system with better performance when settings are changed.
  • Movement system updated to support both No-Arms and Arms versions. Since weight distribution differs, switching modes ensures seamless behavior.
  • Improved walking stability with better overall movement logic.
  • Screensavers (not released yet) β€” activate after inactivity to reduce CPU usage while keeping the wake word active.
  • Automatic idle movements (not released yet) β€” when enabled, TARS will occasionally perform subtle movements (posing, balancing, etc.) while staying in place.
  • More to come!
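The single-call OpenAI integration and the improved function system described above can be sketched as a dispatcher that executes every tool call returned in one model response. This is a minimal sketch under assumptions: the action names (`step_forward`, `pose`) and the registry shape are hypothetical, not the project's actual API.

```python
import json

# Hypothetical action registry; the real project wires these to servo and UI code.
ACTIONS = {
    "step_forward": lambda **kw: f"stepped {kw.get('steps', 1)}",
    "pose": lambda **kw: f"posed {kw.get('name', 'neutral')}",
}

def run_tool_calls(tool_calls):
    """Execute every tool call from a single model response, in order.

    Each call is a dict with a "name" and "arguments" (JSON string or dict),
    mirroring the shape of OpenAI tool-call payloads.
    """
    results = []
    for call in tool_calls:
        fn = ACTIONS[call["name"]]
        args = call["arguments"]
        if isinstance(args, str):
            args = json.loads(args)
        results.append(fn(**args))
    return results
```

Executing all actions from one response is what lets a complex reply trigger multiple movements without extra API round-trips.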
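The optional INA260 low-battery shutdown can be sketched as a simple threshold check. This is an illustrative sketch, not the project's implementation: the cutoff voltage is an assumed value, and `read_voltage` stands in for whatever sensor wrapper the project uses (the adafruit_ina260 library exposes a voltage reading, for example).

```python
import subprocess

# Assumed cutoff for the pack; tune for your actual battery chemistry and cell count.
SHUTDOWN_VOLTAGE = 6.4

def should_shutdown(bus_voltage: float, cutoff: float = SHUTDOWN_VOLTAGE) -> bool:
    """Return True when the measured pack voltage is at or below the cutoff."""
    return bus_voltage <= cutoff

def monitor_once(read_voltage) -> bool:
    """Poll the sensor once and trigger a clean Pi shutdown if the pack is too low.

    `read_voltage` is any callable returning volts; in practice a loop would
    call this every few seconds.
    """
    if should_shutdown(read_voltage()):
        subprocess.run(["sudo", "shutdown", "-h", "now"])
        return True
    return False
```

Shutting the Pi down cleanly before the pack sags too far is what protects both the filesystem and the battery's lifespan.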
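The automatic airflow protection can be sketched around the standard Raspberry Pi sysfs thermal node, which reports the CPU temperature in millidegrees Celsius. The trip point below is an assumed value, and the reposition itself is left to the movement system; only the read-and-compare logic is shown.

```python
from pathlib import Path

# Standard Raspberry Pi sysfs node for the SoC temperature.
TEMP_PATH = Path("/sys/class/thermal/thermal_zone0/temp")
HOT_THRESHOLD_C = 75.0  # assumed trip point; tune to your enclosure

def parse_cpu_temp_c(raw: str) -> float:
    """The sysfs file contains millidegrees Celsius as plain text."""
    return int(raw.strip()) / 1000.0

def needs_cooling_pose(temp_c: float, threshold: float = HOT_THRESHOLD_C) -> bool:
    """True when TARS should reposition itself to improve airflow."""
    return temp_c >= threshold
```

A background loop would read `TEMP_PATH`, parse it, and ask the movement system for a cooling pose whenever `needs_cooling_pose` fires.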
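The servo offset system and the WebUI's mechanical-limit enforcement both amount to adjusting a commanded angle and clamping it to a safe range. The joint names, limits, and offsets below are made-up examples; the real values come from the calibration tool and the arm geometry.

```python
# Assumed per-joint limits and calibration offsets, in degrees.
JOINT_LIMITS = {"shoulder": (0.0, 150.0), "elbow": (10.0, 130.0)}
OFFSETS = {"shoulder": -3.0, "elbow": 2.5}

def command_angle(joint: str, target_deg: float) -> float:
    """Apply the per-servo calibration offset, then clamp to the joint's mechanical range.

    Clamping here is what prevents the WebUI from commanding impossible or
    damaging arm positions.
    """
    lo, hi = JOINT_LIMITS[joint]
    return min(max(target_deg + OFFSETS.get(joint, 0.0), lo), hi)
```

The offset makes near-plug-and-play servos possible: mount the horn roughly, measure the error once, and every subsequent command is corrected automatically.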

Community and Support

Join our community on Discord to discuss the project, ask questions, and get support from other TARS-AI enthusiasts.

License

This project is licensed under the CC BY-NC License.

Additional Resources

Inspirations + Credits to:

Media

Discord Invitation Link · YouTube · Instagram · TikTok
