AR Labs is a mobile AR app that lets students perform virtual science experiments from the class 11 & 12 syllabus.
- The app provides an intuitive sandbox in which the student can play around with the apparatus and materials and receive accurate readings. This encourages the student to explore beyond the experiment, in the true spirit of science.
- We also have an AI Lab Assistant as a replacement for the instructors in a real lab. The AI Lab Assistant is:
  - Context-aware: aware of not just the state of the experiment but also what the user is currently looking at.
  - Responsive to user queries, whether about how to use the app or about the topic at hand.
  - Able to perform actions and hence exert control over the experiment, such as toggling visualizations on/off or placing an apparatus.
Built during GDG Solutions Challenge 2025
Problem Statement: Lack of Access to Quality Education in Underserved Communities
🎥 Demo Video
- [Our Approach](#our-approach)
- [Architecture](#architecture)
- [Technologies We Used](#technologies-we-used)
- [Key Features](#key-features)
- [Installation and Setup Guide](#installation-and-setup-guide)
- [Challenges We Faced](#challenges-we-faced)
## Our Approach
- **Frontend** (`/arlabs/`):
  - Mobile AR application running on the Unity engine.
  - Works as the interface for users to perform virtual experiments.
  - Provides voice access to an AI Lab Assistant that is context-aware, explains the experiment, and can exert control over the experiment in the form of AI actions.
- **Backend** (`/backend/`):
  - Python backend.
  - Handles API requests using FastAPI.
  - Orchestrates our LLM workflow using LangGraph (a minimal sketch follows this list).
  - Uses Google Cloud Text-to-Speech.
  - Includes RAG-based troubleshooting for user help requests.
- **Storage Fetch Service** (`/cloudrun_service/`):
  - Service running on Golang.
  - Handles fetching apparatus, experiments and visualizations from the storage bucket.
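To make the backend flow concrete, here is a minimal, hypothetical sketch of how a FastAPI endpoint could drive a LangGraph graph that returns both a spoken answer and a list of AI actions. The endpoint path, state fields, node names and Gemini model name are illustrative assumptions, not the project's actual code.

```python
import os
from typing import List, TypedDict

import google.generativeai as genai
from fastapi import FastAPI
from langgraph.graph import END, StateGraph
from pydantic import BaseModel

# The Gemini key comes from the .env file described in the setup guide below.
genai.configure(api_key=os.environ["GOOGLE_GEMINI_API"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption


class AssistantState(TypedDict):
    query: str           # what the student asked
    context: str         # experiment state + what the user is looking at
    answer: str          # text reply to be spoken via TTS
    actions: List[dict]  # AI actions for the Unity frontend to execute


def answer_node(state: AssistantState) -> AssistantState:
    """Ask Gemini for a reply grounded in the current experiment context."""
    prompt = f"Context:\n{state['context']}\n\nStudent question:\n{state['query']}"
    reply = model.generate_content(prompt)
    return {**state, "answer": reply.text}


def action_node(state: AssistantState) -> AssistantState:
    """Decide which experiment actions (if any) should accompany the reply."""
    # The real assistant derives these from structured LLM output; this stub returns none.
    return {**state, "actions": []}


graph = StateGraph(AssistantState)
graph.add_node("answer", answer_node)
graph.add_node("actions", action_node)
graph.set_entry_point("answer")
graph.add_edge("answer", "actions")
graph.add_edge("actions", END)
assistant = graph.compile()

app = FastAPI()


class AssistantRequest(BaseModel):
    query: str
    context: str = ""


@app.post("/assistant")  # endpoint path is illustrative
def ask_assistant(req: AssistantRequest) -> dict:
    result = assistant.invoke(
        {"query": req.query, "context": req.context, "answer": "", "actions": []}
    )
    return {"answer": result["answer"], "actions": result["actions"]}
```

The actual backend in `/backend/` layers RAG troubleshooting and Text-to-Speech on top of a flow like this.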
## Architecture

The project follows a modular architecture, split across the three components described above.
## Technologies We Used

- **Frontend (C#)**:
  - The Unity engine.
  - ARCore, ARKit and AR Foundation.
  - Google Cloud Speech-to-Text API.
- **Backend (Python)**:
  - FastAPI for API development.
  - LangGraph for AI workflow management.
  - Google Gemini as our primary LLM.
  - Google Cloud Text-to-Speech API (see the sketch after this list).
- **Additional Tools**:
  - Blender for 3D model creation of experiment apparatus.
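As an illustration of the Text-to-Speech piece, here is a minimal sketch using the `google-cloud-texttospeech` client. The voice, language code and output file are placeholder choices; the credentials file name matches the setup guide below.

```python
from google.cloud import texttospeech

# The client reads credentials from GOOGLE_APPLICATION_CREDENTIALS,
# e.g. the tts-credential.json file described in the setup guide.
client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="The circuit is now complete."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-IN",  # placeholder; multi-language support just swaps this code
        ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("assistant_reply.mp3", "wb") as out:
    out.write(response.audio_content)  # audio the frontend can play back
```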
## Key Features

- **AR-Powered Experiments**:
  - Real-time object interaction and visualization.
  - AR-based science experiments for educational purposes.
- **AI-Powered Assistance**:
  - Voice-controlled AI assistant for experiment guidance.
  - Has actual control over the experiment setup: it can change battery voltage, pendulum length, rheostat resistance, etc. (see the example after this list).
  - Answers questions related to experiments and concepts.
- **Seamless API Integration**:
  - Real-time communication between frontend and backend.
  - Uses REST APIs for efficient data exchange.
- **Multi-Language Support**:
  - AI assistant supports multiple languages for a broader reach.
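To give a feel for what "actual control over the experiment" means, below is a purely hypothetical example of the kind of action payload the assistant could return to the Unity frontend alongside its spoken reply. The field names and action types are illustrative, not the project's actual schema.

```python
# Hypothetical shape of an assistant response carrying both speech and AI actions.
assistant_response = {
    "answer": "I've increased the battery voltage to 6 volts. Watch how the electron flow speeds up.",
    "actions": [
        {"type": "set_parameter", "target": "battery", "parameter": "voltage", "value": 6.0},
        {"type": "toggle_visualization", "target": "electron_flow", "enabled": True},
    ],
}
```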
## Installation and Setup Guide

Prerequisites:

- Unity (6000.0.23f1 LTS) for the AR frontend.
- Python (3.8 or later) for the backend.
### Frontend (Unity)

- Clone the repository and go to the AR project folder:

  ```bash
  git clone https://github.com/hemanth2004/gdg-solutions-hackathon/
  cd gdg-solutions-hackathon/arlabs
  ```

- Open the project in Unity Hub.
- Make sure the Android build package is installed. Steps for Xcode and iOS builds are different.
- Get your Speech-to-Text credentials from your Google Cloud console and place them at `Assets/StreamingAssets/stt-credentials.json`.
- Open the project in Unity and use Ctrl+P to run it in the editor.
- Make sure Vulkan is disabled and API Level 24+ is chosen in the Player Settings.
- Open File > Build Profiles and use Build to build a .apk.
### Backend (Python)

- Navigate to the backend assistant folder:

  ```bash
  cd gdg-solutions-hackathon/backend/assistant
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up the secret files:
  - Create a `.env` file in the backend directory containing your Gemini API key:

    ```
    GOOGLE_GEMINI_API=''
    ```

  - Get a Text-to-Speech credentials file and save it as `tts-credential.json` in `backend/assistant/` (a quick sanity check for these secrets follows below).
- Run the backend server:

  ```bash
  python api.py
  ```
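If `api.py` complains about missing credentials, here is a quick sanity check you can run from `backend/assistant/`. It is a hypothetical helper, and it assumes `python-dotenv` is installed.

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is available

load_dotenv()  # reads the .env file created above
assert os.environ.get("GOOGLE_GEMINI_API"), "GOOGLE_GEMINI_API is missing from .env"

# Google Cloud client libraries read credentials from this standard variable.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "tts-credential.json")
print("Secrets look good:", os.environ["GOOGLE_APPLICATION_CREDENTIALS"])
```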
### Storage Fetch Service (Cloud Run)

- Navigate to the Cloud Run service folder:

  ```bash
  cd cloudrun_service
  ```
- Use Cloud Build to build your Docker image. Make sure you have an Artifact Registry repository:

  ```bash
  gcloud builds submit
  ```
- Use the Cloud Storage GUI to create a bucket with three folders in the root (an optional sanity-check script is sketched after these steps):

  ```
  /
  ├── apparatus/
  ├── experiments/
  └── visualizations/
  ```

  Make sure that the JSONs within follow the schemas in `/cloudrun_service/models/`.
- Use the Cloud Run GUI to create a new service that runs the built image from the registry. Configure the `GCS_BUCKET_NAME` environment variable to be your bucket name.
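As an optional sanity check that the bucket matches the layout above, here is a small hypothetical script using the `google-cloud-storage` client. Set `GCS_BUCKET_NAME` locally to the same value you configured on the Cloud Run service.

```python
import os

from google.cloud import storage  # assumes google-cloud-storage is installed

bucket_name = os.environ["GCS_BUCKET_NAME"]  # same value as on the Cloud Run service
client = storage.Client()

# List a few objects under each expected top-level folder.
for prefix in ("apparatus/", "experiments/", "visualizations/"):
    blobs = list(client.list_blobs(bucket_name, prefix=prefix, max_results=5))
    print(f"{prefix}: {len(blobs)} object(s) found")
```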
## Challenges We Faced

- **AI Assistance**
  - The main challenge was giving Gemini the ability to exert control over the experiment. Moreover, these actions must stay synchronized with the speech it outputs. The LLM can now toggle visualizations like electron flow and potential gradients on/off, and place apparatus at the user's command.
- **Intuitive Design Decisions**
  - 3D, science experiments and AR all together posed tricky design problems throughout development. They varied from "How would the user try to do X when they are using the app for the first time?" to "How do we model things from the real world without too much or too little detail?"
- **Performance**
  - Running 3D rendering, web requests and AR simultaneously on mobile seemed like it might cause performance issues. But with optimization of the models and lighting, even a five-year-old Android phone gets decent performance.
- Hemanth Elangovan
- Kaavay Gupta
- Shivam Kumar A

Built with ❤️ by Team Isometric.

