Our Research

Applied frontier AI research focused on making advanced models accessible and deployable in real-world settings

FT Accelerator Platform

Multilingual LLM Optimization

Overview

Our FT (Fine-Tuning) Accelerator Platform focuses on making frontier language models accessible to emerging markets through advanced optimization techniques. We aim for a 10-100x reduction in fine-tuning cost while maintaining model quality.

This research is critical for democratizing AI globally, especially for multilingual applications in regions with limited computational resources.

Key Focus Areas

  • LoRA and QLoRA fine-tuning for parameter-efficient training
  • Knowledge distillation for model compression
  • Multilingual model adaptation for Indian and West Asian languages
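The core idea behind LoRA-style parameter-efficient fine-tuning can be sketched in a few lines: the pretrained weight matrix stays frozen, and only a small low-rank update is trained. The class below is an illustrative NumPy sketch (not our production pipeline, which builds on standard fine-tuning libraries); all names are hypothetical.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA-style linear layer: frozen pretrained weight W
    plus a trainable low-rank update scaled by alpha / r."""

    def __init__(self, w, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w  # frozen pretrained weight, shape (out_dim, in_dim)
        # Trainable low-rank factors: A is small random, B starts at zero,
        # so at initialization the adapted layer matches the base layer.
        self.a = rng.normal(0.0, 0.01, size=(r, w.shape[1]))
        self.b = np.zeros((w.shape[0], r))
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the low-rank adaptation term.
        return x @ self.w.T + self.scale * (x @ self.a.T) @ self.b.T
```

Because only A and B are trained, the number of trainable parameters is a small fraction of the full weight matrix, which is where the cost savings come from.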

Internship Opportunities

Work with state-of-the-art fine-tuning pipelines, experiment with models up to 8B parameters on our in-house GPUs, and contribute to production-ready optimization frameworks.

IPHC

Image Processing under Hardware Constraints

Overview

IPHC focuses on deploying state-of-the-art computer vision models on edge devices with limited computational power. This enables real-time object detection, segmentation, and tracking on robots, drones, and mobile devices.

Our research addresses the critical gap between powerful vision models trained in the cloud and the resource constraints of edge deployment scenarios.

Key Focus Areas

  • Model quantization and pruning for edge devices
  • ONNX and TensorRT optimization pipelines
  • Real-time inference on Jetson and mobile platforms
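As a flavor of what model quantization involves, the sketch below shows symmetric per-tensor int8 post-training quantization in NumPy: floats are mapped to the int8 range with a single scale factor. Real deployments use framework tooling (per-channel scales, calibration, fused kernels); this minimal version, with hypothetical function names, only illustrates the arithmetic.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats onto
    [-127, 127] using one scale factor (assumes w is not all zeros)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return q.astype(np.float32) * scale
```

Storing int8 codes cuts memory 4x versus float32, and the round-trip error is bounded by half the scale step, which is why quantized vision models remain accurate enough for edge inference.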

Internship Opportunities

Build models that run on robots and drones, optimize computer vision pipelines for edge deployment, and work with cutting-edge model compression techniques.

Data Fusion Hub

Multimodal Sensor Fusion

Overview

The Data Fusion Hub integrates multiple sensor modalities—cameras, LiDAR, IMU, and audio—to create robust perception systems for robotics and vision-language models. We focus on real-time multimodal data processing and fusion algorithms.

This research enables next-generation robotics applications and enhances vision-language models with richer multimodal understanding.

Key Focus Areas

  • Sensor calibration and synchronization frameworks
  • Vision-language model integration with sensor data
  • ROS-based robotics platforms for data fusion
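One of the simplest fusion algorithms in this space is the complementary filter, which blends a gyroscope's angular rate (accurate short-term, drifts long-term) with an accelerometer-derived angle (noisy short-term, stable long-term). The sketch below, with hypothetical names and a pure-Python loop, is illustrative only; our platforms use full calibration and synchronization stacks rather than this toy filter.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98, angle0=0.0):
    """Fuse gyro angular rates (rad/s) with accelerometer-derived
    angles (rad) into one orientation estimate per timestep."""
    angle = angle0
    fused = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # High-pass the integrated gyro, low-pass the accelerometer:
        # alpha weights the gyro path, (1 - alpha) the accel path.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        fused.append(angle)
    return fused
```

With a stationary sensor (zero gyro rate), the estimate decays toward the accelerometer angle, correcting gyro drift; during fast motion the gyro term dominates, rejecting accelerometer noise.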

Internship Opportunities

Contribute to cutting-edge robotics research, work with multimodal AI systems, and build real-time sensor fusion pipelines for autonomous systems.

Join Our Research Team

Work on these exciting research areas and contribute to making AI more accessible

View Internship Opportunities