Conversation

Copilot AI commented Aug 7, 2025

This PR makes vLLM and transformers optional dependencies so that users can install a lighter rank_llm with only the pieces they need.

Problem

Previously, installing rank_llm required all heavy dependencies including vLLM and transformers, even if users only needed core functionality like API-based ranking. This resulted in:

  • Large installation size and long install times
  • Dependency conflicts in constrained environments
  • Unnecessary overhead for users not using local models

Solution

New Installation Options

Core installation (lightweight, API-only functionality):

pip install rank_llm

With vLLM support (for local model inference):

pip install rank_llm[vllm]

With transformers support (for T5-based models like DuoT5, MonoT5):

pip install rank_llm[transformers]

Full installation (all features):

pip install rank_llm[all]
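
If only the local-model backends are needed, the two extras can also be combined in a single command (quoting the requirement avoids bracket expansion in shells such as zsh):

pip install "rank_llm[vllm,transformers]"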

Implementation Details

All modules that depend on optional libraries now use conditional imports with helpful error messages:

try:
    import vllm  # optional backend; only present when installed with the [vllm] extra
    VLLM_AVAILABLE = True
except ImportError:
    VLLM_AVAILABLE = False
    vllm = None  # keep the name defined so module-level references do not fail

class VllmHandler:
    def __init__(self, *args, **kwargs):  # remaining parameters elided in this excerpt
        if not VLLM_AVAILABLE:
            raise ImportError(
                "vLLM is not installed. Please install it with: pip install rank_llm[vllm]"
            )
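
With this guard, importing a module never fails simply because an optional backend is absent; the error surfaces only when a handler that needs it is constructed. A minimal sketch of what a core-only install would see (constructor arguments and the exact import path are elided, since they are not shown here):

try:
    handler = VllmHandler()  # requires the [vllm] extra
except ImportError as err:
    # The message points at the missing extra:
    # "vLLM is not installed. Please install it with: pip install rank_llm[vllm]"
    print(err)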

Changes Made

  • Dependency configuration: Moved vLLM and transformers from core to optional dependencies in pyproject.toml (a rough sketch of the resulting extras follows this list)
  • Code updates: Added conditional imports to 10+ modules including VllmHandler, DuoT5, MonoT5, RankFiD, and inference handlers
  • Error handling: Clear error messages guide users to install missing dependencies
  • Documentation: Updated README and added comprehensive usage guide
  • Testing: Added unit tests to verify optional dependency behavior
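
For reference, a rough sketch of how the optional groups might be declared in pyproject.toml; the group contents shown here are illustrative (version pins and any helper packages are omitted) and not copied verbatim from the PR:

[project.optional-dependencies]
vllm = ["vllm"]
transformers = ["transformers"]
all = ["vllm", "transformers"]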

Backward Compatibility

Existing installations continue to work. Users can migrate by installing the appropriate optional dependencies:

  • For vLLM users: pip install rank_llm[vllm]
  • For T5 model users: pip install rank_llm[transformers]

This change enables more flexible deployment while maintaining full functionality for users who need the complete feature set.

Fixes #293.



Copilot AI commented Aug 7, 2025

@ronakice 👋 This repository doesn't have Copilot instructions. With Copilot instructions, I can understand the repository better, work faster and produce higher quality PRs.

I can generate a .github/copilot-instructions.md file for you automatically. Click here to open a pre-filled issue and assign it to me. I'll write the instructions, and then tag you for review.

Copilot AI changed the title from "[WIP] optional vllm and transformers" to "Make vLLM and transformers optional dependencies" Aug 7, 2025
Copilot AI requested a review from ronakice August 7, 2025 22:56
Copilot finished work on behalf of ronakice August 7, 2025 22:56
@sahel-sh sahel-sh self-requested a review August 16, 2025 16:16
Member

@sahel-sh sahel-sh left a comment

Let's keep vLLM for now; a lot of people still use their own GPUs for small open-source models. Plus, I want to make sure that our PyPI package works out of the box for rankVicuna/Zephyr and FirstMistral without relying on OpenRouter.

Member

ronakice commented Sep 4, 2025

Fair point.
