-
Notifications
You must be signed in to change notification settings - Fork 6
GreenHouse to TeamTailor migration #115
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
- Python Projects (fastapi-simple-docker-pip, langgraph-sls-fastapi-rag): * Fixed Black formatting issues (29 files reformatted) * Fixed isort import sorting (8 files corrected) * Fixed mypy type checking errors (23 errors resolved) * Created missing schema modules (sorting.py, user.py) * Fixed repository return types and configuration issues * Added proper type annotations to test functions - TypeScript/Node.js Projects (nest-nats-microservices, stripe-integration): * Fixed ESLint unused variable errors (4 errors resolved) * Updated ESLint configs to allow underscore-prefixed unused variables * Removed unused imports * Fixed TypeScript compilation issues - Root Level: * Verified markdown linting passes without issues * All linters now pass successfully across the entire project This commit ensures all code follows the established linting standards and maintains consistent code quality across all example projects.
- Added complete green-house-migration project with 180+ files - Removed internal .git directory to integrate with main repository - Includes comprehensive migration tools and documentation - Features dashboard, API routes, scripts, and monitoring - Contains legacy Greenhouse integration and TeamTailor migration - Added complete documentation structure and security guides - Includes development and production scripts - Added test fixtures and configuration files This project provides a complete migration solution from Greenhouse to TeamTailor with comprehensive tooling and documentation.
WalkthroughAdds a new "green-house-migration" example (FastAPI app, legacy Greenhouse client/processors, TeamTailor integration, dashboards, monitoring, DB schema, docs, and tooling). Also applies typing and small signature tweaks and widespread formatting/packaging/whitespace edits across many example projects and tests. Changes
Sequence Diagram(s)sequenceDiagram
autonumber
actor User
participant Browser
participant FE as Dashboard JS
participant API as green-house-migration FastAPI
participant TT as TeamTailor API
participant GH as Greenhouse client
User->>Browser: Open dashboard page
Browser->>FE: Load dashboard script
FE->>API: GET /api/teamtailor/stats or /candidates/analytics/overview
API->>TT: Fetch analytics / candidates (if configured to TT)
TT-->>API: Return analytics/candidates JSON
alt TT unavailable or using legacy
API->>GH: Fetch paginated Greenhouse exports
GH-->>API: Return aggregated legacy data
end
API-->>FE: JSON (top_tags, metrics, pools)
FE->>Browser: Render charts and UI
sequenceDiagram
autonumber
participant Client
participant API as /routes/api/candidates
participant TT as TTClient
Client->>API: POST /candidates/bulk payload
API->>TT: Split into batches -> POST per batch
TT-->>API: Batch responses (created/failed)
API-->>Client: Aggregated result {created, failed, total}
Estimated code review effort🎯 5 (Critical) | ⏱️ ~150 minutes Possibly related PRs
Suggested reviewers
Poem
✨ Finishing Touches🧪 Generate unit tests
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
SupportNeed help? Create a ticket on our support page for assistance with any issues or questions. CodeRabbit Commands (Invoked using PR/Issue comments)Type Other keywords and placeholders
CodeRabbit Configuration File (
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 148
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
examples/green-house-migration/pytest.ini (1)
1-25
: CI isn’t running pytest in examples/green-house-migration
Thepytest.ini
you’ve added underexamples/green-house-migration
is never picked up because no GitHub Actions workflow invokes pytest or changes its working directory. You need to either add a test job (or step) in your main CI workflow or relocate the config to be loaded at the repository root.Points to address:
The
examples/green-house-migration/pytest.ini
file exists, but no workflow callspytest
(we inspected.github/workflows/ci.yml
,lint.yml
,generate-readme.yml
, and the externalpr-review.yml
—none invoke pytest or setworking-directory
).Add a dedicated pytest step in
.github/workflows/ci.yml
, for example:jobs: test: name: Run Python tests runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Setup Python uses: actions/setup-python@v4 with: python-version: 3.10 - name: Install dependencies working-directory: examples/green-house-migration run: pip install -r requirements.txt - name: Run pytest working-directory: examples/green-house-migration run: pytest -v \ --strict-markers \ --disable-warnings \ --cov=greenhouse \ --cov=routes \ --cov-report=term-missing \ --cov-report=html:htmlcov \ --cov-report=xmlAlternatively, move
pytest.ini
to the repo root and use per-project overrides inpytest.ini
orpyproject.toml
so that tests under any subfolder inherit the configuration.This ensures your migration tests actually run in CI.
examples/green-house-migration/.snyk (1)
4-85
: .snyk uses unsupported keys; policies/rules won’t be enforcedSnyk’s policy file supports
version
,ignore
, andpatch
. Sections likepolicy:
,rules:
,teamtailor:
, anddocker:
are not recognized by Snyk CLI and will be ignored, creating a false sense of coverage. Move custom checks to tools that support custom rules (Semgrep, gitleaks, hadolint, Bandit).version: v1.25.0 ignore: @@ -# Policy for different severity levels -policy: - high: - action: fail - message: "High severity vulnerabilities found. Please fix before deployment." - medium: - action: warn - message: "Medium severity vulnerabilities found. Review and fix when possible." - low: - action: info - message: "Low severity vulnerabilities found. Consider fixing in future updates." - -# Custom rules for the project -rules: - - name: "Allow test data patterns" - pattern: "tests/**/*" - severity: low - message: "Test files may contain intentionally vulnerable patterns for testing purposes" - - name: "Allow mock data patterns" - pattern: "scripts/development/generate_mock_data.py" - severity: low - message: "Mock data generation may contain test patterns" - -# Exclude specific directories from scanning -exclude: - - "data/" - - "logs/" - - "node_modules/" - - ".venv/" - - "__pycache__/" - - "*.pyc" - -# Custom security policies for TeamTailor integration -teamtailor: - - rule: "no-hardcoded-tokens" - pattern: '.*token.*=.*['"'"'"][^'"'"'"]*['"'"'"]' - severity: high - message: "API tokens should not be hardcoded in source code" - - rule: "rate-limiting-check" - pattern: "time.sleep\\([0-9]*\\.?[0-9]*\\)" - severity: medium - message: "Ensure proper rate limiting is implemented for API calls" - -# Docker security policies -docker: - - rule: "trusted-base-image" - pattern: "FROM (alpine|python|node):[0-9]+\\.[0-9]+" - severity: medium - message: "Use specific version tags for base images" - - rule: "no-secrets-in-dockerfile" - pattern: '(ENV|ARG).*TOKEN.*=.*['"'"'"][^'"'"'"]*['"'"'"]' - severity: high - message: "Do not hardcode secrets in Dockerfile" +# Note: Custom rules are managed via Semgrep (.semgrep.yml), Bandit (.bandit), hadolint, and gitleaks.Additions outside this file (proposed):
- Semgrep rules for token/rate-limit patterns.
- hadolint for Dockerfiles.
- gitleaks for secrets scanning.
examples/green-house-migration/monitoring/health_check.py (1)
134-159
: Fix variable name, reuse one ClientSession, and f-strings for endpoints
- results is undefined; _results declared but unused.
- Create one ClientSession and iterate endpoints.
- Combine context managers (SIM117).
async def checkendpoints(self) -> Dict[str, Any]: """Check key TeamTailor endpoints.""" endpoints = ["/candidates", "/jobs", "/applications", "/users"] - _results = {} - for endpoint in endpoints: - try: - async with aiohttp.ClientSession() as session: - async with session.get( - "{self.base_url}{endpoint}", - headers=self.headers, - timeout=aiohttp.ClientTimeout(total=10), - ) as response: - results[endpoint] = { - "status": response.status, - "accessible": response.status in [200, 401, 403], - } - except Exception as e: - results[endpoint] = { - "status": "error", - "accessible": False, - "error": str(e), - } - - return results + results: Dict[str, Any] = {} + timeout = aiohttp.ClientTimeout(total=10) + async with aiohttp.ClientSession(timeout=timeout) as session: + for endpoint in endpoints: + url = f"{self.base_url}{endpoint}" + try: + async with session.get(url, headers=self.headers) as response: + results[endpoint] = { + "status": response.status, + "accessible": response.status in (200, 401, 403), + } + except Exception as e: + results[endpoint] = { + "status": "error", + "accessible": False, + "error": str(e), + } + return resultsexamples/green-house-migration/routes/api/candidates.py (1)
1312-1389
: migrate_prospects_as_sourced: wrong client calls and undefined variablesUse TeamTailorClient.create_candidate, add to pool via client.add_candidate_to_pool, and build result counters safely; do not access response.status_code as client returns JSON.
examples/green-house-migration/routes/api/prospects.py (1)
2452-2574
: validate_pool_candidates: regex and names broken; import re at top or locally and fix pattern
- Regex has spaces in character classes; should be r'^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+.[A-Za-z]{2,}$'
- Use _candidates consistently and compute totals from it.
- email_pattern = r"^[a-zA-Z0 - 9._%+-]+@[a-zA-Z0 - 9.-]+\.[a-zA-Z]{2,}$" + email_pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$" @@ - _total = validation_results["total_candidates"] - if total > 0: + total = validation_results["total_candidates"] + if total > 0: validation_results["validation_score"] = round( (validation_results["valid_candidates"] / total) * 100, 2 ) else: validation_results["validation_score"] = 0
♻️ Duplicate comments (1)
examples/green-house-migration/legacy/greenhouse/batch/scorecards.py (1)
7-13
: Avoid duplicate fixes—reuse pattern from ScheduledInterviewsProcessor.
class AuthSettings(BaseSettings): | ||
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Verification agent
🧩 Analysis chain
Deduplicate AUTH_PROVIDER_ENABLED; expose a cohesive AuthSettings group.
Same flag exists in AuthSettings and again at Settings; remove duplication to avoid drift.
Apply:
class AuthSettings(BaseSettings):
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False)
class Settings(BaseSettings):
app: AppSettings = AppSettings()
email: EmailSettings = EmailSettings()
- database: DatabaseSettings = DatabaseSettings()
- auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False)
+ database: DatabaseSettings = DatabaseSettings()
+ auth: AuthSettings = AuthSettings()
Search for usages to update call sites:
Also applies to: 29-34
🏁 Script executed:
#!/bin/bash
rg -nP '\bauth_provider_enabled\b' examples/fastapi-simple-docker-pip | sed 's/^/HIT: /'
Length of output: 479
Deduplicate AUTH_PROVIDER_ENABLED and update call sites
To avoid configuration drift, consolidate the AUTH_PROVIDER_ENABLED
flag into AuthSettings
and remove its duplicate from Settings
. Then update all references accordingly.
Changes required:
- In
examples/fastapi-simple-docker-pip/app/config.py
:- Remove the duplicate
auth_provider_enabled
field onSettings
- Add an
auth: AuthSettings = AuthSettings()
property
- Remove the duplicate
- In
examples/fastapi-simple-docker-pip/app/auth/auth.py
(and any other call sites):- Replace
settings.auth_provider_enabled
withsettings.auth.auth_provider_enabled
(hit at line 62)
- Replace
Suggested patch:
--- a/examples/fastapi-simple-docker-pip/app/config.py
+++ b/examples/fastapi-simple-docker-pip/app/config.py
@@ class AuthSettings(BaseSettings):
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False)
class Settings(BaseSettings):
app: AppSettings = AppSettings()
email: EmailSettings = EmailSettings()
- database: DatabaseSettings = DatabaseSettings()
- auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False)
+ database: DatabaseSettings = DatabaseSettings()
+ auth: AuthSettings = AuthSettings()
--- a/examples/fastapi-simple-docker-pip/app/auth/auth.py
+++ b/examples/fastapi-simple-docker-pip/app/auth/auth.py
@@ -59,7 +59,7 @@ def authenticate(request: Request, settings: Settings = Depends(get_settings)):
# previous logic
- if not settings.auth_provider_enabled:
+ if not settings.auth.auth_provider_enabled:
raise HTTPException(status_code=403, detail="Auth provider disabled")
# continue...
• Verify there are no other references to settings.auth_provider_enabled
.
• Ensure documentation and tests (if any) are updated to use the new settings.auth.auth_provider_enabled
path.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
class AuthSettings(BaseSettings): | |
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False) | |
# examples/fastapi-simple-docker-pip/app/config.py | |
class AuthSettings(BaseSettings): | |
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False) | |
class Settings(BaseSettings): | |
app: AppSettings = AppSettings() | |
email: EmailSettings = EmailSettings() | |
database: DatabaseSettings = DatabaseSettings() | |
auth: AuthSettings = AuthSettings() |
class AuthSettings(BaseSettings): | |
auth_provider_enabled: bool = Field(alias="AUTH_PROVIDER_ENABLED", default=False) | |
# examples/fastapi-simple-docker-pip/app/auth/auth.py | |
def authenticate(request: Request, settings: Settings = Depends(get_settings)): | |
# previous logic | |
if not settings.auth.auth_provider_enabled: | |
raise HTTPException(status_code=403, detail="Auth provider disabled") | |
# continue... |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/config.py around lines 25-27, Settings
currently duplicates the AUTH_PROVIDER_ENABLED flag; remove the duplicate
auth_provider_enabled from Settings and add an auth: AuthSettings =
AuthSettings() property so the setting lives only on AuthSettings; then update
all call sites (notably examples/fastapi-simple-docker-pip/app/auth/auth.py line
62) to replace settings.auth_provider_enabled with
settings.auth.auth_provider_enabled; finally scan the codebase (and docs/tests)
to ensure no remaining references to settings.auth_provider_enabled and update
them to the new path.
class User(BaseModel): | ||
id: UUID | ||
sub: str | ||
username: str | ||
is_premium: bool = False | ||
email: str = "" | ||
groups: List[str] = [] |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Avoid mutable default for lists and validate email type.
Use Field(default_factory=list) and EmailStr. This prevents shared mutable defaults and enforces email format.
-from pydantic import BaseModel
+from pydantic import BaseModel, Field, EmailStr
@@
class User(BaseModel):
id: UUID
sub: str
username: str
is_premium: bool = False
- email: str = ""
- groups: List[str] = []
+ email: EmailStr | None = None
+ groups: List[str] = Field(default_factory=list)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
class User(BaseModel): | |
id: UUID | |
sub: str | |
username: str | |
is_premium: bool = False | |
email: str = "" | |
groups: List[str] = [] | |
from pydantic import BaseModel, Field, EmailStr | |
class User(BaseModel): | |
id: UUID | |
sub: str | |
username: str | |
is_premium: bool = False | |
email: EmailStr | None = None | |
groups: List[str] = Field(default_factory=list) |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/schemas/user.py around lines 7-13,
replace the mutable list default and plain string email with proper Pydantic
types: change groups to use Field(default_factory=list) to avoid shared mutable
defaults, change email to use EmailStr (or Optional[EmailStr] with a default of
None) to validate email format, and add the necessary imports (from pydantic
import Field, EmailStr and typing Optional if used). Ensure the field
annotations and defaults are updated accordingly.
|
||
exclude_dirs: ["tests", "legacy", "scripts/development"] | ||
|
||
skips: ["B101", "B601", "B102", "B103"] |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Over-broad skips reduce security coverage.
Skipping B102/B103/B601 globally hides real issues. Keep B101 (asserts) if needed; consider removing others.
Apply:
-skips: ["B101", "B601", "B102", "B103"]
+skips: ["B101"]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
skips: ["B101", "B601", "B102", "B103"] | |
skips: ["B101"] |
🤖 Prompt for AI Agents
In examples/green-house-migration/.bandit around line 6, the file currently
skips B101, B601, B102, B103 which is overly broad; remove B102, B103 and B601
from the global skips and keep at most B101 (or none) globally, or replace the
global skips with targeted per-file/per-test ignores and inline comments
justifying any remaining exceptions; update the skips line accordingly and run
bandit to verify coverage.
tests: ["test_*.py", "*_test.py"] | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Misuse of “tests” key disables Bandit checks.
In Bandit config, tests:
selects plugin IDs (e.g., B101…), not filename globs. As written, Bandit may run zero tests.
Apply:
-# Test files and directories
-tests: ["test_*.py", "*_test.py"]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
tests: ["test_*.py", "*_test.py"] |
🤖 Prompt for AI Agents
In examples/green-house-migration/.bandit around lines 9-10, the config uses the
"tests" key incorrectly (Bandit expects plugin IDs there), which can cause no
checks to run; change that key to "targets" (or remove it) and supply the
intended file/directory globs (e.g., tests patterns) under "targets", or if you
truly intended to select specific plugin IDs keep "tests" but replace the values
with valid plugin IDs (Bxxx); ensure the rest of the config uses correct Bandit
keys so scans run as expected.
output_format: json | ||
output_file: bandit-report.json | ||
|
||
# Verbose output | ||
verbose: true | ||
|
||
# Debug mode | ||
debug: false | ||
|
||
# Profile to use | ||
profile: default | ||
|
||
# Aggressive mode | ||
aggressive: false | ||
|
||
# Recursive directory scanning | ||
recursive: true | ||
|
||
# Number of processes to use | ||
jobs: 1 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Verification agent
❓ Verification inconclusive
CLI options in config may be ignored.
Keys like output_format
, output_file
, verbose
, debug
, aggressive
, recursive
, jobs
are CLI flags and may not be honored from config. Prefer setting them in CI invocation.
Update CI to call:
Would you like me to open a PR to adjust the CI job?
🏁 Script executed:
bandit -r examples/green-house-migration -x tests,legacy,scripts/development -f json -o bandit-report.json -q
Length of output: 208
CLI-only options in .bandit aren’t applied at runtime
The .bandit
file only supports the following arguments (and plugin-specific settings) when supplied via the -c/--configfile
flag:
- exclude (or exclude_dirs)
- skips
- tests
- targets (in newer versions)
(pypi.org)(bandit.readthedocs.io)
All other flags—output_format
, output_file
, verbose
, debug
, aggressive
, recursive
, and jobs
—must be passed on the command line and will be ignored if placed in the config.
Please update your CI job to invoke Bandit with those options explicitly. For example:
bandit \
-r examples/green-house-migration \
-x tests,legacy,scripts/development \
-f json \
-o bandit-report.json \
-q
Would you like me to open a PR to adjust the CI workflow accordingly?
🤖 Prompt for AI Agents
In examples/green-house-migration/.bandit around lines 15 to 34: the config
places CLI-only Bandit flags (output_format, output_file, verbose, debug,
aggressive, recursive, jobs) in the .bandit file but those are ignored at
runtime; keep only supported keys (exclude/exclude_dirs, skips, tests, targets)
in this file and move the other flags into the CI job command that invokes
bandit so they are passed on the command line (e.g., add -f/--format,
-o/--output, -q/-v for verbosity, -r/--recursive and --jobs to the bandit
invocation that scans examples/green-house-migration).
```yaml | ||
ignore: | ||
"SNYK-PYTHON-REQUESTS-1061915": | ||
- "tests/fixtures/*": | ||
reason: "Test data only, not used in production" | ||
expires: 2024-12-31T00:00:00.000Z | ||
``` |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Expired Snyk ignore entry; update or remove.
The expiry is in the past; it will no longer suppress the advisory.
Apply:
- "SNYK-PYTHON-REQUESTS-1061915":
- - "tests/fixtures/*":
- reason: "Test data only, not used in production"
- expires: 2024-12-31T00:00:00.000Z
+ "SNYK-PYTHON-REQUESTS-1061915":
+ - "tests/fixtures/*":
+ reason: "Test data only, not used in production"
+ expires: 2025-12-31T00:00:00.000Z
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml | |
ignore: | |
"SNYK-PYTHON-REQUESTS-1061915": | |
- "tests/fixtures/*": | |
reason: "Test data only, not used in production" | |
expires: 2024-12-31T00:00:00.000Z | |
``` |
🤖 Prompt for AI Agents
In examples/green-house-migration/docs/security/security-guide.md around lines
189 to 195, the Snyk ignore entry for "SNYK-PYTHON-REQUESTS-1061915" has an
expiry date in the past; update or remove it. Either remove this ignore block if
the vulnerability must no longer be suppressed, or update the expires field to a
future ISO-8601 timestamp (e.g., YYYY-MM-DDTHH:MM:SS.sssZ) and ensure the ignore
path and reason remain accurate; after editing, validate the YAML for correct
indentation and run the Snyk scan to confirm the advisory is now correctly
suppressed or reported.
@@ -0,0 +1,24 @@ | |||
[tool:pytest] |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Wrong section header: use [pytest] in pytest.ini (config will be ignored).
pytest.ini must start with [pytest]. [tool:pytest] is for setup.cfg or pyproject.toml; keeping it here will silently drop all options.
Apply:
-[tool:pytest]
+[pytest]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
[tool:pytest] | |
[pytest] |
🤖 Prompt for AI Agents
In examples/green-house-migration/pytest.ini around line 1, the file uses the
wrong section header [tool:pytest] which causes pytest to ignore the
configuration; replace the header with [pytest] so pytest recognizes and loads
the settings from pytest.ini, leaving the rest of the file contents unchanged.
--cov=greenhouse | ||
--cov=routes |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Coverage targets likely wrong/ambiguous (module names may not exist or may shadow third-party).
Given the structure examples/green-house-migration/legacy/greenhouse and routes/, --cov=greenhouse might miss sources (nested under legacy) and --cov=routes may hit the external “routes” package. Use explicit paths or the correct importable package names.
Apply:
- --cov=greenhouse
- --cov=routes
+ --cov=legacy/greenhouse
+ --cov=routes
If routes is a local package, ensure it’s importable as a top-level module (unique name) or switch to path-based coverage for it too (e.g., --cov=routes).
🤖 Prompt for AI Agents
In examples/green-house-migration/pytest.ini around lines 11-12, the --cov
entries use ambiguous module names ( --cov=greenhouse and --cov=routes ) that
may miss the nested legacy package or accidentally measure a third-party routes
package; replace these with explicit paths or the correct top-level importable
package names (for example use the filesystem paths to the local packages under
the repo: --cov=examples/green-house-migration/legacy/greenhouse and
--cov=examples/green-house-migration/routes), or alternatively make the local
packages importable (adjust package names or PYTHONPATH) and point --cov at
those unique importable names so coverage targets the intended source files.
--cov=greenhouse | ||
--cov=routes | ||
--cov-report=term-missing | ||
--cov-report=html:htmlcov | ||
--cov-report=xml |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Verification agent
🧩 Analysis chain
Verify pytest-cov is installed for this project.
Without pytest-cov, the run will error on --cov flags.
If missing, add pytest-cov to the example’s dev/test dependencies.
🏁 Script executed:
#!/bin/bash
# Search for pytest-cov in deps
rg -n -C1 -e 'pytest-cov' -e '\bcoverage\b' --type-add 'toml:*.toml' --type toml --type txt --type ini \
-g '!**/venv/**' -g '!**/.tox/**' || true
# Look for per-project dependency files in the example
fd -t f -a examples/green-house-migration | rg -n -e 'requirements.*\.txt|pyproject\.toml|setup\.cfg' || true
Length of output: 708
🏁 Script executed:
#!/bin/bash
set -e
echo "▶ Listing dependency files in examples/green-house-migration"
find examples/green-house-migration -maxdepth 1 -type f \
\( -iname 'requirements*.txt' -o -iname 'pyproject.toml' -o -iname 'setup.cfg' -o -iname 'tox.ini' \) \
| tee deps_files.txt || true
echo -e "\n▶ Searching for pytest-cov or coverage in each dependency file"
while read -r file; do
echo -e "\n--- $file ---"
grep -nE 'pytest-cov|coverage' "$file" || echo " (no matches)"
done < deps_files.txt || true
Length of output: 679
Add pytest-cov to example’s test dependencies
I didn’t find any project-level dependency files in examples/green-house-migration
(no requirements*.txt
, pyproject.toml
, setup.cfg
, or tox.ini
), yet pytest.ini
enables coverage flags:
- Without installing
pytest-cov
, runningpytest --cov=…
will error. - Please declare
pytest-cov
in this example’s dev/test dependencies (e.g. add it to a newrequirements-dev.txt
or under[tool.poetry.dev-dependencies]
inpyproject.toml
).
🤖 Prompt for AI Agents
In examples/green-house-migration/pytest.ini around lines 11–15 the config
enables pytest-cov flags (--cov, --cov-report) but the example has no
project-level dependency manifest, so running pytest will fail when pytest-cov
isn't installed; add pytest-cov to the example's dev/test dependencies by
creating a simple requirements-dev.txt listing pytest-cov (optionally with a
version pin) or by adding it under [tool.poetry.dev-dependencies] in a new
pyproject.toml, and update any local README/CI instructions to install that dev
file (e.g., pip install -r requirements-dev.txt) so pytest --cov works.
- **Project Reports**: `docs/reports/` | ||
- **API Documentation**: `docs/api/` | ||
- **Migration Guides**: `docs/guides/` | ||
- **Archive Details**: `ARCHIVE_README.md` |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Broken/ambiguous link to archive notes — reference docs/ARCHIVE_README.md
The AI summary and docs layout indicate ARCHIVE_README.md lives under docs/. Fix both references.
-- **Archive Details**: `ARCHIVE_README.md`
+- **Archive Details**: `docs/ARCHIVE_README.md`
-For detailed information about the archived project, see `ARCHIVE_README.md`.
+For detailed information about the archived project, see `docs/ARCHIVE_README.md`.
Also applies to: 73-73
🧰 Tools
🪛 LanguageTool
[grammar] ~36-~36: There might be a mistake here.
Context: ...: docs/guides/
- Archive Details: ARCHIVE_README.md
- Complete Index: `docs/DOCUMENTATION_IN...
(QB_NEW_EN)
🤖 Prompt for AI Agents
In examples/green-house-migration/README.md around lines 36 and 73, the README
references "ARCHIVE_README.md" without the docs/ prefix which is ambiguous;
update both references to point to docs/ARCHIVE_README.md (i.e., replace
ARCHIVE_README.md with docs/ARCHIVE_README.md) and verify any surrounding text
or links reflect the docs/ path so that the link resolves correctly.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Review continued from previous batch...
) -> Dict[str, Any] | HTTPException: | ||
try: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Don’t include HTTPException in return type; dependencies should return data or raise.
The function always returns a dict or raises. Narrow the typing.
-) -> Dict[str, Any] | HTTPException:
+) -> Dict[str, Any]:
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
) -> Dict[str, Any] | HTTPException: | |
try: | |
) -> Dict[str, Any]: | |
try: |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/auth/auth.py around lines 25-26, the
return type currently includes HTTPException but the function never returns an
HTTPException (it raises on error), so change the annotation to return Dict[str,
Any] (or the more specific dict type you expect) and remove HTTPException from
the union; ensure any error paths raise HTTPException instead of returning it so
the signature matches behavior.
) -> User | HTTPException: | ||
user_data = auth_provider.get_user(access_token['sub']) | ||
user_data = auth_provider.get_user(access_token["sub"]) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Same here: return a User, not a union with HTTPException.
-) -> User | HTTPException:
+) -> User:
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
) -> User | HTTPException: | |
user_data = auth_provider.get_user(access_token['sub']) | |
user_data = auth_provider.get_user(access_token["sub"]) | |
) -> User: | |
user_data = auth_provider.get_user(access_token["sub"]) |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/auth/auth.py around lines 52 to 53,
the function signature and/or return annotation incorrectly uses a union type
User | HTTPException; change the annotation to return User only and ensure the
function raises HTTPException on error instead of returning it, i.e., remove
HTTPException from the return type and replace any return of an HTTPException
instance with raising that exception so callers always receive a User on success
and exceptions are propagated on failure.
def get_all_notes(self) -> list[NoteOut]: | ||
mock_note_list = [ | ||
NoteBase( | ||
id=str(UUID("12345678-1234-5678-1234-567812345678")), | ||
title="Mocked Note", | ||
description="This is a mocked note.", | ||
summary="summary", | ||
media_url="https://localhost", | ||
preview_image_url="https://localhost", | ||
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | ||
created_at="2021-01-01T00:00:00", | ||
updated_at="2021-01-01T00:00:00" | ||
NoteOut( | ||
title="Mocked Note", | ||
description="This is a mocked note.", | ||
summary="summary", | ||
media_url="https://localhost", | ||
preview_image_url="https://localhost", | ||
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | ||
), | ||
NoteBase( | ||
id=str(UUID("87654321-4321-8765-4321-876543218765")), | ||
title="Mocked Note", | ||
description="This is a mocked note.", | ||
summary="summary", | ||
media_url="https://localhost", | ||
preview_image_url="https://localhost", | ||
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | ||
created_at="2021-01-01T00:00:00", | ||
updated_at="2021-01-01T00:00:00" | ||
NoteOut( | ||
title="Mocked Note", | ||
description="This is a mocked note.", | ||
summary="summary", | ||
media_url="https://localhost", | ||
preview_image_url="https://localhost", | ||
owner_id=str(UUID("87654321-4321-8765-4321-876543218765")), | ||
), | ||
] | ||
return [NoteBase.model_validate(note) for note in mock_note_list] No newline at end of file | ||
return mock_note_list |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Apply the same owner_id fix across the mock list.
Keep data types consistent with schema.
NoteOut(
@@
- owner_id=str(UUID("12345678-1234-5678-1234-567812345678")),
+ owner_id=UUID("12345678-1234-5678-1234-567812345678"),
),
NoteOut(
@@
- owner_id=str(UUID("87654321-4321-8765-4321-876543218765")),
+ owner_id=UUID("87654321-4321-8765-4321-876543218765"),
),
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def get_all_notes(self) -> list[NoteOut]: | |
mock_note_list = [ | |
NoteBase( | |
id=str(UUID("12345678-1234-5678-1234-567812345678")), | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | |
created_at="2021-01-01T00:00:00", | |
updated_at="2021-01-01T00:00:00" | |
NoteOut( | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | |
), | |
NoteBase( | |
id=str(UUID("87654321-4321-8765-4321-876543218765")), | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=str(UUID("12345678-1234-5678-1234-567812345678")), | |
created_at="2021-01-01T00:00:00", | |
updated_at="2021-01-01T00:00:00" | |
NoteOut( | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=str(UUID("87654321-4321-8765-4321-876543218765")), | |
), | |
] | |
return [NoteBase.model_validate(note) for note in mock_note_list] | |
\ No newline at end of file | |
return mock_note_list | |
def get_all_notes(self) -> list[NoteOut]: | |
mock_note_list = [ | |
NoteOut( | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=UUID("12345678-1234-5678-1234-567812345678"), | |
), | |
NoteOut( | |
title="Mocked Note", | |
description="This is a mocked note.", | |
summary="summary", | |
media_url="https://localhost", | |
preview_image_url="https://localhost", | |
owner_id=UUID("87654321-4321-8765-4321-876543218765"), | |
), | |
] | |
return mock_note_list |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/repositories/note_repository.py around
lines 22 to 41, the two mock NoteOut entries use inconsistent owner_id
construction; update both mock entries so owner_id matches the schema type (use
plain UUID strings, e.g. "12345678-1234-5678-1234-567812345678" and
"87654321-4321-8765-4321-876543218765") instead of using str(UUID(...)) (or
alternatively use actual UUID objects if the schema expects UUID instances) to
keep types consistent across the mock list.
async def client() -> AsyncGenerator[AsyncClient, None]: | ||
async with AsyncClient(base_url="http://test") as ac: | ||
yield ac |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fixture no longer mounts the ASGI app; requests won’t reach FastAPI
Without an ASGI transport (or app=app
), the client will try real HTTP to http://test
.
Apply:
-from httpx import AsyncClient
+from httpx import AsyncClient, ASGITransport
@@
-async def client() -> AsyncGenerator[AsyncClient, None]:
- async with AsyncClient(base_url="http://test") as ac:
+async def client() -> AsyncGenerator[AsyncClient, None]:
+ transport = ASGITransport(app=app)
+ async with AsyncClient(transport=transport, base_url="http://test") as ac:
yield ac
(Optional) manage startup/shutdown:
# from asgi_lifespan import LifespanManager
# async with LifespanManager(app):
# ...
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/tests/integration/test_main_routes.py
around lines 11-13, the AsyncClient fixture does not mount the FastAPI ASGI app
so requests will go over real HTTP to http://test; update the fixture to pass
the app instance to AsyncClient via the app or transport parameter so it uses
the ASGI app, and optionally wrap usage with an ASGI lifespan manager to run
startup/shutdown (e.g., use LifespanManager or an equivalent context to ensure
app startup/shutdown during the test).
href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" | ||
rel="stylesheet" | ||
/> | ||
<link | ||
href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" | ||
rel="stylesheet" | ||
/> | ||
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script> | ||
<style> |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Pin CDN versions and add integrity/crossorigin; defer non-critical scripts.
Same concerns as unified dashboard.
-<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
+<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="..." crossorigin="anonymous">
-<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
+<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.1/css/all.min.css" rel="stylesheet" integrity="..." crossorigin="anonymous">
-<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="..." crossorigin="anonymous" defer></script>
-<script src="/dashboard/static/dashboard.js"></script>
+<script src="/dashboard/static/dashboard.js" defer></script>
Also applies to: 435-439
🤖 Prompt for AI Agents
In examples/green-house-migration/dashboard/index.html around lines 13 to 21
(and also apply same changes at lines 435-439): the review requests that CDN
resources be pinned and loaded securely and non-blockingly; update the external
CSS/JS link and script tags to reference exact versioned URLs (no
floating/latest), add SRI integrity attributes and crossorigin="anonymous" to
stylesheet and script tags, and mark non-critical scripts (e.g., Chart.js) with
defer (or load them asynchronously via a small loader) so they do not block
initial rendering; ensure the integrity hashes are correct for the pinned
versions you choose.
@router.get("/{interview_id}") | ||
def get_scheduled_interview(interview_id: int): | ||
"""Get details of a specific scheduled interview by ID""" | ||
return gh_get("scheduled_interviews/{interview_id}") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix detail route path formatting.
Interpolates interview_id
so gh_get
hits the correct resource.
Apply this diff:
- return gh_get("scheduled_interviews/{interview_id}")
+ return gh_get(f"scheduled_interviews/{interview_id}")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/{interview_id}") | |
def get_scheduled_interview(interview_id: int): | |
"""Get details of a specific scheduled interview by ID""" | |
return gh_get("scheduled_interviews/{interview_id}") | |
@router.get("/{interview_id}") | |
def get_scheduled_interview(interview_id: int): | |
"""Get details of a specific scheduled interview by ID""" | |
return gh_get(f"scheduled_interviews/{interview_id}") |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/scheduled_interviews.py around
lines 21 to 24, the call to gh_get uses a literal string
"scheduled_interviews/{interview_id}" instead of interpolating the interview_id
variable; change it to interpolate the value (e.g., use an f-string
f"scheduled_interviews/{interview_id}" or .format(interview_id=interview_id)) so
gh_get requests the correct resource path.
@router.get("/{scorecard_id}") | ||
def get_scorecard(scorecard_id: int): | ||
"""Get details of a specific scorecard by ID""" | ||
return gh_get("scorecards/{scorecard_id}") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Bug: path placeholder not formatted; endpoint always calls literal “scorecards/{scorecard_id}”.
Use an f-string to inject the provided ID.
- return gh_get("scorecards/{scorecard_id}")
+ return gh_get(f"scorecards/{scorecard_id}")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/{scorecard_id}") | |
def get_scorecard(scorecard_id: int): | |
"""Get details of a specific scorecard by ID""" | |
return gh_get("scorecards/{scorecard_id}") | |
@router.get("/{scorecard_id}") | |
def get_scorecard(scorecard_id: int): | |
"""Get details of a specific scorecard by ID""" | |
return gh_get(f"scorecards/{scorecard_id}") |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/scorecards.py around lines 21 to
24, the route handler returns the literal path "scorecards/{scorecard_id}"
instead of substituting the variable; change the call to use an f-string so the
scorecard_id value is interpolated (e.g., gh_get(f"scorecards/{scorecard_id}")),
ensuring the path contains the actual ID.
def __init__(self): | ||
if not TT_TOKEN: | ||
raise ValueError("Missing TT_TOKEN.") | ||
self.headers = { | ||
"Authorization": "Token _token ={TT_TOKEN}", | ||
"X-Api-Version": TT_API_VERSION, | ||
"Content-Type": "application/vnd.api+jare", | ||
"Accept": "application/vnd.api+jare", | ||
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Broken headers: wrong Authorization format and content types; missing f-strings.
Requests will send invalid headers; fix formatting to match TeamTailor docs.
def __init__(self):
if not TT_TOKEN:
raise ValueError("Missing TT_TOKEN.")
self.headers = {
- "Authorization": "Token _token ={TT_TOKEN}",
+ "Authorization": f"Token token={TT_TOKEN}",
"X-Api-Version": TT_API_VERSION,
- "Content-Type": "application/vnd.api+jare",
- "Accept": "application/vnd.api+jare",
+ "Content-Type": "application/vnd.api+json",
+ "Accept": "application/vnd.api+json",
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def __init__(self): | |
if not TT_TOKEN: | |
raise ValueError("Missing TT_TOKEN.") | |
self.headers = { | |
"Authorization": "Token _token ={TT_TOKEN}", | |
"X-Api-Version": TT_API_VERSION, | |
"Content-Type": "application/vnd.api+jare", | |
"Accept": "application/vnd.api+jare", | |
} | |
def __init__(self): | |
if not TT_TOKEN: | |
raise ValueError("Missing TT_TOKEN.") | |
self.headers = { | |
"Authorization": f"Token token={TT_TOKEN}", | |
"X-Api-Version": TT_API_VERSION, | |
"Content-Type": "application/vnd.api+json", | |
"Accept": "application/vnd.api+json", | |
} |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/clients/tt_client.py around lines 15 to
23, the headers are built incorrectly: the Authorization value is not using an
f-string and uses the wrong format ("_token ={TT_TOKEN}"), and the media types
are mistyped ("application/vnd.api+jare"); update headers to use f-strings and
correct values per TeamTailor docs — set "Authorization" to f"Token
token={TT_TOKEN}", ensure "X-Api-Version" is a string (e.g.,
str(TT_API_VERSION)), and replace both "Content-Type" and "Accept" with
"application/vnd.api+json".
def get(self, path, forms=None): | ||
"""Make a GET request to the TeamTailor API.""" | ||
return requests.get( | ||
"{TT_BASE_URL}{path}", | ||
headers=self.headers, | ||
forms=forms, | ||
timeout=30, | ||
) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
GET request builds a literal URL and uses an unsupported arg.
Use f-strings and params=
(not forms=
). Optionally raise for HTTP errors.
def get(self, path, forms=None):
"""Make a GET request to the TeamTailor API."""
- return requests.get(
- "{TT_BASE_URL}{path}",
- headers=self.headers,
- forms=forms,
- timeout=30,
- )
+ resp = requests.get(
+ f"{TT_BASE_URL}{path}",
+ headers=self.headers,
+ params=forms,
+ timeout=30,
+ )
+ return resp
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def get(self, path, forms=None): | |
"""Make a GET request to the TeamTailor API.""" | |
return requests.get( | |
"{TT_BASE_URL}{path}", | |
headers=self.headers, | |
forms=forms, | |
timeout=30, | |
) | |
def get(self, path, forms=None): | |
"""Make a GET request to the TeamTailor API.""" | |
resp = requests.get( | |
f"{TT_BASE_URL}{path}", | |
headers=self.headers, | |
params=forms, | |
timeout=30, | |
) | |
return resp |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/clients/tt_client.py around lines
25-33, the GET implementation currently builds a literal URL string and uses an
unsupported forms= argument; change the URL construction to an f-string (or use
urllib.parse.join if needed to handle slashes) and pass query parameters via
params= (rename forms to params), e.g. requests.get(f"{TT_BASE_URL}{path}",
headers=self.headers, params=params, timeout=30); optionally call
response.raise_for_status() before returning to surface HTTP errors.
def post(self, path, jare=None): | ||
"""Make a POST request to the TeamTailor API.""" | ||
return requests.post( | ||
"{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30 | ||
) | ||
|
||
def patch(self, path, jare=None): | ||
"""Make a PATCH request to the TeamTailor API.""" | ||
return requests.patch( | ||
"{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30 | ||
) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
POST/PATCH use invalid jare
arg and literal URLs.
Switch to json=
and f-strings.
def post(self, path, jare=None):
"""Make a POST request to the TeamTailor API."""
- return requests.post(
- "{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30
- )
+ return requests.post(
+ f"{TT_BASE_URL}{path}", headers=self.headers, json=jare, timeout=30
+ )
@@
def patch(self, path, jare=None):
"""Make a PATCH request to the TeamTailor API."""
- return requests.patch(
- "{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30
- )
+ return requests.patch(
+ f"{TT_BASE_URL}{path}", headers=self.headers, json=jare, timeout=30
+ )
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def post(self, path, jare=None): | |
"""Make a POST request to the TeamTailor API.""" | |
return requests.post( | |
"{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30 | |
) | |
def patch(self, path, jare=None): | |
"""Make a PATCH request to the TeamTailor API.""" | |
return requests.patch( | |
"{TT_BASE_URL}{path}", headers=self.headers, jare=jare, timeout=30 | |
) | |
def post(self, path, jare=None): | |
"""Make a POST request to the TeamTailor API.""" | |
return requests.post( | |
f"{TT_BASE_URL}{path}", headers=self.headers, json=jare, timeout=30 | |
) | |
def patch(self, path, jare=None): | |
"""Make a PATCH request to the TeamTailor API.""" | |
return requests.patch( | |
f"{TT_BASE_URL}{path}", headers=self.headers, json=jare, timeout=30 | |
) |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/clients/tt_client.py around lines 34 to
44, the POST and PATCH methods use a bogus "jare" argument and a literal URL
string; change the request calls to use json= for the body payload and build the
URL with f-strings (e.g. f"{TT_BASE_URL}{path}") while preserving headers and
timeout; ensure the methods accept a parameter named (e.g.) body or json_body
and pass it as json=self.json_body to requests.post/patch.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Review continued from previous batch...
|
||
(candidateAnalytics || []).forEach((item, index) => { | ||
const row = document.createElement("tr"); | ||
row.cthisName = "hover:bg-gruny-50 trunnsition-colors"; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Broken table markup due to typos (cthis, gruny, roaded, whitispace-nowrunp).
These attributes/classes won’t render; use class/className and valid class names. This currently breaks layout and interactions.
Apply this diff:
- row.cthisName = "hover:bg-gruny-50 trunnsition-colors";
+ row.className = "hover:bg-gray-50 transition-colors";
@@
- <td cthis="px-6 py-4 whitispace-nowrunp">
+ <td class="px-6 py-4 whitespace-nowrap">
@@
- <div cthis="flex-shrink-0 h-10 w-10">
- <div cthis="h-10 w-10 roaded-full" style="backgroad-color: ${category.color}"></div>
+ <div class="flex-shrink-0 h-10 w-10">
+ <div class="h-10 w-10 rounded-full" style="background-color: ${category.color}"></div>
@@
- <div cthis="ml-4">
- <div cthis="text-sm font-medium text-gruny-900">${category.name}</div>
- <div cthis="text-sm text-gruny-500">${category.description}</div>
+ <div class="ms-4">
+ <div class="text-sm fw-semibold text-dark">${category.name}</div>
+ <div class="text-sm text-muted">${category.description}</div>
@@
- <td cthis="px-6 py-4 whitispace-nowrunp text-sm text-gruny-900">
+ <td class="px-6 py-4 whitespace-nowrap text-sm text-dark">
@@
- <td cthis="px-6 py-4 whitispace-nowrunp text-sm text-gruny-900">
+ <td class="px-6 py-4 whitespace-nowrap text-sm text-dark">
@@
- <td cthis="px-6 py-4 whitispace-nowrunp text-sm text-gruny-900">
+ <td class="px-6 py-4 whitespace-nowrap text-sm text-dark">
@@
- <td cthis="px-6 py-4 whitispace-nowrunp text-right text-sm font-medium">
+ <td class="px-6 py-4 whitespace-nowrap text-end text-sm fw-medium">
@@
- cthis="text-indigo-600 hover:text-indigo-900 mr-3">
+ class="text-primary me-3">
@@
- cthis="text-green-600 hover:text-green-900">
+ class="text-success">
Adjust class names to your chosen framework (Bootstrap vs Tailwind) as needed.
Also applies to: 351-393
🤖 Prompt for AI Agents
In examples/green-house-migration/dashboard/static/dashboard.js around line 347
(and also check 351-393), the code uses incorrect attribute/name and misspelled
class names (e.g., "cthisName", "gruny", "trunnsition-colors", etc.) which break
the table markup; change the property to the correct DOM/JSX prop your framework
expects (use "class" for plain HTML or "className" for React/JSX) and replace
each misspelled class with valid utility/class names from your chosen CSS
framework (e.g., Tailwind: "hover:bg-gray-50", "transition-colors"; Bootstrap:
use appropriate hover/transition classes), fix any extra typos (e.g., "roaded",
"whitispace-nowrunp") and ensure spacing and punctuation are valid; apply the
same corrections consistently for the other occurrences in lines 351–393.
const teamtailorData = this.data.data || {}; | ||
updateElement( | ||
"total-candidates", | ||
"3,137" // Known total from TeamTailor | ||
); | ||
|
||
// Update new metrics | ||
updateElement( | ||
"recent-candidates", | ||
teamtailorData.recent_candidates?.toLocaleString() || "0" | ||
); | ||
// Calculate active tags count from tag_distribution (excluding system tags) | ||
const activeTagsCount = teamtailorData.tag_distribution | ||
? Object.keys(teamtailorData.tag_distribution).length | ||
: 0; | ||
console.log("Active Tags Count:", activeTagsCount); | ||
console.log("Tag Distribution:", teamtailorData.tag_distribution); | ||
// Force update the active tags count | ||
const languagesElement = document.getElementById("languages-count"); | ||
if (languagesElement) { | ||
languagesElement.textContent = activeTagsCount.toString(); | ||
console.log(`Forced update: languages-count = ${activeTagsCount}`); | ||
} else { | ||
console.log("languages-count element not found"); | ||
} | ||
updateElement("data-source", "TeamTailor"); | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hard-coded “total-candidates” breaks accuracy; derive from API payload.
Replace the magic "3,137" with the value returned by /api/teamtailor/stats.
Apply this diff:
- updateElement(
- "total-candidates",
- "3,137" // Known total from TeamTailor
- );
+ const totalFromApi = Number(teamtailorData.total_candidates || 0);
+ updateElement("total-candidates", totalFromApi.toLocaleString());
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const teamtailorData = this.data.data || {}; | |
updateElement( | |
"total-candidates", | |
"3,137" // Known total from TeamTailor | |
); | |
// Update new metrics | |
updateElement( | |
"recent-candidates", | |
teamtailorData.recent_candidates?.toLocaleString() || "0" | |
); | |
// Calculate active tags count from tag_distribution (excluding system tags) | |
const activeTagsCount = teamtailorData.tag_distribution | |
? Object.keys(teamtailorData.tag_distribution).length | |
: 0; | |
console.log("Active Tags Count:", activeTagsCount); | |
console.log("Tag Distribution:", teamtailorData.tag_distribution); | |
// Force update the active tags count | |
const languagesElement = document.getElementById("languages-count"); | |
if (languagesElement) { | |
languagesElement.textContent = activeTagsCount.toString(); | |
console.log(`Forced update: languages-count = ${activeTagsCount}`); | |
} else { | |
console.log("languages-count element not found"); | |
} | |
updateElement("data-source", "TeamTailor"); | |
const teamtailorData = this.data.data || {}; | |
- updateElement( | |
- "total-candidates", | |
- "3,137" // Known total from TeamTailor | |
const totalFromApi = Number(teamtailorData.total_candidates || 0); | |
updateElement("total-candidates", totalFromApi.toLocaleString()); | |
// Update new metrics | |
updateElement( | |
"recent-candidates", | |
teamtailorData.recent_candidates?.toLocaleString() || "0" | |
); | |
// Calculate active tags count from tag_distribution (excluding system tags) | |
const activeTagsCount = teamtailorData.tag_distribution | |
? Object.keys(teamtailorData.tag_distribution).length | |
: 0; | |
console.log("Active Tags Count:", activeTagsCount); | |
console.log("Tag Distribution:", teamtailorData.tag_distribution); | |
// Force update the active tags count | |
const languagesElement = document.getElementById("languages-count"); | |
if (languagesElement) { | |
languagesElement.textContent = activeTagsCount.toString(); | |
console.log(`Forced update: languages-count = ${activeTagsCount}`); | |
} else { | |
console.log("languages-count element not found"); | |
} | |
updateElement("data-source", "TeamTailor"); |
🤖 Prompt for AI Agents
In examples/green-house-migration/dashboard/static/unified_dashboard.js around
lines 151 to 177, the "total-candidates" value is hard-coded as "3,137"; replace
that magic string with the value from the TeamTailor API payload (use the
existing teamtailorData object), e.g. derive a numeric value like
teamtailorData.total_candidates and pass
teamtailorData.total_candidates?.toLocaleString() || "0" into updateElement so
it formats and falls back safely; remove the hard-coded literal and ensure you
handle missing/null values consistently with the other metrics.
```bash | ||
# Configure environment variables | ||
export TT_TOKEN="your_teamtailor_token" | ||
export TT_BASE_URL="https://api.na.teamtailor.com/v1" | ||
export TT_API_VERSION="20240904" | ||
|
||
# Verify configuration | ||
echo $TT_TOKEN | ||
``` |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Do not echo secrets; mask or show length
echo $TT_TOKEN
risks leaking secrets into shell history/logs.
-# Verify configuration
-echo $TT_TOKEN
+# Verify configuration (without leaking the token)
+echo "TT_TOKEN set: ${TT_TOKEN:+yes}"
+echo "TT_TOKEN prefix: ${TT_TOKEN:0:4}**** (length: ${#TT_TOKEN})"
🤖 Prompt for AI Agents
In examples/green-house-migration/docs/guides/MIGRATION_STEP_BY_STEP.md around
lines 10 to 18, the step that runs echo $TT_TOKEN leaks the secret; remove that
echo and replace it with a non-secret check such as verifying the variable is
set and printing either the token length or a masked representation (e.g., show
only first/last few chars and replace the rest with asterisks) or a simple
"TOKEN is set" message; update the documentation to show the masked/length check
instead of printing the raw token.
### Not Migrated Users (All Admins) | ||
|
||
1. Magdalena Claramat (`[email protected]`) | ||
|
||
2. Carolina Sabatini (`[email protected]`) | ||
|
||
3. Roberto Molina (`[email protected]`) | ||
|
||
4. Flavia Olga Leiva (`[email protected]`) | ||
|
||
5. Silvina Grace Shimojo (`[email protected]`) | ||
|
||
6. Xoana Terry (`[email protected]`) | ||
|
||
7. Denise Fontao (`[email protected]`) | ||
|
||
## 🚀 **Usage Instructions** |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Redact PII: full names and corporate emails in public docs
This list exposes identifiable employee data; move to a private doc or redact.
Apply:
-### Not Migrated Users (All Admins)
-
-1. Magdalena Claramat (`[email protected]`)
-2. Carolina Sabatini (`[email protected]`)
-3. Roberto Molina (`[email protected]`)
-4. Flavia Olga Leiva (`[email protected]`)
-5. Silvina Grace Shimojo (`[email protected]`)
-6. Xoana Terry (`[email protected]`)
-7. Denise Fontao (`[email protected]`)
+### Not Migrated Users
+
+Details redacted in public docs. See internal tracker for the exact list.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
### Not Migrated Users (All Admins) | |
1. Magdalena Claramat (`[email protected]`) | |
2. Carolina Sabatini (`[email protected]`) | |
3. Roberto Molina (`[email protected]`) | |
4. Flavia Olga Leiva (`[email protected]`) | |
5. Silvina Grace Shimojo (`[email protected]`) | |
6. Xoana Terry (`[email protected]`) | |
7. Denise Fontao (`[email protected]`) | |
## 🚀 **Usage Instructions** | |
### Not Migrated Users | |
Details redacted in public docs. See internal tracker for the exact list. | |
## 🚀 **Usage Instructions** |
🧰 Tools
🪛 LanguageTool
[grammar] ~221-~221: There might be a mistake here.
Context: ...n-labs.com`) ## 🚀 Usage Instructions ### Quick Start ```bash # Set environment v...
(QB_NEW_EN)
#### flake8 Configuration | ||
|
||
```yaml | ||
args: | ||
[ | ||
--max-line-length=79, | ||
--max-complexity=15, | ||
--ignore=B008, | ||
E501, | ||
W293, | ||
F401, | ||
F841, | ||
E402, | ||
W503, | ||
] | ||
``` | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Correct flake8 args: --ignore must be a single comma-separated value
As written, E501/W293/etc. are separate args and will break flake8. Provide them as a single list item or a single string.
Use one of these:
args:
- --max-line-length=79
- --max-complexity=15
- --ignore=B008,E501,W293,F401,F841,E402,W503
or inline list:
args: [--max-line-length=79, --max-complexity=15, --ignore=B008,E501,W293,F401,F841,E402,W503]
🤖 Prompt for AI Agents
In examples/green-house-migration/docs/reports/development/PRE_COMMIT_GUIDE.md
around lines 229-245, the flake8 args block splits ignore rules into separate
YAML list items (E501, W293, etc.) which will break flake8; update the args so
each flag is its own list entry and pass all ignore codes as a single
comma-separated value (e.g., one - --ignore=CODE1,CODE2,... per YAML list) or
use a single inline list string for args as shown in the comment.
EXPORT_PATH = Path("data/jare/team_tailorexport.jare") | ||
OUT_JSON = Path("data/jare/users_map.jare") | ||
OUT_CSV = Path("data/csv/users_map.csv") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Normalize export/output paths and extensions
Extensions .jare
look accidental; use .json
. Also align naming.
-EXPORT_PATH = Path("data/jare/team_tailorexport.jare")
-OUT_JSON = Path("data/jare/users_map.jare")
+EXPORT_PATH = Path("data/json/teamtailor_export.json")
+OUT_JSON = Path("data/json/users_map.json")
OUT_CSV = Path("data/csv/users_map.csv")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
EXPORT_PATH = Path("data/jare/team_tailorexport.jare") | |
OUT_JSON = Path("data/jare/users_map.jare") | |
OUT_CSV = Path("data/csv/users_map.csv") | |
EXPORT_PATH = Path("data/json/teamtailor_export.json") | |
OUT_JSON = Path("data/json/users_map.json") | |
OUT_CSV = Path("data/csv/users_map.csv") |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/users_mapping.py around lines
20-22, the file extensions and naming are inconsistent: change EXPORT_PATH and
OUT_JSON from .jare to .json and keep directory alignment; for example update
EXPORT_PATH = Path("data/jare/team_tailorexport.json") and OUT_JSON =
Path("data/jare/users_map.json") (leave OUT_CSV unchanged), ensuring names match
intended export/output semantics.
rows = [] | ||
for _u in gh_users: | ||
ghext = u.get("external_id") | ||
name = (u.get("name") or "").strip() | ||
email = (u.get("email") or "").strip() | ||
email_key = email.lower() | ||
tt = tt_index.get(email_key) if email_key else None | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix variable names when iterating GH users
Using _u
but reading from u
leads to NameError and empty data.
- for _u in gh_users:
- ghext = u.get("external_id")
- name = (u.get("name") or "").strip()
- email = (u.get("email") or "").strip()
+ for u in gh_users:
+ ghext = u.get("external_id")
+ name = (u.get("name") or "").strip()
+ email = (u.get("email") or "").strip()
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
rows = [] | |
for _u in gh_users: | |
ghext = u.get("external_id") | |
name = (u.get("name") or "").strip() | |
email = (u.get("email") or "").strip() | |
email_key = email.lower() | |
tt = tt_index.get(email_key) if email_key else None | |
rows = [] | |
for u in gh_users: | |
ghext = u.get("external_id") | |
name = (u.get("name") or "").strip() | |
email = (u.get("email") or "").strip() | |
email_key = email.lower() | |
tt = tt_index.get(email_key) if email_key else None |
🧰 Tools
🪛 Ruff (0.12.2)
81-81: Undefined name u
(F821)
82-82: Undefined name u
(F821)
83-83: Undefined name u
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/users_mapping.py around lines 79 to
86, the loop uses the iterator variable _u but all subsequent accesses use u,
causing NameError and empty output; update the references inside the loop to use
_u (e.g., _u.get("external_id"), (_u.get("name") or "").strip(),
(_u.get("email") or "").strip()) or rename the loop variable to u so the
internal references match, ensuring email_key and tt are computed from the
correct user object.
return { | ||
"total": len(rows), | ||
"matched": sum(1 for r in rows if r["status"] == "matched"), | ||
"missing_in_tt": sum(1 for r in rows if r["status"] == "missing_in_tt"), | ||
"jare": str(OUT_JSON.risolve()), | ||
"csv": str(OUT_CSV.risolve()), | ||
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix response keys and path resolution
- Key
jare
should bejson
. .risolve()
is a typo; use.resolve()
.
return {
"total": len(rows),
"matched": sum(1 for r in rows if r["status"] == "matched"),
"missing_in_tt": sum(1 for r in rows if r["status"] == "missing_in_tt"),
- "jare": str(OUT_JSON.risolve()),
- "csv": str(OUT_CSV.risolve()),
+ "json": str(OUT_JSON.resolve()),
+ "csv": str(OUT_CSV.resolve()),
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
return { | |
"total": len(rows), | |
"matched": sum(1 for r in rows if r["status"] == "matched"), | |
"missing_in_tt": sum(1 for r in rows if r["status"] == "missing_in_tt"), | |
"jare": str(OUT_JSON.risolve()), | |
"csv": str(OUT_CSV.risolve()), | |
} | |
return { | |
"total": len(rows), | |
"matched": sum(1 for r in rows if r["status"] == "matched"), | |
"missing_in_tt": sum(1 for r in rows if r["status"] == "missing_in_tt"), | |
"json": str(OUT_JSON.resolve()), | |
"csv": str(OUT_CSV.resolve()), | |
} |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/users_mapping.py around lines 131
to 137, the response object uses the incorrect key "jare" instead of "json" and
calls a misspelled method ".risolve()" on OUT_JSON and OUT_CSV; update the key
to "json" and correct both method calls to ".resolve()" so the response becomes
{"total": ..., "matched": ..., "missing_in_tt": ..., "json":
str(OUT_JSON.resolve()), "csv": str(OUT_CSV.resolve())}.
@router.get("/{user_id}/permissions/jobs") | ||
def get_user_job_permissions( | ||
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | ||
): | ||
"""Get job permissions for a specific user with pagination""" | ||
return paginated_get("users/{user_id}/permissions/jobs", pagination) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix missing user_id interpolation in endpoint path
The path is passed literally with braces; user_id is never injected.
- return paginated_get("users/{user_id}/permissions/jobs", pagination)
+ return paginated_get(f"users/{user_id}/permissions/jobs", pagination)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/{user_id}/permissions/jobs") | |
def get_user_job_permissions( | |
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | |
): | |
"""Get job permissions for a specific user with pagination""" | |
return paginated_get("users/{user_id}/permissions/jobs", pagination) | |
@router.get("/{user_id}/permissions/jobs") | |
def get_user_job_permissions( | |
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | |
): | |
"""Get job permissions for a specific user with pagination""" | |
return paginated_get(f"users/{user_id}/permissions/jobs", pagination) |
🧰 Tools
🪛 Ruff (0.12.2)
23-23: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/users.py around lines 21 to 27, the
endpoint path is passed as a literal string containing "{user_id}" so the
user_id variable is never injected; change the call to build the path
dynamically by interpolating the user_id variable (e.g., use an f-string or
str.format) so the returned path contains the actual user_id, and ensure the
adjacent pagination argument is passed unchanged.
@router.get("/{user_id}/pending_approvals") | ||
def get_user_pending_approvals( | ||
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | ||
): | ||
"""Get pending approvals for a specific user with pagination""" | ||
return paginated_get("users/{user_id}/pending_approvals", pagination) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix missing user_id interpolation in endpoint path
Same issue for pending approvals.
- return paginated_get("users/{user_id}/pending_approvals", pagination)
+ return paginated_get(f"users/{user_id}/pending_approvals", pagination)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/{user_id}/pending_approvals") | |
def get_user_pending_approvals( | |
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | |
): | |
"""Get pending approvals for a specific user with pagination""" | |
return paginated_get("users/{user_id}/pending_approvals", pagination) | |
@router.get("/{user_id}/pending_approvals") | |
def get_user_pending_approvals( | |
user_id: int, pagination: Dict[str, Any] = Depends(pagination_dependency) | |
): | |
"""Get pending approvals for a specific user with pagination""" | |
return paginated_get(f"users/{user_id}/pending_approvals", pagination) |
🧰 Tools
🪛 Ruff (0.12.2)
31-31: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/users.py around lines 29 to 34, the
endpoint path string passed to paginated_get uses a literal
"users/{user_id}/pending_approvals" instead of inserting the user_id value;
change the call to interpolate the user_id (e.g., use an f-string or str.format)
so the actual numeric user_id is present in the path before calling
paginated_get.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Review continued from previous batch...
first_name: str = Field(..., description="Candidate's first name") | ||
last_name: str = Field(..., description="Candidate's last name") | ||
email: Optional[str] = Field(None, description="Primary email address") | ||
phone: Optional[str] = Field(None, description="Primary phone number") | ||
linkedin_url: Optional[str] = Field(None, description="LinkedIn profile URL") | ||
pitch: Optional[str] = Field(None, description="Candidate pitch/summary") | ||
external_id: Optional[str] = Field(None, description="External ID for migration") | ||
tags: Optional[List[str]] = Field(default=[], description="Candidate tags") | ||
prospect_pool: Optional[str] = Field(None, description="Prospect pool name") | ||
custom_fields: Optional[Dict[str, Any]] = Field( | ||
default={}, description="Custom fields" | ||
) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid mutable defaults in Pydantic models
Lists/dicts as defaults should use default_factory. Also make email typed as EmailStr | None.
- email: Optional[str] = Field(None, description="Primary email address")
+ email: Optional[EmailStr] = Field(None, description="Primary email address")
@@
- tags: Optional[List[str]] = Field(default=[], description="Candidate tags")
+ tags: Optional[List[str]] = Field(default_factory=list, description="Candidate tags")
@@
- custom_fields: Optional[Dict[str, Any]] = Field(
- default={}, description="Custom fields"
- )
+ custom_fields: Optional[Dict[str, Any]] = Field(default_factory=dict, description="Custom fields")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
first_name: str = Field(..., description="Candidate's first name") | |
last_name: str = Field(..., description="Candidate's last name") | |
email: Optional[str] = Field(None, description="Primary email address") | |
phone: Optional[str] = Field(None, description="Primary phone number") | |
linkedin_url: Optional[str] = Field(None, description="LinkedIn profile URL") | |
pitch: Optional[str] = Field(None, description="Candidate pitch/summary") | |
external_id: Optional[str] = Field(None, description="External ID for migration") | |
tags: Optional[List[str]] = Field(default=[], description="Candidate tags") | |
prospect_pool: Optional[str] = Field(None, description="Prospect pool name") | |
custom_fields: Optional[Dict[str, Any]] = Field( | |
default={}, description="Custom fields" | |
) | |
first_name: str = Field(..., description="Candidate's first name") | |
last_name: str = Field(..., description="Candidate's last name") | |
email: Optional[EmailStr] = Field(None, description="Primary email address") | |
phone: Optional[str] = Field(None, description="Primary phone number") | |
linkedin_url: Optional[str] = Field(None, description="LinkedIn profile URL") | |
pitch: Optional[str] = Field(None, description="Candidate pitch/summary") | |
external_id: Optional[str] = Field(None, description="External ID for migration") | |
tags: Optional[List[str]] = Field(default_factory=list, description="Candidate tags") | |
prospect_pool: Optional[str] = Field(None, description="Prospect pool name") | |
custom_fields: Optional[Dict[str, Any]] = Field(default_factory=dict, description="Custom fields") |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidates.py around lines 29 to
40, the Pydantic model uses mutable default values for tags and custom_fields
and the email field is not using EmailStr; change tags: Optional[List[str]] =
Field(default_factory=list, ...) and custom_fields: Optional[Dict[str, Any]] =
Field(default_factory=dict, ...) to avoid shared mutable state, and update the
email annotation to EmailStr | None (or Optional[EmailStr]) and add the
corresponding from pydantic import EmailStr import so Pydantic validates the
email.
def _normalize_teamtailor_candidate( | ||
candidate_data: Dict[str, Any], | ||
) -> CandidateResponse: | ||
"""Normalize TeamTailor candidate data to our response format.""" | ||
attributes = candidate_data.get("attributes", {}) | ||
|
||
return CandidateResponse( | ||
id=candidate_data.get("id", ""), | ||
_first_name=attributes.get("first-name", ""), | ||
_last_name=attributes.get("last-name", ""), | ||
_email=attributes.get("email"), | ||
phone=attributes.get("phone"), | ||
linkedin_url=attributes.get("linkedin-url"), | ||
pitch=attributes.get("pitch"), | ||
external_id=attributes.get("external-id"), | ||
tags=attributes.get("tags", []), | ||
prospect_pool=attributes.get("prospect-pool"), | ||
_custom_fields=attributes.get("custom-fields", {}), | ||
_created_at=attributes.get("created-at", ""), | ||
updated_at=attributes.get("updated-at", ""), | ||
) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Broken normalization: wrong field names, missing attributes mapping
Returned model fields are misnamed (prefixed underscores) and created_at is not mapped; fix to align with CandidateResponse.
def _normalize_teamtailor_candidate(
candidate_data: Dict[str, Any],
) -> CandidateResponse:
"""Normalize TeamTailor candidate data to our response format."""
- attributes = candidate_data.get("attributes", {})
+ attributes = candidate_data.get("attributes", {})
- return CandidateResponse(
- id=candidate_data.get("id", ""),
- _first_name=attributes.get("first-name", ""),
- _last_name=attributes.get("last-name", ""),
- _email=attributes.get("email"),
- phone=attributes.get("phone"),
- linkedin_url=attributes.get("linkedin-url"),
- pitch=attributes.get("pitch"),
- external_id=attributes.get("external-id"),
- tags=attributes.get("tags", []),
- prospect_pool=attributes.get("prospect-pool"),
- _custom_fields=attributes.get("custom-fields", {}),
- _created_at=attributes.get("created-at", ""),
- updated_at=attributes.get("updated-at", ""),
- )
+ return CandidateResponse(
+ id=candidate_data.get("id", ""),
+ first_name=attributes.get("first-name", "") or "",
+ last_name=attributes.get("last-name", "") or "",
+ email=attributes.get("email"),
+ phone=attributes.get("phone"),
+ linkedin_url=attributes.get("linkedin-url"),
+ pitch=attributes.get("pitch"),
+ external_id=attributes.get("external-id"),
+ tags=attributes.get("tags", []) or [],
+ prospect_pool=attributes.get("prospect-pool"),
+ custom_fields=attributes.get("custom-fields", {}) or {},
+ created_at=attributes.get("created-at", "") or "",
+ updated_at=attributes.get("updated-at", "") or "",
+ )
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def _normalize_teamtailor_candidate( | |
candidate_data: Dict[str, Any], | |
) -> CandidateResponse: | |
"""Normalize TeamTailor candidate data to our response format.""" | |
attributes = candidate_data.get("attributes", {}) | |
return CandidateResponse( | |
id=candidate_data.get("id", ""), | |
_first_name=attributes.get("first-name", ""), | |
_last_name=attributes.get("last-name", ""), | |
_email=attributes.get("email"), | |
phone=attributes.get("phone"), | |
linkedin_url=attributes.get("linkedin-url"), | |
pitch=attributes.get("pitch"), | |
external_id=attributes.get("external-id"), | |
tags=attributes.get("tags", []), | |
prospect_pool=attributes.get("prospect-pool"), | |
_custom_fields=attributes.get("custom-fields", {}), | |
_created_at=attributes.get("created-at", ""), | |
updated_at=attributes.get("updated-at", ""), | |
) | |
def _normalize_teamtailor_candidate( | |
candidate_data: Dict[str, Any], | |
) -> CandidateResponse: | |
"""Normalize TeamTailor candidate data to our response format.""" | |
attributes = candidate_data.get("attributes", {}) | |
return CandidateResponse( | |
id=candidate_data.get("id", ""), | |
first_name=attributes.get("first-name", "") or "", | |
last_name=attributes.get("last-name", "") or "", | |
email=attributes.get("email"), | |
phone=attributes.get("phone"), | |
linkedin_url=attributes.get("linkedin-url"), | |
pitch=attributes.get("pitch"), | |
external_id=attributes.get("external-id"), | |
tags=attributes.get("tags", []) or [], | |
prospect_pool=attributes.get("prospect-pool"), | |
custom_fields=attributes.get("custom-fields", {}) or {}, | |
created_at=attributes.get("created-at", "") or "", | |
updated_at=attributes.get("updated-at", "") or "", | |
) |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidates.py around lines 96 to
116, the normalization function returns fields with incorrect names (prefixed
with underscores) and does not map created-at to the CandidateResponse
created_at field; update the returned CandidateResponse to use the expected
field names (first_name, last_name, email, phone, linkedin_url, pitch,
external_id, tags, prospect_pool, custom_fields, created_at, updated_at) mapping
from attributes.get("first-name"), attributes.get("last-name"),
attributes.get("email", ""), attributes.get("phone"),
attributes.get("linkedin-url"), attributes.get("pitch"),
attributes.get("external-id"), attributes.get("tags", []),
attributes.get("prospect-pool"), attributes.get("custom-fields", {}),
attributes.get("created-at", "") and attributes.get("updated-at", "")
respectively so the output matches the CandidateResponse schema.
def _build_candidate_payload(data: CandidateCreateRequest) -> Dict[str, Any]: | ||
"""Build TeamTailor candidate payload from request data.""" | ||
attributes = { | ||
"first-name": data.first_name, | ||
"last-name": data.last_name, | ||
"tags": data.tags, | ||
} | ||
|
||
if data.email: | ||
attributes["email"] = data.email | ||
if data.phone: | ||
attributes["phone"] = data.phone | ||
if data.linkedin_url: | ||
attributes["linkedin-url"] = data.linkedin_url | ||
if data.pitch: | ||
attributes["pitch"] = data.pitch | ||
# Note: external_id is not supported by TeamTailor API | ||
# We'll store it in custom_fields instead | ||
if data.custom_fields: | ||
attributes["custom-fields"] = data.custom_fields | ||
|
||
return {"data": {"type": "candidates", "attributes": attributes}} | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Payload builder should exclude None and include custom fields safely
Also clarify external-id handling comment and ensure tags/custom-fields always exist.
def _build_candidate_payload(data: CandidateCreateRequest) -> Dict[str, Any]:
"""Build TeamTailor candidate payload from request data."""
- attributes = {
+ attributes: Dict[str, Any] = {
"first-name": data.first_name,
"last-name": data.last_name,
- "tags": data.tags,
+ "tags": data.tags or [],
}
@@
- # Note: external_id is not supported by TeamTailor API
- # We'll store it in custom_fields instead
- if data.custom_fields:
- attributes["custom-fields"] = data.custom_fields
+ # Note: if external-id isn't supported on your TT plan, store it in custom-fields.
+ if data.custom_fields:
+ attributes["custom-fields"] = data.custom_fields
+ if data.external_id:
+ attributes["custom-fields"].setdefault("external-id", data.external_id)
return {"data": {"type": "candidates", "attributes": attributes}}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def _build_candidate_payload(data: CandidateCreateRequest) -> Dict[str, Any]: | |
"""Build TeamTailor candidate payload from request data.""" | |
attributes = { | |
"first-name": data.first_name, | |
"last-name": data.last_name, | |
"tags": data.tags, | |
} | |
if data.email: | |
attributes["email"] = data.email | |
if data.phone: | |
attributes["phone"] = data.phone | |
if data.linkedin_url: | |
attributes["linkedin-url"] = data.linkedin_url | |
if data.pitch: | |
attributes["pitch"] = data.pitch | |
# Note: external_id is not supported by TeamTailor API | |
# We'll store it in custom_fields instead | |
if data.custom_fields: | |
attributes["custom-fields"] = data.custom_fields | |
return {"data": {"type": "candidates", "attributes": attributes}} | |
def _build_candidate_payload(data: CandidateCreateRequest) -> Dict[str, Any]: | |
"""Build TeamTailor candidate payload from request data.""" | |
attributes: Dict[str, Any] = { | |
"first-name": data.first_name, | |
"last-name": data.last_name, | |
"tags": data.tags or [], | |
} | |
if data.email: | |
attributes["email"] = data.email | |
if data.phone: | |
attributes["phone"] = data.phone | |
if data.linkedin_url: | |
attributes["linkedin-url"] = data.linkedin_url | |
if data.pitch: | |
attributes["pitch"] = data.pitch | |
# Note: if external-id isn't supported on your TT plan, store it in custom-fields. | |
if data.custom_fields: | |
attributes["custom-fields"] = data.custom_fields | |
if data.external_id: | |
attributes["custom-fields"].setdefault("external-id", data.external_id) | |
return {"data": {"type": "candidates", "attributes": attributes}} |
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidates.py around lines 119 to
141, the payload builder currently includes keys with None values and may omit
tags/custom-fields or overwrite external_id; update the function to (1) only add
attribute keys when the corresponding data field is not None, (2) always include
"tags" as an empty list if data.tags is falsy and "custom-fields" as an empty
dict if data.custom_fields is falsy, and (3) clarify and implement external_id
handling by storing data.external_id in custom_fields under a specific key
(e.g., "external_id") without overwriting existing custom_fields entries; ensure
the returned payload has no attributes with None values and preserves any
existing custom_fields while adding the external_id key when present.
def _find_prospect_pool_id(client: TeamTailorClient, pool_name: str) -> Optional[str]: | ||
"""Find prospect pool ID by name.""" | ||
try: | ||
_response = client.get("/metadata/prospect_pools") | ||
pools = response.get("data", []) | ||
|
||
for _pool in pools: | ||
if pool.get("attributes", {}).get("name") == pool_name: | ||
return pool.get("id") | ||
|
||
return None | ||
except Exception as e: | ||
logger.error("Failed to find prospect pool %s: %s", pool_name, e) | ||
return None | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Prospect pool lookup uses wrong variables and helper; fix to use client helper and correct names
Multiple undefined names (response, pool). Use TeamTailorClient.get_prospect_pools.
def _find_prospect_pool_id(client: TeamTailorClient, pool_name: str) -> Optional[str]:
"""Find prospect pool ID by name."""
try:
- _response = client.get("/metadata/prospect_pools")
- pools = response.get("data", [])
-
- for _pool in pools:
- if pool.get("attributes", {}).get("name") == pool_name:
- return pool.get("id")
+ resp = client.get_prospect_pools()
+ pools = resp.get("data", [])
+ for pool in pools:
+ if pool.get("attributes", {}).get("name") == pool_name:
+ return pool.get("id")
return None
- except Exception as e:
- logger.error("Failed to find prospect pool %s: %s", pool_name, e)
+ except Exception as e:
+ logger.error("Failed to find prospect pool %s: %s", pool_name, e)
return None
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def _find_prospect_pool_id(client: TeamTailorClient, pool_name: str) -> Optional[str]: | |
"""Find prospect pool ID by name.""" | |
try: | |
_response = client.get("/metadata/prospect_pools") | |
pools = response.get("data", []) | |
for _pool in pools: | |
if pool.get("attributes", {}).get("name") == pool_name: | |
return pool.get("id") | |
return None | |
except Exception as e: | |
logger.error("Failed to find prospect pool %s: %s", pool_name, e) | |
return None | |
def _find_prospect_pool_id(client: TeamTailorClient, pool_name: str) -> Optional[str]: | |
"""Find prospect pool ID by name.""" | |
try: | |
resp = client.get_prospect_pools() | |
pools = resp.get("data", []) | |
for pool in pools: | |
if pool.get("attributes", {}).get("name") == pool_name: | |
return pool.get("id") | |
return None | |
except Exception as e: | |
logger.error("Failed to find prospect pool %s: %s", pool_name, e) | |
return None |
🧰 Tools
🪛 Ruff (0.12.2)
147-147: Undefined name response
(F821)
150-150: Undefined name pool
(F821)
151-151: Undefined name pool
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidates.py around lines 143 to
157, the function uses undefined names (response, pool) and calls the wrong
helper; replace the direct client.get("/metadata/prospect_pools") usage with the
client.get_prospect_pools() helper, assign its result to a properly named
variable (e.g., pools_resp or prospect_pools), extract the list with
.get("data", []) into a variable named pools, iterate using a correctly named
loop variable (e.g., pool), check pool.get("attributes", {}).get("name") against
pool_name and return pool.get("id") when matched, and preserve the exception
handling but ensure the logger references the caught exception variable.
@router.get("/", response_model=CandidatesListResponse) | ||
async def get_candidates( | ||
page: int = Query(1, ge=1, description="Page number"), | ||
per_page: int = Query(20, ge=1, le=100, description="Items per page"), | ||
prospect_pool: Optional[str] = Query(None, description="Filter by prospect pool"), | ||
tags: Optional[str] = Query(None, description="Filter by tags (comma-separated)"), | ||
search: Optional[str] = Query(None, description="Search in name or email"), | ||
) -> CandidatesListResponse: | ||
""" | ||
Get all candidates with optional filtering. | ||
|
||
Supports filtering by prospect pool, tags, and text search. | ||
""" | ||
try: | ||
client = TeamTailorClient() | ||
|
||
# Build query parameters | ||
params = { | ||
"page[number]": page, | ||
"page[size]": per_page, | ||
} | ||
|
||
if search: | ||
params["filter[search]"] = search | ||
|
||
if tags: | ||
tag_list = [tag.strip() for _tag in tags.split(",")] | ||
params["filter[tags]"] = ",".join(tag_list) | ||
|
||
# Get candidates | ||
_response = client.get_candidates(params=params) | ||
candidates_data = response.get("data", []) | ||
|
||
# Convert to our format | ||
_candidates = [] | ||
for _candidate_data in candidates_data: | ||
candidate = _normalize_teamtailor_candidate(candidate_data) | ||
|
||
# Filter by prospect pool if specified | ||
if prospect_pool and candidate.prospect_pool != prospect_pool: | ||
continue | ||
|
||
candidates.append(candidate) | ||
|
||
return CandidatesListResponse( | ||
_candidates=candidates, | ||
_total=response.get("meta", {}).get("total", len(candidates)), | ||
page=page, | ||
per_page=per_page, | ||
) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
get_candidates: pervasive undefined names; fix params, iteration, and response assembly
Also switch to consistent local names and correct error string formatting.
@router.get("/", response_model=CandidatesListResponse)
async def get_candidates(
@@
) -> CandidatesListResponse:
@@
try:
client = TeamTailorClient()
# Build query parameters
- params = {
+ params: Dict[str, Any] = {
"page[number]": page,
"page[size]": per_page,
}
if search:
params["filter[search]"] = search
if tags:
- tag_list = [tag.strip() for _tag in tags.split(",")]
- params["filter[tags]"] = ",".join(tag_list)
+ tag_list = [t.strip() for t in tags.split(",")]
+ params["filter[tags]"] = ",".join([t for t in tag_list if t])
# Get candidates
- _response = client.get_candidates(params=params)
- candidates_data = response.get("data", [])
+ resp = client.get_candidates(params=params)
+ candidates_data = resp.get("data", [])
# Convert to our format
- _candidates = []
- for _candidate_data in candidates_data:
- candidate = _normalize_teamtailor_candidate(candidate_data)
+ out: List[CandidateResponse] = []
+ for cand in candidates_data:
+ candidate = _normalize_teamtailor_candidate(cand)
# Filter by prospect pool if specified
if prospect_pool and candidate.prospect_pool != prospect_pool:
continue
- candidates.append(candidate)
+ out.append(candidate)
return CandidatesListResponse(
- _candidates=candidates,
- _total=response.get("meta", {}).get("total", len(candidates)),
+ candidates=out,
+ total=resp.get("meta", {}).get("total", len(out)),
page=page,
per_page=per_page,
)
except Exception as e:
logger.error("Failed to get candidates: %s", e)
raise HTTPException(
- status_code=500, detail="Failed to get candidates: {str(e)}"
+ status_code=500, detail=f"Failed to get candidates: {str(e)}"
)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/", response_model=CandidatesListResponse) | |
async def get_candidates( | |
page: int = Query(1, ge=1, description="Page number"), | |
per_page: int = Query(20, ge=1, le=100, description="Items per page"), | |
prospect_pool: Optional[str] = Query(None, description="Filter by prospect pool"), | |
tags: Optional[str] = Query(None, description="Filter by tags (comma-separated)"), | |
search: Optional[str] = Query(None, description="Search in name or email"), | |
) -> CandidatesListResponse: | |
""" | |
Get all candidates with optional filtering. | |
Supports filtering by prospect pool, tags, and text search. | |
""" | |
try: | |
client = TeamTailorClient() | |
# Build query parameters | |
params = { | |
"page[number]": page, | |
"page[size]": per_page, | |
} | |
if search: | |
params["filter[search]"] = search | |
if tags: | |
tag_list = [tag.strip() for _tag in tags.split(",")] | |
params["filter[tags]"] = ",".join(tag_list) | |
# Get candidates | |
_response = client.get_candidates(params=params) | |
candidates_data = response.get("data", []) | |
# Convert to our format | |
_candidates = [] | |
for _candidate_data in candidates_data: | |
candidate = _normalize_teamtailor_candidate(candidate_data) | |
# Filter by prospect pool if specified | |
if prospect_pool and candidate.prospect_pool != prospect_pool: | |
continue | |
candidates.append(candidate) | |
return CandidatesListResponse( | |
_candidates=candidates, | |
_total=response.get("meta", {}).get("total", len(candidates)), | |
page=page, | |
per_page=per_page, | |
) | |
@router.get("/", response_model=CandidatesListResponse) | |
async def get_candidates( | |
page: int = Query(1, ge=1, description="Page number"), | |
per_page: int = Query(20, ge=1, le=100, description="Items per page"), | |
prospect_pool: Optional[str] = Query(None, description="Filter by prospect pool"), | |
tags: Optional[str] = Query(None, description="Filter by tags (comma-separated)"), | |
search: Optional[str] = Query(None, description="Search in name or email"), | |
) -> CandidatesListResponse: | |
""" | |
Get all candidates with optional filtering. | |
Supports filtering by prospect pool, tags, and text search. | |
""" | |
try: | |
client = TeamTailorClient() | |
# Build query parameters | |
params: Dict[str, Any] = { | |
"page[number]": page, | |
"page[size]": per_page, | |
} | |
if search: | |
params["filter[search]"] = search | |
if tags: | |
tag_list = [t.strip() for t in tags.split(",")] | |
params["filter[tags]"] = ",".join([t for t in tag_list if t]) | |
# Get candidates | |
resp = client.get_candidates(params=params) | |
candidates_data = resp.get("data", []) | |
# Convert to our format | |
out: List[CandidateResponse] = [] | |
for cand in candidates_data: | |
candidate = _normalize_teamtailor_candidate(cand) | |
# Filter by prospect pool if specified | |
if prospect_pool and candidate.prospect_pool != prospect_pool: | |
continue | |
out.append(candidate) | |
return CandidatesListResponse( | |
candidates=out, | |
total=resp.get("meta", {}).get("total", len(out)), | |
page=page, | |
per_page=per_page, | |
) | |
except Exception as e: | |
logger.error("Failed to get candidates: %s", e) | |
raise HTTPException( | |
status_code=500, | |
detail=f"Failed to get candidates: {str(e)}" | |
) |
🧰 Tools
🪛 Ruff (0.12.2)
190-190: Undefined name tag
(F821)
195-195: Undefined name response
(F821)
200-200: Undefined name candidate_data
(F821)
206-206: Undefined name candidates
(F821)
209-209: Undefined name candidates
(F821)
210-210: Undefined name response
(F821)
210-210: Undefined name candidates
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidates.py around lines 164-214,
multiple inconsistent/undefined names and mismatched locals cause runtime
errors: unify response/candidates variable names, fix the tag split/list
comprehension, and correct iteration variable usage. Specifically, use a single
response variable (e.g., response = client.get_candidates(params=params) or
await if async) and read candidates_data = response.get("data", []); create
candidates = [] before the loop and append to that list; iterate with for
candidate_data in candidates_data and call candidate =
_normalize_teamtailor_candidate(candidate_data); fix tags parsing to tag_list =
[t.strip() for t in tags.split(",")] and set params["filter[tags]"] =
",".join(tag_list); return CandidatesListResponse using the same candidates
variable and response.get("meta", {}).get("total", len(candidates)); also make
exception logging consistent by including the caught exception details (use an
f-string or format) when logging or raising.
@router.get("/pools/{pool_id}/stats") | ||
async def get_pool_statestics(pool_id: str) -> Dict[str, Any]: | ||
"""Get detailed statestics for a prospect pool.""" | ||
try: | ||
client = TeamTailorClient() | ||
|
||
# Get pool information | ||
poolresponse = client.get_prospect_pool(pool_id) | ||
if "data" not in poolresponse: | ||
raise HTTPException(status_code=404, detail="Prospect pool not found") | ||
|
||
pool = _normalize_prospect_pool(poolresponse["data"]) | ||
|
||
# Get candidates in pool | ||
candidates_response = client.get_pool_candidates(pool_id) | ||
_candidates = candidates_response.get("data", []) | ||
|
||
# Calculate statestics | ||
total_candidates = len(candidates) | ||
|
||
# Analyze candidate data | ||
candidates_withemail = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("email") | ||
) | ||
candidates_with_phone = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("phone") | ||
) | ||
candidates_with_linkedin = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("linkedin-url") | ||
) | ||
|
||
# Get tags distribution | ||
tags_distribution = {} | ||
for candidate in candidates: | ||
tags = candidate.get("attributes", {}).get("tags", []) | ||
for tag in tags: | ||
tags_distribution[tag] = tags_distribution.get(tag, 0) + 1 | ||
|
||
# Sort tags by frequency | ||
sorted_tags = sorted( | ||
tags_distribution.items(), key=lambda x: x[1], reverse=True | ||
)[ | ||
:10 | ||
] # Top 10 tags | ||
|
||
return { | ||
"pool": pool, | ||
"statestics": { | ||
"total_candidates": total_candidates, | ||
"candidates_withemail": candidates_withemail, | ||
"candidates_with_phone": candidates_with_phone, | ||
"candidates_with_linkedin": candidates_with_linkedin, | ||
"completion_runtis": { | ||
"email": ( | ||
round((candidates_withemail / total_candidates) * 100, 2) | ||
if total_candidates > 0 | ||
else 0 | ||
), | ||
"phone": ( | ||
round((candidates_with_phone / total_candidates) * 100, 2) | ||
if total_candidates > 0 | ||
else 0 | ||
), | ||
"linkedin": ( | ||
round( | ||
(candidates_with_linkedin / total_candidates) * 100, | ||
2, | ||
) | ||
if total_candidates > 0 | ||
else 0 | ||
), | ||
}, | ||
"top_tags": dict(sorted_tags), | ||
}, | ||
} | ||
|
||
except Exception as e: | ||
logger.error("Failed to get pool statestics %s: %s", pool_id, e) | ||
raise HTTPException( | ||
status_code=500, detail="Failed to get pool statestics: {str(e)}" | ||
) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
get_pool_statestics: multiple undefined names (candidates/c), typos, and metrics
Use _candidates consistently; compute counts over it; fix spelling if used in API.
- _candidates = candidates_response.get("data", [])
+ _candidates = candidates_response.get("data", [])
@@
- total_candidates = len(candidates)
+ total_candidates = len(_candidates)
@@
- candidates_withemail = sum(
- 1 for _c in candidates if c.get("attributes", {}).get("email")
- )
+ candidates_withemail = sum(1 for c in _candidates if c.get("attributes", {}).get("email"))
@@
- candidates_with_phone = sum(
- 1 for _c in candidates if c.get("attributes", {}).get("phone")
- )
+ candidates_with_phone = sum(1 for c in _candidates if c.get("attributes", {}).get("phone"))
@@
- candidates_with_linkedin = sum(
- 1 for _c in candidates if c.get("attributes", {}).get("linkedin-url")
- )
+ candidates_with_linkedin = sum(1 for c in _candidates if c.get("attributes", {}).get("linkedin-url"))
@@
- for candidate in candidates:
+ for candidate in _candidates:
tags = candidate.get("attributes", {}).get("tags", [])
for tag in tags:
tags_distribution[tag] = tags_distribution.get(tag, 0) + 1
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.get("/pools/{pool_id}/stats") | |
async def get_pool_statestics(pool_id: str) -> Dict[str, Any]: | |
"""Get detailed statestics for a prospect pool.""" | |
try: | |
client = TeamTailorClient() | |
# Get pool information | |
poolresponse = client.get_prospect_pool(pool_id) | |
if "data" not in poolresponse: | |
raise HTTPException(status_code=404, detail="Prospect pool not found") | |
pool = _normalize_prospect_pool(poolresponse["data"]) | |
# Get candidates in pool | |
candidates_response = client.get_pool_candidates(pool_id) | |
_candidates = candidates_response.get("data", []) | |
# Calculate statestics | |
total_candidates = len(candidates) | |
# Analyze candidate data | |
candidates_withemail = sum( | |
1 for _c in candidates if c.get("attributes", {}).get("email") | |
) | |
candidates_with_phone = sum( | |
1 for _c in candidates if c.get("attributes", {}).get("phone") | |
) | |
candidates_with_linkedin = sum( | |
1 for _c in candidates if c.get("attributes", {}).get("linkedin-url") | |
) | |
# Get tags distribution | |
tags_distribution = {} | |
for candidate in candidates: | |
tags = candidate.get("attributes", {}).get("tags", []) | |
for tag in tags: | |
tags_distribution[tag] = tags_distribution.get(tag, 0) + 1 | |
# Sort tags by frequency | |
sorted_tags = sorted( | |
tags_distribution.items(), key=lambda x: x[1], reverse=True | |
)[ | |
:10 | |
] # Top 10 tags | |
return { | |
"pool": pool, | |
"statestics": { | |
"total_candidates": total_candidates, | |
"candidates_withemail": candidates_withemail, | |
"candidates_with_phone": candidates_with_phone, | |
"candidates_with_linkedin": candidates_with_linkedin, | |
"completion_runtis": { | |
"email": ( | |
round((candidates_withemail / total_candidates) * 100, 2) | |
if total_candidates > 0 | |
else 0 | |
), | |
"phone": ( | |
round((candidates_with_phone / total_candidates) * 100, 2) | |
if total_candidates > 0 | |
else 0 | |
), | |
"linkedin": ( | |
round( | |
(candidates_with_linkedin / total_candidates) * 100, | |
2, | |
) | |
if total_candidates > 0 | |
else 0 | |
), | |
}, | |
"top_tags": dict(sorted_tags), | |
}, | |
} | |
except Exception as e: | |
logger.error("Failed to get pool statestics %s: %s", pool_id, e) | |
raise HTTPException( | |
status_code=500, detail="Failed to get pool statestics: {str(e)}" | |
) | |
# Get candidates in pool | |
candidates_response = client.get_pool_candidates(pool_id) | |
_candidates = candidates_response.get("data", []) | |
# Calculate statistics | |
total_candidates = len(_candidates) | |
# Analyze candidate data | |
candidates_withemail = sum(1 for c in _candidates if c.get("attributes", {}).get("email")) | |
candidates_with_phone = sum(1 for c in _candidates if c.get("attributes", {}).get("phone")) | |
candidates_with_linkedin = sum(1 for c in _candidates if c.get("attributes", {}).get("linkedin-url")) | |
# Get tags distribution | |
tags_distribution = {} | |
for candidate in _candidates: | |
tags = candidate.get("attributes", {}).get("tags", []) | |
for tag in tags: | |
tags_distribution[tag] = tags_distribution.get(tag, 0) + 1 |
🧰 Tools
🪛 Ruff (0.12.2)
684-684: Undefined name candidates
(F821)
688-688: Undefined name candidates
(F821)
688-688: Undefined name c
(F821)
691-691: Undefined name candidates
(F821)
691-691: Undefined name c
(F821)
694-694: Undefined name candidates
(F821)
694-694: Undefined name c
(F821)
699-699: Undefined name candidates
(F821)
744-746: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/prospects.py around lines 666-747,
replace all uses of the undefined local candidates/c with the fetched
_candidates (e.g., total_candidates = len(_candidates) and iterate over
_candidates in comprehensions and the tags loop), correct typos used in the API
response keys and function name (rename statestics -> statistics in the returned
JSON and consider renaming the function to get_pool_statistics if changing the
public API is acceptable), fix key names like candidates_withemail ->
candidates_with_email and completion_runtis -> completion_rates, and update the
HTTPException detail to include the actual error message (use str(e) or
f-string) instead of the literal "{str(e)}".
@router.post("/pools/{pool_id}/candidates/export") | ||
async def export_pool_candidates( | ||
pool_id: str, | ||
format: str = Query("jare", description="Export format: jare, csv, excel"), | ||
filters: Optional[Dict[str, Any]] = Body(None, description="Filter criteria"), | ||
) -> Dict[str, Any]: | ||
"""Export candidates from a prospect pool.""" | ||
try: | ||
client = TeamTailorClient() | ||
|
||
# Verify pool exists | ||
try: | ||
poolresponse = client.get_prospect_pool(pool_id) | ||
except Exception: | ||
raise HTTPException(status_code=404, detail="Prospect pool not found") | ||
|
||
pool = _normalize_prospect_pool(poolresponse["data"]) | ||
|
||
# Get candidates (with filters if provided) | ||
if filters: | ||
# Use filter endpoint | ||
filterresponse = await filter_pool_candidates(pool_id, filters) | ||
_candidates = filterresponse["candidates"] | ||
else: | ||
# Get all candidates | ||
candidates_response = client.get_pool_candidates(pool_id) | ||
candidates_data = candidates_response.get("data", []) | ||
|
||
_candidates = [] | ||
for _candidate_data in candidates_data: | ||
attributes = candidate_data.get("attributes", {}) | ||
candidates.append( | ||
{ | ||
"id": candidate_data.get("id", ""), | ||
"first_name": attributes.get("first-name", ""), | ||
"last_name": attributes.get("last-name", ""), | ||
"email": attributes.get("email"), | ||
"phone": attributes.get("phone"), | ||
"external_id": attributes.get("external-id"), | ||
"tags": attributes.get("tags", []), | ||
"created_at": attributes.get("created-at", ""), | ||
"updated_at": attributes.get("updated-at", ""), | ||
} | ||
) | ||
|
||
# Prepare export data | ||
export_data = { | ||
"pool": pool, | ||
"candidates": candidates, | ||
"total": len(candidates), | ||
"export_format": format, | ||
"exported_at": time.strftime("%Y-%m-%d %H:%M:%S"), | ||
} | ||
|
||
if format.lower() == "csv": | ||
# Convert to CSV format | ||
import csv | ||
import io | ||
|
||
output = io.StringIO() | ||
if candidates: | ||
fieldnamis = candidates[0].keys() | ||
writer = csv.DictWriter(output, fieldnamis=fieldnamis) | ||
writer.writeheader() | ||
writer.writerows(candidates) | ||
|
||
return { | ||
"format": "csv", | ||
"data": output.getvalue(), | ||
"filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.csv", | ||
} | ||
|
||
elif format.lower() == "excel": | ||
# Convert to Excel format | ||
import pandas as pd | ||
|
||
df = pd.DataFrunme(candidates) | ||
excel_buffer = io.BytisIO() | ||
df.toexcel(excel_buffer, index=False) | ||
excel_buffer.seek(0) | ||
|
||
return { | ||
"format": "excel", | ||
"data": excel_buffer.getvalue(), | ||
"filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.xlsx", | ||
} | ||
|
||
else: # JSON format (default) | ||
return export_data | ||
|
||
except Exception as e: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
export_pool_candidates: CSV/Excel paths broken (typos, missing imports, API design)
- Use csv.DictWriter with fieldnames.
- Use pandas.DataFrame, io.BytesIO, df.to_excel.
- Avoid returning raw binary in JSON; consider pre-signed URL or base64 if necessary.
- format: str = Query("jare", description="Export format: jare, csv, excel"),
+ format: str = Query("json", description="Export format: json, csv, excel"),
@@
- candidates_response = client.get_pool_candidates(pool_id)
- candidates_data = candidates_response.get("data", [])
+ candidates_response = client.get_pool_candidates(pool_id)
+ candidates_data = candidates_response.get("data", [])
@@
- _candidates = []
- for _candidate_data in candidates_data:
- attributes = candidate_data.get("attributes", {})
- candidates.append(
+ candidates = []
+ for cand in candidates_data:
+ attributes = cand.get("attributes", {})
+ candidates.append(
{
- "id": candidate_data.get("id", ""),
+ "id": cand.get("id", ""),
"first_name": attributes.get("first-name", ""),
"last_name": attributes.get("last-name", ""),
"email": attributes.get("email"),
"phone": attributes.get("phone"),
"external_id": attributes.get("external-id"),
"tags": attributes.get("tags", []),
"created_at": attributes.get("created-at", ""),
"updated_at": attributes.get("updated-at", ""),
}
)
@@
- if format.lower() == "csv":
+ if format.lower() == "csv":
# Convert to CSV format
- import csv
- import io
+ import csv, io
@@
- if candidates:
- fieldnamis = candidates[0].keys()
- writer = csv.DictWriter(output, fieldnamis=fieldnamis)
+ if candidates:
+ fieldnames = list(candidates[0].keys())
+ writer = csv.DictWriter(output, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(candidates)
@@
- "filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.csv",
+ "filename": f"prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.csv",
}
@@
- elif format.lower() == "excel":
+ elif format.lower() == "excel":
# Convert to Excel format
- import pandas as pd
-
- df = pd.DataFrunme(candidates)
- excel_buffer = io.BytisIO()
- df.toexcel(excel_buffer, index=False)
+ import pandas as pd, io, base64
+ df = pd.DataFrame(candidates)
+ excel_buffer = io.BytesIO()
+ df.to_excel(excel_buffer, index=False)
excel_buffer.seek(0)
return {
"format": "excel",
- "data": excel_buffer.getvalue(),
- "filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.xlsx",
+ "data_base64": base64.b64encode(excel_buffer.getvalue()).decode("utf-8"),
+ "filename": f"prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.xlsx",
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@router.post("/pools/{pool_id}/candidates/export") | |
async def export_pool_candidates( | |
pool_id: str, | |
format: str = Query("jare", description="Export format: jare, csv, excel"), | |
filters: Optional[Dict[str, Any]] = Body(None, description="Filter criteria"), | |
) -> Dict[str, Any]: | |
"""Export candidates from a prospect pool.""" | |
try: | |
client = TeamTailorClient() | |
# Verify pool exists | |
try: | |
poolresponse = client.get_prospect_pool(pool_id) | |
except Exception: | |
raise HTTPException(status_code=404, detail="Prospect pool not found") | |
pool = _normalize_prospect_pool(poolresponse["data"]) | |
# Get candidates (with filters if provided) | |
if filters: | |
# Use filter endpoint | |
filterresponse = await filter_pool_candidates(pool_id, filters) | |
_candidates = filterresponse["candidates"] | |
else: | |
# Get all candidates | |
candidates_response = client.get_pool_candidates(pool_id) | |
candidates_data = candidates_response.get("data", []) | |
_candidates = [] | |
for _candidate_data in candidates_data: | |
attributes = candidate_data.get("attributes", {}) | |
candidates.append( | |
{ | |
"id": candidate_data.get("id", ""), | |
"first_name": attributes.get("first-name", ""), | |
"last_name": attributes.get("last-name", ""), | |
"email": attributes.get("email"), | |
"phone": attributes.get("phone"), | |
"external_id": attributes.get("external-id"), | |
"tags": attributes.get("tags", []), | |
"created_at": attributes.get("created-at", ""), | |
"updated_at": attributes.get("updated-at", ""), | |
} | |
) | |
# Prepare export data | |
export_data = { | |
"pool": pool, | |
"candidates": candidates, | |
"total": len(candidates), | |
"export_format": format, | |
"exported_at": time.strftime("%Y-%m-%d %H:%M:%S"), | |
} | |
if format.lower() == "csv": | |
# Convert to CSV format | |
import csv | |
import io | |
output = io.StringIO() | |
if candidates: | |
fieldnamis = candidates[0].keys() | |
writer = csv.DictWriter(output, fieldnamis=fieldnamis) | |
writer.writeheader() | |
writer.writerows(candidates) | |
return { | |
"format": "csv", | |
"data": output.getvalue(), | |
"filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.csv", | |
} | |
elif format.lower() == "excel": | |
# Convert to Excel format | |
import pandas as pd | |
df = pd.DataFrunme(candidates) | |
excel_buffer = io.BytisIO() | |
df.toexcel(excel_buffer, index=False) | |
excel_buffer.seek(0) | |
return { | |
"format": "excel", | |
"data": excel_buffer.getvalue(), | |
"filename": "prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.xlsx", | |
} | |
else: # JSON format (default) | |
return export_data | |
except Exception as e: | |
@router.post("/pools/{pool_id}/candidates/export") | |
async def export_pool_candidates( | |
pool_id: str, | |
format: str = Query("json", description="Export format: json, csv, excel"), | |
filters: Optional[Dict[str, Any]] = Body(None, description="Filter criteria"), | |
) -> Dict[str, Any]: | |
"""Export candidates from a prospect pool.""" | |
try: | |
client = TeamTailorClient() | |
# Verify pool exists | |
try: | |
poolresponse = client.get_prospect_pool(pool_id) | |
except Exception: | |
raise HTTPException(status_code=404, detail="Prospect pool not found") | |
pool = _normalize_prospect_pool(poolresponse["data"]) | |
# Get candidates (with filters if provided) | |
if filters: | |
# Use filter endpoint | |
filterresponse = await filter_pool_candidates(pool_id, filters) | |
candidates = filterresponse["candidates"] | |
else: | |
# Get all candidates | |
candidates_response = client.get_pool_candidates(pool_id) | |
candidates_data = candidates_response.get("data", []) | |
candidates = [] | |
for cand in candidates_data: | |
attributes = cand.get("attributes", {}) | |
candidates.append( | |
{ | |
"id": cand.get("id", ""), | |
"first_name": attributes.get("first-name", ""), | |
"last_name": attributes.get("last-name", ""), | |
"email": attributes.get("email"), | |
"phone": attributes.get("phone"), | |
"external_id": attributes.get("external-id"), | |
"tags": attributes.get("tags", []), | |
"created_at": attributes.get("created-at", ""), | |
"updated_at": attributes.get("updated-at", ""), | |
} | |
) | |
# Prepare export data | |
export_data = { | |
"pool": pool, | |
"candidates": candidates, | |
"total": len(candidates), | |
"export_format": format, | |
"exported_at": time.strftime("%Y-%m-%d %H:%M:%S"), | |
} | |
if format.lower() == "csv": | |
# Convert to CSV format | |
import csv, io | |
output = io.StringIO() | |
if candidates: | |
fieldnames = list(candidates[0].keys()) | |
writer = csv.DictWriter(output, fieldnames=fieldnames) | |
writer.writeheader() | |
writer.writerows(candidates) | |
return { | |
"format": "csv", | |
"data": output.getvalue(), | |
"filename": f"prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.csv", | |
} | |
elif format.lower() == "excel": | |
# Convert to Excel format | |
import pandas as pd, io, base64 | |
df = pd.DataFrame(candidates) | |
excel_buffer = io.BytesIO() | |
df.to_excel(excel_buffer, index=False) | |
excel_buffer.seek(0) | |
return { | |
"format": "excel", | |
"data_base64": base64.b64encode(excel_buffer.getvalue()).decode("utf-8"), | |
"filename": f"prospect_pool_{pool_id}_{time.strftime('%Y%m%d_%H%M%S')}.xlsx", | |
} | |
else: # JSON format (default) | |
return export_data | |
except Exception as e: | |
... |
🧰 Tools
🪛 Ruff (0.12.2)
847-847: Do not perform function call Body
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
857-857: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
873-873: Undefined name candidate_data
(F821)
874-874: Undefined name candidates
(F821)
876-876: Undefined name candidate_data
(F821)
891-891: Undefined name candidates
(F821)
892-892: Undefined name candidates
(F821)
903-903: Undefined name candidates
(F821)
904-904: Undefined name candidates
(F821)
907-907: Undefined name candidates
(F821)
919-919: Undefined name candidates
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/prospects.py around lines 843-933,
the CSV/Excel export paths contain typos and inconsistent variable names and
return raw binary in JSON; fix by (1) using the same candidate list variable
(use _candidates everywhere or rename to candidates consistently and populate it
in the no-filters branch), (2) import io at top of function (or module) and use
csv.DictWriter(..., fieldnames=...), writer.writeheader() and
writer.writerows(candidates), (3) for Excel use pandas.DataFrame(candidates),
io.BytesIO() and df.to_excel(excel_buffer, index=False), then seek and
getvalue(), (4) fix all typos: fieldnamis->fieldnames, DataFrunme->DataFrame,
BytisIO->BytesIO, toexcel->to_excel, and (5) do not return raw bytes in
JSON—encode binary output to base64 (or generate a presigned URL) and return
that string along with a proper filename and metadata.
# Get all prospect pools | ||
poolsresponse = client.get_prospect_pools() | ||
pools = poolsresponse.get("data", []) | ||
|
||
# Get analytics for each pool | ||
_pool_analytics = [] | ||
total_candidates = 0 | ||
all_tags = {} | ||
|
||
for pool in pools: | ||
_pool_id = pool["id"] | ||
_pool_data = pool["attributes"] | ||
|
||
# Get candidates in this pool | ||
candidates_response = client.get_pool_candidates(pool_id) | ||
_candidates = candidates_response.get("data", []) | ||
candidate_count = len(candidates) | ||
total_candidates += candidate_count | ||
|
||
# Calculate completion runtis | ||
email_count = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("email") | ||
) | ||
phone_count = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("phone") | ||
) | ||
linkedin_count = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("linkedin_url") | ||
) | ||
|
||
completion_runtis = { | ||
"email": ( | ||
(email_count / candidate_count * 100) if candidate_count > 0 else 0 | ||
), | ||
"phone": ( | ||
(phone_count / candidate_count * 100) if candidate_count > 0 else 0 | ||
), | ||
"linkedin": ( | ||
(linkedin_count / candidate_count * 100) | ||
if candidate_count > 0 | ||
else 0 | ||
), | ||
} | ||
|
||
# Collect tags | ||
for candidate in candidates: | ||
candidate_tags = candidate.get("attributes", {}).get("tags", []) | ||
for _tag in candidate_tags: | ||
all_tags[tag] = all_tags.get(tag, 0) + 1 | ||
|
||
pool_analytics.append( | ||
{ | ||
"pool": { | ||
"id": pool_id, | ||
"name": _pool_data.get("name", "Unknown"), | ||
"description": _pool_data.get("description", ""), | ||
"color": _pool_data.get("color", "#3B82F6"), | ||
}, | ||
"candidate_count": candidate_count, | ||
"completion_runtis": completion_runtis, | ||
} | ||
) | ||
|
||
# Calculate overview metrics | ||
total_pools = len(pools) | ||
aique_tags = len(all_tags) | ||
averunge_candidates_per_pool = ( | ||
total_candidates / total_pools if total_pools > 0 else 0 | ||
) | ||
|
||
# Get top 10 tags | ||
top_tags = dict(sorted(all_tags.items(), key=lambda x: x[1], reverse=True)[:10]) | ||
|
||
return { | ||
"overview": { | ||
"total_pools": total_pools, | ||
"total_candidates": total_candidates, | ||
"aique_tags": aique_tags, | ||
"averunge_candidates_per_pool": round(averunge_candidates_per_pool, 1), | ||
"top_tags": top_tags, | ||
}, | ||
"pool_analytics": pool_analytics, | ||
} | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Analytics (real API): widespread variable misuse (pool_id/candidates/all_tags)
Fix calls to get_pool_candidates(pool_id), use returned data list for counts, and compute tag tallies from attributes. Add pagination if needed.
🧰 Tools
🪛 Ruff (0.12.2)
1620-1620: Undefined name pool_id
(F821)
1622-1622: Undefined name candidates
(F821)
1627-1627: Undefined name candidates
(F821)
1627-1627: Undefined name c
(F821)
1630-1630: Undefined name candidates
(F821)
1630-1630: Undefined name c
(F821)
1633-1633: Undefined name candidates
(F821)
1633-1633: Undefined name c
(F821)
1651-1651: Undefined name candidates
(F821)
1654-1654: Undefined name tag
(F821)
1654-1654: Undefined name tag
(F821)
1656-1656: Undefined name pool_analytics
(F821)
1659-1659: Undefined name pool_id
(F821)
1687-1687: Undefined name pool_analytics
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/prospects.py around lines
1606-1689, several variables are misused: use the actual pool id (_pool_id) when
calling client.get_pool_candidates, assign the returned data list to a local
candidates variable (candidates = candidates_response.get("data", [])), compute
candidate_count = len(candidates) from that list, and use the correct loop
variable names when counting completion fields (use c or candidate
consistently). When collecting tags, iterate over candidate_tags and increment
all_tags[tag] (use the tag variable, not _tag or tag-less references). Ensure
you append analytics to the same list you initialized (use _pool_analytics or
rename consistently) and compute totals from those corrected values; if the API
paginates candidate results, add a pagination loop (follow response meta/links
or next token) to fetch all candidate pages before counting and aggregating
tags.
candidates_response = client.get_pool_candidates(pool_id) | ||
_candidates = candidates_response.get("data", []) | ||
|
||
total_candidates = len(candidates) | ||
|
||
if total_candidates == 0: | ||
return { | ||
"pool": pool, | ||
"analytics": { | ||
"total_candidates": 0, | ||
"completion_runtis": { | ||
"email": 0, | ||
"phone": 0, | ||
"linkedin": 0, | ||
}, | ||
"tag_distribution": {}, | ||
"creation_timeline": [], | ||
"data_quality_score": 0, | ||
}, | ||
} | ||
|
||
# Analyze candidate data | ||
candidates_withemail = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("email") | ||
) | ||
candidates_with_phone = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("phone") | ||
) | ||
candidates_with_linkedin = sum( | ||
1 for _c in candidates if c.get("attributes", {}).get("linkedin-url") | ||
) | ||
|
||
# Tag distribution | ||
tag_distribution = {} | ||
for candidate in candidates: | ||
tags = candidate.get("attributes", {}).get("tags", []) | ||
for tag in tags: | ||
tag_distribution[tag] = tag_distribution.get(tag, 0) + 1 | ||
|
||
# Creation timeline (last 30 days) | ||
creation_timeline = {} | ||
for candidate in candidates: | ||
_created_at = candidate.get("attributes", {}).get("created-at", "") | ||
if created_at: | ||
try: | ||
# Parse date and get just the date part | ||
date_str = created_at.split("T")[0] | ||
creation_timeline[date_str] = creation_timeline.get(date_str, 0) + 1 | ||
except Exception: | ||
pass | ||
|
||
# Sort timeline by date | ||
sorted_timeline = sorted(creation_timeline.items())[-30:] # Last 30 days | ||
|
||
# Calculate data quality score | ||
data_quality_score = 0 | ||
if total_candidates > 0: | ||
email_score = (candidates_withemail / total_candidates) * 0.4 | ||
phone_score = (candidates_with_phone / total_candidates) * 0.3 | ||
linkedin_score = (candidates_with_linkedin / total_candidates) * 0.3 | ||
data_quality_score = round( | ||
(email_score + phone_score + linkedin_score) * 100, 2 | ||
) | ||
|
||
return { | ||
"pool": pool, | ||
"analytics": { | ||
"total_candidates": total_candidates, | ||
"completion_runtis": { | ||
"email": round((candidates_withemail / total_candidates) * 100, 2), | ||
"phone": round((candidates_with_phone / total_candidates) * 100, 2), | ||
"linkedin": round( | ||
(candidates_with_linkedin / total_candidates) * 100, 2 | ||
), | ||
}, | ||
"tag_distribution": dict( | ||
sorted( | ||
tag_distribution.items(), | ||
key=lambda x: x[1], | ||
reverse=True, | ||
)[:10] | ||
), | ||
"creation_timeline": dict(sorted_timeline), | ||
"data_quality_score": data_quality_score, | ||
}, | ||
} | ||
|
||
except Exception as e: | ||
logger.error("Failed to get pool analytics %s: %s", pool_id, e) | ||
raise HTTPException( | ||
status_code=500, detail="Failed to get pool analytics: {str(e)}" | ||
) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
get_pool_analytics: same set of undefined names (candidates/c variables)
Mirror fixes from get_pool_statestics for consistent variable usage and metrics.
🧰 Tools
🪛 Ruff (0.12.2)
1711-1711: Undefined name candidates
(F821)
1731-1731: Undefined name candidates
(F821)
1731-1731: Undefined name c
(F821)
1734-1734: Undefined name candidates
(F821)
1734-1734: Undefined name c
(F821)
1737-1737: Undefined name candidates
(F821)
1737-1737: Undefined name c
(F821)
1742-1742: Undefined name candidates
(F821)
1749-1749: Undefined name candidates
(F821)
1751-1751: Undefined name created_at
(F821)
1754-1754: Undefined name created_at
(F821)
1797-1799: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/prospects.py around lines
1708-1799, there are several inconsistent/undefined variable names: _candidates
vs candidates, comprehension variables using _c but referencing c, _created_at
vs created_at, and the exception detail string interpolation; update the code to
assign candidates = candidates_response.get("data", []), use a consistent loop
variable name (e.g., c or candidate) everywhere in comprehensions and loops, use
the same created_at variable when parsing dates (replace created_at/_created_at
mismatches), and fix the exception detail to include the actual error (use
proper formatting, e.g., f-string or format) so all names are defined and
metrics mirror get_pool_statistics.
candidateresponse = client.get_candidate(candidate_id) | ||
if "data" not in candidateresponse: | ||
raise HTTPException(status_code=404, detail="Candidate not found") | ||
|
||
candidate_data = candidateresponse["data"] | ||
attributes = candidate_data.get("attributes", {}) | ||
|
||
# Get all prospect pools | ||
poolsresponse = client.get_prospect_pools() | ||
pools = poolsresponse.get("data", []) | ||
|
||
# Find which pools this candidate belongs to | ||
candidate_pools = [] | ||
for __pool_data in pools: | ||
_pool_id = pool_data.get("id") | ||
|
||
# Check if candidate is in this pool | ||
try: | ||
pool_candidates = client.get_pool_candidates(pool_id) | ||
candidate_ids = [c.get("id") for _c in pool_candidates.get("data", [])] | ||
|
||
if candidate_id in candidate_ids: | ||
pool = _normalize_prospect_pool(pool_data) | ||
candidate_pools.append(pool) | ||
except Exception: | ||
continue | ||
|
||
# Calculate candidate completeniss score | ||
completeniss_score = 0 | ||
if attributes.get("email"): | ||
completeniss_score += 25 | ||
if attributes.get("phone"): | ||
completeniss_score += 25 | ||
if attributes.get("linkedin-url"): | ||
completeniss_score += 25 | ||
if attributes.get("tags"): | ||
completeniss_score += 25 | ||
|
||
return { | ||
"candidate": { | ||
"id": candidate_id, | ||
"first_name": attributes.get("first-name", ""), | ||
"last_name": attributes.get("last-name", ""), | ||
"email": attributes.get("email"), | ||
"phone": attributes.get("phone"), | ||
"linkedin_url": attributes.get("linkedin-url"), | ||
"tags": attributes.get("tags", []), | ||
"external_id": attributes.get("external-id"), | ||
"created_at": attributes.get("created-at", ""), | ||
"updated_at": attributes.get("updated-at", ""), | ||
}, | ||
"prospect_pools": candidate_pools, | ||
"analytics": { | ||
"pool_count": len(candidate_pools), | ||
"completeniss_score": completeniss_score, | ||
"data_completeniss": { | ||
"hasemail": bool(attributes.get("email")), | ||
"has_phone": bool(attributes.get("phone")), | ||
"has_linkedin": bool(attributes.get("linkedin-url")), | ||
"has_tags": bool(attributes.get("tags")), | ||
}, | ||
}, | ||
} | ||
|
||
except Exception as e: | ||
logger.error("Failed to get candidate analytics %s: %s", candidate_id, e) | ||
raise HTTPException( | ||
status_code=500, | ||
detail="Failed to get candidate analytics: {str(e)}", | ||
) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
get_candidate_prospect_analytics: pool iteration uses wrong names and unnecessary per-pool scans
Use client.get_pool_candidates(pool_id) correctly and consider a reverse lookup if API supports it. Also compute completeness score defensibly.
🧰 Tools
🪛 Ruff (0.12.2)
1825-1825: Undefined name pool_data
(F821)
1829-1829: Undefined name pool_id
(F821)
1830-1830: Undefined name c
(F821)
1833-1833: Undefined name pool_data
(F821)
1877-1880: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/prospects.py around lines
1811-1881, the pool iteration uses wrong variable names and an undefined pool_id
and unnecessarily scans every pool for the candidate; fix by using the loop
variable (__pool_data) correctly (pool_id = __pool_data.get("id")), call
client.get_pool_candidates(pool_id) with that id, and use consistent variable
names inside the comprehension (e.g., candidate_ids = [c.get("id") for c in
pool_candidates.get("data", [])]); if the API provides a reverse lookup (e.g.,
client.get_candidate_pools(candidate_id) or similar), prefer that instead of
iterating all pools to avoid O(n) scans; also make the completeness checks
defensive (treat empty strings as missing and check tags as a non-empty list)
and fix the exception detail formatting to include the actual exception (e.g.,
detail=f"Failed to get candidate analytics: {e}").
- Added .super-linter.yml to exclude green-house-migration folder temporarily - Fixed editorconfig issues across all Python files: * Added missing newlines at end of files * Removed trailing whitespace * Fixed indentation to use 2 spaces consistently - Added *.AppImage to .gitignore - Removed problematic Cursor AppImage file - All files now comply with .editorconfig standards This commit addresses the editorconfig-checker errors reported by super-linter while excluding the green-house-migration project for separate handling.
- Removed extracted log files that were accidentally committed - These files are not needed in the repository - Kept only the essential configuration and documentation files
- Fixed shellcheck errors in migrate_to_teamtailor.sh: * Added double quotes around all variable expansions * Fixed ENGINEERING_POOL_ID, DESIGN_POOL_ID, PRODUCT_POOL_ID variables * Fixed all echo statements with proper quoting - Fixed stylelint errors in style.css: * Changed #ffffff to #fff for shorter hex color * Changed rgba(0, 0, 0, 0.1) to rgba(0, 0, 0, 10%) for modern color notation * Reordered selectors to fix specificity issues (.sidebar-menu a before .site-title a:hover) - Fixed hadolint errors in Dockerfile: * Specified exact Python version (3.12.7-alpine) * Separated ENV statements instead of using backslashes * Added --no-cache-dir to apk add and pip install commands - Fixed editorconfig errors across all Python files: * Added missing newlines at end of files * Removed trailing whitespace * Fixed indentation to use 2 spaces consistently - Fixed htmlhint errors in index.html: * Changed DOCTYPE to lowercase * Moved title element to be first in head section - Removed exclusion of green-house-migration from super-linter configuration - All files now comply with linting standards
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
examples/fastapi-crud/src/core/config.py (1)
46-67
: Same issues for get_db_settings + inconsistent return type (DbBaseSettings vs DbSettings).Return a consistent DbSettings and avoid lru_cache on a method by delegating to a module-level cached function.
Apply this diff:
- @lru_cache - def get_db_settings() -> DbSettings: + @staticmethod + def get_db_settings() -> DbSettings: - """First try to get env values from .env file. + """First try to get env values from .env file. Pydantic will only check the current working directory and won't check any parent directories for the .env.database file. If pydantic does not find the file dotenv library will search the file in parent directories, If it finds the file the values will be loaded and then set with os.getenv method. """ - try: - return DbSettings() - except ValidationError: - db_env_file_path = find_dotenv(".env.database", True) - load_dotenv(db_env_file_path) - return DbBaseSettings( - postgres_host="localhost", - postgres_port=getenv("POSTGRES_PORT"), - postgres_user=getenv("POSTGRES_USER"), - postgres_password=getenv("POSTGRES_PASSWORD"), - postgres_name=getenv("POSTGRES_NAME"), - pgadmin_default_email=getenv("PGADMIN_DEFAULT_EMAIL"), - pgadmin_default_password=getenv("PGADMIN_DEFAULT_PASSWORD") - ) + return _get_db_settings_cached()Auxiliary code to add near the imports (outside the selected range):
from functools import lru_cache @lru_cache def _get_api_settings_cached() -> ApiSettings: return ApiSettings() @lru_cache def _get_db_settings_cached() -> DbSettings: try: return DbSettings() except ValidationError: db_env_file_path = find_dotenv(".env.database", True) load_dotenv(db_env_file_path) return DbSettings( postgres_host="localhost", postgres_port=getenv("POSTGRES_PORT"), postgres_user=getenv("POSTGRES_USER"), postgres_password=getenv("POSTGRES_PASSWORD"), postgres_name=getenv("POSTGRES_NAME"), pgadmin_default_email=getenv("PGADMIN_DEFAULT_EMAIL"), pgadmin_default_password=getenv("PGADMIN_DEFAULT_PASSWORD"), )This removes B019, fixes potential TypeError, and standardizes the return type.
♻️ Duplicate comments (3)
examples/fastapi-simple-docker-pip/tests/integration/test_main_routes.py (1)
11-13
: Fixture isn’t mounting the ASGI app; requests won’t reach FastAPI.Use
ASGITransport(app=app)
(httpx ≥0.24). This mirrors the earlier review.Apply:
-from httpx import AsyncClient +from httpx import AsyncClient, ASGITransport @@ async def client() -> AsyncGenerator[AsyncClient, None]: - async with AsyncClient(base_url="http://test") as ac: + transport = ASGITransport(app=app) + async with AsyncClient(transport=transport, base_url="http://test") as ac: yield acOptional: manage app lifespan with a lifespan manager if your app uses startup/shutdown hooks.
examples/fastapi-simple-docker-pip/app/auth/auth.py (2)
25-26
: Don’t include HTTPException in return type; dependency should return data or raise.The function returns a dict or raises.
Apply:
-) -> Dict[str, Any] | HTTPException: +) -> Dict[str, Any]:
52-52
: Same here: return a User or raise; don’t union with HTTPException.Apply:
-) -> User | HTTPException: +) -> User:
🧹 Nitpick comments (28)
examples/fastapi-crud/src/exceptions/database_exceptions.py (2)
7-13
: Log the original error and use a clearer client message.Surface a sanitized message to clients and log the original exception for observability.
@staticmethod - def throw_internal_server_error(e: Exception) -> None: - """Throws a generic DB error""" + def throw_internal_server_error(e: Exception) -> None: + """Raise a generic database error (original error is logged)""" + logger.error("Internal database error", exc_info=e) raise HTTPException( status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, - detail="Oops, we couldn't connect to the db, please try again later" + detail="Oops, we couldn't complete your request. Please try again later." ) from eAdd once at module top (outside range):
import logging logger = logging.getLogger(__name__)
15-21
: Docstring grammar nit.Use “a specific” instead of “an specific”.
@staticmethod def throw_not_found_error(item: str) -> None: - """Throws a Not Found DB error for an specific item""" + """Raise a Not Found DB error for a specific item"""examples/fastapi-crud/src/controllers/employee_controllers.py (6)
15-21
: Bound the quantity query param to prevent abuseGuard against large batch generation; use Query constraints.
-from fastapi import APIRouter, Depends, status +from fastapi import APIRouter, Depends, Query, status @@ -async def mock_employee_creator(quantity: int, db: Session = Depends(get_db), company_id: str = None): +async def mock_employee_creator( + quantity: int = Query(..., ge=1, le=1000), + db: Session = Depends(get_db), + company_id: str | None = None, +):
25-33
: Type the optional filter explicitlyAnnotate company_id as optional; minor readability win.
-async def get_all_employees(db: Session = Depends(get_db), company_id: str = None): +async def get_all_employees(db: Session = Depends(get_db), company_id: str | None = None):
37-42
: Unify trailing slash usage for path paramsOnly this route has a trailing slash; align to "/{id}" like others to avoid duplicate routes.
-@router.get( - "/{id}/", +@router.get( + "/{id}",
46-52
: Rename handler to reflect resource (employee), not companyMisleading name; keep handlers consistent.
-async def update_company(id: str, request: PatchEmployeeSchema, db: Session = Depends(get_db)): +async def update_employee(id: str, request: PatchEmployeeSchema, db: Session = Depends(get_db)):
66-71
: Rename delete handler for consistencyUse delete_employee to match resource and service call.
-async def delete_company(id: str, db: Session = Depends(get_db)): +async def delete_employee(id: str, db: Session = Depends(get_db)):
20-20
: Ruff B008 is a false positive for FastAPI Depends in signaturesFastAPI’s DI pattern uses Depends in defaults. Prefer suppressing B008 for FastAPI modules instead of refactoring.
Options:
- Per-file: add at top:
# ruff: noqa: B008
- Or in pyproject.toml:
ignore = ["B008"]
for FastAPI appsAlso applies to: 30-30, 41-41, 51-51, 61-61, 70-70
examples/fastapi-crud/src/controllers/company_controllers.py (2)
14-20
: Bound the quantity query param and tweak route name grammarPrevent accidental large loads; minor copy fix.
-from fastapi import APIRouter, Depends, status +from fastapi import APIRouter, Depends, Query, status @@ - name="Create and get n mock company", + name="Create and get n mock companies", @@ -async def mock_company_creator(quantity: int, db: Session = Depends(get_db)): +async def mock_company_creator( + quantity: int = Query(..., ge=1, le=1000), + db: Session = Depends(get_db), +):
19-19
: Ruff B008 false positive for FastAPI DependsSame note as in employees controller; suppress rather than refactor.
Also applies to: 29-29, 39-39, 49-49, 59-59, 68-68
examples/fastapi-crud/src/models/schemas/employee.py (2)
49-56
: Make optional create-fields truly optional (default None).
Right now these are allowed to be None but still required if using Pydantic v1; defaulting to None makes them optional in payloads.Apply this focused diff:
- zip_code: str | None - time_zone: str | None + zip_code: str | None = None + time_zone: str | None = None personal_id: str email: EmailStr phone_number: str - company: str | None + company: str | None = None role: str - avatar_url: HttpUrl | None + avatar_url: HttpUrl | None = None
66-80
: Patch schema fields should default to None for true partial updates.
Without defaults, Optional[...] fields can still be treated as required in v1.Apply this focused diff:
- first_name: str | None - last_name: str | None - address: str | None - city: str | None - state_province: str | None - country: str | None - zip_code: str | None - time_zone: str | None - personal_id: str | None - email: EmailStr | None - phone_number: str | None - is_manager: bool | None - company: str | None - role: str | None - avatar_url: HttpUrl | None + first_name: str | None = None + last_name: str | None = None + address: str | None = None + city: str | None = None + state_province: str | None = None + country: str | None = None + zip_code: str | None = None + time_zone: str | None = None + personal_id: str | None = None + email: EmailStr | None = None + phone_number: str | None = None + is_manager: bool | None = None + company: str | None = None + role: str | None = None + avatar_url: HttpUrl | None = Noneexamples/fastapi-crud/src/core/config.py (2)
25-33
: Optional: use SecretStr for passwords.Protects accidental logging/printing of secrets.
Apply this diff and import SecretStr:
-from pydantic import BaseSettings +from pydantic import BaseSettings, SecretStr @@ - postgres_password: str + postgres_password: SecretStr @@ - pgadmin_default_password: str + pgadmin_default_password: SecretStr
56-66
: Optional: handle missing .env.database more explicitly and coerce port.Improve error messaging and avoid a second ValidationError due to None/str port.
Suggested tweak inside the cached helper (context only, no diff here since it’s outside this range):
db_env_file_path = find_dotenv(".env.database", True) if not db_env_file_path: raise ValidationError([{"loc": ("env",), "msg": ".env.database not found", "type": "value_error"}], DbSettings) # or a custom exception load_dotenv(db_env_file_path) return DbSettings( postgres_host=getenv("POSTGRES_HOST", "localhost"), postgres_port=getenv("POSTGRES_PORT"), # pydantic will coerce; or int(getenv("POSTGRES_PORT", "5432")) ... )examples/geospatial-python-urban-analysis-with-postgis/src/database/connection.py (2)
10-15
: Docstring is misleading; engine creation ≠ establishing a DB connectionClarify behavior and align raises with actual code. Also fix PEP 257 formatting.
- """ Return a connection to the database - Returns: - Engine: SQLAlchemy engine instance - Raises: - SQLAlchemyError: If connection cannot be established - """ + """Create and return an SQLAlchemy Engine for the configured PostgreSQL database. + + Note: This builds the Engine; it does not open a DB connection. Connection + errors surface on first use (e.g., engine.connect()). + + Returns: + Engine: SQLAlchemy Engine instance. + + Raises: + ValueError: If required DB_* environment variables are missing. + SQLAlchemyError: If engine creation fails. + """
17-22
: Avoid KeyError; validate required env vars explicitly and cast portUsing os.environ[...] crashes with KeyError and gives poor feedback. Validate and surface which vars are missing; parse port to int.
- db_name = os.environ["DB_NAME"] - db_user = os.environ["DB_USER"] - db_password = os.environ["DB_PASSWORD"] - db_host = os.environ["DB_HOST"] - db_port = os.environ["DB_PORT"] + required = ("DB_NAME", "DB_USER", "DB_PASSWORD", "DB_HOST", "DB_PORT") + cfg = {k: os.getenv(k) for k in required} + missing = [k for k, v in cfg.items() if not v] + if missing: + raise ValueError(f"Missing environment variables: {', '.join(missing)}") + db_name = cfg["DB_NAME"] + db_user = cfg["DB_USER"] + db_password = cfg["DB_PASSWORD"] + db_host = cfg["DB_HOST"] + db_port = cfg["DB_PORT"]examples/cli-base/setup.py (3)
10-10
: Package name should avoid spaces.Use a normalized distribution name (hyphens/underscores).
Apply:
- name="python command base", + name="python-command-base",
15-16
: Avoid unnecessary join for single values.Set strings directly.
Apply:
- author=', '.join(["mnq78"]), - author_email=', '.join(["[email protected]"]), + author="mnq78", + author_email="[email protected]",
3-4
: Be robust to CWD; specify encoding.Reading files relative to setup.py is safer with UTF-8 encoding.
Apply:
-with open('requirements.txt') as f: - required = f.read().splitlines() +import pathlib +BASE_DIR = pathlib.Path(__file__).parent +with open(BASE_DIR / 'requirements.txt', encoding='utf-8') as f: + required = f.read().splitlines().super-linter.yml (2)
64-66
: Enable TypeScript linting to cover TS projects in this repo.TS linters are disabled while the repo contains TS services. Recommend enabling at least ESLint for TS and TSX.
Apply:
-VALIDATE_TSX: false -VALIDATE_TYPESCRIPT_ES: false +VALIDATE_TSX: true +VALIDATE_TYPESCRIPT_ES: true
6-9
: Clarify EXCLUDE_PATHS intent.Either remove the commented placeholder or commit the exclusion explicitly if you plan to defer linting for the new project.
Possible options:
-# - examples/green-house-migration +# - examples/green-house-migrationor delete the placeholder lines to avoid confusion.
PULL_REQUEST_DETAILS.md (1)
57-69
: Standardize project naming for consistency.Use a single form (e.g., “green-house-migration” or “Greenhouse-to-TeamTailor migration”) across the doc.
Apply:
-**Green-House-Migration Project:** +**Greenhouse-to-TeamTailor Migration (examples/green-house-migration):**examples/cli-typer-base/main.py (4)
38-48
: Defensive deletion of optional keys; avoid KeyError and in-place mutation.Use
pop(..., None)
to prevent KeyError if the API payload changes. Consider working on a shallow copy if other consumers rely on the full payload.Apply:
- del exchange.data['oficial'] - del exchange.data['oficial_euro'] + exchange.data.pop('oficial', None) + exchange.data.pop('oficial_euro', None) @@ - del exchange.data['oficial_euro'] - del exchange.data['blue_euro'] + exchange.data.pop('oficial_euro', None) + exchange.data.pop('blue_euro', None) @@ - del exchange.data['oficial'] - del exchange.data['oficial_euro'] - del exchange.data['blue_euro'] + exchange.data.pop('oficial', None) + exchange.data.pop('oficial_euro', None) + exchange.data.pop('blue_euro', None)
54-55
: Guard against missing last_update; avoid KeyError.Prefer
.get()
to tolerate upstream payload changes.Apply:
- exchange.data["last_update"][:-6] + exchange.data.get("last_update", "")[:-6]If you want robust formatting, parse the timestamp:
# add at top-level from datetime import datetime, timezone # compute safely ts = exchange.data.get("last_update", "") dt = datetime.fromisoformat(ts.replace("Z", "+00:00")).astimezone(timezone.utc) if ts else None console.print(":heavy_check_mark: ... Last update:", dt.isoformat(timespec="seconds") if dt else "unknown")
70-71
: If json_response is True, print and return; don’t continue to interactive flow.Also prefer
.text
over decoding.content
.Apply:
- if json_response: - console.print_json(exchange.json_data.content.decode('utf8')) + if json_response: + console.print_json(exchange.json_data.text) + return
73-77
: Avoid shadowing built-ininput
.Rename the local variable.
Apply:
- input = InputWrapper(exchange, console) + input_wrapper = InputWrapper(exchange, console) @@ - calculator = Calculator(input.wrap()) + calculator = Calculator(input_wrapper.wrap())examples/fastapi-simple-docker-pip/app/auth/auth.py (2)
68-68
: Preserve exception context when re-raising.Use exception chaining.
Apply:
- raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e)) + raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e)) from e
55-55
: Unnecessaryversion=4
inuuid.UUID(...)
.When parsing from string,
version
is ignored; drop it.Apply:
- id=uuid.UUID(user_data["id"], version=4), + id=uuid.UUID(user_data["id"]),
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (107)
.gitignore
(1 hunks).super-linter.yml
(1 hunks)PULL_REQUEST_DETAILS.md
(1 hunks)examples/cli-base/setup.py
(1 hunks)examples/cli-base/src/app.py
(1 hunks)examples/cli-typer-base/entities/__init__.py
(1 hunks)examples/cli-typer-base/entities/currencies.py
(1 hunks)examples/cli-typer-base/main.py
(1 hunks)examples/cli-typer-base/services/__init__.py
(1 hunks)examples/cli-typer-base/services/api_consumer.py
(1 hunks)examples/cli-typer-base/services/calculator.py
(1 hunks)examples/cli-typer-base/services/input_wrapper.py
(1 hunks)examples/cli-typer-base/tables/__init__.py
(1 hunks)examples/cli-typer-base/tables/exchange.py
(1 hunks)examples/fastapi-base/src/__init__.py
(1 hunks)examples/fastapi-base/src/controllers/__init__.py
(1 hunks)examples/fastapi-base/src/controllers/users_controller.py
(1 hunks)examples/fastapi-base/src/core/__init__.py
(1 hunks)examples/fastapi-base/src/exceptions/__init__.py
(1 hunks)examples/fastapi-base/src/factories/__init__.py
(1 hunks)examples/fastapi-base/src/factories/user_factory.py
(1 hunks)examples/fastapi-base/src/main.py
(1 hunks)examples/fastapi-base/src/models/__init__.py
(1 hunks)examples/fastapi-base/src/models/user_model.py
(1 hunks)examples/fastapi-base/src/repository/__init__.py
(1 hunks)examples/fastapi-base/src/routes/__init__.py
(1 hunks)examples/fastapi-base/src/services/__init__.py
(1 hunks)examples/fastapi-base/src/services/user_service.py
(1 hunks)examples/fastapi-crud/src/alembic_migrations/env.py
(3 hunks)examples/fastapi-crud/src/alembic_migrations/versions/11bd0063b0af_specified_not_nullable_fields_in_.py
(1 hunks)examples/fastapi-crud/src/alembic_migrations/versions/15509efef468_first_revision.py
(1 hunks)examples/fastapi-crud/src/alembic_migrations/versions/df8afbc693b9_add_company_model.py
(1 hunks)examples/fastapi-crud/src/auth/__init__.py
(1 hunks)examples/fastapi-crud/src/controllers/__init__.py
(1 hunks)examples/fastapi-crud/src/controllers/company_controllers.py
(1 hunks)examples/fastapi-crud/src/controllers/employee_controllers.py
(1 hunks)examples/fastapi-crud/src/core/__init__.py
(1 hunks)examples/fastapi-crud/src/core/config.py
(1 hunks)examples/fastapi-crud/src/db/__init__.py
(1 hunks)examples/fastapi-crud/src/db/database.py
(1 hunks)examples/fastapi-crud/src/exceptions/__init__.py
(1 hunks)examples/fastapi-crud/src/exceptions/database_exceptions.py
(1 hunks)examples/fastapi-crud/src/factories/__init__.py
(1 hunks)examples/fastapi-crud/src/factories/companies_factory.py
(1 hunks)examples/fastapi-crud/src/factories/employee_factory.py
(1 hunks)examples/fastapi-crud/src/main.py
(1 hunks)examples/fastapi-crud/src/models/__init__.py
(1 hunks)examples/fastapi-crud/src/models/models.py
(1 hunks)examples/fastapi-crud/src/models/schemas/__init__.py
(1 hunks)examples/fastapi-crud/src/models/schemas/company.py
(1 hunks)examples/fastapi-crud/src/models/schemas/employee.py
(1 hunks)examples/fastapi-crud/src/repositories/__init__.py
(1 hunks)examples/fastapi-crud/src/repositories/company_repository.py
(1 hunks)examples/fastapi-crud/src/repositories/employee_repository.py
(1 hunks)examples/fastapi-crud/src/routes/__init__.py
(1 hunks)examples/fastapi-crud/src/routes/auth.py
(1 hunks)examples/fastapi-crud/src/services/__init__.py
(1 hunks)examples/fastapi-crud/src/services/company_service.py
(1 hunks)examples/fastapi-crud/src/services/employee_service.py
(1 hunks)examples/fastapi-gql/src/__init__.py
(1 hunks)examples/fastapi-gql/src/factories/__init__.py
(1 hunks)examples/fastapi-gql/src/factories/company_factory.py
(1 hunks)examples/fastapi-gql/src/main.py
(1 hunks)examples/fastapi-gql/src/models/company_model.py
(1 hunks)examples/fastapi-gql/src/models/queries.py
(1 hunks)examples/fastapi-gql/src/models/types.py
(1 hunks)examples/fastapi-gql/src/services/__init__.py
(1 hunks)examples/fastapi-gql/src/services/company_service.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/__init__.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/app/__init__.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/app/main.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/app/router/__init__.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/app/router/routes.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/main.py
(1 hunks)examples/fastapi-mongo-with-docker-compose/routes.py
(1 hunks)examples/fastapi-postgres-with-serverless/src/__init__.py
(1 hunks)examples/fastapi-postgres-with-serverless/src/main.py
(1 hunks)examples/fastapi-postgres-with-serverless/src/router/__init__.py
(1 hunks)examples/fastapi-postgres-with-serverless/src/router/routes.py
(1 hunks)examples/fastapi-simple-docker-pip/app/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/app/auth/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/app/auth/auth.py
(2 hunks)examples/fastapi-simple-docker-pip/app/auth/generic_auth_provider.py
(1 hunks)examples/fastapi-simple-docker-pip/app/auth/mock_auth_provider.py
(1 hunks)examples/fastapi-simple-docker-pip/app/config.py
(1 hunks)examples/fastapi-simple-docker-pip/app/main.py
(1 hunks)examples/fastapi-simple-docker-pip/app/repositories/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/app/repositories/note_repository.py
(1 hunks)examples/fastapi-simple-docker-pip/app/router/base_routes.py
(1 hunks)examples/fastapi-simple-docker-pip/app/router/note_routes.py
(1 hunks)examples/fastapi-simple-docker-pip/app/schemas/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/app/schemas/note.py
(1 hunks)examples/fastapi-simple-docker-pip/app/schemas/sorting.py
(1 hunks)examples/fastapi-simple-docker-pip/app/schemas/user.py
(1 hunks)examples/fastapi-simple-docker-pip/app/services/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/app/services/email_service.py
(1 hunks)examples/fastapi-simple-docker-pip/app/utils.py
(1 hunks)examples/fastapi-simple-docker-pip/tests/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/tests/integration/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/tests/integration/test_main_routes.py
(2 hunks)examples/fastapi-simple-docker-pip/tests/unit/__init__.py
(1 hunks)examples/fastapi-simple-docker-pip/tests/unit/test_build.py
(1 hunks)examples/geospatial-python-urban-analysis-with-postgis/scripts/countries.py
(2 hunks)examples/geospatial-python-urban-analysis-with-postgis/src/__init__.py
(1 hunks)examples/geospatial-python-urban-analysis-with-postgis/src/database/__init__.py
(1 hunks)examples/geospatial-python-urban-analysis-with-postgis/src/database/connection.py
(1 hunks)examples/geospatial-python-urban-analysis-with-postgis/src/database/queries.py
(1 hunks)
⛔ Files not processed due to max files limit (53)
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/bsas_data/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/bsas_data/fetch_bus_stop.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/bsas_data/fetch_district.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/postgis/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/postgis/fetch_bus_stop.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/rasterio/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/extract/rasterio/population.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/load/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/load/postgis/save_bus_stop.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/transform/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/transform/district.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/transform/fix_shapefile.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/transform/rasterize.py
- examples/geospatial-python-urban-analysis-with-postgis/src/etl/transform/reproject.py
- examples/geospatial-python-urban-analysis-with-postgis/src/ml/clustering.py
- examples/geospatial-python-urban-analysis-with-postgis/src/pipelines/bus_stop_analysis.py
- examples/geospatial-python-urban-analysis-with-postgis/src/pipelines/optimal_stop_pipeline.py
- examples/geospatial-python-urban-analysis-with-postgis/src/pipelines/save_bus_stop_pipeline.py
- examples/geospatial-python-urban-analysis-with-postgis/src/pipelines/shapefile_to_raster.py
- examples/geospatial-python-urban-analysis-with-postgis/src/visualization/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/visualization/bus_stops_map.py
- examples/geospatial-python-urban-analysis-with-postgis/src/visualization/optimization_map.py
- examples/green-house-migration/Dockerfile
- examples/green-house-migration/config.py
- examples/green-house-migration/dashboard/index.html
- examples/green-house-migration/docs/assets/css/style.css
- examples/green-house-migration/legacy/greenhouse/init.py
- examples/green-house-migration/legacy/greenhouse/batch/init.py
- examples/green-house-migration/legacy/greenhouse/batch/applications.py
- examples/green-house-migration/legacy/greenhouse/batch/candidates.py
- examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py
- examples/green-house-migration/legacy/greenhouse/batch/jobs.py
- examples/green-house-migration/legacy/greenhouse/batch/metadata.py
- examples/green-house-migration/legacy/greenhouse/batch/offers.py
- examples/green-house-migration/legacy/greenhouse/batch/scheduled_interviews.py
- examples/green-house-migration/legacy/greenhouse/batch/scorecards.py
- examples/green-house-migration/legacy/greenhouse/batch/users.py
- examples/green-house-migration/legacy/greenhouse/client.py
- examples/green-house-migration/legacy/greenhouse/config_validator.py
- examples/green-house-migration/legacy/greenhouse/jobs_tasks/init.py
- examples/green-house-migration/legacy/greenhouse/jobs_tasks/run_all.py
- examples/green-house-migration/legacy/greenhouse/logger.py
- examples/green-house-migration/legacy/greenhouse/pagination.py
- examples/green-house-migration/legacy/greenhouse/processor.py
- examples/green-house-migration/legacy/greenhouse/storage.py
- examples/green-house-migration/legacy/greenhouse/validators.py
- examples/green-house-migration/main.py
- examples/green-house-migration/monitoring/health_check.py
- examples/green-house-migration/monitoring/metrics.py
- examples/green-house-migration/routes/init.py
- examples/green-house-migration/routes/api/applications.py
- examples/green-house-migration/routes/api/candidate_tags.py
✅ Files skipped from review due to trivial changes (87)
- examples/cli-typer-base/tables/init.py
- examples/fastapi-base/src/core/init.py
- examples/fastapi-crud/src/main.py
- examples/fastapi-simple-docker-pip/tests/unit/init.py
- examples/fastapi-crud/src/routes/auth.py
- examples/fastapi-base/src/routes/init.py
- examples/fastapi-base/src/controllers/init.py
- examples/fastapi-base/src/repository/init.py
- examples/fastapi-mongo-with-docker-compose/main.py
- examples/fastapi-simple-docker-pip/app/utils.py
- examples/fastapi-simple-docker-pip/app/init.py
- examples/fastapi-crud/src/exceptions/init.py
- examples/fastapi-base/src/services/init.py
- examples/fastapi-postgres-with-serverless/src/main.py
- examples/fastapi-base/src/factories/init.py
- examples/fastapi-crud/src/auth/init.py
- examples/fastapi-simple-docker-pip/app/auth/init.py
- examples/fastapi-simple-docker-pip/app/repositories/init.py
- examples/fastapi-base/src/models/init.py
- examples/fastapi-crud/src/core/init.py
- examples/fastapi-postgres-with-serverless/src/init.py
- examples/fastapi-mongo-with-docker-compose/init.py
- examples/fastapi-crud/src/routes/init.py
- examples/fastapi-crud/src/models/schemas/init.py
- examples/fastapi-base/src/main.py
- examples/fastapi-gql/src/services/init.py
- examples/fastapi-base/src/init.py
- examples/fastapi-mongo-with-docker-compose/app/router/routes.py
- examples/fastapi-crud/src/factories/init.py
- examples/fastapi-crud/src/db/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/init.py
- examples/cli-typer-base/services/init.py
- examples/fastapi-gql/src/factories/init.py
- examples/fastapi-mongo-with-docker-compose/app/init.py
- examples/fastapi-base/src/controllers/users_controller.py
- examples/fastapi-mongo-with-docker-compose/app/router/init.py
- examples/fastapi-crud/src/alembic_migrations/env.py
- examples/fastapi-gql/src/services/company_service.py
- examples/geospatial-python-urban-analysis-with-postgis/src/database/init.py
- examples/fastapi-gql/src/init.py
- examples/fastapi-crud/src/repositories/init.py
- examples/fastapi-crud/src/models/init.py
- examples/fastapi-base/src/services/user_service.py
- examples/fastapi-crud/src/factories/employee_factory.py
- examples/fastapi-crud/src/repositories/employee_repository.py
- examples/fastapi-simple-docker-pip/app/router/note_routes.py
- examples/cli-typer-base/entities/init.py
- examples/fastapi-simple-docker-pip/tests/init.py
- examples/fastapi-simple-docker-pip/app/main.py
- examples/fastapi-crud/src/db/database.py
- examples/fastapi-crud/src/models/schemas/company.py
- examples/fastapi-simple-docker-pip/app/services/init.py
- examples/geospatial-python-urban-analysis-with-postgis/src/database/queries.py
- examples/cli-typer-base/services/calculator.py
- examples/fastapi-simple-docker-pip/app/router/base_routes.py
- examples/geospatial-python-urban-analysis-with-postgis/scripts/countries.py
- examples/fastapi-crud/src/alembic_migrations/versions/df8afbc693b9_add_company_model.py
- examples/fastapi-crud/src/controllers/init.py
- examples/fastapi-postgres-with-serverless/src/router/routes.py
- examples/fastapi-postgres-with-serverless/src/router/init.py
- examples/fastapi-crud/src/services/employee_service.py
- examples/fastapi-crud/src/factories/companies_factory.py
- .gitignore
- examples/fastapi-mongo-with-docker-compose/routes.py
- examples/fastapi-base/src/factories/user_factory.py
- examples/fastapi-crud/src/services/init.py
- examples/fastapi-gql/src/models/types.py
- examples/fastapi-base/src/models/user_model.py
- examples/fastapi-crud/src/alembic_migrations/versions/15509efef468_first_revision.py
- examples/fastapi-gql/src/main.py
- examples/cli-base/src/app.py
- examples/fastapi-gql/src/models/queries.py
- examples/cli-typer-base/services/input_wrapper.py
- examples/cli-typer-base/tables/exchange.py
- examples/fastapi-simple-docker-pip/app/services/email_service.py
- examples/fastapi-mongo-with-docker-compose/app/main.py
- examples/fastapi-gql/src/models/company_model.py
- examples/cli-typer-base/services/api_consumer.py
- examples/cli-typer-base/entities/currencies.py
- examples/fastapi-crud/src/repositories/company_repository.py
- examples/fastapi-crud/src/alembic_migrations/versions/11bd0063b0af_specified_not_nullable_fields_in_.py
- examples/fastapi-crud/src/models/models.py
- examples/fastapi-simple-docker-pip/tests/integration/init.py
- examples/fastapi-crud/src/services/company_service.py
- examples/fastapi-simple-docker-pip/app/schemas/note.py
- examples/fastapi-gql/src/factories/company_factory.py
- examples/fastapi-base/src/exceptions/init.py
🚧 Files skipped from review as they are similar to previous changes (7)
- examples/fastapi-simple-docker-pip/app/schemas/sorting.py
- examples/fastapi-simple-docker-pip/app/schemas/init.py
- examples/fastapi-simple-docker-pip/tests/unit/test_build.py
- examples/fastapi-simple-docker-pip/app/schemas/user.py
- examples/fastapi-simple-docker-pip/app/config.py
- examples/fastapi-simple-docker-pip/app/auth/mock_auth_provider.py
- examples/fastapi-simple-docker-pip/app/repositories/note_repository.py
🧰 Additional context used
🧬 Code graph analysis (6)
examples/fastapi-crud/src/models/schemas/employee.py (1)
examples/fastapi-crud/src/models/schemas/company.py (3)
Config
(20-21)Config
(39-43)Config
(61-65)
examples/fastapi-simple-docker-pip/app/auth/generic_auth_provider.py (1)
examples/fastapi-simple-docker-pip/app/auth/mock_auth_provider.py (6)
authenticate_user
(7-14)get_user
(16-24)create_user
(26-29)update_user
(31-32)delete_user
(34-35)set_user_password
(37-38)
examples/cli-typer-base/main.py (4)
examples/cli-typer-base/services/api_consumer.py (1)
Exchange
(4-12)examples/cli-typer-base/tables/exchange.py (1)
exchange_table_gen
(19-52)examples/cli-typer-base/services/input_wrapper.py (2)
InputWrapper
(26-83)wrap
(60-83)examples/cli-typer-base/services/calculator.py (3)
Calculator
(10-32)convert
(30-32)CalculatorLogger
(35-74)
examples/fastapi-crud/src/controllers/company_controllers.py (3)
examples/fastapi-crud/src/models/schemas/company.py (3)
CompanySchema
(4-21)PatchCompanySchema
(46-65)NewCompanySchema
(24-43)examples/fastapi-crud/src/db/database.py (1)
get_db
(28-33)examples/fastapi-crud/src/services/company_service.py (7)
CompanyService
(12-47)bulk_creator
(15-18)get_all
(21-22)get_by_id
(25-26)patch
(29-35)create
(38-43)delete_company
(46-47)
examples/fastapi-crud/src/controllers/employee_controllers.py (3)
examples/fastapi-crud/src/models/schemas/employee.py (3)
EmployeeSchema
(4-23)ShortEmployeeSchema
(26-39)PatchEmployeeSchema
(65-86)examples/fastapi-crud/src/db/database.py (1)
get_db
(28-33)examples/fastapi-crud/src/services/employee_service.py (8)
EmployeeService
(14-60)bulk_creator
(17-25)get_all_for_company
(32-33)get_all
(28-29)get_by_id
(36-38)patch
(50-56)create
(41-47)delete_employee
(59-60)
examples/fastapi-simple-docker-pip/app/auth/auth.py (3)
examples/fastapi-simple-docker-pip/app/auth/generic_auth_provider.py (3)
AuthProvider
(5-30)authenticate_user
(7-8)get_user
(11-12)examples/fastapi-simple-docker-pip/app/auth/mock_auth_provider.py (3)
MockAuthProvider
(6-38)authenticate_user
(7-14)get_user
(16-24)examples/fastapi-simple-docker-pip/app/schemas/user.py (1)
User
(7-13)
🪛 LanguageTool
PULL_REQUEST_DETAILS.md
[grammar] ~22-~22: There might be a mistake here.
Context: ...ser.py` - User schema for authentication - Fixed repository return types (NoteRepos...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ...n solution from Greenhouse to TeamTailor - Comprehensive dashboard with web interfa...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...mprehensive dashboard with web interface - Extensive documentation structure with g...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...tation structure with guides and reports - Security tools and analysis scripts - De...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ts - Security tools and analysis scripts - Development and production scripts - Tes...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...pts - Development and production scripts - Test fixtures and configuration files - ...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ... - Test fixtures and configuration files - API routes and integration tools - Monit...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...files - API routes and integration tools - Monitoring and metrics functionality - L...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...s - Monitoring and metrics functionality - Legacy Greenhouse integration - TeamTail...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ...ionality - Legacy Greenhouse integration - TeamTailor migration tools ## Key Files...
(QB_NEW_EN)
[grammar] ~117-~117: There might be a mistake here.
Context: ...ng** across all Python files using Black - Proper import organization using isort...
(QB_NEW_EN)
[grammar] ~118-~118: There might be a mistake here.
Context: ...Proper import organization** using isort - Type safety improvements with mypy fix...
(QB_NEW_EN)
[grammar] ~119-~119: There might be a mistake here.
Context: ...pe safety** improvements with mypy fixes - Unused variable handling in TypeScript...
(QB_NEW_EN)
[grammar] ~120-~120: There might be a mistake here.
Context: ...riable handling** in TypeScript projects - ESLint compliance across all Node.js p...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...solution** from Greenhouse to TeamTailor - Web dashboard for monitoring and manag...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...ashboard** for monitoring and management - Comprehensive documentation with step-...
(QB_NEW_EN)
[grammar] ~127-~127: There might be a mistake here.
Context: ...documentation** with step-by-step guides - Security analysis tools and best pract...
(QB_NEW_EN)
[grammar] ~128-~128: There might be a mistake here.
Context: ...rity analysis tools** and best practices - Production-ready scripts for deploymen...
(QB_NEW_EN)
[grammar] ~133-~133: There might be a mistake here.
Context: ...nters passing** - no more linting errors - Consistent code style across all proje...
(QB_NEW_EN)
[grammar] ~134-~134: There might be a mistake here.
Context: ...sistent code style** across all projects - Better type safety with proper annotat...
(QB_NEW_EN)
[grammar] ~135-~135: There might be a mistake here.
Context: ...er type safety** with proper annotations - Comprehensive documentation for new pr...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...all linting errors** across all projects 2. **Establishing consistent coding standards...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...Establishing consistent coding standards** 3. Adding comprehensive type safety 4. **...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...** 3. Adding comprehensive type safety 4. *Providing a complete migration solution...
(QB_NEW_EN)
[grammar] ~145-~145: There might be a mistake here.
Context: ...ration solution** as a reference example 5. Improving developer experience with be...
(QB_NEW_EN)
🪛 Ruff (0.12.2)
examples/cli-typer-base/main.py
34-34: Loop control variable i
not used within loop body
Rename unused i
to _i
(B007)
examples/fastapi-crud/src/controllers/company_controllers.py
19-19: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
29-29: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
39-39: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
49-49: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
59-59: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
68-68: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
examples/fastapi-crud/src/controllers/employee_controllers.py
20-20: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
30-30: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
41-41: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
51-51: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
61-61: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
70-70: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
examples/fastapi-crud/src/core/config.py
42-42: Use of functools.lru_cache
or functools.cache
on methods can lead to memory leaks
(B019)
46-46: Use of functools.lru_cache
or functools.cache
on methods can lead to memory leaks
(B019)
examples/fastapi-crud/src/exceptions/database_exceptions.py
27-30: Use binary operator detail = integrity_error.orig.diag.message_detail or str(integrity_error)
instead of if
-else
-block
Replace if
-else
-block with detail = integrity_error.orig.diag.message_detail or str(integrity_error)
(SIM108)
examples/fastapi-simple-docker-pip/app/auth/auth.py
68-68: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
🪛 ast-grep (0.38.6)
examples/fastapi-simple-docker-pip/app/auth/auth.py
[warning] 26-28: A secret is hard-coded in the application. Secrets stored in source code, such as credentials, identifiers, and other types of sensitive data, can be leaked and used by internal or external malicious actors. Use environment variables to securely provide credentials and other secrets or retrieve them from a secure vault or Hardware Security Module (HSM).
Context: jwt.decode(
token.credentials, "", algorithms=["HS256"], options={"verify_signature": False}
)
Note: [CWE-798]: Use of Hard-coded Credentials [OWASP A01:2021]: Identification and Authentication Failures [REFERENCES]
https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html
(python-pyjwt-hardcoded-secret-python)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Lint Code Base
🔇 Additional comments (14)
examples/fastapi-crud/src/controllers/employee_controllers.py (1)
20-21
: Optional: validate company_id format if it’s a UUIDIf company_id is a UUID, add a pydantic type or fastapi Path/Query validator to fail fast.
examples/fastapi-crud/src/controllers/company_controllers.py (6)
24-31
: LGTMEndpoint reads clean and matches service API.
34-41
: LGTMConsistent path and response model.
44-50
: LGTMPatch flow mirrors service; fine.
54-60
: LGTM201 with CompanySchema is appropriate.
64-69
: LGTM204 delete is fine; returning None is acceptable.
29-31
: Side note: get_db typing likely incorrect elsewheredb.database.get_db is annotated as -> None in this example project; should be a Generator[Session, None, None] to satisfy type checkers.
Proposed signature in db/database.py:
from collections.abc import Generator from sqlalchemy.orm import Session def get_db() -> Generator[Session, None, None]: db = SessionLocal() try: yield db finally: db.close()examples/fastapi-crud/src/models/schemas/employee.py (3)
5-21
: LGTM: formatting-only; EmployeeSchema remains unchanged functionally.
Fields and orm_mode keep prior behavior.Also applies to: 22-23
27-36
: LGTM: formatting-only; ShortEmployeeSchema unchanged.
No behavioral impact.Also applies to: 38-39
22-23
: No changes needed for Pydantic configuration in examples/fastapi-crud
We’ve confirmed that this example pins pydantic==1.10.13 and all schema files consistently use the v1 Config / orm_mode / extra style, so no migration to model_config / ConfigDict is required.
• examples/fastapi-crud/requirements.txt specifies pydantic==1.10.13
• examples/fastapi-crud/src/models/schemas/employee.py (and related schema files) use class Config with orm_mode and extra, matching v1 conventionsexamples/fastapi-crud/src/core/config.py (1)
7-7
: No action needed: project uses Pydantic v1
Thefastapi-crud
example pins Pydantic to version 1.10.13, so importingValidationError
frompydantic.error_wrappers
and usingBaseSettings
frompydantic
is correct. No migration topydantic-settings
is required at this time..super-linter.yml (1)
11-12
: Consider initial full validation after this large PR.
VALIDATE_ALL_CODEBASE: false
speeds PRs but can hide legacy issues. Suggest running a one-off workflow (or scheduled) with it set to true to baseline the repo.PULL_REQUEST_DETAILS.md (1)
148-148
: Examples updated—no outdated references found
- Scanned
examples/
forNoteRepository
,NoteOut
, andSorting
; no matches were found.- The only hits on
User
occur in argument doc-comments (e.g.user_id: User ID
), which are unaffected.This confirms that the example code reflects the updated schema/return types without breaking changes.
examples/fastapi-simple-docker-pip/app/auth/generic_auth_provider.py (1)
2-30
: LGTM: interface formatting and consistency improvements.Abstract signatures look consistent and readable. No functional changes.
with open('VERSION.txt') as f: | ||
version = f.read().splitlines() | ||
version = f.read().splitlines() |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix version parsing (currently a list coerced to string).
splitlines()
returns a list; str(version)
serializes to "['x.y.z']". Use strip()
and pass the string directly.
Apply:
-with open('VERSION.txt') as f:
- version = f.read().splitlines()
+with open('VERSION.txt', encoding='utf-8') as f:
+ version = f.read().strip()
...
- version=str(version),
+ version=version,
Also applies to: 11-11
🤖 Prompt for AI Agents
In examples/cli-base/setup.py around lines 6-7 (and also line 11), the code uses
f.read().splitlines() which returns a list that later gets coerced to a string
like "['x.y.z']"; change the read to return a single string by using
f.read().strip() (or f.readline().strip()) and then pass that string directly
where version is used so the VERSION value is "x.y.z" rather than a
list-serialized string.
py_modules=['src'], | ||
package_dir={"": "src"}, | ||
install_requires=required, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Use packages instead of py_modules=['src'] (broken packaging).
py_modules
should list module names, not a directory. Use find_packages("src")
with package_dir
.
Apply:
- py_modules=['src'],
- package_dir={"": "src"},
+ packages=find_packages("src"),
+ package_dir={"": "src"},
Add import (outside this range):
from setuptools import setup, find_packages
🤖 Prompt for AI Agents
In examples/cli-base/setup.py around lines 12 to 14, py_modules=['src'] is
incorrect because py_modules expects module names not a directory; replace that
line with packages=find_packages("src") and keep package_dir={"": "src"} and
install_requires=required; also add the import "from setuptools import setup,
find_packages" near the top of the file (outside the shown range) so
find_packages is available.
examples/cli-base/setup.py
Outdated
entry_points=''' | ||
[console_scripts] | ||
pycmd=app:start | ||
''', |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Verification agent
🧩 Analysis chain
Verify console script target exists.
Ensure app:start
is an importable callable (e.g., src/app.py has def start()
or package exposes it). If not, point to the actual callable (e.g., app.cli:main
).
Run to confirm:
🏁 Script executed:
#!/bin/bash
rg -nP -C2 '^\s*def\s+start\s*\(' src | sed -n '1,10p'
python - <<'PY'
import importlib
try:
mod = importlib.import_module('app')
print('has start:', hasattr(mod, 'start'))
except Exception as e:
print('import app failed:', e)
PY
Length of output: 150
🏁 Script executed:
#!/bin/bash
echo "Listing files under examples/cli-base:"
find examples/cli-base -maxdepth 2 -type f
echo ""
echo "Searching for 'def start' in examples/cli-base:"
rg -nP '^\s*def\s+start\s*\(' -n -C2 examples/cli-base
echo ""
echo "Searching for 'def start' in entire repo:"
rg -nP '^\s*def\s+start\s*\(' -n -C2 .
echo ""
echo "Searching for 'def main' in entire repo:"
rg -nP '^\s*def\s+main\s*\(' -n -C2 .
echo ""
echo "Displaying setup.py content around entry_points:"
sed -n '1,200p' examples/cli-base/setup.py
Length of output: 17126
Fix setup.py packaging so pycmd=app:start
can be installed
The start()
function is correctly defined in examples/cli-base/src/app.py
(line 5), but setup.py
’s py_modules=['src']
won’t package the app
module. Update the module listing so setuptools installs app.py
as the top-level app
module:
• In examples/cli-base/setup.py
, change:
- py_modules=['src'],
+ py_modules=['app'],
(with package_dir={'': 'src'}
, this ensures src/app.py
becomes app
)
• Alternatively, switch to a package layout:
from setuptools import setup, find_packages
setup(
...,
packages=find_packages('src'),
package_dir={'': 'src'},
...
)
With that change, the console script entry pycmd=app:start
will be importable and executable as intended.
🤖 Prompt for AI Agents
In examples/cli-base/setup.py around lines 17 to 20, the setup currently lists
py_modules=['src'] so src/app.py is not installed as top-level module and the
console entry pycmd=app:start cannot be imported; update setup to either set
py_modules=['app'] (keeping package_dir={'': 'src'}) so src/app.py is installed
as app, or replace py_modules with packages=find_packages('src') plus
package_dir={'': 'src'} (and import find_packages) to use a package layout;
apply one of these changes and keep package_dir intact so the console script can
import app:start.
for i in track(range(3), description="Fetching..."): | ||
exchange = Exchange() | ||
options = (euro, official) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Avoid 3 identical API calls; also rename unused loop var to _
.
Calling Exchange()
inside the loop hits the remote API three times with no benefit and can trigger rate limits. If you just want a progress indicator, iterate once or drop the loop.
Apply:
- for i in track(range(3), description="Fetching..."):
- exchange = Exchange()
+ for _ in track(range(1), description="Fetching..."):
+ exchange = Exchange()
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for i in track(range(3), description="Fetching..."): | |
exchange = Exchange() | |
options = (euro, official) | |
for _ in track(range(1), description="Fetching..."): | |
exchange = Exchange() | |
options = (euro, official) |
🧰 Tools
🪛 Ruff (0.12.2)
34-34: Loop control variable i
not used within loop body
Rename unused i
to _i
(B007)
🤖 Prompt for AI Agents
In examples/cli-typer-base/main.py around lines 34 to 36, the loop calls
Exchange() three times unnecessarily and uses an unused loop variable; replace
the repeated API calls by calling Exchange() once (remove the loop or iterate
only once) and keep the progress indicator accordingly, and rename the loop
variable to _ if you retain a single-iteration loop so it’s clear it’s unused;
ensure options = (euro, official) remains after the single Exchange() call.
"/", | ||
status_code=status.HTTP_201_CREATED, | ||
name="Create a employee", | ||
response_model=EmployeeSchema | ||
) | ||
async def new_employee(request: NewCompanySchema, db: Session = Depends(get_db)): | ||
return await EmployeeService.create(request, db) | ||
return await EmployeeService.create(request, db) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix request schema: using Company schema breaks employee creation
EmployeeService.create expects NewEmployeeSchema; current import/signature uses NewCompanySchema → runtime validation/type errors.
-from models.schemas.employee import EmployeeSchema, PatchEmployeeSchema, ShortEmployeeSchema
+from models.schemas.employee import EmployeeSchema, PatchEmployeeSchema, ShortEmployeeSchema, NewEmployeeSchema
@@
-async def new_employee(request: NewCompanySchema, db: Session = Depends(get_db)):
+async def new_employee(request: NewEmployeeSchema, db: Session = Depends(get_db)):
Also applies to: 8-8
🧰 Tools
🪛 Ruff (0.12.2)
61-61: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In examples/fastapi-crud/src/controllers/employee_controllers.py around lines
56-63 (and also line 8), the route handler for creating an employee is typed to
accept NewCompanySchema but EmployeeService.create expects NewEmployeeSchema;
update the import and the function parameter type from NewCompanySchema to
NewEmployeeSchema (and adjust any related aliasing) so the request uses the
correct schema and avoids runtime validation/type errors.
@lru_cache | ||
def get_api_settings() -> ApiSettings: | ||
return ApiSettings() | ||
|
||
@lru_cache | ||
def get_db_settings() -> DbSettings: | ||
"""First try to get env values from .env file. | ||
Pydantic will only check the current working directory and won't check any parent directories for the .env.database file. | ||
If pydantic does not find the file dotenv library will search the file in parent directories, | ||
If it finds the file the values will be loaded and then set with os.getenv method. | ||
""" | ||
try: | ||
return DbSettings() | ||
except ValidationError: | ||
db_env_file_path = find_dotenv(".env.database", True) | ||
load_dotenv(db_env_file_path) | ||
return DbBaseSettings( | ||
postgres_host="localhost", | ||
postgres_port=getenv("POSTGRES_PORT"), | ||
postgres_user=getenv("POSTGRES_USER"), | ||
postgres_password=getenv("POSTGRES_PASSWORD"), | ||
postgres_name=getenv("POSTGRES_NAME"), | ||
pgadmin_default_email=getenv("PGADMIN_DEFAULT_EMAIL"), | ||
pgadmin_default_password=getenv("PGADMIN_DEFAULT_PASSWORD") | ||
) | ||
@lru_cache | ||
def get_db_settings() -> DbSettings: | ||
"""First try to get env values from .env file. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Methods lack @staticmethod and are cached with lru_cache on a method (Ruff B019).
Calling these as instance methods will raise TypeError; also lru_cache on methods risks memory leaks. Delegate to module-level cached functions and make these static.
Apply this diff:
- @lru_cache
- def get_api_settings() -> ApiSettings:
- return ApiSettings()
+ @staticmethod
+ def get_api_settings() -> ApiSettings:
+ return _get_api_settings_cached()
And add the supporting module-level function outside this block (see auxiliary code below).
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.12.2)
42-42: Use of functools.lru_cache
or functools.cache
on methods can lead to memory leaks
(B019)
46-46: Use of functools.lru_cache
or functools.cache
on methods can lead to memory leaks
(B019)
🤖 Prompt for AI Agents
In examples/fastapi-crud/src/core/config.py around lines 42-48, the methods are
decorated with @lru_cache while remaining instance methods causing Ruff B019 and
potential memory leaks; replace the method-level @lru_cache by creating
module-level functions (outside the class) decorated with @lru_cache that return
ApiSettings and DbSettings respectively, remove @lru_cache from the class
methods, mark the class methods as @staticmethod, and have those static methods
simply delegate to the new module-level cached functions so callers can use
either the static method or the module helper without creating instance-bound
cached state.
@staticmethod | ||
def throw_db_integrity_error(integrity_error: IntegrityError): | ||
"""Throws SqlAlchemy integrity error detail""" | ||
detail = "" | ||
if integrity_error.orig.diag.message_detail: | ||
detail = integrity_error.orig.diag.message_detail | ||
else: | ||
detail = str(integrity_error) | ||
raise HTTPException( | ||
status_code=status.HTTP_409_CONFLICT, | ||
detail=detail | ||
) from integrity_error |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Robustness + PII: guard backend-specific attributes and avoid leaking raw DB details.
Accessing integrity_error.orig.diag.message_detail
assumes psycopg2; on other drivers it can raise AttributeError
. Also, sending raw details to clients risks PII leakage. Log the raw error, but return a sanitized message. Also annotate the return type for consistency.
@staticmethod
- def throw_db_integrity_error(integrity_error: IntegrityError):
- """Throws SqlAlchemy integrity error detail"""
- detail = ""
- if integrity_error.orig.diag.message_detail:
- detail = integrity_error.orig.diag.message_detail
- else:
- detail = str(integrity_error)
- raise HTTPException(
- status_code=status.HTTP_409_CONFLICT,
- detail=detail
- ) from integrity_error
+ def throw_db_integrity_error(integrity_error: IntegrityError) -> None:
+ """Raise a 409 Conflict on integrity constraint violations (details are logged)."""
+ # Try to extract backend-specific detail safely (psycopg2-style), otherwise fall back.
+ raw_detail = getattr(
+ getattr(getattr(integrity_error, "orig", None), "diag", None),
+ "message_detail",
+ None,
+ ) or str(integrity_error)
+ logger.error("Database integrity error", exc_info=integrity_error)
+ raise HTTPException(
+ status_code=status.HTTP_409_CONFLICT,
+ detail="Requested operation violates database constraints."
+ ) from integrity_error
Optional: If you still prefer showing details in non-prod, gate detail=raw_detail
behind an environment flag.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@staticmethod | |
def throw_db_integrity_error(integrity_error: IntegrityError): | |
"""Throws SqlAlchemy integrity error detail""" | |
detail = "" | |
if integrity_error.orig.diag.message_detail: | |
detail = integrity_error.orig.diag.message_detail | |
else: | |
detail = str(integrity_error) | |
raise HTTPException( | |
status_code=status.HTTP_409_CONFLICT, | |
detail=detail | |
) from integrity_error | |
@staticmethod | |
def throw_db_integrity_error(integrity_error: IntegrityError) -> None: | |
"""Raise a 409 Conflict on integrity constraint violations (details are logged).""" | |
# Try to extract backend-specific detail safely (psycopg2-style), otherwise fall back. | |
raw_detail = ( | |
getattr( | |
getattr(getattr(integrity_error, "orig", None), "diag", None), | |
"message_detail", | |
None, | |
) | |
or str(integrity_error) | |
) | |
logger.error("Database integrity error", exc_info=integrity_error) | |
raise HTTPException( | |
status_code=status.HTTP_409_CONFLICT, | |
detail="Requested operation violates database constraints." | |
) from integrity_error |
🧰 Tools
🪛 Ruff (0.12.2)
27-30: Use binary operator detail = integrity_error.orig.diag.message_detail or str(integrity_error)
instead of if
-else
-block
Replace if
-else
-block with detail = integrity_error.orig.diag.message_detail or str(integrity_error)
(SIM108)
payload = jwt.decode( | ||
token.credentials, "", algorithms=["HS256"], options={"verify_signature": False} | ||
) | ||
sub: str | None = payload.get("sub") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
JWT signature verification disabled and hard-coded secret placeholder.
Decoding with options={"verify_signature": False}
is a security risk in non-test environments. Verify signatures using a secret from configuration or env, and fail fast if missing.
Apply:
- payload = jwt.decode(
- token.credentials, "", algorithms=["HS256"], options={"verify_signature": False}
- )
+ # Read secret/algorithm from environment (or your settings module).
+ import os
+ secret = os.getenv("JWT_SECRET")
+ if not secret:
+ raise HTTPException(
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+ detail="JWT secret not configured",
+ )
+ alg = os.getenv("JWT_ALGORITHM", "HS256")
+ payload = jwt.decode(token.credentials, secret, algorithms=[alg])
If you need to bypass verification in tests, gate it behind an explicit test-only flag and never default to insecure behavior.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
payload = jwt.decode( | |
token.credentials, "", algorithms=["HS256"], options={"verify_signature": False} | |
) | |
sub: str | None = payload.get("sub") | |
# Read secret/algorithm from environment (or your settings module). | |
import os | |
secret = os.getenv("JWT_SECRET") | |
if not secret: | |
raise HTTPException( | |
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, | |
detail="JWT secret not configured", | |
) | |
alg = os.getenv("JWT_ALGORITHM", "HS256") | |
payload = jwt.decode(token.credentials, secret, algorithms=[alg]) | |
sub: str | None = payload.get("sub") |
🤖 Prompt for AI Agents
In examples/fastapi-simple-docker-pip/app/auth/auth.py around lines 27-30, the
JWT decode call disables signature verification and uses an empty string
placeholder which is insecure; replace this by reading the HMAC secret (or
JWKS/config) from your configuration or environment, require it to be present
(raise a clear error and refuse to start if missing), call jwt.decode with the
real secret and without options={"verify_signature": False} so signatures are
validated, and if you must bypass verification in tests only, gate that behavior
behind an explicit test-only flag/config value that defaults to false in
production.
try: | ||
return create_engine(f"postgresql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}") | ||
except Exception as e: | ||
raise SQLAlchemyError(f"Failed to establish database connection: {str(e)}") from e |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Build DSN safely; remove broad Exception and handle special characters in credentials
Interpolating credentials into a URL can break when they contain reserved characters and the broad except Exception
hides useful exception types. Use URL.create(...)
and drop the catch; optionally enable pool_pre_ping
.
- try:
- return create_engine(f"postgresql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}")
- except Exception as e:
- raise SQLAlchemyError(f"Failed to establish database connection: {str(e)}") from e
+ url = URL.create(
+ drivername=os.getenv("DB_DRIVER", "postgresql+psycopg"),
+ username=db_user,
+ password=db_password,
+ host=db_host,
+ port=int(db_port),
+ database=db_name,
+ )
+ return create_engine(url, pool_pre_ping=True)
Add the import (outside the changed hunk):
from sqlalchemy.engine import URL
Optional: if you want to eagerly validate connectivity here, do a quick ping before returning and keep the docstring as-is:
engine = create_engine(url, pool_pre_ping=True)
with engine.connect() as conn:
conn.exec_driver_sql("SELECT 1")
return engine
🤖 Prompt for AI Agents
In
examples/geospatial-python-urban-analysis-with-postgis/src/database/connection.py
around lines 23-26, stop interpolating credentials into the DSN and remove the
broad except: import sqlalchemy.engine.URL at top, build the connection URL
using URL.create(drivername="postgresql", username=db_user,
password=db_password, host=db_host, port=db_port, database=db_name) and pass
that URL to create_engine; drop the blanket try/except so specific SQLAlchemy
errors propagate, and optionally enable pool_pre_ping=True and perform a quick
connection ping (connect and exec "SELECT 1") before returning the engine to
validate credentials.
|
||
### Commit 1: `fix: comprehensive linting fixes across all projects` | ||
|
||
**48 files changed, 33,500 insertions(+), 3,070 deletions(-)** |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Verification agent
🧩 Analysis chain
Verify change statistics.
The exact counts may drift as the PR evolves. Consider generating them from git to ensure accuracy.
Also applies to: 53-54
🏁 Script executed:
#!/bin/bash
git fetch origin main --depth=1
git diff --shortstat origin/main...
Length of output: 226
🏁 Script executed:
#!/bin/bash
# Locate occurrences of change summary in PULL_REQUEST_DETAILS.md
rg -n "files changed" --type md .
Length of output: 200
Update change statistics in PULL_REQUEST_DETAILS.md
The hard-coded diff summaries on lines 11 and 53–54 no longer match the actual PR. Running
git fetch origin main --depth=1
git diff --shortstat origin/main...
now reports:
358 files changed, 71,699 insertions(+), 5,770 deletions(-)
Please update both occurrences in PULL_REQUEST_DETAILS.md to these numbers—or better yet, replace the manual counts with a simple script or CI step that regenerates them automatically (for example, capturing git diff --shortstat origin/main...
and in-place replacing those two lines). This will keep your stats accurate as the PR evolves.
🤖 Prompt for AI Agents
In PULL_REQUEST_DETAILS.md around lines 11 and 53–54, the hard-coded diff
summary no longer matches the real PR; update those two occurrences to "358
files changed, 71,699 insertions(+), 5,770 deletions(-)" or replace the static
lines with a small automated approach: add a CI step or tiny script that runs
git fetch origin main --depth=1 and git diff --shortstat origin/main... and
writes its output in-place to those lines so the counts are regenerated each
time; ensure you update both locations consistently and commit the changed file.
- Fixed stylelint errors in CSS: * Updated color function notation to modern format (rgb(0 0 0 / 10%)) * Removed quotes from font family name (SFMono-Regular) * Reordered selectors to fix specificity issues - Fixed hadolint errors in Dockerfile: * Added specific versions for all apk packages * Added specific version for pipenv * Consolidated RUN instructions - Fixed editorconfig errors across multiple projects: * Corrected indentation (4 spaces for green-house-migration, 2 spaces for langgraph) * Added missing newlines at end of files * Removed trailing whitespace * Fixed line endings - Projects corrected: * green-house-migration (Python files with 4-space indentation) * langgraph-sls-fastapi-rag (Python files with 2-space indentation) * nest-nats-microservices (markdown, yaml, json files) * stripe-integration-node-typescript (TypeScript files) - All files now comply with editorconfig standards - Removed log files from repository
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
examples/green-house-migration/monitoring/health_check.py (1)
1-159
: Critical fixes required: unresolved variables, deprecated event loop usage, and missing f-string interpolationThe verification script shows that several of the previously flagged issues have not been resolved. Please address the following in
examples/green-house-migration/monitoring/health_check.py
before merging:
run_all_checks – inconsistent naming and undefined variable
• Lines 76 & 138: you initialize_results = {}
but then useresults[name]
and referenceresults.values()
, causing a NameError.
• Line 89: still using the deprecatedasyncio.get_event_loop().time()
. Switch toasyncio.get_running_loop().time()
ortime.monotonic()
.
Suggested diff:76c76 - _results = {} + results = {} 89c89 - "timestamp": asyncio.get_event_loop().time(), + "timestamp": asyncio.get_running_loop().time(),TeamTailorHealthChecker – missing f-strings and wrong header template
• Line 100:"Authorization": "Token _token ={token}"
is literal and incorrect. Should injectself.token
.
• Lines 110 & 143: URL strings use"{self.base_url}..."
rather than f-strings, so they never interpolate.
Suggested diff:100c100 - "Authorization": "Token _token ={token}", + "Authorization": f"Token token={self.token}", 110c110 - "{self.base_url}/candidates", + f"{self.base_url}/candidates", 143c143 - "{self.base_url}{endpoint}", + f"{self.base_url}{endpoint}",Error logging – use structured logging instead of bare f-strings in log messages
• Incheck_connection()
, replaceapp_logger.error("TeamTailor API connection check failed: {e}")with
app_logger.error("TeamTailor API connection check failed: %s", e)After applying these fixes, rerun the self-check script to confirm all pitfalls have been addressed.
♻️ Duplicate comments (26)
examples/green-house-migration/monitoring/health_check.py (4)
134-159
: Fix results var usage; reuse one session; f-strings; combine async with
Currently writes to undefined results; opens a session per endpoint; string interpolation is wrong.- async def checkendpoints(self) -> Dict[str, Any]: + async def checkendpoints(self) -> Dict[str, Any]: @@ - _results = {} - for endpoint in endpoints: - try: - async with aiohttp.ClientSession() as session: - async with session.get( - "{self.base_url}{endpoint}", - headers=self.headers, - timeout=aiohttp.ClientTimeout(total=10), - ) as response: - results[endpoint] = { - "status": response.status, - "accessible": response.status in [200, 401, 403], - } - except Exception as e: - results[endpoint] = { - "status": "error", - "accessible": False, - "error": str(e), - } + results: Dict[str, Any] = {} + timeout = aiohttp.ClientTimeout(total=10) + async with aiohttp.ClientSession(timeout=timeout) as session: + for endpoint in endpoints: + try: + async with session.get(f"{self.base_url}{endpoint}", headers=self.headers) as response: + results[endpoint] = { + "status": response.status, + "accessible": response.status in (200, 401, 403), + } + except Exception as e: + results[endpoint] = {"status": "error", "accessible": False, "error": str(e)} @@ - return results + return results
74-90
: Fix undefined variables and bad event loop call in run_all_checks
Results dict is undefined; comprehension uses wrong variable; getevent_loop() is a typo; timestamp should use wall clock.@@ async def run_all_checks(self) -> Dict[str, Any]: """Run all health checks.""" - _results = {} - for name, check_func in self.checks.items(): - results[name] = await check_func() + results: Dict[str, Any] = {} + for name, check_func in self.checks.items(): + results[name] = await check_func() @@ - overall_status = ( - "healthy" - if all(result["status"] == "healthy" for _result in results.values()) - else "unhealthy" - ) + overall_status = "healthy" if all(r["status"] == "healthy" for r in results.values()) else "unhealthy" @@ - return { - "status": overall_status, - "checks": results, - "timestamp": asyncio.getevent_loop().time(), - } + return {"status": overall_status, "checks": results, "timestamp": time.time()}Add the missing import:
@@ import asyncio from typing import Any, Dict +import time
96-104
: Fix Authorization header; f-string and correct format
TeamTailor expects "Token token=".self.headers = { - "Authorization": "Token _token ={token}", + "Authorization": f"Token token={self.token}", "Content-Type": "application/vnd.api+json", "Accept": "application/vnd.api+json", }
108-127
: Combine async context managers; fix f-strings; improve exception logging
Nested with, string interpolation, and logging issues.- async with aiohttp.ClientSession() as session: - async with session.get( - "{self.base_url}/candidates", - headers=self.headers, - timeout=aiohttp.ClientTimeout(total=10), - ) as response: + timeout = aiohttp.ClientTimeout(total=10) + async with aiohttp.ClientSession(timeout=timeout) as session, session.get( + f"{self.base_url}/candidates", headers=self.headers + ) as response: @@ - return { - "status": "unhealthy", - "message": "TeamTailor API returned status {response.status}", - } - except Exception as e: - app_logger.error("TeamTailor API connection check failed: {e}") + return {"status": "unhealthy", "message": f"TeamTailor API returned status {response.status}"} + except Exception as e: + app_logger.exception("TeamTailor API connection check failed") return { "status": "unhealthy", - "message": "Connection failed: {str(e)}", + "message": f"Connection failed: {e}", }examples/green-house-migration/routes/api/candidate_tags.py (4)
270-307
: Blocking I/O in async endpoint — wrap client call; also bound limit.-async def search_candidates_by_tags( - tags: str = Query(..., description="Comma-separated list of tags to search for"), - limit: int = Query(50, description="Maximum number of candidates to return"), -) -> List[CandidateTagsResponse]: +async def search_candidates_by_tags( + tags: str = Query(..., description="Comma-separated list of tags to search for"), + limit: int = Query(50, ge=1, le=200, description="Maximum number of candidates to return"), +) -> List[CandidateTagsResponse]: @@ - response = client.get_candidates(params=params) + response = await run_in_threadpool(client.get_candidates, params)
317-376
: Blocking I/O in async endpoint — wrap get/update; consider response model.
- Wrap
get_candidate
andupdate_candidate
.- Optionally define a response model (e.g., RemoveTagsResponse) for consistency.
- response = client.get_candidate(candidate_id) + response = await run_in_threadpool(client.get_candidate, candidate_id) @@ - client.update_candidate(candidate_id, update_data) + await run_in_threadpool(client.update_candidate, candidate_id, update_data)
89-107
: Blocking I/O in async endpoint — run client calls in a threadpool.
TeamTailorClient
usesrequests
+ sleep; calling it inasync def
will block the event loop.Apply import once:
+from starlette.concurrency import run_in_threadpool
Wrap blocking call:
- response = client.get_candidates(params=params) + response = await run_in_threadpool(client.get_candidates, params)
116-139
: Blocking I/O in async endpoint — wrap TT calls.- response = client.get_candidate(candidate_id) + response = await run_in_threadpool(client.get_candidate, candidate_id)examples/green-house-migration/main.py (1)
54-62
: Bug: invalid FastAPI kwarg_title
→ usetitle
FastAPI does not accept
_title
; this will raise TypeError at startup.Apply this diff:
-app = FastAPI( - _title="Greenhouse API Proxy", +app = FastAPI( + title="Greenhouse API Proxy",examples/green-house-migration/legacy/greenhouse/processor.py (1)
14-20
: Fix logging interpolation and undefined variabledata
The print logs a literal string and
data
is undefined.Apply this diff:
def run(self) -> Dict[str, Any]: """Run the processor to fetch and save entity data.""" - print("▶ Processing {self.entity}...") - _data = self.fetch() - saveentity_data(self.entity, data) - return {"entity": self.entity, "count": len(data)} + print(f"▶ Processing {self.entity}...") + data = self.fetch() + saveentity_data(self.entity, data) + return {"entity": self.entity, "count": len(data)}examples/green-house-migration/legacy/greenhouse/batch/jobs.py (1)
19-47
: Fix loop var misuse, string interpolation, and exception handlingReferences to
job
are undefined (loop var is_job
), endpoint strings are not interpolated, and catchingJobExportError
is ineffective here.Apply this diff:
- for _job in jobs: - job_id = job.get("id") + for job in jobs: + job_id = job.get("id") if not job_id: continue try: - job["job_posts"] = gh_get("jobs/{job_id}/job_posts") - except JobExportError as e: - print("⚠️ Error getting job_posts for job {job_id}: {e}") + _res = gh_get(f"jobs/{job_id}/job_posts") + job["job_posts"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + print(f"⚠️ Error getting job_posts for job {job_id}: {e}") job["job_posts"] = [] try: - job["approval_flows"] = gh_get("jobs/{job_id}/approval_flows") - except JobExportError as e: - print("⚠️ Error getting approval_flows for job {job_id}: {e}") + _res = gh_get(f"jobs/{job_id}/approval_flows") + job["approval_flows"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + print(f"⚠️ Error getting approval_flows for job {job_id}: {e}") job["approval_flows"] = [] try: - job["job_openings"] = gh_get("jobs/{job_id}/openings") - except JobExportError as e: - print("⚠️ Error getting job_openings for job {job_id}: {e}") + _res = gh_get(f"jobs/{job_id}/openings") + job["job_openings"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + print(f"⚠️ Error getting job_openings for job {job_id}: {e}") job["job_openings"] = [] try: - job["job_stages"] = gh_get("jobs/{job_id}/stages") - except JobExportError as e: - print("⚠️ Error getting job_stages for job {job_id}: {e}") + _res = gh_get(f"jobs/{job_id}/stages") + job["job_stages"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + print(f"⚠️ Error getting job_stages for job {job_id}: {e}") job["job_stages"] = []examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py (1)
26-41
: Fix undefined names, f-string formatting, and list/dict normalization in run().Current code uses undefined variables (model, data, url), prints literal placeholders, and doesn’t normalize dict/list responses from gh_get. This will crash and/or save wrong payloads. Also, catching CustomFieldsExportError is ineffective here.
Apply:
- for _model in self.model_types: + for model in self.model_types: try: - print("Fetching custom fields for {model}...") - _data = gh_get("custom_fields/{model}") - saveentity_data(model, data, subfolder="custom_fields") - summary.append({"entity": model, "count": len(data)}) - except CustomFieldsExportError as e: - print("⚠️ Error fetching custom_fields for {model}: {e}") + print(f"Fetching custom fields for {model}...") + _res = gh_get(f"custom_fields/{model}") + items = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + saveentity_data(model, items, subfolder="custom_fields") + summary.append({"entity": model, "count": len(items)}) + except Exception as e: + print(f"⚠️ Error fetching custom_fields for {model}: {e}") errored_models.append( - "https://harvest.greenhouse.io/v1/custom_fields/{model}" + f"https://harvest.greenhouse.io/v1/custom_fields/{model}" ) @@ - if errored_models: + if errored_models: print("\n❌ Errors occurred while fetching custom fields for some models:") - for _url in errored_models: - print(url) + for url in errored_models: + print(url)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
19-36
: Fix NameError, f-strings, and exception handling; normalize API responses.Loop variable mismatch (_user vs user) will crash; paths/logs aren’t interpolated; gh_get won’t raise UsersProcessorError. Normalize dict/list payloads before assignment.
- for _user in users: - user_id = user.get("id") + for user in users: + user_id = user.get("id") if not user_id: continue try: - user["job_permissions"] = gh_get("users/{user_id}/permissions/jobs") - except UsersProcessorError as e: - print("⚠️ Error getting job permissions for user {user_id}: {e}") + _res = gh_get(f"users/{user_id}/permissions/jobs") + user["job_permissions"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + print(f"⚠️ Error getting job permissions for user {user_id}: {e}") user["job_permissions"] = [] try: - user["pending_approvals"] = gh_get("users/{user_id}/pending_approvals") - except UsersProcessorError as e: - # Log the error and set pending_approvals to an empty list - print("⚠️ Error getting pending approvals for user {user_id}: {e}") + _res = gh_get(f"users/{user_id}/pending_approvals") + user["pending_approvals"] = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + except Exception as e: + # Log the error and set pending_approvals to an empty list + print(f"⚠️ Error getting pending approvals for user {user_id}: {e}") user["pending_approvals"] = []examples/green-house-migration/legacy/greenhouse/client.py (1)
31-34
: Broken URL formatting: missing f-string and slash normalization.Without f-string, requests go to a literal URL. Normalize slashes to avoid doubles.
-def gh_get(path: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]: +from typing import Union + +def gh_get(path: str, params: Optional[Dict[str, Any]] = None) -> Union[Dict[str, Any], List[Any]]: """Fetch raw data from the Greenhouse API with rate limiting.""" - url = "{GREENHOUSE_API_URL}/{path.lstrip('/')}" + url = f"{GREENHOUSE_API_URL.rstrip('/')}/{path.lstrip('/')}"examples/green-house-migration/legacy/greenhouse/batch/metadata.py (1)
40-55
: Fix f-strings, undefined names, and list/dict normalization in run().Currently prints literals, uses undefined data/url, and may pass a dict to saveentity_data (CSV export will break). Catch broader exceptions.
for name, path in self.endpoints.items(): try: - print("Fetching {name} data from {path}...") - _data = gh_get(path) - saveentity_data(name, data, subfolder="metadata") - summary.append({"entity": name, "count": len(data)}) - except MetadataExportError as e: - print("⚠️ Error fetching {name} from {path}: {e}") - errored_paths.append("https://harvest.greenhouse.io/v1/{path}") + print(f"Fetching {name} data from {path}...") + _res = gh_get(path) + items = _res.get("data", []) if isinstance(_res, dict) else (_res or []) + saveentity_data(name, items, subfolder="metadata") + summary.append({"entity": name, "count": len(items)}) + except Exception as e: + print(f"⚠️ Error fetching {name} from {path}: {e}") + errored_paths.append(f"https://harvest.greenhouse.io/v1/{path}") if errored_paths: print("\n❌ Errors occurred while fetching some metadata:") - for _url in errored_paths: - print(url) + for _url in errored_paths: + print(_url)examples/green-house-migration/legacy/greenhouse/batch/offers.py (1)
20-48
: Fix undefined names and literal placeholders; add typing for fetch.Loop variable, f-strings, and final print are incorrect; also add a precise return type.
@@ -import requests +import requests +from typing import Any, Dict, List @@ - def fetch(self): + def fetch(self) -> List[Dict[str, Any]]: offers = fetch_all_from_api("offers") errored_urls = [] - for _offer in offers: - offer_id = offer.get("id") + for offer in offers: + offer_id = offer.get("id") if not offer_id: continue try: - details = gh_get("offers/{offer_id}") + details = gh_get(f"offers/{offer_id}") offer.update(details) except requests.exceptions.HTTPError as e: - print("⚠️ HTTP error fetching offer {offer_id}: {e}") - errored_urls.append( - "https://harvest.greenhouse.io/v1/offers/{offer_id}" - ) + print(f"⚠️ HTTP error fetching offer {offer_id}: {e}") + errored_urls.append(f"https://harvest.greenhouse.io/v1/offers/{offer_id}") except OfferExportError as e: - print("⚠️ Error fetching details for offer {offer_id}: {e}") - errored_urls.append( - "https://harvest.greenhouse.io/v1/offers/{offer_id}" - ) + print(f"⚠️ Error fetching details for offer {offer_id}: {e}") + errored_urls.append(f"https://harvest.greenhouse.io/v1/offers/{offer_id}") @@ if errored_urls: print("\n❌ Errors encountered while fetching offers:") - for _url in errored_urls: - print(url) + for url in errored_urls: + print(url) return offersAlso applies to: 5-8
examples/green-house-migration/legacy/greenhouse/batch/applications.py (3)
37-46
: Interpolate variables in error messages and persist the resolved text.Build real messages and write them, using the exception object.
try: return key, gh_get(url_path) except requests.exceptions.HTTPError as e: - msg = "⚠️ HTTP error fetching {key} for {app_id} at {url_path}: {e}" + msg = f"⚠️ HTTP error fetching {key} for {app_id} at {url_path}: {e}" except ApplicationExportError as e: - msg = "⚠️ Other error fetching {key} for {app_id} at {url_path}: {e}" + msg = f"⚠️ Other error fetching {key} for {app_id} at {url_path}: {e}" - print("safe_fetch: ", msg) + print("safe_fetch:", msg) with open(ERROR_LOG, "a", encoding="utf-8") as f: - f.write("{msg}\n") + f.write(f"{msg}\n") return key, default
55-59
: Assign the fetched value; avoid undefinedresult
.- for key, path_template, default in self.related_requests: - k, _result = self.safe_fetch(app_id, key, path_template, default) - app[k] = result + for key, path_template, default in self.related_requests: + k, result = self.safe_fetch(app_id, key, path_template, default) + app[k] = result
63-73
: Fix concurrency loop variable names; collect results correctly.- _applications = fetch_all_from_api("applications") + applications = fetch_all_from_api("applications") @@ - with ThreadPoolExecutor(max_workers=6) as executor: - futures = [ - executor.submit(self.enrich_application, app) for _app in applications - ] - for _future in as_completed(futures): - enriched.append(future.result()) + with ThreadPoolExecutor(max_workers=6) as executor: + futures = [executor.submit(self.enrich_application, app) for app in applications] + for future in as_completed(futures): + enriched.append(future.result()) return enrichedexamples/green-house-migration/routes/api/applications.py (1)
21-66
: Interpolate IDs in Greenhouse paths and fix typos in routes/names.All endpoints used literal placeholders; also fix “demogrunphics” and function name for EEOC.
-from typing import Any, Dict +from typing import Any, Dict, Annotated @@ -@router.get("/") -def list_applications( - pagination: Dict[str, Any] = Depends(pagination_dependency), -): +@router.get("/") +def list_applications( + pagination: Annotated[Dict[str, Any], Depends(pagination_dependency)], +): @@ - return gh_get("applications/{application_id}") + return gh_get(f"applications/{application_id}") @@ -@router.get("/{application_id}/demogrunphics/answers") -def get_application_demogrunphics(application_id: int): - """Get demogrunphic answers for a specific application""" - return gh_get("applications/{application_id}/demogrunphics/answers") +@router.get("/{application_id}/demographics/answers") +def get_application_demographics(application_id: int): + """Get demographic answers for a specific application""" + return gh_get(f"applications/{application_id}/demographics/answers") @@ - return gh_get("applications/{application_id}/scorecards") + return gh_get(f"applications/{application_id}/scorecards") @@ - return gh_get("applications/{application_id}/scheduled_interviews") + return gh_get(f"applications/{application_id}/scheduled_interviews") @@ - return gh_get("applications/{application_id}/offer") + return gh_get(f"applications/{application_id}/offer") @@ - return gh_get("applications/{application_id}/offers") + return gh_get(f"applications/{application_id}/offers") @@ - return gh_get("applications/{application_id}/offers/current_offer") + return gh_get(f"applications/{application_id}/offers/current_offer") @@ -@router.get("/{application_id}/eeoc") -def get_applicationeeoc(application_id: int): +@router.get("/{application_id}/eeoc") +def get_application_eeoc(application_id: int): @@ - return gh_get("applications/{application_id}/eeoc") + return gh_get(f"applications/{application_id}/eeoc")examples/green-house-migration/legacy/greenhouse/storage.py (4)
24-26
: Use f-strings for filenames; current code writes literal{entity}
.- json_path = json_dir / "{entity}.json" - csv_path = csv_dir / "{entity}.csv" + json_path = json_dir / f"{entity}.json" + csv_path = csv_dir / f"{entity}.csv"
47-47
: Fix wrong variable in size aggregation.- "size_mb": (sum(Path(f).stat().st_size for _f in saved_files) / (1024 * 1024)), + "size_mb": (sum(Path(_f).stat().st_size for _f in saved_files) / (1024 * 1024)),
57-61
: Path templates in loader are literals; interpolateentity
.- json_path = ( - target_dir / "{entity}.json.gz" - if use_compressed - else target_dir / "{entity}.json" - ) + json_path = ( + target_dir / f"{entity}.json.gz" + if use_compressed + else target_dir / f"{entity}.json" + )
66-71
: Use the loaded variable; avoid undefineddata
.- with open_fn(json_path, mode, encoding="utf-8") as f: - _data = json.load(f) - if isinstance(data, list): - return data - return [data] + with open_fn(json_path, mode, encoding="utf-8") as f: + data = json.load(f) + return data if isinstance(data, list) else [data]examples/green-house-migration/legacy/greenhouse/batch/candidates.py (2)
26-31
: Interpolate variables in error messages to use the exception and IDs.except requests.exceptions.HTTPError as e: - print("⚠️ HTTP error fetching {key} for {candidate.get('id')} at {path}: {e}") + print(f"⚠️ HTTP error fetching {key} for {candidate.get('id')} at {path}: {e}") candidate[key] = default if default is not None else [] except CandidateExportError as e: - print("⚠️ Other error fetching {key} for {candidate.get('id')} at {path}: {e}") + print(f"⚠️ Other error fetching {key} for {candidate.get('id')} at {path}: {e}") candidate[key] = default if default is not None else []
39-52
: Fix undefined names and format the activity path; return the correct list.- def fetch(self) -> list[dict]: - _candidates = fetch_all_from_api("candidates") - - for _candidate in candidates: - candidate_id = candidate.get("id") + def fetch(self) -> list[dict]: + _candidates = fetch_all_from_api("candidates") + + for candidate in _candidates: + candidate_id = candidate.get("id") if not candidate_id: continue safe_fetch( - candidate, - "activity_feed", - "candidates/{candidate_id}/activity_feed", + candidate, + "activity_feed", + f"candidates/{candidate_id}/activity_feed", ) - return candidates + return _candidates
🧹 Nitpick comments (23)
examples/green-house-migration/monitoring/health_check.py (4)
14-21
: Good checks registry; consider parallelizing executionRun the checks concurrently to reduce overall latency.
Apply after fixing run_all_checks:
- for name, check_func in self.checks.items(): - results[name] = await check_func() + results = { + name: result + for name, result in zip( + self.checks.keys(), + await asyncio.gather(*(fn() for fn in self.checks.values()), return_exceptions=False), + ) + }
31-33
: Log exceptions with tracebackUse app_logger.exception(...) to capture stack traces across all health checks.
- except Exception as e: - app_logger.error("Greenhouse API health check failed: %s", e) + except Exception: + app_logger.exception("Greenhouse API health check failed")Repeat the same pattern for TeamTailor, storage, database, and redis blocks.
Also applies to: 43-45, 53-54, 62-63, 71-72
74-90
: Optional: run checks concurrently (keep ordering)
If sequential execution is intended, ignore. Otherwise:- results: Dict[str, Any] = {} - for name, check_func in self.checks.items(): - results[name] = await check_func() + names = list(self.checks.keys()) + funcs = [self.checks[n] for n in names] + values = await asyncio.gather(*(fn() for fn in funcs), return_exceptions=False) + results = dict(zip(names, values))
11-22
: Add minimal tests for HealthChecker
Unit test run_all_checks happy-path and one failing check.I can generate pytest tests and fixtures if you confirm the desired behavior for failing checks (HTTP vs. non-HTTP errors).
examples/green-house-migration/routes/api/candidate_tags.py (8)
43-47
: Avoid mutable default in Pydantic model (use default_factory).Using
default=[]
shares the list across instances.- errors: List[str] = Field(default=[], description="List of error messages") + errors: List[str] = Field(default_factory=list, description="List of error messages")
24-38
: Tighten validation for bulk limits.Constrain
limit
to reasonable bounds to prevent accidental large scans.- limit: int = Field(100, description="Limit for bulk operations") + limit: int = Field(100, ge=1, le=200, description="Limit for bulk operations (1-200)")If TeamTailor allows higher page sizes, adjust
le
accordingly.
95-105
: Available tags only sample first 100 candidates — add pagination.Current result can miss many tags.
- params = {"page[size]": 100} - response = client.get_candidates(params=params) - candidates = response.get("data", []) - - all_tags = set() - for candidate in candidates: - tags = candidate.get("attributes", {}).get("tags", []) - all_tags.update(tags) + page = 1 + all_tags: set[str] = set() + while True: + params = {"page[size]": 100, "page[number]": page} + response = await run_in_threadpool(client.get_candidates, params) + candidates = response.get("data", []) + if not candidates: + break + for candidate in candidates: + all_tags.update(candidate.get("attributes", {}).get("tags", [])) + page += 1Confirm TT API pagination semantics (e.g., meta.next).
140-146
: Chain exceptions to satisfy Ruff B904 and keep cause.Preserve the original error via
from e
and avoid leaking internals in HTTP response.Example (apply similarly in these blocks):
- raise HTTPException( - status_code=500, detail=f"Failed to get candidate tags: {str(e)}" - ) + raise HTTPException( + status_code=500, detail="Failed to get candidate tags" + ) from eAlso applies to: 265-267, 312-314, 373-375, 109-113
186-209
: Email search may miss exact matches across pages.If TT supports exact email filter (e.g.,
filter[email]
), prefer it; else set a higher page size and compare case-insensitively.- params = {"filter[search]": email} + params = {"filter[email]": email}If unsupported, keep
filter[search]
but addpage[size]
and compare.lower()
.
14-17
: Prefer reusing TagManager to avoid duplicated tag logic.
teamtailor/management/tag_manager.py
already implements tag search and tag removal. Using it here reduces duplication and centralizes API semantics.Example:
- Use
TagManager.search_candidates_by_tags(...)
instead of re-implementing search.- Use
remove_tags_from_candidate(...)
for DELETE path.Also applies to: 149-260, 270-307, 317-376
243-246
: Log volume: avoid dumping full tag lists.Large tag arrays can bloat logs; log counts and diffs instead.
- logger.info( - f"Successfully updated candidate {candidate_id} with tags: {', '.join(updated_tags)}" - ) + logger.info( + "Updated candidate %s: +%d new tags, total=%d", + candidate_id, + len(set(updated_tags) - set(current_tags)), + len(updated_tags), + )
222-259
: Counters may be misleading on partial retries.If a candidate update fails after prior success in the same request (due to duplication), counts drift. The dedupe fix above mitigates this; consider tracking IDs in success/failed sets.
examples/green-house-migration/config.py (2)
23-27
: Avoid import-time hard failure for missing GREENHOUSE_API_KEYImport-time evaluation will crash any tooling that imports the module without envs configured. Consider lazy access or a clearer bootstrap check.
Apply this diff to defer the failure to first use while keeping a clear error:
-# Configuración de la API -GREENHOUSE_API_KEY = get_required_env("GREENHOUSE_API_KEY") +# Configuración de la API +# Diferir acceso para evitar fallas en import-time; solicitar cuando se necesite. +def greenhouse_api_key() -> str: + return get_required_env("GREENHOUSE_API_KEY")Note: update call sites to use greenhouse_api_key().
29-34
: Make DEBUG parsing robust and validate BATCH_SIZESupport common truthy values and fail early on invalid or non-positive batch size.
Apply this diff:
-# Configuración de la aplicación -DEBUG = os.getenv("DEBUG", "False").lower() == "true" +# Configuración de la aplicación +# Acepta: 1/true/yes/y/on (case-insensitive) +DEBUG = os.getenv("DEBUG", "False").strip().lower() in {"1", "true", "yes", "y", "on"} LOG_LEVEL = get_optional_env("LOG_LEVEL", "INFO") -# Configuración de almacenamiento -DATA_DIR = get_optional_env("DATA_DIR", "data") -BATCH_SIZE = int(get_optional_env("BATCH_SIZE", "100")) +# Configuración de almacenamiento +DATA_DIR = get_optional_env("DATA_DIR", "data") +_bs = get_optional_env("BATCH_SIZE", "100") +try: + BATCH_SIZE = max(1, int(_bs)) +except ValueError as e: + raise ValueError(f"Invalid BATCH_SIZE={_bs!r}; must be a positive integer") from eexamples/green-house-migration/main.py (2)
64-71
: CORS: wildcard origins with credentials is unsafe and often ineffectiveallow_origins=["*"] with allow_credentials=True can break browser CORS and is risky in prod. Prefer an explicit allowlist or env-driven config.
Example:
-app.add_middleware( - CORSMiddleware, - allow_origins=["*"], # Configure appropriately for production - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) +ALLOWED_ORIGINS = os.getenv("ALLOWED_ORIGINS", "").split(",") if os.getenv("ALLOWED_ORIGINS") else [] +app.add_middleware( + CORSMiddleware, + allow_origins=ALLOWED_ORIGINS or ["http://localhost:3000"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +)(Remember to import os.)
127-134
: Type the exception handlerrequest
and log more contextTyping improves tooling; logging method/path helps debugging.
Apply this diff (and add
from fastapi import Request
to imports):-@app.exception_handler(Exception) -async def global_exception_handler(request, exc): +@app.exception_handler(Exception) +async def global_exception_handler(request: "Request", exc: Exception): """Global exception handler.""" - app_logger.error("Unhandled exception: %s", exc) + app_logger.error("Unhandled exception on %s %s: %s", getattr(request, "method", "?"), getattr(request, "url", "?"), exc)examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py (1)
19-21
: Consider implementing fetch() or clarifying intent.fetch() returns an empty list while run() performs all work. Either implement fetch() per model or add a brief comment explaining why fetch() is a no-op for this processor.
examples/green-house-migration/legacy/greenhouse/pagination.py (2)
28-31
: Silence Ruff B008 for FastAPI Depends usage or refactor dependency.Ruff flags Depends in default args. In FastAPI this is canonical. Add noqa or move Depends into the function body.
-def pagination_dependency( - pagination: PaginationParams = Depends(get_pagination_model), +def pagination_dependency( + pagination: PaginationParams = Depends(get_pagination_model), # noqa: B008 ) -> Dict[str, Any]:
10-18
: Minor: reuse pagination_params() for consistency.You can derive the return via pagination_params(pagination.per_page, pagination.page) to centralize validation/shape.
- return {"per_page": pagination.per_page, "page": pagination.page} + return pagination_params(pagination.per_page, pagination.page)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
7-9
: UsersProcessorError is unused after changes.If not thrown anywhere, remove it to avoid dead code. If you plan to raise it elsewhere, keep and document where it’s emitted.
examples/green-house-migration/legacy/greenhouse/client.py (1)
67-73
: Fix tqdm description formatting.Shows literal braces now.
- progress_bar = tqdm(desc="{path}", unit=" page") + progress_bar = tqdm(desc=f"{path}", unit=" page")examples/green-house-migration/routes/api/applications.py (1)
15-16
: Silence Ruff B008 without disabling FastAPI DI.Switch to Annotated+Depends to avoid “function call in default” warnings.
(Handled in the diff above.)
examples/green-house-migration/legacy/greenhouse/batch/candidates.py (1)
13-16
: Align logs base path with config for consistency.Consider using DATA_DIR like other modules (applications) do.
-from pathlib import Path +from pathlib import Path +from config import DATA_DIR @@ -LOG_PATH = Path("data/logs") +LOG_PATH = Path(DATA_DIR) / "logs"
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (25)
examples/green-house-migration/Dockerfile
(1 hunks)examples/green-house-migration/config.py
(1 hunks)examples/green-house-migration/docs/assets/css/style.css
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/applications.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/candidates.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/jobs.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/metadata.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/offers.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/scheduled_interviews.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/scorecards.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/batch/users.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/client.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/config_validator.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/jobs_tasks/run_all.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/logger.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/pagination.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/processor.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/storage.py
(1 hunks)examples/green-house-migration/legacy/greenhouse/validators.py
(1 hunks)examples/green-house-migration/main.py
(1 hunks)examples/green-house-migration/monitoring/health_check.py
(1 hunks)examples/green-house-migration/monitoring/metrics.py
(1 hunks)examples/green-house-migration/routes/api/applications.py
(1 hunks)examples/green-house-migration/routes/api/candidate_tags.py
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- examples/green-house-migration/legacy/greenhouse/batch/scheduled_interviews.py
- examples/green-house-migration/docs/assets/css/style.css
- examples/green-house-migration/legacy/greenhouse/batch/scorecards.py
- examples/green-house-migration/Dockerfile
- examples/green-house-migration/monitoring/metrics.py
- examples/green-house-migration/legacy/greenhouse/validators.py
- examples/green-house-migration/legacy/greenhouse/logger.py
- examples/green-house-migration/legacy/greenhouse/config_validator.py
🧰 Additional context used
🧬 Code graph analysis (15)
examples/green-house-migration/legacy/greenhouse/batch/candidates.py (6)
examples/green-house-migration/legacy/greenhouse/client.py (2)
fetch_all_from_api
(67-84)gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (2)
BaseProcessor
(9-23)fetch
(22-23)examples/green-house-migration/legacy/greenhouse/batch/applications.py (2)
safe_fetch
(31-46)fetch
(61-73)examples/green-house-migration/legacy/greenhouse/batch/jobs.py (1)
fetch
(16-48)examples/green-house-migration/legacy/greenhouse/batch/offers.py (1)
fetch
(20-48)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
fetch
(16-37)
examples/green-house-migration/legacy/greenhouse/batch/offers.py (5)
examples/green-house-migration/legacy/greenhouse/client.py (2)
fetch_all_from_api
(67-84)gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (2)
BaseProcessor
(9-23)fetch
(22-23)examples/green-house-migration/legacy/greenhouse/batch/candidates.py (1)
fetch
(39-52)examples/green-house-migration/legacy/greenhouse/batch/jobs.py (1)
fetch
(16-48)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
fetch
(16-37)
examples/green-house-migration/routes/api/applications.py (2)
examples/green-house-migration/legacy/greenhouse/client.py (2)
gh_get
(31-47)paginated_get
(50-64)examples/green-house-migration/legacy/greenhouse/pagination.py (1)
pagination_dependency
(27-31)
examples/green-house-migration/routes/api/candidate_tags.py (3)
examples/green-house-migration/teamtailor/api/client.py (7)
TeamTailorClient
(20-358)get
(88-91)get_candidates
(117-119)get_candidate
(121-123)post
(93-99)update_candidate
(129-133)delete
(108-111)examples/green-house-migration/teamtailor/management/tag_manager.py (2)
search_candidates_by_tags
(493-532)remove_tags_from_candidate
(452-491)examples/green-house-migration/routes/api/candidates.py (2)
get_candidates
(165-219)get_candidate
(223-238)
examples/green-house-migration/legacy/greenhouse/client.py (3)
examples/green-house-migration/routes/clients/tt_client.py (1)
get
(25-32)examples/green-house-migration/routes/clients/tt_client_enhanced.py (1)
get
(76-79)examples/green-house-migration/teamtailor/api/client.py (1)
get
(88-91)
examples/green-house-migration/main.py (7)
examples/green-house-migration/routes/import_/import_applications.py (1)
import_applications
(95-229)examples/green-house-migration/routes/import_/import_candidates.py (1)
import_candidates
(117-189)examples/green-house-migration/routes/import_/import_comments.py (1)
import_comments
(104-176)examples/green-house-migration/routes/import_/import_interviews.py (1)
import_interviews
(85-184)examples/green-house-migration/routes/import_/import_jobs.py (1)
import_jobs
(68-149)examples/green-house-migration/routes/import_/import_offers.py (1)
import_offers
(102-227)examples/green-house-migration/routes/clients/tt_client.py (1)
get
(25-32)
examples/green-house-migration/monitoring/health_check.py (3)
examples/green-house-migration/routes/clients/tt_client.py (1)
get
(25-32)examples/green-house-migration/routes/clients/tt_client_enhanced.py (1)
get
(76-79)examples/green-house-migration/teamtailor/api/client.py (1)
get
(88-91)
examples/green-house-migration/legacy/greenhouse/batch/applications.py (5)
examples/green-house-migration/legacy/greenhouse/client.py (2)
fetch_all_from_api
(67-84)gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (2)
BaseProcessor
(9-23)fetch
(22-23)examples/green-house-migration/legacy/greenhouse/batch/candidates.py (2)
safe_fetch
(22-31)fetch
(39-52)examples/green-house-migration/legacy/greenhouse/batch/jobs.py (1)
fetch
(16-48)examples/green-house-migration/legacy/greenhouse/batch/offers.py (1)
fetch
(20-48)
examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py (3)
examples/green-house-migration/legacy/greenhouse/client.py (1)
gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (3)
BaseProcessor
(9-23)fetch
(22-23)run
(14-19)examples/green-house-migration/legacy/greenhouse/storage.py (1)
saveentity_data
(14-48)
examples/green-house-migration/legacy/greenhouse/batch/jobs.py (6)
examples/green-house-migration/legacy/greenhouse/client.py (2)
fetch_all_from_api
(67-84)gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (2)
BaseProcessor
(9-23)fetch
(22-23)examples/green-house-migration/legacy/greenhouse/batch/applications.py (1)
fetch
(61-73)examples/green-house-migration/legacy/greenhouse/batch/candidates.py (1)
fetch
(39-52)examples/green-house-migration/legacy/greenhouse/batch/offers.py (1)
fetch
(20-48)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
fetch
(16-37)
examples/green-house-migration/legacy/greenhouse/batch/metadata.py (3)
examples/green-house-migration/legacy/greenhouse/client.py (1)
gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (3)
BaseProcessor
(9-23)fetch
(22-23)run
(14-19)examples/green-house-migration/legacy/greenhouse/storage.py (1)
saveentity_data
(14-48)
examples/green-house-migration/legacy/greenhouse/batch/users.py (2)
examples/green-house-migration/legacy/greenhouse/client.py (2)
fetch_all_from_api
(67-84)gh_get
(31-47)examples/green-house-migration/legacy/greenhouse/processor.py (2)
BaseProcessor
(9-23)fetch
(22-23)
examples/green-house-migration/legacy/greenhouse/jobs_tasks/run_all.py (10)
examples/green-house-migration/legacy/greenhouse/batch/applications.py (1)
ApplicationsProcessor
(22-73)examples/green-house-migration/legacy/greenhouse/batch/candidates.py (1)
CandidatesProcessor
(34-52)examples/green-house-migration/legacy/greenhouse/batch/jobs.py (1)
JobsProcessor
(11-48)examples/green-house-migration/legacy/greenhouse/batch/metadata.py (2)
MetadataProcessor
(12-55)run
(36-55)examples/green-house-migration/legacy/greenhouse/batch/offers.py (1)
OffersProcessor
(15-48)examples/green-house-migration/legacy/greenhouse/batch/scheduled_interviews.py (1)
ScheduledInterviewsProcessor
(7-13)examples/green-house-migration/legacy/greenhouse/batch/scorecards.py (1)
ScorecardsProcessor
(7-13)examples/green-house-migration/legacy/greenhouse/batch/users.py (1)
UsersProcessor
(11-37)examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py (1)
run
(22-43)examples/green-house-migration/legacy/greenhouse/processor.py (1)
run
(14-19)
examples/green-house-migration/legacy/greenhouse/pagination.py (1)
examples/green-house-migration/legacy/greenhouse/validators.py (4)
PaginationParams
(8-20)validate_page
(68-72)validate_per_page
(16-20)validate_per_page
(75-81)
examples/green-house-migration/legacy/greenhouse/processor.py (1)
examples/green-house-migration/legacy/greenhouse/storage.py (1)
saveentity_data
(14-48)
🪛 Ruff (0.12.2)
examples/green-house-migration/legacy/greenhouse/batch/candidates.py
26-26: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
29-29: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
42-42: Undefined name candidates
(F821)
43-43: Undefined name candidate
(F821)
48-48: Undefined name candidate
(F821)
52-52: Undefined name candidates
(F821)
examples/green-house-migration/legacy/greenhouse/batch/offers.py
25-25: Undefined name offer
(F821)
31-31: Undefined name offer
(F821)
32-32: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
37-37: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
46-46: Undefined name url
(F821)
examples/green-house-migration/routes/api/applications.py
15-15: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
examples/green-house-migration/routes/api/candidate_tags.py
111-113: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
144-146: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
265-267: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
312-314: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
373-375: Within an except
clause, raise exceptions with raise ... from err
or raise ... from None
to distinguish them from errors in exception handling
(B904)
examples/green-house-migration/legacy/greenhouse/client.py
11-11: config.GREENHOUSE_API_URL
imported but unused
Remove unused import: config.GREENHOUSE_API_URL
(F401)
examples/green-house-migration/legacy/greenhouse/storage.py
68-68: Undefined name data
(F821)
69-69: Undefined name data
(F821)
70-70: Undefined name data
(F821)
examples/green-house-migration/monitoring/health_check.py
78-78: Undefined name results
(F821)
82-82: Undefined name result
(F821)
82-82: Undefined name results
(F821)
88-88: Undefined name results
(F821)
108-113: Use a single with
statement with multiple contexts instead of nested with
statements
Combine with
statements
(SIM117)
127-127: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
141-146: Use a single with
statement with multiple contexts instead of nested with
statements
(SIM117)
147-147: Undefined name results
(F821)
152-152: Undefined name results
(F821)
158-158: Undefined name results
(F821)
examples/green-house-migration/legacy/greenhouse/batch/applications.py
39-39: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
41-41: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
57-57: Undefined name result
(F821)
68-68: Undefined name app
(F821)
68-68: Undefined name applications
(F821)
71-71: Undefined name future
(F821)
examples/green-house-migration/legacy/greenhouse/batch/custom_fields.py
30-30: Undefined name model
(F821)
30-30: Undefined name data
(F821)
31-31: Undefined name model
(F821)
31-31: Undefined name data
(F821)
32-32: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
41-41: Undefined name url
(F821)
examples/green-house-migration/legacy/greenhouse/batch/jobs.py
20-20: Undefined name job
(F821)
25-25: Undefined name job
(F821)
26-26: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
28-28: Undefined name job
(F821)
31-31: Undefined name job
(F821)
32-32: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
34-34: Undefined name job
(F821)
37-37: Undefined name job
(F821)
38-38: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
40-40: Undefined name job
(F821)
43-43: Undefined name job
(F821)
44-44: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
46-46: Undefined name job
(F821)
examples/green-house-migration/legacy/greenhouse/batch/metadata.py
44-44: Undefined name data
(F821)
45-45: Undefined name data
(F821)
46-46: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
53-53: Undefined name url
(F821)
examples/green-house-migration/legacy/greenhouse/batch/users.py
20-20: Undefined name user
(F821)
25-25: Undefined name user
(F821)
26-26: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
28-28: Undefined name user
(F821)
31-31: Undefined name user
(F821)
32-32: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
35-35: Undefined name user
(F821)
examples/green-house-migration/legacy/greenhouse/jobs_tasks/run_all.py
31-31: Undefined name processor
(F821)
32-32: Undefined name processor
(F821)
32-32: Undefined name data
(F821)
examples/green-house-migration/legacy/greenhouse/pagination.py
28-28: Do not perform function call Depends
in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
examples/green-house-migration/legacy/greenhouse/processor.py
18-18: Undefined name data
(F821)
19-19: Undefined name data
(F821)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Lint Code Base
🔇 Additional comments (3)
examples/green-house-migration/monitoring/health_check.py (2)
3-10
: Logger import path verifiedThe symbol
app_logger
is defined inexamples/green-house-migration/legacy/greenhouse/logger.py
(line 59), so the import pathlegacy.greenhouse.logger.app_logger
is correct and requires no changes.• Definition found at:
examples/green-house-migration/legacy/greenhouse/logger.py:59
134-136
: Renamecheckendpoints
tocheck_endpoints
No usages of
checkendpoints
were found elsewhere in the repository, so it’s safe to apply this rename without breaking any call sites.Proposed change:
--- a/examples/green-house-migration/monitoring/health_check.py +++ b/examples/green-house-migration/monitoring/health_check.py @@ -134,7 +134,7 @@ class HealthCheck: async def checkendpoints(self) -> Dict[str, Any]: """Check key TeamTailor endpoints.""" endpoints = ["/candidates", "/jobs", "/applications", "/users"] - + async def check_endpoints(self) -> Dict[str, Any]: """Check key TeamTailor endpoints.""" endpoints = ["/candidates", "/jobs", "/applications", "/users"]• No callers found (
rg -nP '\bcheckendpoints\s*\(' -C2
returned only the definition).
• Renaming tocheck_endpoints
aligns with PEP 8 conventions.examples/green-house-migration/routes/api/candidate_tags.py (1)
1-18
: LGTM on router/module structure.Clear separation of models, utils, and endpoints; naming is consistent.
response = session.get(url, params=params) | ||
response.raise_for_status() | ||
return response.json() | ||
except requests.exceptions.HTTPError as e: | ||
if e.response.status_code == 429: | ||
# Rate limit exceeded, wait longer | ||
time.sleep(2) | ||
return gh_get(path, params) # Retry once |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Add request timeout and keep retry-on-429 behavior.
Avoid hanging connections; 30s is consistent with other clients in this repo.
- response = session.get(url, params=params)
+ response = session.get(url, params=params, timeout=30)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
response = session.get(url, params=params) | |
response.raise_for_status() | |
return response.json() | |
except requests.exceptions.HTTPError as e: | |
if e.response.status_code == 429: | |
# Rate limit exceeded, wait longer | |
time.sleep(2) | |
return gh_get(path, params) # Retry once | |
try: | |
response = session.get(url, params=params, timeout=30) | |
response.raise_for_status() | |
return response.json() | |
except requests.exceptions.HTTPError as e: | |
if e.response.status_code == 429: | |
# Rate limit exceeded, wait longer | |
time.sleep(2) | |
return gh_get(path, params) # Retry once |
🤖 Prompt for AI Agents
In examples/green-house-migration/legacy/greenhouse/client.py around lines 39 to
46, the HTTP GET call lacks a timeout which can cause hanging connections;
modify the session.get call to include timeout=30 (e.g., session.get(url,
params=params, timeout=30)) and keep the existing retry-on-429 behavior (sleep
and recursive retry). Ensure any recursive retry also uses the timeout parameter
so subsequent calls don't hang.
summary = [] | ||
for _processor in processors: | ||
_data = processor.run() | ||
summary.append({"entity": processor.entity, "count": len(data)}) | ||
return summary |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix NameError and use processor’s own summary
Variables processor
/data
are undefined; also .run()
already returns a summary dict—reuse it.
Apply this diff:
- summary = []
- for _processor in processors:
- _data = processor.run()
- summary.append({"entity": processor.entity, "count": len(data)})
- return summary
+ summary = []
+ for _processor in processors:
+ try:
+ result = _processor.run()
+ # result is expected to be {"entity": ..., "count": ...} or similar
+ summary.append(result)
+ except Exception as e:
+ summary.append(
+ {
+ "entity": getattr(_processor, "entity", _processor.__class__.__name__),
+ "error": str(e),
+ }
+ )
+ return summary
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
summary = [] | |
for _processor in processors: | |
_data = processor.run() | |
summary.append({"entity": processor.entity, "count": len(data)}) | |
return summary | |
summary = [] | |
for _processor in processors: | |
try: | |
result = _processor.run() | |
# result is expected to be {"entity": ..., "count": ...} or similar | |
summary.append(result) | |
except Exception as e: | |
summary.append({ | |
"entity": getattr(_processor, "entity", _processor.__class__.__name__), | |
"error": str(e), | |
}) | |
return summary |
🧰 Tools
🪛 Ruff (0.12.2)
31-31: Undefined name processor
(F821)
32-32: Undefined name processor
(F821)
32-32: Undefined name data
(F821)
🤖 Prompt for AI Agents
In examples/green-house-migration/legacy/greenhouse/jobs_tasks/run_all.py around
lines 29-33, the loop uses undefined names `processor` and `data` and ignores
that `.run()` already returns a summary dict; change the loop to iterate with a
consistent variable (e.g., `for processor in processors:` or use `_processor`
everywhere), call `summary_item = processor.run()` (or `_processor.run()` to
match your chosen name), and append that returned summary dict directly to
`summary` (summary.append(summary_item)); keep the function returning `summary`.
|
||
# Validate that at least one identification method is provided | ||
if not any([request.candidate_ids, request.emails, request.bulk_mode]): | ||
raise HTTPException( | ||
status_code=400, | ||
detail="Must provide candidate_ids, emails, or enable bulk_mode", | ||
) | ||
|
||
candidates_to_update = [] | ||
|
||
# Get candidates by IDs | ||
if request.candidate_ids: | ||
for candidate_id in request.candidate_ids: | ||
try: | ||
response = client.get_candidate(candidate_id) | ||
candidate_data = response.get("data") | ||
if candidate_data: | ||
candidates_to_update.append(candidate_data) | ||
else: | ||
errors.append(f"Candidate {candidate_id} not found") | ||
failed_count += 1 | ||
except Exception as e: | ||
errors.append(f"Error getting candidate {candidate_id}: {str(e)}") | ||
failed_count += 1 | ||
|
||
# Get candidates by emails | ||
if request.emails: | ||
for email in request.emails: | ||
try: | ||
params = {"filter[search]": email} | ||
response = client.get_candidates(params=params) | ||
candidates = response.get("data", []) | ||
|
||
found = False | ||
for candidate in candidates: | ||
candidate_email = candidate.get("attributes", {}).get("email") | ||
if candidate_email == email: | ||
candidates_to_update.append(candidate) | ||
found = True | ||
break | ||
|
||
if not found: | ||
errors.append(f"Candidate with email {email} not found") | ||
failed_count += 1 | ||
|
||
except Exception as e: | ||
errors.append(f"Error getting candidate by email {email}: {str(e)}") | ||
failed_count += 1 | ||
|
||
# Get candidates for bulk mode | ||
if request.bulk_mode: | ||
try: | ||
params = {"page[size]": request.limit} | ||
response = client.get_candidates(params=params) | ||
bulk_candidates = response.get("data", []) | ||
candidates_to_update.extend(bulk_candidates) | ||
except Exception as e: | ||
errors.append(f"Error getting candidates for bulk mode: {str(e)}") | ||
failed_count += 1 | ||
|
||
# Update candidates with new tags | ||
for candidate in candidates_to_update: | ||
try: | ||
candidate_id = candidate.get("id") | ||
current_tags = candidate.get("attributes", {}).get("tags", []) | ||
|
||
# Add new tags (avoid duplicates) | ||
updated_tags = list(set(current_tags + request.tags)) | ||
|
||
# Prepare update payload | ||
update_data = { | ||
"data": { | ||
"id": candidate_id, | ||
"type": "candidates", | ||
"attributes": {"tags": updated_tags}, | ||
} | ||
} | ||
|
||
# Update candidate | ||
client.update_candidate(candidate_id, update_data) | ||
success_count += 1 | ||
|
||
logger.info( | ||
f"Successfully updated candidate {candidate_id} with tags: {', '.join(updated_tags)}" | ||
) | ||
|
||
except Exception as e: | ||
candidate_id = candidate.get("id", "unknown") | ||
error_msg = f"Error updating candidate {candidate_id}: {str(e)}" | ||
errors.append(error_msg) | ||
failed_count += 1 | ||
logger.error(error_msg) | ||
|
||
return TagUpdateResponse( | ||
success=success_count, | ||
failed=failed_count, | ||
total=success_count + failed_count, | ||
errors=errors, | ||
) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Bulk add: wrap blocking calls, dedupe targets, and preserve tag order.
- Wrap all
client.get_*
andclient.update_candidate
calls. - Dedupe candidates to avoid double updates and skewed counters.
- Keep existing tag order; append new unique tags deterministically.
@@
- if request.candidate_ids:
+ if request.candidate_ids:
for candidate_id in request.candidate_ids:
try:
- response = client.get_candidate(candidate_id)
+ response = await run_in_threadpool(client.get_candidate, candidate_id)
@@
- if request.emails:
+ if request.emails:
for email in request.emails:
try:
params = {"filter[search]": email}
- response = client.get_candidates(params=params)
+ response = await run_in_threadpool(client.get_candidates, params)
@@
- if request.bulk_mode:
+ if request.bulk_mode:
try:
params = {"page[size]": request.limit}
- response = client.get_candidates(params=params)
+ response = await run_in_threadpool(client.get_candidates, params)
bulk_candidates = response.get("data", [])
candidates_to_update.extend(bulk_candidates)
except Exception as e:
@@
- # Update candidates with new tags
+ # Dedupe candidates by ID (preserve order)
+ seen_ids = set()
+ unique_candidates = []
+ for c in candidates_to_update:
+ cid = c.get("id")
+ if cid and cid not in seen_ids:
+ seen_ids.add(cid)
+ unique_candidates.append(c)
+ candidates_to_update = unique_candidates
+
+ # Update candidates with new tags
for candidate in candidates_to_update:
try:
candidate_id = candidate.get("id")
current_tags = candidate.get("attributes", {}).get("tags", [])
@@
- updated_tags = list(set(current_tags + request.tags))
+ updated_tags = list(dict.fromkeys([*current_tags, *request.tags]))
@@
- client.update_candidate(candidate_id, update_data)
+ await run_in_threadpool(client.update_candidate, candidate_id, update_data)
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/green-house-migration/routes/api/candidate_tags.py around lines
149-260, the handler must: (1) run blocking TeamTailorClient calls in the async
endpoint using asyncio.to_thread or loop.run_in_executor for
client.get_candidate, client.get_candidates and client.update_candidate; (2)
dedupe candidates before performing updates by unique candidate id (use a dict
or set) so you don't double-update or double-count successes/failures; (3) stop
using set() for tags (which reorders) and instead preserve existing tag order by
taking current_tags then appending each tag from request.tags only if not
already present (deterministic order); (4) update success/failed counters only
once per unique candidate and only after the wrapped update call completes
(increment success_count on success, failed_count and errors on exception); and
(5) keep existing try/except logging around each threaded call and ensure errors
record candidate id when available.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
♻️ Duplicate comments (1)
PULL_REQUEST_DETAILS.md (1)
11-11
: Stop hard-coding diff stats; auto-generate to avoid drift.These numbers are already out of date again. Replace them automatically in CI or update them via a script to keep this file accurate.
Run this script from the PR branch to compute the current shortstat and rewrite all matching lines in PULL_REQUEST_DETAILS.md:
#!/bin/bash set -euo pipefail # Fetch base to compute accurate diff stats git fetch origin main --depth=1 stats="$(git diff --shortstat origin/main...)" echo "Computed stats: $stats" # Prepare the replacement Markdown line replacement="**${stats}**" # Replace any line that looks like a hard-coded shortstat # Examples it will replace: # **48 files changed, 33,500 insertions(+), 3,070 deletions(-)** # **180 files changed, 35,223 insertions(+)** gawk -v repl="$replacement" ' /^\*\*[0-9,]+ files changed, .*insertions\(\+\).*$/ { print repl; next } { print } ' PULL_REQUEST_DETAILS.md > PULL_REQUEST_DETAILS.md.tmp && mv PULL_REQUEST_DETAILS.md.tmp PULL_REQUEST_DETAILS.md echo "Updated PULL_REQUEST_DETAILS.md with: $replacement"If you’d like, I can open a small PR to add a GitHub Actions step that runs this on every push and commits the update.
Also applies to: 53-54
🧹 Nitpick comments (5)
PULL_REQUEST_DETAILS.md (5)
1-1
: Tighten the heading phrasing."What's this PR do?" reads informal; prefer "What does this PR do?" for consistency with the rest of the doc tone.
-# What's this PR do? +# What does this PR do?
57-57
: Standardize project naming to reduce ambiguity.Prefer a single canonical name in prose. Suggest: “Greenhouse-to-TeamTailor migration project” (using the vendors’ official casing) to match the description and avoid mixing “Green-House-Migration” with “green-house-migration”.
-**Green-House-Migration Project:** +**Greenhouse-to-TeamTailor migration project:**
99-111
: Consider linking the directory to its README for faster navigation.Adding a README link helps reviewers jump directly to the project’s entry point.
-### New Project Added: +### New Project Added (with README): @@ -``` -examples/green-house-migration/ +``` +[examples/green-house-migration/](examples/green-house-migration/) ├── dashboard/ (web interface) @@ └── config/ (configuration files)--- `131-137`: **Back the “All linters passing” claim with a CI reference.** Add a link or badge to the passing workflow run for this PR to make the status auditable. ```diff -- **All linters passing** - no more linting errors +- **All linters passing** (see CI run for this PR) - no linting errors
137-137
: Add a short Security & PII note for the migration example.This example likely processes candidate PII. A lightweight note helps set expectations and reduces surprises.
## Developer Experience: @@ - **Comprehensive documentation** for new project + +### Security & Data Handling: + +- The migration example processes candidate PII (names, emails, resumes). Ensure API keys/secrets are injected via env vars, logs are scrubbed of PII, and any fixtures use anonymized data. Include a brief “Data Handling” section in examples/green-house-migration/README.md.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
PULL_REQUEST_DETAILS.md
(1 hunks)examples/cli-base/setup.py
(1 hunks)examples/cli-base/src/app.py
(1 hunks)examples/green-house-migration/docs/assets/css/style.css
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- examples/cli-base/setup.py
- examples/green-house-migration/docs/assets/css/style.css
- examples/cli-base/src/app.py
🧰 Additional context used
🪛 LanguageTool
PULL_REQUEST_DETAILS.md
[grammar] ~22-~22: There might be a mistake here.
Context: ...ser.py` - User schema for authentication - Fixed repository return types (NoteRepos...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ...n solution from Greenhouse to TeamTailor - Comprehensive dashboard with web interfa...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...mprehensive dashboard with web interface - Extensive documentation structure with g...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...tation structure with guides and reports - Security tools and analysis scripts - De...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ts - Security tools and analysis scripts - Development and production scripts - Tes...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...pts - Development and production scripts - Test fixtures and configuration files - ...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ... - Test fixtures and configuration files - API routes and integration tools - Monit...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...files - API routes and integration tools - Monitoring and metrics functionality - L...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...s - Monitoring and metrics functionality - Legacy Greenhouse integration - TeamTail...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ...ionality - Legacy Greenhouse integration - TeamTailor migration tools ## Key Files...
(QB_NEW_EN)
[grammar] ~117-~117: There might be a mistake here.
Context: ...ng** across all Python files using Black - Proper import organization using isort...
(QB_NEW_EN)
[grammar] ~118-~118: There might be a mistake here.
Context: ...Proper import organization** using isort - Type safety improvements with mypy fix...
(QB_NEW_EN)
[grammar] ~119-~119: There might be a mistake here.
Context: ...pe safety** improvements with mypy fixes - Unused variable handling in TypeScript...
(QB_NEW_EN)
[grammar] ~120-~120: There might be a mistake here.
Context: ...riable handling** in TypeScript projects - ESLint compliance across all Node.js p...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...solution** from Greenhouse to TeamTailor - Web dashboard for monitoring and manag...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...ashboard** for monitoring and management - Comprehensive documentation with step-...
(QB_NEW_EN)
[grammar] ~127-~127: There might be a mistake here.
Context: ...documentation** with step-by-step guides - Security analysis tools and best pract...
(QB_NEW_EN)
[grammar] ~128-~128: There might be a mistake here.
Context: ...rity analysis tools** and best practices - Production-ready scripts for deploymen...
(QB_NEW_EN)
[grammar] ~133-~133: There might be a mistake here.
Context: ...nters passing** - no more linting errors - Consistent code style across all proje...
(QB_NEW_EN)
[grammar] ~134-~134: There might be a mistake here.
Context: ...sistent code style** across all projects - Better type safety with proper annotat...
(QB_NEW_EN)
[grammar] ~135-~135: There might be a mistake here.
Context: ...er type safety** with proper annotations - Comprehensive documentation for new pr...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...all linting errors** across all projects 2. **Establishing consistent coding standards...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...Establishing consistent coding standards** 3. Adding comprehensive type safety 4. **...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...** 3. Adding comprehensive type safety 4. *Providing a complete migration solution...
(QB_NEW_EN)
[grammar] ~145-~145: There might be a mistake here.
Context: ...ration solution** as a reference example 5. Improving developer experience with be...
(QB_NEW_EN)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Lint Code Base
🔇 Additional comments (1)
PULL_REQUEST_DETAILS.md (1)
148-148
: Backward compatibility verified — no examples reference the old return typesAll examples in examples/fastapi-simple-docker-pip have been updated to use NoteOut for repository returns, and there are no public endpoints or docs still expecting the previous types:
• app/router/note_routes.py — all routes use response_model=NoteOut or list[NoteOut]
• app/repositories/note_repository.py — get_note_by_id/get_all_notes return NoteOut instances
• app/schemas/note.py — NoteBase is only the shared BaseModel for NoteCreate/NoteUpdate/NoteOutSince no call sites depend on the old return types, no “Migration Notes” subsection is needed; the existing statement stands.
What's this PR do?
This PR addresses comprehensive linting issues across all projects in the repository and adds a complete green-house-migration project. The changes ensure all code follows established quality standards and maintains consistency across all example projects.
@ulises-jeremias @matiasz8
Summary of Changes
Commit 1:
fix: comprehensive linting fixes across all projects
Python Projects Fixed:
FastAPI Simple Docker Pip Project:
app/schemas/sorting.py
- Sorting schema for multisort functionalityapp/schemas/user.py
- User schema for authenticationLangGraph SLS FastAPI RAG Project:
TypeScript/Node.js Projects Fixed:
Nest NATS Microservices (3 services):
Stripe Integration Node TypeScript:
Root Level:
Commit 2:
feat: add green-house-migration project
New Project Added:
Green-House-Migration Project:
Key Files Modified
New Project Added:
Summary by CodeRabbit
New Features
API / Schemas
Documentation
Tests
Chores