## Context
I am currently integrating Mercator with an external ETL pipeline that imports large batches of data (physical switches, network devices, etc.).
While Mercator’s REST API works well for interactive use, I’ve run into significant limitations when trying to ingest hundreds or thousands of items automatically.
Specifically:
- Each item must be created or updated via individual POST/PUT calls.
- Laravel’s default API rate limiter (60 requests/minute per IP) is hit almost immediately during bulk imports.
- There is no direct way to check for existing items (e.g., by name) before inserting, since the `index()` endpoints do not currently support query filters.
I initially considered increasing the rate limit for the API ingestion routes, but while studying the codebase, I noticed that Mercator already implements a massDestroy route for batch deletions.
This inspired the following proposal.
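To put rough numbers on the rate-limiter problem (the item count and batch size below are purely illustrative, not taken from any real deployment):

```python
import math

RATE_LIMIT_PER_MIN = 60   # Laravel's default API throttle
ITEMS = 2_000             # hypothetical import size
BATCH_SIZE = 500          # hypothetical items per mass-store call

# One request per item: the limiter alone stretches the import over many minutes.
minutes_unbatched = ITEMS / RATE_LIMIT_PER_MIN

# One request per batch: the whole import fits inside a single rate-limit window.
requests_batched = math.ceil(ITEMS / BATCH_SIZE)

print(f"{ITEMS} individual calls: >= {minutes_unbatched:.0f} min of throttling")
print(f"batched: {requests_batched} calls, well under {RATE_LIMIT_PER_MIN}/min")
```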
## Proposal
### 1. Add massStore and massUpdate endpoints for each resource
Since Mercator already exposes:
```
DELETE /api/<resource>/mass-destroy
```
it would be consistent and extremely useful to add:
```
POST /api/<resource>/mass-store
PUT  /api/<resource>/mass-update
```
This would:
- Make the API suitable for ETL pipelines, inventory synchronisation, and automated provisioning.
- Avoid hitting the rate limiter.
- Greatly reduce ingestion time.
- Keep API semantics consistent (CRUD + mass operations).
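On the ETL side, a client could then chunk its payload and make one call per batch. A minimal sketch — the batch size is arbitrary, and only the `{"items": [...]}` body shape comes from the proposal below:

```python
import json

def chunked(items, size):
    """Split a list of item dicts into batches of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def mass_store_payloads(items, size=500):
    """Build one JSON request body per batch, matching the proposed
    {'items': [...]} shape of POST /api/<resource>/mass-store."""
    return [json.dumps({"items": batch}) for batch in chunked(items, size)]

# Example: 1,200 switches become 3 requests instead of 1,200.
switches = [{"name": f"SW-{i:04d}"} for i in range(1200)]
bodies = mass_store_payloads(switches)
print(len(bodies))  # 3
```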
🔧 Example: massStore route
Route:
```php
Route::post('<resource>/mass-store', [<Controller>::class, 'massStore'])
    ->name('<resource>.mass-store');
```
Controller Example:
```php
public function massStore(Request $request)
{
    Gate::authorize('<resource>_create');

    $data = $request->validate([
        'items'        => 'required|array|min:1',
        'items.*'      => 'array',
        'items.*.name' => 'required|string|max:255',
    ]);

    <Model>::insert($data['items']);

    return response()->json([
        'status' => 'ok',
        'count'  => count($data['items']),
    ], 201);
}
```
(Note: `insert()` issues a single bulk query but bypasses Eloquent model events and the `created_at`/`updated_at` timestamps; chunked `create()` calls or `upsert()` could be used where those matter.)
### 2. Add massUpdate endpoint
Route:
```php
Route::put('<resource>/mass-update', [<Controller>::class, 'massUpdate'])
    ->name('<resource>.mass-update');
```
Controller Example:
```php
public function massUpdate(Request $request)
{
    Gate::authorize('<resource>_edit');

    $data = $request->validate([
        'items'      => 'required|array|min:1',
        'items.*'    => 'array',
        'items.*.id' => 'required|exists:<table>,id',
    ]);

    foreach ($data['items'] as $item) {
        $id = $item['id'];
        $payload = collect($item)->except('id')->toArray();

        <Model>::where('id', $id)->update($payload);
    }

    return response()->json(['status' => 'ok']);
}
```
(Each item results in one UPDATE query; wrapping the loop in `DB::transaction()` would keep a partial failure from leaving a half-applied batch.)
### 3. Add query filters to index() endpoints
Currently, calling `GET /api/physical-switches` returns the entire dataset.
For integration automation, it is useful to check if a given item already exists based on attributes other than ID (e.g. name, type, site_id), in order to decide whether to create or update an item.
I suggest allowing simple query parameters:
```
GET /api/physical-switches?name=SW-0123
GET /api/vlans?id_vlan=1401
GET /api/network-switches?site_id=3&type=Layer3
```
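On the ETL side, such a filter would make the create-or-update decision straightforward. A hedged sketch of the client logic — the response shape assumes Laravel's standard paginator JSON with a `data` array, and the function and variable names are illustrative:

```python
def decide_action(existing_page, item):
    """Given the paginated JSON from e.g.
    GET /api/physical-switches?name=SW-0123 (Laravel paginators wrap
    results in a 'data' array), decide whether to create or update."""
    matches = existing_page.get("data", [])
    if not matches:
        return ("create", item)                    # no match: POST the new item
    updated = {**item, "id": matches[0]["id"]}
    return ("update", updated)                     # match: PUT with existing id

# Simulated API responses for two lookups:
found = {"data": [{"id": 42, "name": "SW-0123"}]}
empty = {"data": []}

print(decide_action(found, {"name": "SW-0123", "site_id": 3}))
print(decide_action(empty, {"name": "SW-9999", "site_id": 3}))
```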
Controller Example:
```php
public function index(Request $request)
{
    $query = <Model>::query();

    // filled() ignores parameters that are present but empty,
    // unlike has(), which would match '%%' on ?name=
    if ($request->filled('name')) {
        $query->where('name', 'like', '%' . $request->name . '%');
    }

    if ($request->filled('site_id')) {
        $query->where('site_id', $request->site_id);
    }

    return response()->json($query->paginate(100));
}
```
## Benefits
✔ ETL pipelines become much simpler and faster
✔ No more rate-limiting issues
✔ API becomes scalable for organisations with large inventories
✔ Maintains consistency with the existing massDestroy pattern
✔ Improves developer experience with filterable index() queries
✔ Makes Mercator more suitable for environments with automated provisioning (universities, enterprises, cloud/hybrid infrastructure)
## Conclusion
These improvements would make Mercator dramatically more integration-friendly while remaining consistent with the existing API architecture.
I would be very happy to contribute the implementation as a PR, but I wanted to open a discussion first, following the project’s contribution guidelines.
Thank you for your work on Mercator; it’s a great project, and we would really benefit from these enhancements!