
feat: complete data flow - from ingesting from DLH to sending to DEMS#315

Closed
abdul-rahim-psl wants to merge 52 commits into dev from feat-paysys-simu-copy

Conversation


@abdul-rahim-psl abdul-rahim-psl commented Apr 28, 2026

SPDX-License-Identifier: Apache-2.0

What did we change?

We added the following:

  1. Ingesting data from the DLH API
  2. Dynamically determining which endpoint that data should be ingested against
  3. Ingesting the data into DEMS

Why are we doing this?

So that users can run simulations on historical data.

How was it tested?

  • Locally
  • Development Environment
  • Not needed; the changes are very basic
  • Husky successfully run
  • Unit tests passing and Documentation done

Summary by CodeRabbit

Release Notes

  • New Features

    • Added simulation database infrastructure with configurable connection parameters
    • Added API endpoints to retrieve simulation messages and manage mask configurations
    • Integrated Data Lake House (DLH) functionality for external data retrieval and storage
    • Extended simulation logging with message storage and querying capabilities
  • Tests

    • Updated unit tests to reflect service handler changes and improve coverage
  • Chores

    • Added PostgreSQL database support and updated library dependencies

MK-Paysyslabs and others added 30 commits April 3, 2026 17:00
fix: RuleRequest payload and improve test case coverage
Signed-off-by: abdul-rahim-psl abdul.rahim@paysyslabs.com
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
…and service logic

Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
… TCS config and createMask

Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
MK-Paysyslabs and others added 22 commits April 16, 2026 13:44
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
…csConfigs routes

Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
…n it

Signed-off-by: abdul-rahim-psl abdul.rahim@paysyslabs.com
Signed-off-by: MK-Paysyslabs <muhammad.kashif@paysyslabs.com>
…pacs002

Signed-off-by: abdul-rahim-psl abdul.rahim@paysyslabs.com
@abdul-rahim-psl abdul-rahim-psl requested review from a team as code owners April 28, 2026 08:46

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough

Walkthrough

This PR introduces support for a new simulation database backend with PostgreSQL integration, adds Data Lake House (DLH) API integration for querying and persisting data, implements optimistic-lock semantics for config updates via updated_at tokens, and extends the API with new handlers for retrieving simulation messages and mask configurations.

Changes

Cohort / File(s) Summary
Configuration & Environment Setup
.env.template, package.json, src/config.ts
Added simulation database environment variables; updated dependency versions (frms-coe-lib, added pg and pg-format); extended AppDatabaseServices type to require simulation configuration.
Database Infrastructure
src/index.ts, src/services/database.logic.service.ts
Initialized Database.SIMULATION service during startup; added switch case to route SQL queries to databaseManager._simulation.
Simulation Logs & DLH Integration
src/interface/simulattionLogs.interface.ts, src/repositories/configuration/simulation-logs.repository.ts, src/services/simulation-logs.logic.service.ts
Introduced SimulationMessage interface; implemented DLH POST integration with dynamic table creation and bulk insertion; added service functions to fetch messages and delegate to repository layer.
Configuration Repository & Optimistic Locking
src/repositories/configuration/tcs.config.repository.ts
Added optimistic-lock update semantics via updated_at timestamp validation; introduced MaskTuple interface and findActiveConfigsByTuples function to resolve active masking configurations.
API Controllers
src/app.controller.ts
Added four new handlers: getSimulationMessagesHandler, fetchFromDlhHandler, findActiveMaskConfigsHandler, and fetchCountApiFlow to expose simulation logs, DLH queries, and mask configuration resolution.
Test Updates
__tests__/unit/tcs-config.logic.service.test.ts
Removed updatedAt parameter assertions; eliminated redundant test cases; added XML payload coverage for handleGetConfigByTransactionType; removed duplicate assertion lines.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Controller as fetchFromDlhHandler
    participant Service as simulation-logs.logic.service
    participant Repository as simulation-logs.repository
    participant DLH as DLH API
    participant DB as PostgreSQL
    
    Client->>Controller: POST /dlh with queries & token
    Controller->>Service: fetchFromDlh(queries, token)
    Service->>Repository: fetchDataFromDlh(queries, token)
    Repository->>DLH: POST with Bearer token + queries
    DLH-->>Repository: JSON response with results[]
    alt Response contains document array
        Repository->>DB: Query information_schema for next table name
        Repository->>DB: CREATE TABLE sim### if needed
        Repository->>DB: BULK INSERT serialized documents
        Repository-->>Service: {results, tableName}
    else No document array
        Repository-->>Service: Raw response
    end
    Service-->>Controller: result
    Controller-->>Client: HTTP 200 + payload
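The flow above hinges on the repository forwarding the client's queries to the DLH API with a bearer token. A minimal sketch of how such a request might be assembled; the helper name `buildDlhRequest`, the query shape, and the header layout are illustrative assumptions, not the PR's actual code:

```typescript
// Sketch only: assemble the POST request the repository would send to the
// DLH API, per the sequence diagram. All names here are assumptions.
interface DlhQuery {
  sql: string;
}

interface DlhRequest {
  method: 'POST';
  headers: Record<string, string>;
  body: string;
}

function buildDlhRequest(queries: DlhQuery[], token: string): DlhRequest {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Bearer token passed through from the client request
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ queries }),
  };
}
```

The actual request would then be issued with `fetch` or an HTTP client, and the `results[]` array in the response routed to the table-creation and bulk-insert steps shown in the diagram.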

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~28 minutes

Suggested reviewers

  • arif-paysyslabs
  • scott45
  • Sandy-at-Tazama

Poem

🐰 Hop, hop! The simulation hops so fast,
With DLH queries from future to past,
Masks align, configs lock tight,
Postgres whispers through the night,
New backends bloom—the rabbit takes flight! 🌙✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title accurately summarizes the main objective of the PR: completing a data flow pipeline from DLH ingestion to DEMS. It is concise, specific, and clearly reflects the primary changes across the codebase.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Linked Issues Check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅ Passed): Check skipped because no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/repositories/configuration/tcs.config.repository.ts (1)

292-304: ⚠️ Potential issue | 🔴 Critical

Pass the optimistic-lock token into the UPDATE query.

The new WHERE clause references a third trailing placeholder, but only id and tenantId are appended to values. Every call will fail with an unbound parameter before the update runs. If optimistic locking is intended here, require updates.updatedAt and append it; otherwise drop the predicate until callers are wired.

💡 Minimal fix
   setClauses.push('updated_at = NOW()');

+  if (updates.updatedAt === undefined) {
+    throw new Error('updatedAt is required');
+  }
+
   // Optimistic lock: update only when updated_at still matches the token returned by the previous read.
   const whereClause = `WHERE id = $${paramIndex} AND tenant_id = $${paramIndex + 1} AND updated_at = $${paramIndex + 2}::timestamptz`;
@@
-  values.push(id, tenantId);
+  values.push(id, tenantId, updates.updatedAt);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/repositories/configuration/tcs.config.repository.ts` around lines 292 -
304, The WHERE clause in the UPDATE uses a third placeholder for optimistic
locking but values only push id and tenantId; update the call that builds the
parameter array (the values array used by the query in this repository function)
to also push the optimistic-lock token (updates.updatedAt or the variable
carrying the previous updated_at) as the third parameter before executing the
query so $${paramIndex + 2} is bound; alternatively, if optimistic locking is
not yet supported, remove the "AND updated_at = $...::timestamptz" predicate
from whereClause. Locate the variables whereClause, query, setClauses and the
values.push(id, tenantId) line to make the change.
🧹 Nitpick comments (2)
__tests__/unit/tcs-config.logic.service.test.ts (1)

958-976: Remove duplicated XML payload test block.

Lines [958-976] repeat the same test scenario already covered at Lines [938-956]. Keeping one avoids redundant execution/noise.

Suggested patch
-    it('should retrieve config with XML payload when content type is XML', async () => {
-      const mockConfig = {
-        schema: { type: 'object' },
-        mapping: { field: 'value' },
-        content_type: 'application/xml',
-        payload_xml: '<root><data>test</data></root>',
-        payload_json: null,
-      };
-
-      (tcsConfigRepository.getSchemaByTransactionType as jest.Mock).mockResolvedValue(mockConfig);
-
-      const result = await tcsConfigService.handleGetConfigByTransactionType('pacs.008.001.10', '1.0.0', mockTenantId);
-
-      expect(result).toEqual({
-        schema: mockConfig.schema,
-        mapping: mockConfig.mapping,
-        payload: mockConfig.payload_xml,
-      });
-    });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/unit/tcs-config.logic.service.test.ts` around lines 958 - 976,
There is a duplicated test case named 'should retrieve config with XML payload
when content type is XML' in the test file; remove one of the two identical
it(...) blocks (the duplicate that repeats the same setup using
tcsConfigRepository.getSchemaByTransactionType mock and calling
tcsConfigService.handleGetConfigByTransactionType) so only a single test
covering the XML payload scenario remains.
.env.template (1)

25-30: Make the new simulation env keys dotenv-linter compliant.

Lines [25-30] trigger ordering and quoting warnings in dotenv-linter. Align this block now to avoid unnecessary lint churn later.

Suggested patch
 # simulation
-SIMULATION_DATABASE_PORT='5432'
-SIMULATION_DATABASE='simulation'
-SIMULATION_DATABASE_USER='postgres'
-SIMULATION_DATABASE_PASSWORD='password'
-SIMULATION_DATABASE_HOST='localhost'
+SIMULATION_DATABASE=simulation
+SIMULATION_DATABASE_HOST=localhost
+SIMULATION_DATABASE_PASSWORD=password
+SIMULATION_DATABASE_PORT=5432
+SIMULATION_DATABASE_USER=postgres
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.env.template around lines 25 - 30, The SIMULATION_* env entries trigger
dotenv-linter ordering and quote warnings; update the block
(SIMULATION_DATABASE, SIMULATION_DATABASE_HOST, SIMULATION_DATABASE_PASSWORD,
SIMULATION_DATABASE_PORT, SIMULATION_DATABASE_USER) to remove the single quotes
from values and reorder the keys into a consistent alphabetical/grouped order to
satisfy the linter (e.g., SIMULATION_DATABASE, SIMULATION_DATABASE_HOST,
SIMULATION_DATABASE_PASSWORD, SIMULATION_DATABASE_PORT,
SIMULATION_DATABASE_USER) while preserving the same values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/unit/tcs-config.logic.service.test.ts`:
- Line 731: Remove the orphaned duplicate assertion ").rejects.toThrow('Failed
to add function');" (the stray rejects.toThrow statement) that was left after
the preceding assertion, which causes a syntax error; delete this standalone
line so the existing async assertion (the original .rejects.toThrow call)
remains intact and the test file parses correctly.

In `@package.json`:
- Line 36: The package.json pins `@tazama-lf/frms-coe-lib` to 0.0.1-psl.2 which
conflicts with tcs-lib (used by tcs-config.logic.service.ts,
job.logic.service.ts, interfaces, controller, repository) that depends on
7.0.2-rc.2; update package.json to align both packages to a single compatible
frms-coe-lib version (either upgrade `@tazama-lf/frms-coe-lib` to the version
tcs-lib requires or change tcs-lib’s dependency to the psl.2-compatible
version), run npm/yarn install and type-check the codepaths that use tcs-lib and
frms-coe-lib (especially types shared in tcs-config.logic.service.ts and
job.logic.service.ts) to verify no type/interface mismatches, and if API
differences exist, adapt code or choose a mutual version and pin it in
package.json to prevent transitive duplicate installs.

In `@src/app.controller.ts`:
- Around line 1778-1788: fetchCountApiFlow currently calls findMasksWithFilters
with Number.MAX_SAFE_INTEGER which causes a full data fetch; change this to use
a count-only path (either add a count-only flag/param to findMasksWithFilters or
call a new function like countMasksWithFilters) so the DB query only computes
the total without hydrating rows, then reply with success and the total (omit or
return an empty masks array rather than fetching all rows). Locate
fetchCountApiFlow and update the call/response, or implement and call
countMasksWithFilters to return only count metadata.
- Around line 1768-1772: The handler findActiveMaskConfigsHandler currently
trusts tenant_id in req.body (MaskTuple[]) and passes them to
findActiveConfigsByTuples; instead, before calling findActiveConfigsByTuples
iterate the tuples from req.body and either (a) overwrite each tuple.tenant_id
with req.tenantId (re-stamp) or (b) validate that tuple.tenant_id ===
req.tenantId and return a 4xx error on mismatch; then pass the
sanitized/validated tuples to findActiveConfigsByTuples. Ensure this logic runs
inside findActiveMaskConfigsHandler right before the call to
findActiveConfigsByTuples.

In `@src/repositories/configuration/simulation-logs.repository.ts`:
- Around line 150-159: The current allocation using tableCountResult /
tableCount and constructing nextTableName is racy; replace the read-then-create
approach with an atomic allocator such as a DB sequence or UUID. Update the code
that calls handlePostExecuteSqlStatement to request a unique suffix atomically
(e.g. SELECT nextval('sim_table_seq') or SELECT
gen_random_uuid()/uuid_generate_v4()) and then build nextTableName from that
returned value instead of COUNT(*); ensure the sequence or UUID function
exists/is created/migrated as needed and keep the query/response handling in the
same function (referenced symbols: handlePostExecuteSqlStatement,
tableCountResult, tableCount, nextTableName).
- Around line 179-189: The INSERT currently writes only payload so tenantId,
credttm and msgid are NULL and getSimulationMessagesFromDb cannot filter/order
correctly; modify the code around results/documents and the call to
handlePostExecuteSqlStatement to extract tenantId, credttm and msgid from each
document (or thread tenantId into this function if tenant is external), build
rows that include payload, tenantId, credttm, msgid, generate matching
placeholders (e.g. ($1,$2,$3,$4), ...), and pass the expanded values array and
an INSERT INTO %I (payload, tenant_id, credttm, msgid) VALUES ... query to
handlePostExecuteSqlStatement so inserted rows contain the metadata used by
getSimulationMessagesFromDb.
- Around line 116-123: The SELECT in getSimulationMessagesFromDb uses an
unquoted tenantId which will be folded to lowercase by Postgres; update the
pgFormat call used in the handlePostExecuteSqlStatement invocation so the WHERE
clause quotes the camelCase column name (e.g. replace tenantId with "tenantId")
while keeping tableName as %I; reference getSimulationMessagesFromDb, pgFormat
and handlePostExecuteSqlStatement to locate and change the query string.

---

Outside diff comments:
In `@src/repositories/configuration/tcs.config.repository.ts`:
- Around line 292-304: The WHERE clause in the UPDATE uses a third placeholder
for optimistic locking but values only push id and tenantId; update the call
that builds the parameter array (the values array used by the query in this
repository function) to also push the optimistic-lock token (updates.updatedAt
or the variable carrying the previous updated_at) as the third parameter before
executing the query so $${paramIndex + 2} is bound; alternatively, if optimistic
locking is not yet supported, remove the "AND updated_at = $...::timestamptz"
predicate from whereClause. Locate the variables whereClause, query, setClauses
and the values.push(id, tenantId) line to make the change.

---

Nitpick comments:
In `@__tests__/unit/tcs-config.logic.service.test.ts`:
- Around line 958-976: There is a duplicated test case named 'should retrieve
config with XML payload when content type is XML' in the test file; remove one
of the two identical it(...) blocks (the duplicate that repeats the same setup
using tcsConfigRepository.getSchemaByTransactionType mock and calling
tcsConfigService.handleGetConfigByTransactionType) so only a single test
covering the XML payload scenario remains.

In @.env.template:
- Around line 25-30: The SIMULATION_* env entries trigger dotenv-linter ordering
and quote warnings; update the block (SIMULATION_DATABASE,
SIMULATION_DATABASE_HOST, SIMULATION_DATABASE_PASSWORD,
SIMULATION_DATABASE_PORT, SIMULATION_DATABASE_USER) to remove the single quotes
from values and reorder the keys into a consistent alphabetical/grouped order to
satisfy the linter (e.g., SIMULATION_DATABASE, SIMULATION_DATABASE_HOST,
SIMULATION_DATABASE_PASSWORD, SIMULATION_DATABASE_PORT,
SIMULATION_DATABASE_USER) while preserving the same values.
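The simulation-logs findings above (metadata columns left NULL, one placeholder group per document) can be illustrated with a small placeholder builder. Everything here that is not named in the review comments, including the `SimDocument` shape and the helper itself, is an illustrative assumption, not the repository's actual code:

```typescript
// Sketch of the multi-row INSERT the review suggests: one placeholder group
// per document so tenant_id, credttm and msgid are persisted alongside the
// payload and can later be used for filtering and ordering.
interface SimDocument {
  tenantId: string;
  credttm: string;
  msgid: string;
  [key: string]: unknown;
}

function buildBulkInsert(
  tableName: string,
  documents: SimDocument[],
): { text: string; values: unknown[] } {
  const values: unknown[] = [];
  const groups = documents.map((doc, i) => {
    const base = i * 4;
    values.push(JSON.stringify(doc), doc.tenantId, doc.credttm, doc.msgid);
    return `($${base + 1}, $${base + 2}, $${base + 3}, $${base + 4})`;
  });
  // In the real repository the table name should go through pg-format's %I
  // identifier escaping; plain quoting here is for brevity only.
  const text =
    `INSERT INTO "${tableName}" (payload, tenant_id, credttm, msgid) VALUES ` +
    groups.join(', ');
  return { text, values };
}
```

The same quoting concern applies to the SELECT path: a camelCase column such as tenantId must be double-quoted (or passed through %I) or Postgres will fold it to lowercase.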
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 062c4c26-5b26-497b-923d-45d5b1524816

📥 Commits

Reviewing files that changed from the base of the PR and between 6c3ec00 and 2c437c9.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (13)
  • .env.template
  • __tests__/unit/tcs-config.logic.service.test.ts
  • package.json
  • src/app.controller.ts
  • src/config.ts
  • src/index.ts
  • src/interface/simulattionLogs.interface.ts
  • src/repositories/configuration/masking.repository.ts
  • src/repositories/configuration/simulation-logs.repository.ts
  • src/repositories/configuration/tcs.config.repository.ts
  • src/router.ts
  • src/services/database.logic.service.ts
  • src/services/simulation-logs.logic.service.ts

await expect(
tcsConfigService.handleAddFunction(999, mockTenantId, { functionName: 'test' }, '2026-04-07T10:00:00.000Z'),
).rejects.toThrow('Failed to add function');
).rejects.toThrow('Failed to add function');

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Inspecting the failing region:"
sed -n '724,734p' __tests__/unit/tcs-config.logic.service.test.ts

echo
echo "Searching for duplicated/orphaned rejects assertion lines:"
rg -n -C2 "\\)\\.rejects\\.toThrow\\('Failed to add function'\\)" __tests__/unit/tcs-config.logic.service.test.ts

Repository: tazama-lf/admin-service

Length of output: 916


Remove the duplicate rejects.toThrow statement on line 731 that breaks test file parsing.

Line 731 is an orphaned duplicate of the assertion on line 730 with no matching opening expression, causing a syntax error. Remove it.

Fix
     it('should throw error when config is not found', async () => {
       (tcsConfigRepository.findConfigById as jest.Mock).mockResolvedValue(null);

       await expect(
         tcsConfigService.handleAddFunction(999, mockTenantId, { functionName: 'test' }, '2026-04-07T10:00:00.000Z'),
       ).rejects.toThrow('Failed to add function');
-      ).rejects.toThrow('Failed to add function');
     });
🧰 Tools
🪛 Biome (2.4.13)

[error] 731-731: Expected a statement but instead found ').rejects.toThrow('Failed to add function')'.

(parse)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/unit/tcs-config.logic.service.test.ts` at line 731, Remove the
orphaned duplicate assertion ").rejects.toThrow('Failed to add function');" (the
stray rejects.toThrow statement) that was left after the preceding assertion,
which causes a syntax error; delete this standalone line so the existing async
assertion (the original .rejects.toThrow call) remains intact and the test file
parses correctly.

Comment thread package.json
"@sinclair/typebox": "^0.34.41",
"@tazama-lf/auth-lib": "4.0.0-rc.4",
"@tazama-lf/frms-coe-lib": "7.0.2-rc.2",
"@tazama-lf/frms-coe-lib": "0.0.1-psl.2",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Declared frms-coe-lib version:"
jq -r '.dependencies["@tazama-lf/frms-coe-lib"]' package.json

echo
echo "All current imports/usages that depend on frms-coe-lib symbols/paths:"
rg -n --type=ts '@tazama-lf/frms-coe-lib' src __tests__

echo
echo "Deep imports (higher breakage risk across versions):"
rg -n --type=ts '@tazama-lf/frms-coe-lib/lib/' src __tests__

Repository: tazama-lf/admin-service

Length of output: 10260


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Checking for lock files ==="
if [ -f "package-lock.json" ]; then
  echo "package-lock.json exists; extracting frms-coe-lib metadata:"
  jq '.packages[] | select(.name == "@tazama-lf/frms-coe-lib") | {name, version, resolved}' package-lock.json 2>/dev/null | head -30
else
  echo "No package-lock.json found"
fi

if [ -f "yarn.lock" ]; then
  echo "yarn.lock exists; searching for frms-coe-lib:"
  grep -A 5 '@tazama-lf/frms-coe-lib' yarn.lock | head -20
else
  echo "No yarn.lock found"
fi

echo
echo "=== Git history for package.json changes ==="
git log --oneline -n 20 -- package.json 2>/dev/null || echo "Git log unavailable"

echo
echo "=== Recent changes to `@tazama-lf/frms-coe-lib` version ==="
git log -p -n 3 -- package.json 2>/dev/null | grep -A 2 -B 2 '@tazama-lf/frms-coe-lib' | head -40 || echo "Git diff unavailable"

Repository: tazama-lf/admin-service

Length of output: 383


🌐 Web query:

@tazama-lf/frms-coe-lib npm package version 0.0.1-psl.2

💡 Result:

The @tazama-lf/frms-coe-lib is a TypeScript library that is the FRMS (Fraud Risk Management System?) Center of Excellence package library, part of the open-source Tazama project for real-time transaction monitoring to detect fraud and money laundering. It provides core utilities like LoggerService and CreateDatabaseManager for handling data in Tazama services. It is hosted as a private/organization npm package on GitHub Packages (registry: https://npm.pkg.github.com), requiring GitHub authentication and read:packages permission. Install command: npm install @tazama-lf/frms-coe-lib --registry=https://npm.pkg.github.com The repository is at https://github.com/tazama-lf/frms-coe-lib (mirrored under frmscoe org), with 4 stars, active development (last push 2026-04-04), 24 open issues as of 2026-04, Apache 2.0 license. It has 4 releases, latest v3.0.0 on 2025-11-18. No public mention of version 0.0.1-psl.2 found; this may be an internal pre-release, prerelease tag, or custom build (possibly "psl.2" indicating Paysys Labs or similar from issue contributors like @abdul-rahim-psl). Recent issues discuss major version bumps for breaking changes.

Citations:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== package-lock.json frms-coe-lib entry ==="
python3 <<'PYTHON'
import json
import sys

try:
    with open('package-lock.json', 'r') as f:
        lock = json.load(f)
    
    # Check in packages section
    packages = lock.get('packages', {})
    for key, pkg in packages.items():
        if 'frms-coe-lib' in key and '@tazama-lf' in key:
            print(f"Key: {key}")
            print(f"Version: {pkg.get('version')}")
            print(f"Resolved: {pkg.get('resolved')}")
            print(f"From: {pkg.get('from')}")
            print()
except Exception as e:
    print(f"Error: {e}")
    sys.exit(1)
PYTHON

echo "=== Checking src/index.ts (main entry point) ==="
head -20 src/index.ts | cat -n

echo
echo "=== Checking for any version comments or docs ==="
rg -n 'frms-coe-lib.*version|psl' --type=ts --type=json --type=md . 2>/dev/null | head -15 || echo "No version comments found"

echo
echo "=== Check .github/workflows for CI ==="
if [ -d .github/workflows ]; then
  ls -la .github/workflows/
  echo
  echo "Content of first workflow file:"
  ls .github/workflows/ | head -1 | xargs -I {} cat ".github/workflows/{}" | head -50
else
  echo "No workflows directory found"
fi

Repository: tazama-lf/admin-service

Length of output: 5862


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Dependency types for frms-coe-startup-lib and tcs-lib ==="
jq '.dependencies, .devDependencies, .peerDependencies' package.json

echo
echo "=== Search for actual usage of startup-lib in codebase ==="
rg -n 'frms-coe-startup-lib' src __tests__ --type=ts 2>/dev/null | head -10

echo
echo "=== Search for actual usage of tcs-lib in codebase ==="
rg -n 'tcs-lib' src __tests__ --type=ts 2>/dev/null | head -10

echo
echo "=== Check if these are imported in src/index.ts or config ==="
rg -n 'startup|tcs' src/index.ts src/config.ts 2>/dev/null

Repository: tazama-lf/admin-service

Length of output: 1414


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Check if startup-lib is imported transitively via index or config ==="
cat src/index.ts | head -20

echo
echo "=== List all imports from files that might pull in startup-lib ==="
find src -type f -name '*.ts' -exec grep -l 'import.*from.*frms-coe' {} \;

echo
echo "=== Direct imports of startup-lib or tcs-lib anywhere ==="
rg -i 'startup-lib|tcs-lib' . --type=ts --type=json 2>/dev/null | grep -v node_modules | head -20

Repository: tazama-lf/admin-service

Length of output: 3585


Resolve the @tazama-lf/frms-coe-lib version mismatch with tcs-lib before merging.

Line 36 pins @tazama-lf/frms-coe-lib to 0.0.1-psl.2, but tcs-lib (which is actively imported across the codebase—tcs-config.logic.service.ts, job.logic.service.ts, interfaces, controller, and repository) declares a dependency on 7.0.2-rc.2. This creates a transitive dependency conflict with two versions of frms-coe-lib in the node_modules tree.

Type/interface mismatches may occur at runtime when tcs-lib (built against 7.0.2-rc.2 APIs) exchanges objects with admin-service code expecting 0.0.1-psl.2. Confirm that the psl.2 branch version is API-compatible with the version tcs-lib requires, or pin both to the same compatible version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` at line 36, The package.json pins `@tazama-lf/frms-coe-lib` to
0.0.1-psl.2 which conflicts with tcs-lib (used by tcs-config.logic.service.ts,
job.logic.service.ts, interfaces, controller, repository) that depends on
7.0.2-rc.2; update package.json to align both packages to a single compatible
frms-coe-lib version (either upgrade `@tazama-lf/frms-coe-lib` to the version
tcs-lib requires or change tcs-lib’s dependency to the psl.2-compatible
version), run npm/yarn install and type-check the codepaths that use tcs-lib and
frms-coe-lib (especially types shared in tcs-config.logic.service.ts and
job.logic.service.ts) to verify no type/interface mismatches, and if API
differences exist, adapt code or choose a mutual version and pin it in
package.json to prevent transitive duplicate installs.

Comment thread src/app.controller.ts
Comment on lines +1768 to +1772
export const findActiveMaskConfigsHandler = async (req: FastifyRequest, reply: FastifyReply): Promise<void> => {
try {
const tuples = req.body as MaskTuple[];
const result = await findActiveConfigsByTuples(tuples);
reply.code(200).send({ success: true, masks: result });

⚠️ Potential issue | 🔴 Critical

Do not trust tenant_id from the request body here.

This handler forwards client-supplied tenant IDs straight into the lookup, so a caller can ask for another tenant's active config tuples. Re-stamp each tuple with req.tenantId or reject mismatches before querying.

💡 Minimal fix
 export const findActiveMaskConfigsHandler = async (req: FastifyRequest, reply: FastifyReply): Promise<void> => {
   try {
-    const tuples = req.body as MaskTuple[];
+    const { tenantId } = req as ITenantRequest;
+    const tuples = (req.body as MaskTuple[]).map((tuple) => ({
+      ...tuple,
+      tenant_id: tenantId,
+    }));
     const result = await findActiveConfigsByTuples(tuples);
     reply.code(200).send({ success: true, masks: result });
   } catch (error: unknown) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export const findActiveMaskConfigsHandler = async (req: FastifyRequest, reply: FastifyReply): Promise<void> => {
try {
const tuples = req.body as MaskTuple[];
const result = await findActiveConfigsByTuples(tuples);
reply.code(200).send({ success: true, masks: result });
export const findActiveMaskConfigsHandler = async (req: FastifyRequest, reply: FastifyReply): Promise<void> => {
try {
const { tenantId } = req as ITenantRequest;
const tuples = (req.body as MaskTuple[]).map((tuple) => ({
...tuple,
tenant_id: tenantId,
}));
const result = await findActiveConfigsByTuples(tuples);
reply.code(200).send({ success: true, masks: result });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app.controller.ts` around lines 1768 - 1772, The handler
findActiveMaskConfigsHandler currently trusts tenant_id in req.body
(MaskTuple[]) and passes them to findActiveConfigsByTuples; instead, before
calling findActiveConfigsByTuples iterate the tuples from req.body and either
(a) overwrite each tuple.tenant_id with req.tenantId (re-stamp) or (b) validate
that tuple.tenant_id === req.tenantId and return a 4xx error on mismatch; then
pass the sanitized/validated tuples to findActiveConfigsByTuples. Ensure this
logic runs inside findActiveMaskConfigsHandler right before the call to
findActiveConfigsByTuples.
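
The prompt above offers two options; the minimal fix earlier shows re-stamping (option a). A sketch of the validation alternative (option b) could look like the following. `MaskTuple` is simplified here, and the helper name is hypothetical; the real type lives in the service's models.

```typescript
// Hedged sketch: reject tuples whose tenant_id does not match the
// authenticated tenant, rather than silently re-stamping them.
interface MaskTuple {
  tenant_id: string;
  [key: string]: unknown;
}

// Returns the tuples that fail the tenant check so the handler can 403 on any.
const findTenantMismatches = (tuples: MaskTuple[], tenantId: string): MaskTuple[] =>
  tuples.filter((tuple) => tuple.tenant_id !== tenantId);
```

In the handler, a non-empty result would translate to something like `reply.code(403).send({ success: false, message: 'tenant mismatch' })` before any repository call is made.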

Comment thread src/app.controller.ts
Comment on lines +1778 to +1788
export const fetchCountApiFlow = async (req: FastifyRequest, reply: FastifyReply): Promise<void> => {
try {
const authReq = req as AuthenticatedRequest;
const { tenantId } = req as ITenantRequest;
const body = (authReq.body ?? {}) as Record<string, string>;
const result = await findMasksWithFilters(Number.MAX_SAFE_INTEGER, 0, body, tenantId);
reply.code(200).send({
success: true,
masks: result.data,
total: result.total,
});

⚠️ Potential issue | 🟠 Major

Avoid the full-row scan in this count flow.

findMasksWithFilters still executes the data query after counting, so passing Number.MAX_SAFE_INTEGER turns this into an almost-unbounded read of masks just to return total. Please add a count-only path here and return only the count metadata instead of hydrating every row.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app.controller.ts` around lines 1778 - 1788, fetchCountApiFlow currently
calls findMasksWithFilters with Number.MAX_SAFE_INTEGER which causes a full data
fetch; change this to use a count-only path (either add a count-only flag/param
to findMasksWithFilters or call a new function like countMasksWithFilters) so
the DB query only computes the total without hydrating rows, then reply with
success and the total (omit or return an empty masks array rather than fetching
all rows). Locate fetchCountApiFlow and update the call/response, or implement
and call countMasksWithFilters to return only count metadata.
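
A count-only path could be sketched as a query builder like the one below. The table name `masks`, the `tenant_id` column, and the direct pass-through of filter keys as column names are assumptions for illustration; real code should whitelist filterable columns before interpolating them into SQL.

```typescript
// Hypothetical sketch of a count-only query builder: computes the total in the
// database without hydrating any rows.
interface PgQueryConfig {
  text: string;
  values: unknown[];
}

const buildCountMasksQuery = (filters: Record<string, string>, tenantId: string): PgQueryConfig => {
  const values: unknown[] = [tenantId];
  const clauses: string[] = ['tenant_id = $1'];
  for (const [column, value] of Object.entries(filters)) {
    // NOTE: column names must be whitelisted in production code.
    values.push(value);
    clauses.push(`${column} = $${values.length}`);
  }
  return {
    text: `SELECT COUNT(*) AS count FROM masks WHERE ${clauses.join(' AND ')}`,
    values,
  };
};
```

The handler would then reply with `{ success: true, total }` and omit the `masks` array entirely.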

Comment on lines +116 to +123
export const getSimulationMessagesFromDb = async (tenantId: string, tableName: string): Promise<SimulationMessage[]> => {
const result = await handlePostExecuteSqlStatement<{ payload: SimulationMessage }>(
{
text: pgFormat('SELECT payload FROM %I WHERE tenantId = $1 order by credttm', tableName),
values: [tenantId],
} satisfies PgQueryConfig,
'simulation',
);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In PostgreSQL, if a table column is created as "tenantId", does an unquoted reference tenantId in a SELECT resolve to the same column?

💡 Result:

No. In PostgreSQL, if a table column is created as "tenantId" (using double quotes, preserving mixed case), an unquoted reference to tenantId in a SELECT statement will NOT resolve to the same column. It will fail with an error like "column "tenantid" does not exist". PostgreSQL folds all unquoted identifiers to lowercase, so unquoted tenantId becomes tenantid, while the quoted "tenantId" column was created with its mixed case preserved and must always be referenced with double quotes to match exactly. To reference the column without quotes, it must have been created unquoted (e.g., CREATE TABLE ... (tenantid ...)), in which case both tenantid and TenantId (unquoted) resolve to tenantid.

🏁 Script executed:

# Locate and examine the simulation-logs.repository.ts file
fd -t f 'simulation-logs.repository.ts' --exec wc -l {} \;

Repository: tazama-lf/admin-service

Length of output: 134


🏁 Script executed:

# Read the specific lines mentioned in the review (116-123)
fd -t f 'simulation-logs.repository.ts' --exec sed -n '110,130p' {} +


Length of output: 869


🏁 Script executed:

# Check lines 164-170 mentioned in "Also applies to"
fd -t f 'simulation-logs.repository.ts' --exec sed -n '160,175p' {} +

Repository: tazama-lf/admin-service

Length of output: 532


🏁 Script executed:

# Search for table creation patterns to verify "tenantId" quoting
rg -A 10 'CREATE TABLE.*simulation' --type sql --type ts


Length of output: 49


🏁 Script executed:

# Check for other uses of tenantId column in SELECT queries
rg 'WHERE tenantId' --type ts -B 2 -A 2


Length of output: 1034


🏁 Script executed:

# Read lines 164-170 correctly
fd -t f 'simulation-logs.repository.ts' --exec cat -n {} \; | sed -n '160,175p'


Length of output: 581


🏁 Script executed:

# Search for all unquoted tenantId references in this specific file
rg 'WHERE.*tenantId.*=' src/repositories/configuration/simulation-logs.repository.ts -n


Length of output: 167


🏁 Script executed:

# Find all SELECT statements in simulation-logs.repository.ts
rg 'SELECT' src/repositories/configuration/simulation-logs.repository.ts -n -B 2


Length of output: 667


Quote "tenantId" consistently in the SELECT query.

The table is created with a quoted camelCase "tenantId" column (line 169), but the SELECT query on line 119 reads tenantId unquoted. PostgreSQL folds unquoted identifiers to lowercase, so this lookup will fail with "column tenantid does not exist" at runtime.

Fix
-      text: pgFormat('SELECT payload FROM %I WHERE tenantId = $1 order by credttm', tableName),
+      text: pgFormat('SELECT payload FROM %I WHERE "tenantId" = $1 ORDER BY credttm', tableName),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/repositories/configuration/simulation-logs.repository.ts` around lines
116 - 123, The SELECT in getSimulationMessagesFromDb uses an unquoted tenantId
which will be folded to lowercase by Postgres; update the pgFormat call used in
the handlePostExecuteSqlStatement invocation so the WHERE clause quotes the
camelCase column name (e.g. replace tenantId with "tenantId") while keeping
tableName as %I; reference getSimulationMessagesFromDb, pgFormat and
handlePostExecuteSqlStatement to locate and change the query string.
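
The folding behavior is easy to demonstrate with a minimal identifier-quoting helper, mirroring what pg-format's %I placeholder does. This is a sketch for illustration, not a replacement for pg-format itself.

```typescript
// Minimal sketch of PostgreSQL identifier quoting. Unquoted identifiers fold
// to lowercase, so a camelCase column such as "tenantId" must be double-quoted
// to resolve; embedded double quotes are escaped by doubling them.
const quoteIdent = (name: string): string => `"${name.replace(/"/g, '""')}"`;

const buildSelect = (tableName: string): string =>
  `SELECT payload FROM ${quoteIdent(tableName)} WHERE ${quoteIdent('tenantId')} = $1 ORDER BY credttm`;
```

Routing both the table name and the camelCase column through quoting keeps the query consistent with how the table was created.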

Comment on lines +150 to +159
const tableCountResult = await handlePostExecuteSqlStatement<{ count: string }>(
{
text: "SELECT COUNT(*) AS count FROM information_schema.tables WHERE table_schema = 'public' AND table_name LIKE 'sim%'",
values: [],
} satisfies PgQueryConfig,
'simulation',
);

const tableCount = parseInt(tableCountResult.rows[0]?.count ?? '0', 10);
const nextTableName = `sim${String(tableCount + 1).padStart(3, '0')}`;

⚠️ Potential issue | 🟠 Major

Don't allocate simulation table names from COUNT(*).

This is racy: two DLH fetches can read the same count and both choose the same sim###, so separate simulation runs end up sharing one table. Please switch to an atomic allocator here, e.g. a sequence, UUID-based suffix, or another uniqueness source that does not depend on a read-then-create race.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/repositories/configuration/simulation-logs.repository.ts` around lines
150 - 159, The current allocation using tableCountResult / tableCount and
constructing nextTableName is racy; replace the read-then-create approach with
an atomic allocator such as a DB sequence or UUID. Update the code that calls
handlePostExecuteSqlStatement to request a unique suffix atomically (e.g. SELECT
nextval('sim_table_seq') or SELECT gen_random_uuid()/uuid_generate_v4()) and
then build nextTableName from that returned value instead of COUNT(*); ensure
the sequence or UUID function exists/is created/migrated as needed and keep the
query/response handling in the same function (referenced symbols:
handlePostExecuteSqlStatement, tableCountResult, tableCount, nextTableName).
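
A sequence-based allocation could be sketched as below. The sequence name `sim_table_seq` is hypothetical and would need a migration such as `CREATE SEQUENCE IF NOT EXISTS sim_table_seq;`; the repository would run `SELECT nextval('sim_table_seq') AS n` and feed the returned value to a helper instead of deriving it from COUNT(*).

```typescript
// Hedged sketch: build the table name from an atomically allocated sequence
// value. nextval() never hands the same number to two concurrent callers, so
// the read-then-create race disappears. Names wider than three digits simply
// grow past the padding.
const buildSimTableName = (seqValue: number): string => `sim${String(seqValue).padStart(3, '0')}`;
```

A UUID suffix (`gen_random_uuid()`) works equally well if sequential names are not required.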

Comment on lines +179 to +189
const documents = results.flatMap((r) => (Array.isArray(r.data) ? r.data.map((item) => item.document) : []));
if (documents.length > 0) {
const serialized = documents.map((doc) => JSON.stringify(doc));
const placeholders = serialized.map((_, i) => `($${i + 1})`).join(', ');
await handlePostExecuteSqlStatement(
{
text: pgFormat(`INSERT INTO %I (payload) VALUES ${placeholders}`, nextTableName),
values: serialized,
} satisfies PgQueryConfig,
'simulation',
);

⚠️ Potential issue | 🔴 Critical

Persist the tenant/message metadata you query on later.

The new reader filters by tenant and orders by credttm, but this insert writes only payload. Every imported row will leave "tenantId", credttm, and msgid NULL, so getSimulationMessagesFromDb cannot reliably return the fetched data. Thread tenantId into this function and insert the extracted metadata alongside payload.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/repositories/configuration/simulation-logs.repository.ts` around lines
179 - 189, The INSERT currently writes only payload so tenantId, credttm and
msgid are NULL and getSimulationMessagesFromDb cannot filter/order correctly;
modify the code around results/documents and the call to
handlePostExecuteSqlStatement to extract tenantId, credttm and msgid from each
document (or thread tenantId into this function if tenant is external), build
rows that include payload, tenantId, credttm, msgid, generate matching
placeholders (e.g. ($1,$2,$3,$4), ...), and pass the expanded values array and
an INSERT INTO %I (payload, tenant_id, credttm, msgid) VALUES ... query to
handlePostExecuteSqlStatement so inserted rows contain the metadata used by
getSimulationMessagesFromDb.
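
The expanded insert could be sketched as follows. The document field names (`CreDtTm`, `MsgId`) and the snake_case column names are assumptions about the DLH payload and the table schema; adjust them (and quote `"tenantId"` if the column was created camelCase) to match the actual migration.

```typescript
// Illustrative sketch: build a multi-row INSERT that persists tenant and
// message metadata next to the payload, so later reads can filter and order.
interface SimDocument {
  CreDtTm?: string;
  MsgId?: string;
  [key: string]: unknown;
}

const buildMetadataInsert = (
  tableName: string,
  tenantId: string,
  documents: SimDocument[],
): { text: string; values: unknown[] } => {
  const values: unknown[] = [];
  const rows = documents.map((doc) => {
    // Four values per row: serialized payload plus the extracted metadata.
    values.push(JSON.stringify(doc), tenantId, doc.CreDtTm ?? null, doc.MsgId ?? null);
    const base = values.length - 4;
    return `($${base + 1}, $${base + 2}, $${base + 3}, $${base + 4})`;
  });
  // Table name quoted inline here; real code should route it through pg-format's %I.
  return {
    text: `INSERT INTO "${tableName}" (payload, tenant_id, credttm, msgid) VALUES ${rows.join(', ')}`,
    values,
  };
};
```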

4 participants