# External File Storage -- Azure Blob Service Connector

This app bridges Azure Blob Storage and Business Central's External File Storage framework. It implements the `External File Storage Connector` interface, translating the framework's file/directory operations into Azure Blob REST API calls via the `ABS Blob Client` from System Application. It is intentionally minimal -- one table, one codeunit, three pages -- serving as the reference implementation for how connectors plug into the framework.

## Quick reference

- ID range: 4560-4569 (10 IDs, uses 4560-4562)
- No dependencies beyond the implicit System Application
- Namespace: `System.ExternalFileStorage`
- `internalsVisibleTo` the test app only

## How it works

The connector registers itself with the framework through a single enum extension (`ExtBlobStorageConnector.EnumExt.al`) that adds value `"Blob Storage"` to the `"Ext. File Storage Connector"` enum, binding it to the `"Ext. Blob Sto. Connector Impl."` codeunit. When the framework dispatches a file operation for a Blob Storage account, it calls into this codeunit via the interface.

Every operation follows the same pattern: `InitBlobClient()` loads the account record, checks the `Disabled` flag, retrieves the secret from IsolatedStorage, selects the auth strategy (SAS token or shared key), and initializes an `ABS Blob Client` scoped to the account's container. The operation then delegates to the appropriate ABS method and validates the response.
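A minimal sketch of that gate, assuming field names (`Disabled`, `Secret Key`, `Authorization Type`, `Storage Account Name`, `Container Name`) and error labels that may differ from the actual object:

```al
local procedure InitBlobClient(AccountId: Guid; var ABSBlobClient: Codeunit "ABS Blob Client")
var
    Account: Record "Ext. Blob Storage Account";
    StorageServiceAuthorization: Codeunit "Storage Service Authorization";
    Authorization: Interface "Storage Service Authorization";
    NotRegisteredErr: Label 'The account is not registered.';
    AccountDisabledErr: Label 'The account is disabled.';
begin
    // Gate 1: the account must exist
    if not Account.Get(AccountId) then
        Error(NotRegisteredErr);
    // Gate 2: disabled accounts (e.g. after sandbox copy) never reach Azure
    if Account.Disabled then
        Error(AccountDisabledErr);

    // Gate 3: GetSecret fails fast if IsolatedStorage has no entry.
    // Auth strategy: a SAS token is used as-is, a shared key is signed by the SDK.
    case Account."Authorization Type" of
        Account."Authorization Type"::SasToken:
            Authorization := StorageServiceAuthorization.UseReadySAS(Account.GetSecret(Account."Secret Key"));
        Account."Authorization Type"::SharedKey:
            Authorization := StorageServiceAuthorization.CreateSharedKey(Account.GetSecret(Account."Secret Key"));
    end;

    // Scope the client to the account's container
    ABSBlobClient.Initialize(Account."Storage Account Name", Account."Container Name", Authorization);
end;
```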

Azure Blob Storage is flat -- it has no native directories. The connector simulates directories by uploading a marker file (`BusinessCentral.FileSystem.txt`) at the directory path. Listing directories filters for this marker; deleting a directory deletes the marker blob. This means directories can "disappear" if something external deletes the marker blob while files remain at that path.

Account credentials are never stored in the database. The table holds only a GUID reference (`Secret Key`); the actual SAS token or shared key lives in IsolatedStorage at company scope. On sandbox creation from production, an environment cleanup subscriber auto-disables all accounts to prevent credential leakage.

## Structure

- `src/` -- all AL objects (table, codeunit, pages, enums)
- `permissions/` -- three permission sets (objects, read, edit) plus two permission set extensions
- `Entitlements/` -- single entitlement for connector access
- `data/` -- connector logo resource

## Documentation

- [docs/data-model.md](docs/data-model.md) -- table design and secret storage
- [docs/business-logic.md](docs/business-logic.md) -- operation flows, directory simulation, account registration
- [docs/extensibility.md](docs/extensibility.md) -- how to build a new connector using this as reference
- [docs/patterns.md](docs/patterns.md) -- marker files, copy-then-delete moves, environment cleanup

## Things to know

- MoveFile is not atomic. It does CopyBlob then DeleteBlob. If the copy succeeds but the delete fails, the file exists in both locations with no way for the caller to detect partial failure.
- DirectoryExists returns true for empty path (root always exists). FileExists returns false for empty path. This asymmetry is intentional.
- ListBlobs pages in batches of 500 via `ABSOptionalParameters.MaxResults(500)` and marker-based pagination. The framework's `FilePaginationData` codeunit carries the continuation marker between calls.
- The wizard page (`ExtBlobStorAccountWizard.Page.al`) uses a temporary source table. The account record is only persisted when the user clicks Next, which calls `CreateAccount()`.
- Container name supports a lookup that calls `ABS Container Client.ListContainers()` -- this requires a valid storage account name and secret before it works.
- The `Secret` field on the account page shows `'***'` for existing accounts (see `OnAfterGetCurrRecord`). The actual secret text is never displayed or returned to the client.
- All pages except the account card are `Extensible = false`. The table is extensible by default.
- The codeunit is `Access = Internal` with `InherentEntitlements = X` and `InherentPermissions = X` -- it relies on the entitlement and permission sets to gate access rather than granting permissions implicitly.
- ListFiles uses `Delimiter('/')` to scope results to the immediate directory level (no recursive listing). ListDirectories filters by `Resource Type::Directory` at the same prefix level.
- The `CheckPath()` helper ensures all non-empty paths end with `/`. This normalization happens before every listing operation.
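A rough sketch of that normalization (the actual helper may differ):

```al
local procedure CheckPath(var Path: Text)
begin
    // Azure prefixes are plain string matches, so 'invoices' must become
    // 'invoices/' before it can act as a directory prefix in listing calls.
    if (Path <> '') and not Path.EndsWith('/') then
        Path += '/';
end;
```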

---

# Business logic

## Core operation pattern

Every file operation in `ExtBlobStoConnectorImpl.Codeunit.al` follows the same structure: initialize the blob client, perform the Azure operation, validate the response. The initialization is the critical gate -- it enforces account existence, disabled-state checks, and secret retrieval before any network call happens.

```mermaid
flowchart TD
A[Interface method called] --> B[InitBlobClient]
B --> C{Account exists?}
C -- No --> D[Error: not registered]
C -- Yes --> E{Account disabled?}
E -- Yes --> F[Error: account disabled]
E -- No --> G[GetSecret from IsolatedStorage]
G --> H{Auth type?}
H -- SasToken --> I[UseReadySAS]
H -- SharedKey --> J[CreateSharedKey]
I --> K[Initialize ABS Blob Client]
J --> K
K --> L[Execute Azure operation]
L --> M{Response successful?}
M -- No --> N[Error with response message]
M -- Yes --> O[Return result]
```

## Account registration

The registration flow is wizard-driven. The framework calls `RegisterAccount()`, which opens the wizard page modally. The page uses a temporary source table -- nothing is persisted until the user completes the wizard.

```mermaid
flowchart TD
A[Framework calls RegisterAccount] --> B[Open wizard page modal]
B --> C[User enters name, storage account, auth type, secret]
C --> D[User enters or looks up container name]
D --> E{Container lookup?}
E -- Yes --> F[ABS Container Client.ListContainers]
F --> G[Show container lookup page]
G --> H1[User selects container]
E -- No --> H2[User types container name]
H1 --> I{IsAccountValid?}
H2 --> I
I -- No --> J[Next button disabled]
I -- Yes --> K[User clicks Next]
K --> L[CreateAccount: generate GUID, store secret, insert record]
L --> M[Return File Account to framework]
```

The `IsAccountValid()` check requires all three fields -- Name, Storage Account Name, and Container Name -- to be non-empty. The secret is validated only implicitly when the user attempts a container lookup.
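Sketched as a wizard-page helper (field names assumed):

```al
local procedure IsAccountValid(): Boolean
begin
    // All three identifying fields must be filled; the secret is not checked here.
    // It is only validated implicitly if the user performs a container lookup.
    exit((Rec.Name <> '') and (Rec."Storage Account Name" <> '') and (Rec."Container Name" <> ''));
end;
```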

## Directory simulation

Azure Blob Storage has no directory concept. The connector fakes it using a marker file named `BusinessCentral.FileSystem.txt`. This is the most surprising part of the implementation.

```mermaid
flowchart TD
A[CreateDirectory called] --> B{DirectoryExists?}
B -- Yes --> C[Error: directory already exists]
B -- No --> D[Create marker file at path/BusinessCentral.FileSystem.txt]
D --> E[Upload via CreateFile with marker content]

F[ListDirectories called] --> G[ListBlobs with prefix and delimiter]
G --> H[Filter: Resource Type = Directory]
H --> I[Return matching entries]

J[DeleteDirectory called] --> K[ListFiles + ListDirectories at path]
K --> L{Only marker file present?}
L -- No --> M[Error: directory not empty]
L -- Yes --> N[DeleteFile marker blob]
```

The marker file content is a human-readable message: "This is a directory marker file created by Business Central. It is safe to delete it." If someone deletes this marker outside of BC, the directory vanishes from BC's perspective even though files at that path still exist in blob storage.

Note that `DeleteDirectory` enforces emptiness by listing both files and subdirectories. The filter `TempFileAccountContent.SetFilter(Name, '<>%1', MarkerFileNameTok)` excludes the marker file itself from the emptiness check.
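The create path can be sketched as follows; the helper names (`CheckPath`, `DirectoryExists`, `InitBlobClient`), labels, and the `PutBlobBlockBlobText` upload call are assumptions based on the flow above:

```al
local procedure CreateDirectory(AccountId: Guid; Path: Text)
var
    ABSBlobClient: Codeunit "ABS Blob Client";
    ABSOperationResponse: Codeunit "ABS Operation Response";
    MarkerFileNameTok: Label 'BusinessCentral.FileSystem.txt', Locked = true;
    MarkerContentTxt: Label 'This is a directory marker file created by Business Central. It is safe to delete it.', Locked = true;
    DirectoryExistsErr: Label 'The directory already exists.';
begin
    CheckPath(Path);  // ensure trailing '/'
    if DirectoryExists(AccountId, Path) then
        Error(DirectoryExistsErr);

    InitBlobClient(AccountId, ABSBlobClient);
    // A "directory" is nothing but this marker blob at the directory path
    ABSOperationResponse := ABSBlobClient.PutBlobBlockBlobText(Path + MarkerFileNameTok, MarkerContentTxt);
    if not ABSOperationResponse.IsSuccessful() then
        Error(ABSOperationResponse.GetError());
end;
```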

## Listing and pagination

File and directory listing operations page through results in batches of 500 (`MaxResults(500)` in `InitOptionalParameters`). The `FilePaginationData` codeunit carries the continuation token (`NextMarker`) between calls. After each batch, `ValidateListingResponse` updates the marker and sets `EndOfListing` when no more pages remain.

ListFiles uses the `/` delimiter to scope results to the current directory level and filters out entries with empty `Blob Type` (directory placeholders) and the marker file. ListDirectories omits the delimiter to get prefix-based grouping and filters for `Resource Type::Directory`.
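One listing page might look like this sketch; the optional-parameter calls (`MaxResults`, `Marker`, `Prefix`, `Delimiter`) mirror Azure's listing URI parameters, and the procedure shape is an assumption:

```al
local procedure ListFilesPage(var ABSBlobClient: Codeunit "ABS Blob Client"; Path: Text; var FilePaginationData: Codeunit "File Pagination Data"; var ABSContainerContent: Record "ABS Container Content" temporary)
var
    ABSOptionalParameters: Codeunit "ABS Optional Parameters";
    ABSOperationResponse: Codeunit "ABS Operation Response";
begin
    // One page: at most 500 entries, resuming at the continuation marker
    ABSOptionalParameters.MaxResults(500);
    ABSOptionalParameters.Marker(FilePaginationData.GetMarker());
    ABSOptionalParameters.Prefix(Path);
    ABSOptionalParameters.Delimiter('/'); // scope to the immediate directory level

    ABSOperationResponse := ABSBlobClient.ListBlobs(ABSContainerContent, ABSOptionalParameters);
    if not ABSOperationResponse.IsSuccessful() then
        Error(ABSOperationResponse.GetError());
    // Caller then filters out the marker file and directory placeholders,
    // and stores NextMarker / EndOfListing back into FilePaginationData.
end;
```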

## MoveFile

MoveFile deserves special attention because it is not atomic. The implementation calls `CopyBlob(target, source)` followed by `DeleteBlob(source)`. If the copy succeeds but the delete fails, the file ends up in both locations. The caller gets an error (from the failed delete), but the copy has already completed and will not be rolled back. There is no transaction boundary or compensation logic.
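A sketch of the two-step move; the argument order follows the `CopyBlob(target, source)` description above, though the actual SDK signature may differ:

```al
procedure MoveFile(AccountId: Guid; SourcePath: Text; TargetPath: Text)
var
    ABSBlobClient: Codeunit "ABS Blob Client";
    ABSOperationResponse: Codeunit "ABS Operation Response";
begin
    InitBlobClient(AccountId, ABSBlobClient);

    // Step 1: server-side copy to the target
    ABSOperationResponse := ABSBlobClient.CopyBlob(TargetPath, SourcePath);
    if not ABSOperationResponse.IsSuccessful() then
        Error(ABSOperationResponse.GetError());

    // Step 2: delete the source. A failure here surfaces as an error,
    // but the copy is already committed -- the blob now exists at both paths.
    ABSOperationResponse := ABSBlobClient.DeleteBlob(SourcePath);
    if not ABSOperationResponse.IsSuccessful() then
        Error(ABSOperationResponse.GetError());
end;
```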

---

# Data model

## Overview

The connector's persistence is deliberately minimal. A single table (`Ext. Blob Storage Account`, ID 4560) stores the Azure connection configuration. Credentials never touch the database -- they live in IsolatedStorage, referenced by a GUID held in the table. At runtime, the framework passes around temporary `File Account` and `File Account Content` records that are never persisted.

```mermaid
erDiagram
ExtBlobStorageAccount ||--|| IsolatedStorage : "Secret Key GUID references"
ExtBlobStorageAccount ||--|{ FileAccount : "populates (temporary)"
FileAccount ||--|{ FileAccountContent : "lists into (temporary)"
ExtBlobStorageAccount }|--|| ExtFileStorageConnectorEnum : "registered via"
```

## How secrets work

The `Secret Key` field is a GUID, not a credential. When an account is created, `SetSecret()` generates a new GUID, stores the actual secret text in IsolatedStorage at company scope keyed by that GUID, and saves only the GUID to the table. Every operation retrieves the secret on demand via `GetSecret()` -- the secret is never cached in memory across calls.

The OnDelete trigger cleans up: if the Secret Key GUID is not null and IsolatedStorage contains it, the entry is purged. This prevents orphaned secrets when accounts are deleted.
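The table members can be sketched together like this; procedure shapes, the `IsolatedStorage` overloads used, and the error label are assumptions:

```al
procedure SetSecret(var SecretKeyGuid: Guid; SecretValue: SecretText)
begin
    // The table stores only the GUID; the secret goes to company-scoped IsolatedStorage
    if IsNullGuid(SecretKeyGuid) then
        SecretKeyGuid := CreateGuid();
    IsolatedStorage.Set(Format(SecretKeyGuid), SecretValue, DataScope::Company);
end;

procedure GetSecret(SecretKeyGuid: Guid) SecretValue: SecretText
var
    SecretNotFoundErr: Label 'The secret for this account could not be found.';
begin
    // Fail fast: a missing secret is a configuration error, not an empty value
    if not IsolatedStorage.Get(Format(SecretKeyGuid), DataScope::Company, SecretValue) then
        Error(SecretNotFoundErr);
end;

trigger OnDelete()
begin
    // Purge the IsolatedStorage entry so deleted accounts leave no orphaned secret
    if IsNullGuid(Rec."Secret Key") then
        exit;
    if IsolatedStorage.Contains(Format(Rec."Secret Key"), DataScope::Company) then
        IsolatedStorage.Delete(Format(Rec."Secret Key"), DataScope::Company);
end;
```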

Company scope means the secret is isolated per-company within a tenant. If you copy a company, the IsolatedStorage entries do not come along -- the accounts will fail to authenticate until secrets are re-entered.

## The Disabled flag

The `Disabled` boolean exists specifically for the sandbox scenario. When a sandbox environment is created from production, the `EnvironmentCleanup::OnClearCompanyConfig` event fires. The connector subscribes to this and sets `Disabled = true` on every non-disabled account. `InitBlobClient()` checks this flag before every operation and errors immediately if the account is disabled. This prevents production credentials (which do survive in IsolatedStorage across environment copy) from being used in sandbox.

## Temporary records at runtime

The framework defines `File Account` and `File Account Content` as tables, but they are always used with `temporary` -- never written to the database. `GetAccounts()` iterates the real `Ext. Blob Storage Account` table and populates temporary `File Account` records. File listing operations populate temporary `File Account Content` records with name, type (file vs. directory), and parent directory. These temporaries exist only for the duration of the framework call.
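A sketch of the population loop, assuming the `File Account` field names:

```al
procedure GetAccounts(var TempFileAccount: Record "File Account" temporary)
var
    Account: Record "Ext. Blob Storage Account";
begin
    if not Account.FindSet() then
        exit;
    repeat
        // Temporary only: these records are handed to the framework, never persisted
        TempFileAccount.Init();
        TempFileAccount."Account Id" := Account.Id;
        TempFileAccount.Name := Account.Name;
        TempFileAccount.Connector := Enum::"Ext. File Storage Connector"::"Blob Storage";
        TempFileAccount.Insert();
    until Account.Next() = 0;
end;
```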

---

# Extensibility

## Building a new connector

This app is the reference implementation for the External File Storage connector pattern. The framework lives in System Application; connectors are separate apps that plug in via interface implementation and enum extension. To build a new connector (say, for Google Cloud Storage), you need three things:

**1. Implement the interface.** Create a codeunit that implements `"External File Storage Connector"`. This interface defines the contract: file operations (list, get, create, delete, copy, move, exists), directory operations (list, create, delete, exists), account management (register, delete, get accounts, show info), and metadata (description, logo). Look at `ExtBlobStoConnectorImpl.Codeunit.al` for the full set of procedures -- your codeunit must implement all of them.
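A skeleton might start like this; the exact procedure signatures are abbreviated here, so consult the interface object for the authoritative contract:

```al
codeunit 50101 "My Cloud Storage Impl." implements "External File Storage Connector"
{
    procedure ListFiles(AccountId: Guid; Path: Text; FilePaginationData: Codeunit "File Pagination Data"; var TempFileAccountContent: Record "File Account Content" temporary)
    begin
        // Translate to your backend's list API, filling TempFileAccountContent
    end;

    procedure FileExists(AccountId: Guid; Path: Text): Boolean
    begin
        // HEAD-style existence check against your backend
    end;

    // ...and the rest of the contract: GetFile, CreateFile, CopyFile, MoveFile,
    // DeleteFile, ListDirectories, CreateDirectory, DeleteDirectory,
    // DirectoryExists, GetAccounts, RegisterAccount, DeleteAccount,
    // ShowAccountInformation, GetDescription, GetLogoAsBase64
}
```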

**2. Extend the enum.** Create an enum extension on `"Ext. File Storage Connector"` that adds your connector as a new value. The value must use the `Implementation` property to bind your interface implementation codeunit. This is how the framework discovers and dispatches to your connector:

```al
enumextension 50100 "My Cloud Storage Connector" extends "Ext. File Storage Connector"
{
value(50100; "My Cloud Storage")
{
Caption = 'My Cloud Storage';
Implementation = "External File Storage Connector" = "My Cloud Storage Impl.";
}
}
```

**3. Manage your own account storage.** The framework does not prescribe how you store account configuration. This connector uses a dedicated table (`Ext. Blob Storage Account`) with IsolatedStorage for secrets. Your connector needs its own table for connection details. The framework only cares about the `File Account` temporary record (Account Id, Name, Connector enum value) that your `GetAccounts()` and `RegisterAccount()` procedures return.

## What the framework provides

The framework handles the UI for selecting connectors, listing accounts, and dispatching operations. Your connector never needs to build a "file browser" -- the framework provides that. Your job is to translate the abstract operations into your storage backend's API.

The `File Pagination Data` codeunit is worth understanding. The framework may call your listing procedures multiple times for the same path, passing this codeunit each time. You store your continuation token in it (via `SetMarker`/`GetMarker`) and signal completion with `SetEndOfListing`. The Blob Storage connector uses Azure's `NextMarker` for this, but you can store any string your backend needs for pagination.
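In a new connector, the contract looks roughly like this; `CallBackendList` is a hypothetical helper standing in for your backend's paged list API:

```al
procedure ListFiles(AccountId: Guid; Path: Text; FilePaginationData: Codeunit "File Pagination Data"; var TempFileAccountContent: Record "File Account Content" temporary)
var
    NextToken: Text;
begin
    // Resume where the previous call stopped (empty marker on the first call)
    NextToken := CallBackendList(AccountId, Path, FilePaginationData.GetMarker(), TempFileAccountContent);

    // Hand the continuation token back to the framework for the next call
    FilePaginationData.SetMarker(NextToken);
    if NextToken = '' then
        FilePaginationData.SetEndOfListing(true);
end;
```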

## What you cannot extend on this connector

The wizard page (`ExtBlobStorAccountWizard.Page.al`) and container lookup page are both `Extensible = false`. The account card page is also `Extensible = false`. You cannot add fields to these pages via page extensions.

The table is extensible by default, but extending it would be unusual -- if you need different connection parameters, you would create your own table in your own connector app.

The codeunit is `Access = Internal`, so you cannot call its procedures directly from outside the app. The only public surface is the interface contract.

## No events to subscribe to

This connector publishes no events. It subscribes to one -- `EnvironmentCleanup::OnClearCompanyConfig` -- but that is a framework event, not one it defines. If you build a connector that stores credentials, you should subscribe to the same event and disable accounts on sandbox copy, following the pattern in the `EnvironmentCleanup_OnClearCompanyConfig` subscriber.

---

# Patterns

## Marker file for directory simulation

Azure Blob Storage is flat -- blobs live in a single namespace with `/` as a conventional (but meaningless) separator. The connector simulates directories by creating a sentinel blob named `BusinessCentral.FileSystem.txt` at the directory path. Creating directory `invoices/2024/` actually uploads `invoices/2024/BusinessCentral.FileSystem.txt` with a human-readable explanation as content.

This has consequences worth knowing:

- If something outside BC deletes the marker blob, the directory vanishes from BC's perspective, but files at that path remain in blob storage. They become orphans that BC cannot navigate to (because the parent "directory" no longer exists in listings).
- DirectoryExists does not look for the marker specifically. It does `ListBlobs` with the path as prefix and `MaxResults(1)`. Any blob at that prefix makes the directory "exist." So a directory with files but no marker still "exists" as far as DirectoryExists is concerned -- the inconsistency only shows up in ListDirectories, which filters by `Resource Type::Directory`.
- DeleteDirectory refuses to delete non-empty directories. It lists files and subdirectories, excludes the marker file from the count, and errors if anything remains. Then it deletes the marker blob. This means you cannot recursively delete a directory tree in one call.

## Copy-then-delete for move

`MoveFile()` in `ExtBlobStoConnectorImpl.Codeunit.al` implements move as `CopyBlob` followed by `DeleteBlob`. Azure Blob Storage has no native move/rename operation, so this is the only option.

The failure mode matters: if the copy succeeds but the delete fails, the file exists at both source and target paths. The codeunit raises an error from the failed delete, so the caller knows something went wrong, but the copy is already committed and will not be rolled back. The caller would need to manually clean up the target copy. There is no retry or compensation logic.

This also means MoveFile is not constant-time. CopyBlob for a large blob can take significant time because Azure performs a server-side copy, and the operation blocks until completion.

## Lazy secret retrieval

Secrets are fetched from IsolatedStorage on every single operation call via `InitBlobClient()`. The codeunit does not cache the secret across calls -- each invocation of a file operation does a full `IsolatedStorage.Get()`. This is intentional: it avoids holding secrets in memory longer than necessary, and it means a secret rotation (updating the secret via the account page) takes effect immediately without needing to invalidate any cache.

The `GetSecret()` procedure on the table errors immediately if the secret is not found in IsolatedStorage, rather than returning an empty value. This fail-fast behavior surfaces misconfiguration clearly instead of producing cryptic Azure auth failures downstream.

## Environment cleanup hook

The `EnvironmentCleanup_OnClearCompanyConfig` event subscriber is a safety mechanism for sandbox environments. When a sandbox is created from production, this event fires for each company. The subscriber sets `Disabled = true` on all non-disabled accounts.

The key insight is that IsolatedStorage contents survive the environment copy -- production secrets are present in the sandbox. Without this hook, a sandbox could inadvertently perform operations against production Azure storage. The `Disabled` flag acts as a circuit breaker: `InitBlobClient()` checks it before every operation and errors immediately for disabled accounts.

The subscriber filters to only non-disabled accounts (`SetRange(Disabled, false)`) and uses `ModifyAll` for efficiency. It does not distinguish between environment types -- any sandbox creation disables all accounts regardless of whether the source was production or another sandbox.
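The subscriber can be sketched as follows; the event signature is an assumption and should be checked against the `Environment Cleanup` codeunit:

```al
[EventSubscriber(ObjectType::Codeunit, Codeunit::"Environment Cleanup", 'OnClearCompanyConfig', '', false, false)]
local procedure EnvironmentCleanup_OnClearCompanyConfig(CompanyName: Text; SourceEnv: Enum "Environment Type"; DestinationEnv: Enum "Environment Type")
var
    Account: Record "Ext. Blob Storage Account";
begin
    // Circuit breaker: secrets copied along in IsolatedStorage must not be usable.
    // Set-based ModifyAll avoids looping over records one by one.
    Account.SetRange(Disabled, false);
    Account.ModifyAll(Disabled, true);
end;
```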

## Auth strategy selection

`InitBlobClient()` uses a case statement on the account's authorization type to select between SAS token and shared key authentication. SAS tokens go through `UseReadySAS()` (the token is used as-is), while shared keys go through `CreateSharedKey()` (the SDK handles HMAC signing). This is a simple strategy pattern without the overhead of a separate strategy interface -- the two-case switch is sufficient given there are only two auth types.

The same pattern repeats in `LookUpContainer()` for the container name lookup during account setup, where it initializes an `ABS Container Client` instead of an `ABS Blob Client`.
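The container-side variant might look like this sketch; the procedure shape and the generic lookup page via `Page.RunModal(0, ...)` are assumptions:

```al
local procedure LookUpContainer(StorageAccountName: Text; Authorization: Interface "Storage Service Authorization"; var ContainerName: Text)
var
    ABSContainer: Record "ABS Container" temporary;
    ABSContainerClient: Codeunit "ABS Container Client";
begin
    // Same auth strategy selection as InitBlobClient, but for a container client
    ABSContainerClient.Initialize(StorageAccountName, Authorization);
    ABSContainerClient.ListContainers(ABSContainer);

    // Let the user pick a container from the temporary result set
    if Page.RunModal(0, ABSContainer) = Action::LookupOK then
        ContainerName := ABSContainer.Name;
end;
```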