## 6. Connection & Protocol Layer

Manage SFTP, FTP, local filesystem, and Azure Function connections securely.

This section covers the abstraction layer for remote file transfer protocols (SFTP, FTP, FTPS), connection management, credential storage, and the session lifecycle that integrates with the Job Engine.
### 6.1 Connection Entity
A Connection is a first-class persisted entity representing a configured remote server. Connections are defined once and referenced by multiple jobs — changing a connection's configuration automatically affects all jobs that use it.
| Field | Type | Description |
|---|---|---|
| `id` | UUID | Internal identifier referenced by job step configuration |
| `name` | TEXT | Human-readable label (e.g., "Partner X Production SFTP") |
| `group` | TEXT | Organizational folder (e.g., "Partner X", "Legacy Systems") |
| `protocol` | ENUM | `SFTP`, `FTP`, `FTPS`, `azure_function` |
| `host` | TEXT | Hostname or IP address |
| `port` | INT | Port number (defaults: SFTP=22, FTP=21, FTPS=990) |
| `auth_method` | ENUM | `Password`, `SshKey`, `PasswordAndSshKey`, `service_principal` |
| `username` | TEXT | Login username |
| `password_encrypted` | BYTEA | AES-256 encrypted password (nullable); master key for `azure_function` |
| `client_secret_encrypted` | BYTEA | AES-256 encrypted Entra client secret (nullable; used by the `azure_function` protocol) |
| `properties` | JSONB | Protocol-specific config (e.g., `workspace_id`, `tenant_id`, `client_id` for `azure_function`) |
| `ssh_key_id` | UUID | FK to the SSH Key Store (nullable) |
| `host_key_policy` | ENUM | `TrustOnFirstUse`, `AlwaysTrust`, `Manual` |
| `stored_host_fingerprint` | TEXT | Known host fingerprint for TOFU/Manual policies |
| `passive_mode` | BOOL | FTP/FTPS: use passive mode (default: `true`) |
| `tls_version_floor` | ENUM | FTPS: minimum TLS version (default: `TLS_1_2`) |
| `tls_cert_policy` | ENUM | FTPS: certificate validation policy — `SystemTrust`, `PinnedThumbprint`, `Insecure` (default: `SystemTrust`; see Section 6.3.2) |
| `tls_pinned_thumbprint` | TEXT | FTPS: expected SHA-256 certificate thumbprint for the `PinnedThumbprint` policy (nullable) |
| `ssh_algorithms` | JSONB | SFTP: preferred/restricted key exchange, cipher, and MAC algorithms |
| `connect_timeout_sec` | INT | Connection timeout (default: 30) |
| `operation_timeout_sec` | INT | Per-operation timeout (default: 300) |
| `keepalive_interval_sec` | INT | Session keepalive ping interval (default: 60) |
| `transport_retries` | INT | Auto-reconnect attempts on connection drop (default: 2, max: 3) |
| `status` | ENUM | `Active`, `Disabled` |
| `created_at` | TIMESTAMP | Creation timestamp |
| `updated_at` | TIMESTAMP | Last modification timestamp |
| `notes` | TEXT | Free-text notes (e.g., partner contact info, maintenance windows) |
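As a concrete illustration, a stored SFTP connection might look like the following (all values hypothetical; encrypted and timestamp fields omitted):

```json
{
  "id": "3f2b7c1e-0000-0000-0000-000000000000",
  "name": "Partner X Production SFTP",
  "group": "Partner X",
  "protocol": "SFTP",
  "host": "sftp.partnerx.example.com",
  "port": 22,
  "auth_method": "SshKey",
  "ssh_key_id": "9a8b7c6d-0000-0000-0000-000000000000",
  "host_key_policy": "TrustOnFirstUse",
  "connect_timeout_sec": 30,
  "operation_timeout_sec": 300,
  "keepalive_interval_sec": 60,
  "transport_retries": 2,
  "status": "Active"
}
```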
#### 6.1.1 Connection Groups
Connections can be assigned to a group for organizational purposes. Groups are simple text labels — no separate entity or hierarchy. The frontend UI renders connections grouped by this field, with an "Ungrouped" section for connections without a group.
Common groupings: by partner name, by environment (Production / UAT / Dev), by department, or by data flow direction (inbound / outbound).
### 6.2 Unified Transfer Interface
All file transfer operations go through a common interface regardless of protocol. Protocol-specific implementations handle the underlying differences transparently.
```csharp
public interface ITransferClient : IAsyncDisposable
{
    string Protocol { get; }   // "sftp", "ftp", "ftps"
    bool IsConnected { get; }

    // Connection lifecycle
    Task ConnectAsync(CancellationToken cancellationToken);
    Task DisconnectAsync();

    // File operations
    Task UploadAsync(
        UploadRequest request,
        IProgress<TransferProgress> progress,
        CancellationToken cancellationToken);

    Task DownloadAsync(
        DownloadRequest request,
        IProgress<TransferProgress> progress,
        CancellationToken cancellationToken);

    Task RenameAsync(string oldPath, string newPath,
        CancellationToken cancellationToken);

    Task DeleteFileAsync(string remotePath,
        CancellationToken cancellationToken);

    // Directory operations
    Task<IReadOnlyList<RemoteFileInfo>> ListDirectoryAsync(
        string remotePath,
        CancellationToken cancellationToken);

    Task CreateDirectoryAsync(string remotePath,
        CancellationToken cancellationToken);

    Task DeleteDirectoryAsync(string remotePath, bool recursive,
        CancellationToken cancellationToken);

    // Diagnostics
    Task<ConnectionTestResult> TestAsync(
        CancellationToken cancellationToken);
}

public record UploadRequest(
    string LocalPath,
    string RemotePath,
    bool AtomicUpload,      // Upload as .tmp then rename
    string AtomicSuffix,    // Default: ".tmp"
    bool ResumePartial);    // Attempt resume if partial exists

public record DownloadRequest(
    string RemotePath,
    string LocalPath,
    bool ResumePartial);    // Attempt resume if partial exists

public record TransferProgress(
    long BytesTransferred,
    long TotalBytes,
    string CurrentFile,
    double TransferRateBytesPerSec);

public record RemoteFileInfo(
    string Name,
    string FullPath,
    long Size,
    DateTime LastModified,
    bool IsDirectory);

public record ConnectionTestResult(
    bool Success,
    TimeSpan Latency,
    string? ServerBanner,
    string? ErrorMessage,
    IReadOnlyList<string>? SupportedAlgorithms);  // SFTP only
```
### 6.3 Protocol Implementations

#### 6.3.1 SFTP — SSH.NET

The SFTP implementation uses SSH.NET (the `SSH.NET` NuGet package), a free, open-source, mature SSH library for .NET optimized for parallelism. It supports .NET 10 and provides both synchronous and asynchronous SFTP operations.
Key integration points:

- **Authentication**: `PasswordAuthenticationMethod`, `PrivateKeyAuthenticationMethod`, or both combined in a single `ConnectionInfo`. Private keys are loaded from the SSH Key Store at connection time.
- **Host key verification**: Handled via the `HostKeyReceived` event on `SftpClient`. The implementation checks the connection's `host_key_policy` and either accepts, compares against `stored_host_fingerprint`, or rejects.
- **Transfer resume**: SSH.NET supports offset-based operations. For upload resume, the client checks the remote file size and begins writing at that offset. For download resume, the client checks the local file size and requests data starting from that position.
- **Keepalive**: Configured via `ConnectionInfo.Timeout` and periodic `SendKeepAlive()` calls on a background timer.
- **Algorithm configuration**: SSH.NET allows specifying preferred key exchange, encryption, and MAC algorithms via `ConnectionInfo`. The connection entity's `ssh_algorithms` JSONB field maps directly to these settings.
#### 6.3.2 FTP / FTPS — FluentFTP
The FTP and FTPS implementations use FluentFTP (FluentFTP NuGet package), a widely-used, actively maintained FTP library that handles the many quirks and edge cases of the FTP specification across different server implementations.
Key integration points:

- **Plain FTP**: Standard unencrypted FTP. Used only for legacy systems where no secure alternative is available.
- **FTPS (Explicit)**: Connection starts as plain FTP, then upgrades to TLS via the `AUTH TLS` command. Configured via `FtpConfig.EncryptionMode = FtpEncryptionMode.Explicit`.
- **FTPS (Implicit)**: Connection is TLS from the start on a dedicated port (typically 990). Configured via `FtpConfig.EncryptionMode = FtpEncryptionMode.Implicit`.
- **Passive mode**: Default and recommended. Configured via `FtpConfig.DataConnectionType = FtpDataConnectionType.PASV`. Active mode is available for environments that require it.
- **TLS configuration**: Minimum TLS version set via `FtpConfig.SslProtocols`. Certificate validation is controlled by `tls_cert_policy` (see below).
- **Transfer resume**: FluentFTP supports the FTP `REST` (restart) command. For uploads, `FtpRemoteExists.Resume` appends to existing partial files. For downloads, `FtpLocalExists.Resume` continues from the current local file size.
FTPS certificate validation:

FluentFTP delegates certificate validation to a `ValidateCertificate` event callback. If no handler is attached, FluentFTP accepts all certificates by default — including expired, self-signed, and hostname-mismatched certificates. Courier always attaches an explicit handler based on the connection's `tls_cert_policy`:
| Policy | Behavior | Who Can Set | FIPS Mode |
|---|---|---|---|
| `SystemTrust` (default) | Validate against the OS/container trust store. Rejects expired, self-signed, revoked, and hostname-mismatched certificates. | Any role | Allowed |
| `PinnedThumbprint` | Validate that the server certificate's SHA-256 thumbprint exactly matches `tls_pinned_thumbprint` on the connection. Ignores the trust store (supports self-signed partner certs with a known thumbprint). Recommended for partners with self-signed or internal-CA certs. | Any role | Allowed |
| `Insecure` | Accept any certificate without validation. Disables TLS verification entirely. Same restrictions as SSH `AlwaysTrust`: Admin-only, blocked in FIPS mode, blocked in production by default, audited on every use. | Admin only | Blocked |
FluentFTP callback implementation:

```csharp
private void ConfigureTlsValidation(FtpClient client, Connection connection)
{
    client.ValidateCertificate += (control, e) =>
    {
        switch (connection.TlsCertPolicy)
        {
            case TlsCertPolicy.SystemTrust:
                // Delegate to OS trust store — SslPolicyErrors == None means valid
                e.Accept = e.PolicyErrors == System.Net.Security.SslPolicyErrors.None;
                if (!e.Accept)
                    _logger.LogWarning("TLS cert rejected for {Host}: {Errors}",
                        connection.Host, e.PolicyErrors);
                break;

            case TlsCertPolicy.PinnedThumbprint:
                var thumbprint = e.Certificate.GetCertHashString(
                    System.Security.Cryptography.HashAlgorithmName.SHA256);
                e.Accept = string.Equals(
                    thumbprint,
                    connection.TlsPinnedThumbprint,
                    StringComparison.OrdinalIgnoreCase);
                if (!e.Accept)
                    _logger.LogWarning(
                        "TLS cert thumbprint mismatch for {Host}: " +
                        "expected {Expected}, got {Actual}",
                        connection.Host,
                        connection.TlsPinnedThumbprint,
                        thumbprint);
                break;

            case TlsCertPolicy.Insecure:
                // Accept anything, but log for audit
                _auditService.LogInsecureTlsCertAccepted(
                    connection.Id, connection.Host,
                    e.Certificate.Subject, e.PolicyErrors.ToString());
                e.Accept = true;
                break;
        }
    };
}
```
Restrictions on the `Insecure` cert policy (identical to SSH `AlwaysTrust`):

- Admin-only to set (error `3006: Insecure TLS policy requires admin`)
- Blocked when FIPS mode is enabled (error `3007: Insecure TLS policy not allowed in FIPS mode`)
- Blocked in production by default (same `security.insecure_trust_allow_production` setting)
- Audit event `InsecureTlsPolicyUsed` on every connection, with certificate subject, issuer, and policy errors
#### 6.3.3 Protocol Support Matrix
| Capability | SFTP | FTP | FTPS |
|---|---|---|---|
| Encryption in transit | Yes (SSH) | No | Yes (TLS) |
| Password auth | Yes | Yes | Yes |
| SSH key auth | Yes | N/A | N/A |
| Combined auth | Yes | N/A | N/A |
| Transfer resume | Yes | Server-dependent | Server-dependent |
| Passive mode | N/A | Yes | Yes |
| Host key verification | Yes | N/A | N/A |
| Certificate validation | N/A | N/A | Yes |
| Directory listing | Yes | Yes | Yes |
| Atomic rename | Yes | Yes | Yes |
| Large file streaming | Yes | Yes | Yes |
#### 6.3.4 Azure Functions
Azure Function connections use the Admin API for fire-and-forget function invocation and Application Insights (Log Analytics) for polling completion and retrieving execution traces.
| Field | Purpose |
|---|---|
| `host` | Function App URL (e.g., `myapp.azurewebsites.net`) |
| `password_encrypted` | Master key for the Function App (encrypted) |
| `client_secret_encrypted` | Entra service principal client secret (encrypted) |
| `auth_method` | `service_principal` |
| `properties` (JSONB) | `{ "workspace_id": "...", "tenant_id": "...", "client_id": "..." }` |
**Trigger flow**: POST to `https://{host}/admin/functions/{functionName}` with the `x-functions-key` header. Returns `202` immediately (fire-and-forget). No invocation ID is returned by Azure's Admin API.

**Completion detection**: Poll Application Insights via the Log Analytics REST API using KQL: query the `requests` table filtered by function name and trigger timestamp. Uses `Azure.Identity.ClientSecretCredential` for Entra token acquisition (handles automatic token refresh for multi-hour polls).

**Trace retrieval**: On-demand query of the `traces` table filtered by `customDimensions.InvocationId`. Available after function completion via a dedicated API endpoint.
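The trigger call can be sketched with a plain `HttpClient` (an illustration under stated assumptions — `TriggerFunctionAsync` and `masterKey` are hypothetical names, and the decrypted master key is assumed to come from the credential store):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class FunctionTrigger
{
    // Fire-and-forget invocation via the Functions Admin API.
    public static async Task TriggerFunctionAsync(
        HttpClient http, string host, string functionName, string masterKey,
        CancellationToken ct)
    {
        using var request = new HttpRequestMessage(
            HttpMethod.Post, $"https://{host}/admin/functions/{functionName}");
        request.Headers.Add("x-functions-key", masterKey);
        // The Admin API expects a JSON body; an empty input object suffices.
        request.Content = new StringContent("{\"input\":\"\"}",
            Encoding.UTF8, "application/json");

        using var response = await http.SendAsync(request, ct);
        // 202 Accepted is the only success signal — no invocation ID is returned.
        if (response.StatusCode != HttpStatusCode.Accepted)
            throw new InvalidOperationException(
                $"Function trigger failed: {(int)response.StatusCode}");
    }
}
```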
### 6.4 Connection Session Management
Connections are scoped to the lifetime of a job execution. The first step in a job that requires a remote connection opens a session; subsequent steps in the same job reuse that session. When the job execution ends (success, failure, or cancellation), all sessions are closed.
#### 6.4.1 Job Connection Registry
An in-memory registry holds open sessions for the duration of each job execution:
```csharp
public class JobConnectionRegistry : IAsyncDisposable
{
    // Lazy<Task<...>> ensures each session is opened at most once even when two
    // steps race on the same key — GetOrAdd with an async lambda would not
    // compile, and a plain Task-valued GetOrAdd could open duplicate connections.
    private readonly ConcurrentDictionary<string, Lazy<Task<ITransferClient>>> _sessions = new();

    public Task<ITransferClient> GetOrOpenAsync(
        Guid executionId,
        Guid connectionId,
        ConnectionEntity config,
        CancellationToken cancellationToken)
    {
        var key = $"{executionId}:{connectionId}";
        return _sessions.GetOrAdd(key, _ => new Lazy<Task<ITransferClient>>(async () =>
        {
            var client = CreateClient(config);
            await client.ConnectAsync(cancellationToken);
            return client;
        })).Value;
    }

    public async ValueTask DisposeAsync()
    {
        foreach (var session in _sessions.Values)
        {
            if (!session.IsValueCreated)
                continue;
            try
            {
                var client = await session.Value;
                await client.DisconnectAsync();
                await client.DisposeAsync();
            }
            catch
            {
                // Best-effort teardown — a failed connect attempt has nothing to close.
            }
        }
        _sessions.Clear();
    }
}
```
The registry is created per job execution and disposed when the execution completes. This ensures:
- Session reuse: A job with 5 SFTP steps hitting the same server uses one connection
- Multi-server support: A job that touches server A and server B holds two concurrent sessions
- Clean teardown: All sessions are guaranteed to close, even on failure or cancellation
- No cross-job leakage: Each job execution gets its own registry instance
#### 6.4.2 Session Health & Recovery
During a long-running job, a session may drop due to network issues or server-side timeouts. The transfer client handles this transparently:
- Before each operation, check `IsConnected`
- If disconnected, attempt to reconnect (up to `transport_retries` times)
- If the operation was a transfer with `ResumePartial` enabled, resume from the last known offset
- If reconnection fails after all retries, throw a `ConnectionLostException`, which surfaces as a step failure
- The job's failure policy then determines whether to retry the step, skip it, or fail the job
Keepalive pings run on a background timer (configurable interval, default: 60 seconds) for each active session to prevent server-side idle timeouts during long-running non-transfer steps (e.g., a PGP decryption step between two SFTP steps).
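The recovery loop above can be sketched as a wrapper around any client operation (a minimal illustration, not Courier's production code; `WithRecoveryAsync` is a hypothetical helper, and the choice of `IOException` as the retriable error is an assumption):

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public static class TransferRecovery
{
    // Hypothetical helper illustrating the check/reconnect/retry sequence.
    public static async Task<T> WithRecoveryAsync<T>(
        ITransferClient client,
        int transportRetries,
        Func<CancellationToken, Task<T>> operation,
        CancellationToken ct)
    {
        for (var attempt = 0; ; attempt++)
        {
            // Re-establish the session if it dropped since the last operation.
            if (!client.IsConnected)
                await client.ConnectAsync(ct);
            try
            {
                return await operation(ct);
            }
            catch (IOException) when (attempt < transportRetries)
            {
                // Connection dropped mid-operation — loop reconnects and retries.
                // A resumable transfer would continue from its last known offset here.
            }
        }
    }
}
```

Once `transportRetries` attempts are exhausted, the final exception propagates and is translated into the `ConnectionLostException` step failure described above.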
### 6.5 Transfer Resume for Large Files
Transfer resume is critical for Courier's 6–10 GB file workloads. Both upload and download resume are supported, with protocol-specific implementations.
#### 6.5.1 Upload Resume
1. Before uploading, check whether a partial file exists at the remote destination
2. If it exists and `ResumePartial` is enabled, query the remote file size
3. Seek the local file stream to the remote file's size (the byte offset where the previous upload stopped)
4. Begin uploading from that offset, appending to the remote file
5. After completion, verify that the remote file size matches the expected total

If the remote partial file is larger than the local file (indicating corruption), delete the remote file and restart from the beginning.
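For SFTP, the steps above can be sketched with SSH.NET's stream APIs (an illustrative fragment, not Courier's production code; size verification, progress reporting, and error handling are omitted):

```csharp
using System.IO;
using Renci.SshNet;

public static class SftpResume
{
    // Resume an upload by appending from the remote file's current size.
    public static void UploadWithResume(SftpClient sftp, string localPath, string remotePath)
    {
        long offset = sftp.Exists(remotePath) ? sftp.GetAttributes(remotePath).Size : 0;

        using var local = File.OpenRead(localPath);
        if (offset > local.Length)
        {
            // Remote partial is larger than the source — treat as corrupt and restart.
            sftp.DeleteFile(remotePath);
            offset = 0;
        }

        // Seek past the bytes the previous attempt already delivered.
        local.Seek(offset, SeekOrigin.Begin);
        using var remote = offset > 0
            ? sftp.Open(remotePath, FileMode.Append, FileAccess.Write)
            : sftp.Open(remotePath, FileMode.Create, FileAccess.Write);
        local.CopyTo(remote);
    }
}
```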
#### 6.5.2 Download Resume
1. Before downloading, check whether a partial local file exists
2. If it exists and `ResumePartial` is enabled, query the local file size
3. Request data from the remote server starting at the local file's size offset
4. Append to the local file
5. After completion, verify that the local file size matches the remote file's total size
#### 6.5.3 Resume Tracking in JobContext
When a transfer step completes (fully or partially), it writes resume metadata to the JobContext:
```json
{
  "2.transfer_state": {
    "remote_path": "/incoming/large_file.dat",
    "local_path": "/data/courier/temp/exec-123/large_file.dat",
    "bytes_transferred": 4294967296,
    "total_bytes": 6442450944,
    "completed": false
  }
}
```
On job resume (after a pause or a retriable failure), the step reads this metadata and resumes from `bytes_transferred` instead of restarting. This works in conjunction with the Job Engine's checkpoint system (Section 5.5).
### 6.6 Atomic Upload Pattern
To prevent downstream systems from reading partially uploaded files, Courier supports an atomic upload pattern configurable per step:
1. Upload the file as `{filename}{suffix}` (default suffix: `.tmp`)
2. On successful upload completion, rename to the final `{filename}`
3. If the upload fails, delete the partial `.tmp` file (best effort)
```json
{
  "step_type": "sftp.upload",
  "config": {
    "connection_id": "<uuid>",
    "local_path": "context:1.decrypted_file",
    "remote_path": "/outgoing/invoice.csv",
    "atomic_upload": true,
    "atomic_suffix": ".tmp"
  }
}
```
This is especially important in environments where partner systems poll directories for new files — without atomic upload, they may pick up a half-written file.
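In terms of the `ITransferClient` interface from Section 6.2, the sequence is roughly as follows (a sketch; `UploadAtomicAsync` is a hypothetical helper name):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AtomicUpload
{
    // Sketch: write to a temp name, then rename into place.
    public static async Task UploadAtomicAsync(
        ITransferClient client,
        UploadRequest request,
        IProgress<TransferProgress> progress,
        CancellationToken ct)
    {
        var tempPath = request.RemotePath + request.AtomicSuffix; // e.g. "/outgoing/invoice.csv.tmp"
        try
        {
            await client.UploadAsync(
                request with { RemotePath = tempPath, AtomicUpload = false }, progress, ct);
            // Rename happens server-side, so pollers only ever see the final name.
            await client.RenameAsync(tempPath, request.RemotePath, ct);
        }
        catch
        {
            // Best-effort cleanup of the partial temp file.
            try { await client.DeleteFileAsync(tempPath, ct); } catch { /* ignore */ }
            throw;
        }
    }
}
```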
### 6.7 Host Key Verification (SFTP)
Each SFTP connection has a configurable host key verification policy:
**`TrustOnFirstUse` (TOFU)** — Default and recommended. On the first connection, the server's host key fingerprint is stored in the `known_hosts` table. On subsequent connections, the fingerprint is compared. If it changes, the connection is rejected with a `HostKeyMismatchException` and the connection transitions to a `RequiresAttention` state until an admin re-approves the new fingerprint.

**`Manual`** — The admin must provide the expected host key fingerprint before the first connection. The connection will not succeed until a matching fingerprint is configured. The most secure option for high-security environments.

**`AlwaysTrust` (Insecure)** — Accept any host key without verification. This disables MITM protection and should only be used for development, testing, or legacy environments where host keys change frequently and the network path is trusted.
Restrictions on `AlwaysTrust`:

- **Admin-only**: Setting `host_key_policy = 'always_trust'` requires the Admin role. Operators receive error `3004: Insecure host key policy requires admin`. Controlled by system setting `security.insecure_trust_require_admin` (default: `true`).
- **Blocked in FIPS mode**: When `security.fips_mode_enabled = true`, `AlwaysTrust` is rejected with error `3005: Insecure host key policy not allowed in FIPS mode`. FIPS compliance implies a secure operating environment where MITM protection must be enforced.
- **Audit on every use**: Every connection (not just configuration changes) using `AlwaysTrust` generates an audit event `InsecureHostKeyPolicyUsed` with the connection ID, remote host, and the actual host key fingerprint that was accepted without verification.
- **UI warning**: The connection detail page displays a persistent red banner: "Host key verification disabled — this connection is vulnerable to man-in-the-middle attacks."
- **Blocked in production by default**: System setting `security.insecure_trust_allow_production` (default: `false`). When false, `AlwaysTrust` is only allowed if the environment is `Development` or `Staging`. Production deployments must use TOFU or Manual.
SSH.NET callback implementation:

Host key verification in SSH.NET is not automatic — it requires an explicit handler for the `HostKeyReceived` event. If no handler is attached, SSH.NET accepts all host keys by default, which is itself insecure behavior. Courier always attaches a handler:
```csharp
private void ConfigureHostKeyVerification(SftpClient client, Connection connection)
{
    client.HostKeyReceived += (sender, e) =>
    {
        // SSH.NET exposes the raw host key bytes; hash them ourselves to get
        // an OpenSSH-style SHA-256 fingerprint.
        var fingerprint = "SHA256:" + Convert.ToBase64String(
            System.Security.Cryptography.SHA256.HashData(e.HostKey));
        switch (connection.HostKeyPolicy)
        {
            case HostKeyPolicy.AlwaysTrust:
                // Accept, but log for audit trail
                _auditService.LogInsecureHostKeyAccepted(
                    connection.Id, connection.Host, fingerprint);
                e.CanTrust = true;
                break;

            case HostKeyPolicy.TrustOnFirstUse:
                var stored = _knownHostService.GetFingerprint(connection.Id);
                if (stored == null)
                {
                    // First connection — store and trust
                    _knownHostService.StoreFingerprint(
                        connection.Id, fingerprint, e.HostKeyName, "system");
                    e.CanTrust = true;
                }
                else if (stored == fingerprint)
                {
                    _knownHostService.UpdateLastSeen(connection.Id, fingerprint);
                    e.CanTrust = true;
                }
                else
                {
                    // Mismatch — reject and flag
                    e.CanTrust = false;
                    _connectionService.SetRequiresAttention(connection.Id,
                        $"Host key changed: expected {stored}, got {fingerprint}");
                }
                break;

            case HostKeyPolicy.Manual:
                e.CanTrust = connection.StoredHostFingerprint == fingerprint;
                break;
        }
    };
}
```
#### 6.7.1 Known Hosts Table
| Column | Type | Description |
|-----------------|-----------|-------------------------------------------------------|
| `connection_id` | UUID | FK to the connection |
| `fingerprint` | TEXT | SHA-256 fingerprint of the server's host key |
| `key_type` | TEXT | Algorithm (e.g., `ssh-rsa`, `ssh-ed25519`) |
| `first_seen` | TIMESTAMP | When the fingerprint was first recorded |
| `last_seen` | TIMESTAMP | Last successful connection with this fingerprint |
| `approved_by` | TEXT | User who approved (for TOFU auto-approvals: "system") |
### 6.8 SSH Key Store
SSH keys used for SFTP authentication are stored in a dedicated key store, separate from the PGP Key Store (Section 7.3). While the security patterns are similar (encryption at rest, audit logging), the key formats, operations, and lifecycle are different enough to warrant separation.
#### 6.8.1 SSH Key Entity
| Field | Type | Description |
|----------------------|-----------|--------------------------------------------------------|
| `id` | UUID | Internal identifier referenced by connections |
| `name` | TEXT | Human-readable label (e.g., "Partner X Auth Key") |
| `key_type` | ENUM | `RSA_2048`, `RSA_4096`, `ED25519`, `ECDSA_256`, etc. |
| `public_key_data` | TEXT | OpenSSH-format public key |
| `private_key_data` | BYTEA | AES-256 encrypted private key material |
| `passphrase_hash` | TEXT | Encrypted passphrase (nullable) |
| `fingerprint` | TEXT | SHA-256 fingerprint of the public key |
| `status` | ENUM | `Active`, `Retired`, `Deleted` |
| `created_at` | TIMESTAMP | When the key was generated or imported |
| `created_by` | TEXT | User who created the key |
| `notes` | TEXT | Free-text (e.g., which servers accept this key) |
#### 6.8.2 Supported Key Formats
**Import**: OpenSSH format, PEM (PKCS#1, PKCS#8), PuTTY PPK (v2 and v3). SSH.NET handles all of these natively via `PrivateKeyFile`.
**Export**: OpenSSH format (public key for adding to `authorized_keys` on remote servers).
**Generation**: Courier can generate SSH key pairs (RSA 2048/4096, Ed25519) using SSH.NET's key generation utilities. Generated keys are immediately encrypted and stored.
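Loading an imported key for authentication is then a small wrapper around `PrivateKeyFile` (a sketch; `LoadPrivateKey` is a hypothetical helper, and the decrypted key material and passphrase are assumed to come from the key store):

```csharp
using System.IO;
using Renci.SshNet;

public static class SshKeyLoader
{
    // SSH.NET's PrivateKeyFile parses OpenSSH, PEM, and PuTTY PPK formats.
    public static PrivateKeyFile LoadPrivateKey(byte[] decryptedKeyMaterial, string? passphrase)
    {
        var stream = new MemoryStream(decryptedKeyMaterial);
        return passphrase is null
            ? new PrivateKeyFile(stream)
            : new PrivateKeyFile(stream, passphrase);
    }
}
```

The resulting `PrivateKeyFile` plugs directly into a `PrivateKeyAuthenticationMethod` on the `ConnectionInfo` described in Section 6.3.1.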
#### 6.8.3 Encryption at Rest
SSH private keys are encrypted using the same envelope encryption pattern as PGP keys (Section 7.3.6): a random AES-256 DEK per key, encrypted with AES-256-GCM, with the DEK wrapped by the Azure Key Vault KEK via wrap/unwrap operations. The KEK never leaves Key Vault. The stored blob includes the KEK version, wrapped DEK, IV, auth tag, and ciphertext.
### 6.9 Credential Storage
Connection passwords and SSH key passphrases are encrypted at rest using the same envelope encryption pattern used throughout Courier (Section 7.3.6):
1. A random 256-bit DEK is generated per credential
2. The credential is encrypted with AES-256-GCM using the DEK
3. The DEK is wrapped by the Azure Key Vault KEK via the `WrapKey` operation
4. The wrapped DEK, IV, auth tag, KEK version, and ciphertext are stored as BYTEA in the connection entity
5. Decryption happens in memory at connection time via Key Vault `UnwrapKey` — plaintext credentials are never written to disk or logs
Credential values are never returned in API responses. The API returns only a boolean `has_password` / `has_ssh_key` to indicate whether credentials are configured.
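The encrypt side of this pattern can be sketched with .NET's `AesGcm` and the Key Vault `CryptographyClient` (illustrative only — the stored-blob layout is simplified to plain concatenation, and `kekClient` is an assumed `CryptographyClient` bound to the KEK):

```csharp
using System.Linq;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class CredentialEncryption
{
    public static async Task<byte[]> EncryptCredentialAsync(
        CryptographyClient kekClient, byte[] plaintext)
    {
        // 1. Random 256-bit DEK per credential.
        var dek = RandomNumberGenerator.GetBytes(32);

        // 2. AES-256-GCM encrypt with a fresh 96-bit nonce.
        var iv = RandomNumberGenerator.GetBytes(12);
        var ciphertext = new byte[plaintext.Length];
        var tag = new byte[16];
        using (var aes = new AesGcm(dek, tagSizeInBytes: 16))
            aes.Encrypt(iv, plaintext, ciphertext, tag);

        // 3. Wrap the DEK with the Key Vault KEK — the KEK never leaves Key Vault.
        var wrapped = await kekClient.WrapKeyAsync(KeyWrapAlgorithm.RsaOaep256, dek);
        CryptographicOperations.ZeroMemory(dek);

        // 4. Store wrapped DEK + IV + tag + ciphertext (real blob also records the KEK version).
        return wrapped.EncryptedKey.Concat(iv).Concat(tag).Concat(ciphertext).ToArray();
    }
}
```

Decryption reverses the steps: `UnwrapKey` recovers the DEK in memory, AES-GCM verifies the tag and decrypts, and the DEK is zeroed immediately afterwards.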
### 6.10 Directory Operations
Directory operations are available as both standalone step types and as utility methods on `ITransferClient`:
| Step Type Key | Description |
|----------------------|-------------------------------------------------------|
| `remote.mkdir` | Create a directory on the remote server |
| `remote.rmdir` | Delete a directory on the remote server |
| `remote.list` | List directory contents (output to JobContext) |
These step types are protocol-agnostic — they resolve the correct `ITransferClient` implementation based on the referenced connection's protocol. The step configuration includes the connection ID and the remote path:
```json
{
  "step_type": "remote.mkdir",
  "config": {
    "connection_id": "<uuid>",
    "remote_path": "/outgoing/2026/02/20",
    "recursive": true
  }
}
```
Recursive directory creation (`mkdir -p` equivalent) is supported for SFTP. For FTP/FTPS, recursive creation is emulated by creating each path segment sequentially.
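The FTP/FTPS emulation amounts to walking the path segments and issuing one `CreateDirectoryAsync` per level (a sketch against the `ITransferClient` interface from Section 6.2; the blanket catch for already-existing directories is a simplification):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class RemoteDirectories
{
    // Emulate `mkdir -p` by creating each path segment in order.
    public static async Task CreateDirectoryRecursiveAsync(
        ITransferClient client, string remotePath, CancellationToken ct)
    {
        var current = "";
        foreach (var segment in remotePath.Split('/', StringSplitOptions.RemoveEmptyEntries))
        {
            current += "/" + segment;
            // Most servers error on an existing directory; swallow and continue.
            try { await client.CreateDirectoryAsync(current, ct); }
            catch { /* already exists — ignore */ }
        }
    }
}
```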
### 6.11 Test Connection Endpoint
The API exposes a test endpoint for validating connection configuration without running a full job:
`POST /api/connections/{id}/test`
The test operation:

- Opens a connection using the stored configuration and credentials
- Authenticates with the configured method
- Lists the root directory (or a configured base path) to verify access
- For SFTP: records the server's host key fingerprint and supported algorithms. Host key verification runs through the `HostKeyReceived` callback per the connection's `host_key_policy` (Section 6.7).
- For FTPS: validates the TLS handshake and server certificate per the connection's `tls_cert_policy` (Section 6.3.2). Returns the certificate subject, issuer, thumbprint, and expiration for display in the UI.
- Measures round-trip latency
- Disconnects
Response:

```json
{
  "success": true,
  "latency_ms": 142,
  "server_banner": "OpenSSH_9.6",
  "host_key_fingerprint": "SHA256:xxxxxxxxxxx",
  "supported_algorithms": ["[email protected]", "[email protected]"],
  "tls_certificate": {
    "subject": "CN=partner-sftp.example.com",
    "issuer": "CN=Let's Encrypt Authority X3",
    "thumbprint_sha256": "AB:CD:EF:...",
    "not_after": "2026-12-01T00:00:00Z",
    "policy_errors": "None"
  }
}
```
On failure, the response includes a diagnostic error message with actionable details (e.g., "Authentication failed: server rejected password", "Connection timed out after 30 seconds", "Host key mismatch: expected SHA256:xxx, got SHA256:yyy").
The frontend UI uses this endpoint to provide a "Test Connection" button on the connection configuration form.
### 6.12 Transfer Progress Reporting

All upload and download operations report progress via `IProgress<TransferProgress>` at regular intervals. The engine uses this data for:
- Audit logging: Total bytes transferred, average transfer rate, and duration are recorded in the step audit entry
- Timeout detection: If no progress is reported within the step's timeout window, the step is timed out (consistent with Section 5.12)
- V2 UI progress: Real-time transfer progress for the frontend dashboard
Progress is reported every 1MB transferred or every 5 seconds, whichever comes first. For small files (under 1MB), progress is reported once at completion.
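The 1 MB / 5 s throttle can be sketched as a decorator over `IProgress<TransferProgress>` (illustrative; `ThrottledProgress` is a hypothetical type name, and `TransferProgress` is the record defined in Section 6.2):

```csharp
using System;
using System.Diagnostics;

// Forwards a report at most every `byteInterval` bytes or `timeInterval`
// of wall-clock time, whichever comes first; completion always reports.
public sealed class ThrottledProgress : IProgress<TransferProgress>
{
    private readonly IProgress<TransferProgress> _inner;
    private readonly long _byteInterval;
    private readonly TimeSpan _timeInterval;
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private long _lastBytes;
    private TimeSpan _lastReport;

    public ThrottledProgress(IProgress<TransferProgress> inner,
        long byteInterval = 1_048_576, double timeIntervalSeconds = 5)
    {
        _inner = inner;
        _byteInterval = byteInterval;
        _timeInterval = TimeSpan.FromSeconds(timeIntervalSeconds);
    }

    public void Report(TransferProgress value)
    {
        // Completion is always forwarded — this also covers sub-1 MB files,
        // which report exactly once.
        var isComplete = value.BytesTransferred >= value.TotalBytes;
        if (!isComplete
            && value.BytesTransferred - _lastBytes < _byteInterval
            && _clock.Elapsed - _lastReport < _timeInterval)
            return; // below both thresholds — suppress this report

        _lastBytes = value.BytesTransferred;
        _lastReport = _clock.Elapsed;
        _inner.Report(value);
    }
}
```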
### 6.13 TLS Configuration (FTPS)

FTPS connections support configurable TLS settings for compatibility with diverse server environments:

- **Minimum TLS version**: Default `TLS 1.2`. Can be lowered to `TLS 1.1` or `TLS 1.0` for legacy servers (flagged with a warning in the UI).
- **Certificate validation**: Controlled by `tls_cert_policy` (Section 6.3.2); `SystemTrust` by default. The `Insecure` policy accepts self-signed certificates but displays a warning in the UI and is logged on every connection.
- **Client certificate**: Optional client certificate for mutual TLS authentication, stored encrypted in the connection entity.
### 6.14 SSH Algorithm Configuration (SFTP)
For SFTP connections, administrators can restrict or prefer specific cryptographic algorithms to match partner server requirements or security policies:
```json
{
  "ssh_algorithms": {
    "key_exchange": ["ecdh-sha2-nistp256", "diffie-hellman-group14-sha256"],
    "encryption": ["[email protected]", "aes256-ctr"],
    "mac": ["hmac-sha2-256", "hmac-sha2-512"],
    "host_key": ["ssh-ed25519", "rsa-sha2-256"]
  }
}
```
If not configured, SSH.NET's defaults are used (which prioritize modern, secure algorithms). This setting is primarily needed when connecting to legacy servers that only support older algorithms — the UI flags any algorithm configuration that includes known-weak algorithms.
### 6.15 Connection Audit Log

All connection activity is recorded in the `connection_audit_log` table:

| Column | Type | Description |
|---|---|---|
| `id` | UUID | Audit record ID |
| `connection_id` | UUID | FK to the connection |
| `operation` | ENUM | `Connected`, `Disconnected`, `AuthSuccess`, `AuthFailed`, `Upload`, `Download`, `Rename`, `Delete`, `Mkdir`, `Rmdir`, `TestConnection`, `HostKeyApproved`, `HostKeyRejected` |
| `job_execution_id` | UUID | FK to job execution (nullable — null for test connections) |
| `performed_by` | TEXT | User or system |
| `performed_at` | TIMESTAMP | When the operation occurred |
| `bytes_transferred` | BIGINT | For upload/download operations |
| `duration_ms` | INT | Operation duration |
| `details` | JSONB | Additional context (error messages, server responses, etc.) |
This log, combined with the Job audit trail (Section 5.15), provides complete traceability of all file movements through Courier.