/claim #416

This PR adds a focused, incremental implementation to support additional rclone destination types while keeping existing S3 behavior intact.

What changed:

  • Backend backup/rclone pathing
    • generalized destination + prefix path building for non-S3 providers
    • added provider mapping in backup utils for:
      • S3 (existing behavior)
      • FTP
      • SFTP
      • Google Drive (drive)
      • OneDrive
  • Backup flows updated (database/compose/web-server/retention)
    • all upload and retention commands now use provider-aware destination builders
  • Destination test connection
    • switched from hardcoded S3-only command generation to provider-aware rclone command generation
  • UI updates for destination setup
    • provider list now includes FTP/SFTP/GDrive/OneDrive entries
    • destination screen copy updated from S3-only wording
    • form labels adapt per provider (e.g. username/password for FTP/SFTP, client ID/secret + token for drive providers)
  • Tests
    • added utility coverage for provider mapping + destination path generation in apps/dokploy/__test__/utils/backups.test.ts
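The provider-aware destination building described above can be sketched roughly as follows. The helper name `getRcloneDestination` comes from this PR, but the types, parameters, and remote-string format below are illustrative assumptions (using rclone's documented on-the-fly connection-string syntax, `:backend,param=value:path`), not the PR's actual code:

```typescript
// Hedged sketch: names and shapes are illustrative, not the PR's exact code.
type DestinationProvider = "s3" | "ftp" | "sftp" | "drive" | "onedrive";

interface Destination {
  provider: DestinationProvider;
  bucket?: string; // S3 only
  host?: string;   // FTP/SFTP only
}

// Builds an rclone remote spec via connection-string syntax.
// Credentials are assumed to be supplied separately (flags or env),
// so they never appear in the destination path itself.
function getRcloneDestination(dest: Destination, prefixPath: string): string {
  switch (dest.provider) {
    case "s3":
      return `:s3:${dest.bucket}/${prefixPath}`;
    case "ftp":
      return `:ftp,host=${dest.host}:${prefixPath}`;
    case "sftp":
      return `:sftp,host=${dest.host}:${prefixPath}`;
    case "drive":
      return `:drive:${prefixPath}`;
    case "onedrive":
      return `:onedrive:${prefixPath}`;
  }
}
```

Keeping one switch over the provider enum is what makes future providers a one-case addition rather than a change to every backup flow.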

Why this approach:

  • keeps the existing destination model and S3 flows stable
  • unblocks multi-destination support without introducing a large schema migration
  • enables incremental extension for additional rclone providers using the same mapping pattern

Validation notes:

  • Added/updated tests in apps/dokploy/__test__/utils/backups.test.ts
  • Could not execute vitest in this environment: workspace dependencies are not installed (node_modules missing), and the repository requires Node ^24.4.0 while the environment provides Node 22.x.

I can follow up with a second PR for richer provider-specific UX (advanced fields / token refresh workflows) if this incremental backend-compatible approach looks good.

Greptile Summary

This PR adds support for FTP, SFTP, Google Drive, and OneDrive as rclone backup destinations alongside existing S3. The implementation is well-structured:

Strengths:

  • Provider-aware helpers (getRcloneDestination, getRclonePrefixPath, getS3Credentials) cleanly abstract destination paths and credential handling
  • Existing S3 flows remain stable and intact
  • New providers are integrated uniformly across all backup types (postgres, mysql, mariadb, mongo, compose, web-server) and test-connection endpoint
  • Tests added for provider mapping and path generation
  • PR approach is incremental and unblocks multi-destination support without requiring schema migration

Approach is sound: The generalized helpers are reused across all backup modules, reducing duplication and making future provider additions straightforward. The pattern is consistent and maintainable.
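As one illustration of that reuse, each backup module can assemble its rclone invocation from the shared destination spec instead of hardcoding S3. This is a sketch under assumptions: the destination string format and the helper are illustrative, while `copyto` and `--s3-no-check-bucket` are real rclone CLI names:

```typescript
// Sketch only: builds an rclone copy command as an argv array from a
// provider-aware destination spec such as ":s3:my-bucket/backups/db.sql.gz".
function buildRcloneCopy(
  localFile: string,
  rcloneDest: string,
  extraFlags: string[] = [],
): string[] {
  return ["rclone", "copyto", localFile, rcloneDest, ...extraFlags];
}

const cmd = buildRcloneCopy(
  "/tmp/db.sql.gz",
  ":s3:my-bucket/backups/db.sql.gz",
  ["--s3-no-check-bucket"],
);
```

Because only the destination string varies by provider, the upload, retention, and test-connection paths can share this one command builder.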

No blocking issues identified.

Confidence Score: 4/5

  • The PR implements new provider support through clean, well-structured helpers while preserving existing S3 behavior, and the incremental approach is maintainable.
  • Provider-aware helpers are reused consistently across all backup modules and endpoints, existing S3 flows remain stable, and tests verify the provider mapping and path-generation logic. Avoiding schema changes reduces risk. The only remaining assurance would be runtime validation of the new FTP/SFTP/Google Drive/OneDrive flows in a test environment, which is standard practice for new provider integrations.
  • No files require special attention. The changes are well-distributed and follow consistent patterns.

Last reviewed commit: 0bb0682



Claim

  • Total prize pool: $50
  • Total paid: $0
  • Status: Pending
  • Submitted: March 03, 2026
  • Last updated: March 03, 2026

Contributors

  • Liuyi Yu (@yuliuyi717-ux): 100%

Sponsors

  • Dokploy (@Dokploy): $50