Compare commits

..

7 Commits

Author SHA1 Message Date
Johannes Zellner 51a69cce41 Ensure notifications are sorted by time descending
(cherry picked from commit 63310c44c0)
2019-05-14 09:49:26 -07:00
Johannes Zellner 8686832bd1 4.0.3 changes 2019-05-14 16:57:45 +02:00
Girish Ramakrishnan c93c06ba88 gcs: fix crash
(cherry picked from commit 05d3f8a667)
2019-05-12 18:14:17 -07:00
Girish Ramakrishnan 36d64b3566 4.0.2 changes
(cherry picked from commit 3fa45ea728)
2019-05-12 14:04:26 -07:00
Girish Ramakrishnan e0c531564c Add option to skip backup before update
(cherry picked from commit a7d2098f09)
2019-05-12 13:59:07 -07:00
Girish Ramakrishnan 1e0ec75f0a gcdns: fix crash
(cherry picked from commit e1ecb49d59)
2019-05-12 12:57:06 -07:00
Girish Ramakrishnan 36ac02d29f Fix crash because params was undefined
(cherry picked from commit 800e25a7a7)
2019-05-10 13:08:26 -07:00
380 changed files with 28066 additions and 31895 deletions
-5
@@ -1,5 +0,0 @@
{
"node": true,
"unused": true,
"esversion": 8
}
-686
@@ -1600,689 +1600,3 @@
[4.0.3]
* Fix dashboard issue for non-admins
[4.1.0]
* Remove password requirement for uninstalling apps and users
* Hosting provider edition
* Enforce limits in mail container
* Fix crash when using unauthenticated relay
* Fix domain and tag filtering
* Customizable app icons
* Remove obsolete X-Frame-Options from nginx configs
* Give SFTP access based on access restriction
[4.1.1]
* Add UI hint about SFTP access restriction
[4.1.2]
* Accept incoming mail from a private relay
* Fix issue where unused addon images were not pruned
* Add UI for redirect from multiple domains
* Allow apps to be relocated to custom data directory
* Make all cloudron env vars have CLOUDRON_ prefix
* Update manifest version to 2
* Fix issue where DKIM keys were inaccessible
* Fix DKIM selector conflict when adding same domain across multiple cloudrons
* Fix name.com DNS backend issue for naked domains
* Add DigitalOcean Frankfurt (fra1) region for backup storage
[4.1.3]
* Update manifest format package
[4.1.4]
* Add CLOUDRON_ prefix to MySQL addon variables
[4.1.5]
* Make the terminal addon button inject variables based on manifest version
* Preserve addon passwords correctly when using v2 manifest
* Show error message instead of logging out user when invalid 2FA token is provided
* Ensure redis vars are renamed with manifest v2
* Add missing Scaleway Object Storage to restore UI
* Fix Exoscale endpoints in restore UI
* Reset the app icon when showing the configure UI
[4.1.6]
* Fix issue where CLOUDRON_APP_HOSTNAME was incorrectly set
* Remove chat link from the footer of login screen
* Add support for oplog tailing in mongodb
* Fix LDAP not accessible via scheduler containers
[4.1.7]
* Fix issue where login looped when admin bit was removed
[4.2.0]
* Fix issue where tar backups with files > 8GB were corrupt
* Add SparkPost as mail relay backend
* Add Wasabi storage backend
* TOTP tokens are now checked within a +/- 60 second window
* IP based restore
* Fix issue where task logs were not getting rotated correctly
* Add notification for box update
* User enable/disable flag
* Check disk space before various operations like install, update, backup etc
* Collect per-app du (disk usage) information
* Set Cloudron specific UA for healthchecks
* Show message why an app task is 'pending'
* Rework app task system so that we can now pass dynamic arguments
* Add external LDAP server integration
[4.2.1]
* Rework the app configuration routes & UI
* Fine grained eventlog for app configuration
* Update Haraka to 2.8.24
* Set sieve_max_redirects to 64
* SRS support for mail forwarding
* Fix issue where sieve responses were not sent via the relay
* File based session store
* Fix API token error reporting for namecheap backend
[4.2.2]
* Fix typos in migration
[4.2.3]
* Remove flicker of custom icon
* Preserve PROVIDER setting from cloudron.conf
* Add Skip backup option when updating an app
* Fix bug where nginx was not reloaded on cert renewal
[4.2.4]
* Fix demo settings state regression
[4.2.5]
* Fix the demo settings fix
[4.2.6]
* Fix configuration of empty app location (subdomain)
[4.2.7]
* Fix issue where the icon for normal users was displayed incorrectly
* Kill stuck backup processes after 12 hours and notify admins
* Reconfigure email apps when mail domain is added/removed
* Fix crash when only udp ports are defined
[4.3.0]
* Add timeout to kill long running tasks in case they get stuck
* email: Auto-subscribe to Spam folder
* Allow setting a custom CSP policy
* ticket: when email is down, add a field to provide alternate contact email
* Re-work app import flow
* Add pagination and search to mailbox and mail alias listing
* Add UI and workflow to add a private registry
* Show external LDAP connector
* Network view: Allow IP address detection to be configurable
* Add support for custom docker registry
* Resolve any lists and aliases in a mailing list
* Rename Accounts view to Profile
* Add search for groups and user association UI
[4.3.1]
* Make the 'Logout from all' button log out of all sessions
* List unstable apps by default
* Fix crash when listing mailboxes
[4.3.2]
* Update manifestformat module
[4.3.3]
* Fix bug where stopped containers got started on server restart
* Fix external LDAP UI and syncing
* Fix timeout being too low in docker proxy
* Make manifest.id optional for custom apps
* Fix registry detection in private images
* Make mailbox domain configurable for apps
[4.3.4]
* Do not error if fallback certs went missing
* Add 'New Apps' section to Appstore view
* Fix issue where graphs of some apps were not appearing
[4.4.0]
* Show swap in graphs
* Make avatars customizable
* Hide access tokens from logs
* Add missing '@' sign for email address in app mailbox
* Add app fqdn to backup progress message
* import: add option to import app in-place
* import: add option to import app from arbitrary backup config
* Show download progress for rsync backups
* Fix various repair workflows
* acme2: Implement post-as-get
[4.4.1]
* ami: fix AWS provider validation
[4.4.2]
* Fix crash when reporting that DKIM is not setup correctly
* Stopped apps cannot be updated or auto-updated
* eventlog: track support ticket creation and remote support status
[4.4.3]
* Add restart button in recovery section
* Fix issue where memory usage was not computed correctly
* cloudflare: support API tokens
[4.4.4]
* Fix bug where restart button in terminal was not working
* Add search field in apps view
* Make app view tags and domain filter persistent
* Add timezone UI
[4.4.5]
* Fix user listing regression in group edit dialog
* Do not show error page for 503
* Add mail list and mail box update events
* Certs of stopped apps are not renewed anymore
* Fix broken memory sliders in the services UI
* Set CPU Shares
* Update nodejs to 12.14.1
* Update MySQL addon packet size to 64M
[5.0.0]
* Show backup disk usage in graphs
* Add per-user app passwords
* Make app not responding page customizable
* Make footer customizable
* Add UI to import backups
* Display timestamps in browser timezone in the UI
* Mail eventlog and usage
* Add user roles - owner, admin, user manager and user
* Setup logrotate configs for collectd since upstream does not set it up
* mail: Add X-Envelope-To and X-Envelope-From headers for incoming mails
* linode: add object storage backend
* restore: carefully replace backup config
* spam: add default corpus and global db
[5.0.1]
* Show backup disk usage in graphs
* Add per-user app passwords
* Make app not responding page customizable
* Make footer customizable
* Add UI to import backups
* Display timestamps in browser timezone in the UI
* Mail eventlog and usage
* Add user roles - owner, admin, user manager and user
* Setup logrotate configs for collectd since upstream does not set it up
* mail: Add X-Envelope-To and X-Envelope-From headers for incoming mails
* linode: add object storage backend
* restore: carefully replace backup config
* spam: add default corpus and global db
[5.0.2]
* Show backup disk usage in graphs
* Add per-user app passwords
* Make app not responding page customizable
* Make footer customizable
* Add UI to import backups
* Display timestamps in browser timezone in the UI
* Mail eventlog and usage
* Add user roles - owner, admin, user manager and user
* Setup logrotate configs for collectd since upstream does not set it up
* mail: Add X-Envelope-To and X-Envelope-From headers for incoming mails
* linode: add object storage backend
* restore: carefully replace backup config
* spam: per mailbox bayes db and training
[5.0.3]
* Show backup disk usage in graphs
* Add per-user app passwords
* Make app not responding page customizable
* Make footer customizable
* Add UI to import backups
* Display timestamps in browser timezone in the UI
* Mail eventlog and usage
* Add user roles - owner, admin, user manager and user
* Setup logrotate configs for collectd since upstream does not set it up
* mail: Add X-Envelope-To and X-Envelope-From headers for incoming mails
* linode: add object storage backend
* restore: carefully replace backup config
* spam: per mailbox bayes db and training
[5.0.4]
* Fix potential privilege escalation because of ghost file
* linode: dns backend
* make branding routes owner only
* add branding API
* Add app start/stop/restart events
* Use the primary email for LE account
* make mail eventlog more descriptive
[5.0.5]
* Fix bug where incoming mail from dynamic hostnames was rejected
* Increase token expiry
* Fix bug in tag UI where tag removal did not work
[5.0.6]
* Make mail eventlog only visible to owners
* Make app password work with sftp
[5.1.0]
* Add turn addon
* Fix disk usage display
* Drop support for TLSv1 and TLSv1.1
* Make cert validation work for ECC certs
* Add type filter to mail eventlog
* mail: Fix listing of mailboxes and aliases in the UI
* branding: fix login page title
* Only a Cloudron owner can install/update/exec apps with the docker addon
* security: reset tokens are only valid for a day
* mail: fix eventlog db perms
* Fix various bugs in the disk graphs
[5.1.1]
* Add turn addon
* Fix disk usage display
* Drop support for TLSv1 and TLSv1.1
* Make cert validation work for ECC certs
* Add type filter to mail eventlog
* mail: Fix listing of mailboxes and aliases in the UI
* branding: fix login page title
* Only a Cloudron owner can install/update/exec apps with the docker addon
* security: reset tokens are only valid for a day
* mail: fix eventlog db perms
* Fix various bugs in the disk graphs
* Fix collectd installation
* graphs: sort disk contents by usage
* backups: show apps that are not automatically backed up in backup view
[5.1.2]
* Add turn addon
* Fix disk usage display
* Drop support for TLSv1 and TLSv1.1
* Make cert validation work for ECC certs
* Add type filter to mail eventlog
* mail: Fix listing of mailboxes and aliases in the UI
* branding: fix login page title
* Only a Cloudron owner can install/update/exec apps with the docker addon
* security: reset tokens are only valid for a day
* mail: fix eventlog db perms
* Fix various bugs in the disk graphs
* Fix collectd installation
* graphs: sort disk contents by usage
* backups: show apps that are not automatically backed up in backup view
* turn: deny local address peers https://www.rtcsec.com/2020/04/01-slack-webrtc-turn-compromise/
[5.1.3]
* Fix crash with misconfigured reverse proxy
* Fix issue where invitation links are not working anymore
[5.1.4]
* Add support for custom .well-known documents to be served
* Add ECDHE-RSA-AES128-SHA256 to cipher list
* Fix GPG signature verification
[5.1.5]
* Check for .well-known routes upstream as fallback. This broke nextcloud's caldav/carddav
[5.2.0]
* acme: request ECC certs
* less-strict DKIM check to allow users to set a stronger DKIM key
* Add members only flag to mailing list
* oauth: add backward compat layer for backup and uninstall
* fix bug in disk usage sorting
* mail: aliases can be across domains
* mail: allow an external MX to be set
* Add UI to download backup config as JSON (and import it)
* Ensure stopped apps are getting backed up
* Add OVH Object Storage backend
* Add per-app redis status and configuration to Services
* spam: large emails were not scanned
* mail relay: fix delivery event log
* manual update check always gets the latest updates
* graphs: fix issue where large number of apps would crash the box code (query param limit exceeded)
* backups: fix various security issues in encrypted backups (thanks @mehdi)
* graphs: add app graphs
* older encrypted backups cannot be used in this version
* Add backup listing UI
* stopping an app will stop dependent services
* Add new wasabi s3 storage region us-east-2
* mail: Fix bug where SRS translation was done on the main domain instead of mailing list domain
* backups: add retention policy
* Drop `NET_RAW` caps from container preventing sniffing of network traffic
[5.2.1]
* Fix app disk graphs
* restart apps on addon container change
[5.2.2]
* regression: import UI
* Mbps -> MBps
* Remove verbose logs
* Set dmode in tar extract
* mail: fix crash in audit logs
* import: fix crash because encryption is unset
* create redis with the correct label
[5.2.3]
* Do not restart stopped apps
[5.2.4]
* mail: enable/disable incoming mail was showing an error
* Do not trigger backup of stopped apps. Instead, we will just retain their existing backups based on the retention policy
* remove broken disk graphs
* fix OVH backups
[5.3.0]
* better nginx config for higher loads
* backups: add CIFS storage provider
* backups: add SSHFS storage provider
* backups: add NFS storage provider
* s3: use vhost style
* Fix crash when redis config was set
* Update schedule was unselected in the UI
* cloudron-setup: --provider is now optional
* show warning for unstable updates
* add forumUrl to app manifest
* postgresql: add unaccent extension for peertube
* mail: Add Auto-Submitted header to NDRs
* backups: ensure that the latest backup of installed apps is always preserved
* add nginx logs
* mail: make authentication case insensitive
* Fix timeout issues in postgresql and mysql addon
* Do not count stopped apps for memory use
* LDAP group synchronization
[5.3.1]
* better nginx config for higher loads
* backups: add CIFS storage provider
* backups: add SSHFS storage provider
* backups: add NFS storage provider
* s3: use vhost style
* Fix crash when redis config was set
* Update schedule was unselected in the UI
* cloudron-setup: --provider is now optional
* show warning for unstable updates
* add forumUrl to app manifest
* postgresql: add unaccent extension for peertube
* mail: Add Auto-Submitted header to NDRs
* backups: ensure that the latest backup of installed apps is always preserved
* add nginx logs
* mail: make authentication case insensitive
* Fix timeout issues in postgresql and mysql addon
* Do not count stopped apps for memory use
* LDAP group synchronization
[5.3.2]
* Do not install sshfs package
* 'provider' is not required anymore in various API calls
* redis: Set maxmemory and maxmemory-policy
* Add mlock capability to manifest (for vault app)
[5.3.3]
* Fix issue where some postinstall messages were causing angular to loop infinitely
[5.3.4]
* Fix issue in database error handling
[5.4.0]
* Update nginx to 1.18 for various security fixes
* Add ping capability (for statping app)
* Fix bug where aliases were displayed incorrectly in SOGo
* Add univention as LDAP provider
* Bump max_connection for postgres addon to 200
* mail: Add pagination to mailing list API
* Allow admin to lock email and display name of users
* Allow admin to ensure all users have 2FA setup
* ami: fix regression where we didn't send provider as part of get status call
* nginx: hide version
* backups: add b2 provider
* Add filemanager web interface
* Add darkmode
* Add note that password reset and invite links expire in 24 hours
[5.4.1]
* Update nginx to 1.18 for various security fixes
* Add ping capability (for statping app)
* Fix bug where aliases were displayed incorrectly in SOGo
* Add univention as LDAP provider
* Bump max_connection for postgres addon to 200
* mail: Add pagination to mailing list API
* Allow admin to lock email and display name of users
* Allow admin to ensure all users have 2FA setup
* ami: fix regression where we didn't send provider as part of get status call
* nginx: hide version
* backups: add b2 provider
* Add filemanager web interface
* Add darkmode
* Add note that password reset and invite links expire in 24 hours
[5.5.0]
* postgresql: update to PostgreSQL 11
* postgresql: add citext extension to whitelist for loomio
* postgresql: add btree_gist,postgres_fdw,pg_stat_statements,plpgsql extensions for gitlab
* SFTP/Filebrowser: fix access of external data directories
* Fix contrast issues in dark mode
* Add option to delete mailbox data when a mailbox is deleted
* Allow days/hours of backups and updates to be configurable
* backup cleaner: fix issue where referenced backups were not counted against time periods
* route53: fix issue where verification failed if user had more than 100 zones
* rework task workers to run them in a separate cgroup
* backups: now much faster thanks to reworking of task worker
* When custom fallback cert is set, make sure it's used over LE certs
* mongodb: update to MongoDB 4.0.19
* List groups ordered by name
* Invite links are now valid for a week
* Update release GPG key
* Add pre-defined variables ($CLOUDRON_APPID) for better post install messages
* filemanager: show folder first
[5.6.0]
* Remove IP nginx configuration that redirects to dashboard after activation
* dashboard: looks for search string in app title as well
* Add vaapi caps for transcoding
* Fix issue where long mongodb database names were causing app indices of rocket.chat to overflow (> 127)
* Do not resize swap if swap file exists. This means that users can now control how swap is allocated on their own.
* SFTP: fix issue where parallel rebuilds would cause an error
* backups: make part size configurable
* mail: set max email size
* mail: allow mail server location to be set
* spamassassin: custom configs and wl/bl
* Do not automatically update to unstable release
* scheduler: reduce container churn
* mail: add API to set banner
* Fix bug where systemd 237 ignores --nice value in systemd-run
* postgresql: enable uuid-ossp extension
* firewall: add blocklist
* HTTP URLs now redirect directly to the HTTPS of the final domain
* linode: Add singapore region
* ovh: add sydney region
* s3: makes multi-part copies in parallel
[5.6.1]
* Blocklists are now stored in a text file instead of json
* regenerate nginx configs
[5.6.2]
* Update docker to 19.03.12
* Fix sorting of user listing in the UI
* namecheap: fix crash when server returns invalid response
* unlink ghost file automatically on successful login
* Bump mysql addon connection limit to 200
* Fix install issue where `/dev/dri` may not be present
* import: when importing filesystem backups, the input box is a path
* firewall: fix race condition where blocklist was not added in correct position in the FORWARD chain
* services: fix issue where services were scaled up/down too fast
* turn: realm variable was not updated properly on dashboard change
* nginx: add splash pages for IP based browser access
* Give services panel a separate top-level view
* Add app state filter
* gcs: copy concurrency was not used
* Mention why an app update cannot be applied and provide shortcut to start the app if stopped
* Move version from the footer into the settings view
* postgresql: set collation order to C.UTF-8 explicitly when creating database (for confluence)
* rsync: fix error when a file goes missing while syncing
* Pre-select app domain by default in the redirection drop down
* robots: preserve leading and trailing whitespaces/newlines
[5.6.3]
* Fix postgres locale issue
[6.0.0]
* Focal support
* Reduce duration of self-signed certs to 800 days
* Better backup config filename when downloading
* branding: footer can have template variables like %YEAR% and %VERSION%
* sftp: secure the API with a token
* filemanager: Add extract context menu item
* Do not download docker images if present locally
* sftp: disable access to non-admins by default
* postgresql: whitelist pgcrypto extension for loomio
* filemanager: Add new file creation action and collapse new and upload actions
* rsync: add warning to remove lifecycle rules
* Add volume management
* backups: adjust node's heap size based on memory limit
* s3: disable per-chunk timeout
* logs: more descriptive log file names on download
* collectd: remove collectd config when app stopped (and add it back when started)
* Apps can optionally request an authwall to be installed in front of them
* mailbox can now be owned by a group
* linode: enable dns provider in setup view
* dns: apps can now use the dns port
* httpPaths: allow apps to specify forwarding from custom paths to container ports (for OLS)
* add elasticemail smtp relay option
* mail: add option for full-text search (FTS) using solr
* mail: change the namespace separator of new installations to /
* mail: enable acl
* Disable THP
* filemanager: allow download dirs as zip files
* aws: add china region
* security: fix issue where apps could send with any username (but valid password)
* i18n support
[6.0.1]
* app: add export route
* mail: on location change, fix lock up when one or more domains have invalid credentials
* mail: fix crash because of write after timeout closure
* scaleway: fix installation issue where THP is not enabled in kernel
[6.1.0]
* mail: update haraka to 2.8.27. this fixes zero-length queue file crash
* update: set/unset appStoreId from the update route
* proxyauth: Do not follow redirects
* proxyauth: add 2FA
* appstore: add category translations
* appstore: add media category
* prepend the version to assets when sourcing to avoid cache hits on update
* filemanager: list volumes of the app
* Display upload size and size progress
* nfs: chown the backups for hardlinks to work
* remove user add/remove/role change email notifications
* persist update indicator across restarts
* cloudron-setup: add --generate-setup-token
* dashboard: pass accessToken query param to automatically login
* wellknown: add a way to set well known docs
* oom: notification mails have links to dashboard
* collectd: do not install xorg* packages
* apptask: backup/restore tasks now use the backup memory limit configuration
* eventlog: add logout event
* mailbox: include alias in mailbox search
* proxyAuth: add path exclusion
* turn: fix for CVE-2020-26262
* app password: fix regression where apps are not listed anymore in the UI
* Support for multiDomain apps (domain aliases)
* netcup: add dns provider
* Container swap size is now dynamically determined based on system RAM/swap ratio
[6.1.1]
* Fix bug where platform does not start if memory limits could not be applied
[6.1.2]
* App disk usage was not shown in graphs
* Email autoconfig
* Fix SOGo login
[6.2.0]
* ovh: object storage URL has changed from s3 to storage subdomain
* ionos: add profit bricks object storage
* update node to 14.15.4
* update docker to 20.10.3
* new base image 3.0.0
* postgresql updated to 12.5
* redis updated to 5.0.7
* dovecot updated to 2.3.7
* proxyAuth: fix docker UA detection
* registry config: add UI to disable it
* update solr to 8.8.1
* firewall: fix issue where script errored when having more than 15 wl/bl ports
* If groups are used, do not allow app installation without choosing the access settings
* tls addon
* Do not overwrite existing DMARC record
* Sync dns records
* Dry run restore
* linode: show cloudron is installing when user SSHs
* mysql: disable bin logs
* Show cancel task button if task is still running after 2 minutes
* filemanager: fix various bugs involving file names with spaces
* Change Referrer-policy default to 'same-origin'
* rsync: preserve and restore symlinks
* Clean up backups function now removes missing backups
[6.2.1]
* Avoid updown notifications on full restore
* Add retries to downloader logic in installer
[6.2.2]
* Fix ENOBUFS issue with backups when collecting fs metadata
[6.2.3]
* Fix addon crashes with missing databases
* Update mail container for LMTP cert fix
* Fix services view showing yellow icon
[6.2.4]
* Another addon crash fix
[6.2.5]
* update: set memory limit properly
* Fix bug where renew certs button did not work
* sftp: fix rebuild condition
* Fix display of user management/dashboard visibility for email apps
* graphite: disable tagdb and reduce log noise
[6.2.6]
* Fix issue where collectd is restarted too quickly before graphite
[6.2.7]
* redis: backup before upgrade
[6.2.8]
* linode object storage: update aws sdk to make it work again
* Fix crash in blocklist setting when source and list have mixed ip versions
* mysql: bump connection limit to 200
* namecheap: fix issue where DNS updates and deletes were not working
* turn: turn off verbose logging
* Fix crash when parsing df output (set LC_ALL for box service)
[6.3.0]
* mail: allow TLS from internal hosts
* tokens: add lastUsedTime
* update: set memory limit properly
* addons: better error handling
* filemanager: various enhancements
* sftp: fix rebuild condition
* app mailbox is now optional
* Fix display of user management/dashboard visibility for email apps
* graphite: disable tagdb and reduce log noise
* hsts: change max-age to 2 years
* clone: copy over redis memory limit
* namecheap: fix bug where records were not removed
* add UI to disable 2FA of a user
* mail: add active flag to mailboxes and lists
* Implement OCSP stapling
* security: send new browser login location notification email
* backups: add fqdn to the backup filename
* import all boxdata settings into the database
* volumes: generate systemd mount configs based on type
* postgresql: set max conn limit per db
* ubuntu 16: add alert about EOL
* clone: save and restore app config
* app import: restore icon, tag, label, proxy configs etc
* sieve: fix redirects to not do SRS
* notifications are now system level instead of per-user
* vultr DNS
* vultr object storage
* mail: do not forward spam to mailing lists
[6.3.1]
* Fix cert migration issues
+1 -1
@@ -1,5 +1,5 @@
The Cloudron Subscription license
Copyright (c) 2020 Cloudron UG
Copyright (c) 2019 Cloudron UG
With regard to the Cloudron Software:
+16 -30
@@ -1,5 +1,3 @@
![Translation status](https://translate.cloudron.io/widgets/cloudron/-/svg-badge.svg)
# Cloudron
[Cloudron](https://cloudron.io) is the best way to run apps on your server.
@@ -31,9 +29,9 @@ anyone to effortlessly host web applications on their server on their own terms.
* Trivially migrate to another server keeping your apps and data (for example, switch your
infrastructure provider or move to a bigger server).
* Comprehensive [REST API](https://docs.cloudron.io/api/).
* Comprehensive [REST API](https://cloudron.io/documentation/developer/api/).
* [CLI](https://docs.cloudron.io/custom-apps/cli/) to configure apps.
* [CLI](https://cloudron.io/documentation/cli/) to configure apps.
* Alerts, audit logs, graphs, dns management ... and much more
@@ -43,42 +41,30 @@ Try our demo at https://my.demo.cloudron.io (username: cloudron password: cloudr
## Installing
[Install script](https://docs.cloudron.io/installation/) - [Pricing](https://cloudron.io/pricing.html)
You can install the Cloudron platform on your own server or get a managed server
from cloudron.io. In either case, the Cloudron platform will keep your server and
apps up-to-date and secure.
* [Selfhosting](https://cloudron.io/documentation/installation/) - [Pricing](https://cloudron.io/pricing.html)
* [Managed Hosting](https://cloudron.io/managed.html)
**Note:** This repo is a small part of what gets installed on your server - there is
the dashboard, database addons, graph container, base image etc. Cloudron also relies
on external services such as the App Store for apps to be installed. As such, don't
clone this repo and npm install and expect something to work.
## Development
## Documentation
This is the backend code of Cloudron. The frontend code is [here](https://git.cloudron.io/cloudron/dashboard).
* [Documentation](https://cloudron.io/documentation/)
The way to develop is to first install a full instance of Cloudron in a VM. Then you can use the [hotfix](https://git.cloudron.io/cloudron/cloudron-machine)
tool to patch the VM with the latest code.
## Related repos
```
SSH_PASSPHRASE=sshkeypassword cloudron-machine hotfix --cloudron my.example.com --release 6.0.0 --ssh-key keyname
```
The [base image repo](https://git.cloudron.io/cloudron/docker-base-image) is the parent image of all
the containers in the Cloudron.
## License
## Community
Please note that the Cloudron code is under a source-available license. This is not the same as an
open source license but ensures the code is available for introspection (and hacking!).
## Contributions
Just to give some heads up, we are a bit restrictive in merging changes. We are a small team and
would like to keep our maintenance burden low. It might be best to discuss features first in the [forum](https://forum.cloudron.io),
to also figure out how many other people will use it to justify maintenance for a feature.
# Localization
![Translation status](https://translate.cloudron.io/widgets/cloudron/-/287x66-white.png)
## Support
* [Documentation](https://docs.cloudron.io/)
* [Chat](https://chat.cloudron.io)
* [Forum](https://forum.cloudron.io/)
* [Support](mailto:support@cloudron.io)
+193
@@ -0,0 +1,193 @@
#!/bin/bash
set -eu -o pipefail
assertNotEmpty() {
: "${!1:? "$1 is not set."}"
}
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"
INSTANCE_TYPE="t2.micro"
BLOCK_DEVICE="DeviceName=/dev/sda1,Ebs={VolumeSize=20,DeleteOnTermination=true,VolumeType=gp2}"
SSH_KEY_NAME="id_rsa_yellowtent"
revision=$(git rev-parse HEAD)
ami_name=""
server_id=""
server_ip=""
destroy_server="yes"
deploy_env="prod"
image_id=""
args=$(getopt -o "" -l "revision:,name:,no-destroy,env:,region:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--env) deploy_env="$2"; shift 2;;
--revision) revision="$2"; shift 2;;
--name) ami_name="$2"; shift 2;;
--no-destroy) destroy_server="no"; shift 2;;
--region)
case "$2" in
"us-east-1")
image_id="ami-6edd3078"
security_group="sg-a5e17fd9"
subnet_id="subnet-b8fbc0f1"
;;
"eu-central-1")
image_id="ami-5aee2235"
security_group="sg-19f5a770" # everything open on eu-central-1
subnet_id=""
;;
*)
echo "Unknown aws region $2"
exit 1
;;
esac
export AWS_DEFAULT_REGION="$2" # used by the aws cli tool
shift 2
;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
# TODO fix this
export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY}"
export AWS_SECRET_ACCESS_KEY="${AWS_ACCESS_SECRET}"
readonly ssh_keys="${HOME}/.ssh/id_rsa_yellowtent"
readonly SSH="ssh -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
if [[ ! -f "${ssh_keys}" ]]; then
echo "caas ssh key is missing at ${ssh_keys} (pick it up from secrets repo)"
exit 1
fi
if [[ -z "${image_id}" ]]; then
echo "--region is required (us-east-1 or eu-central-1)"
exit 1
fi
function get_pretty_revision() {
local git_rev="$1"
local sha1=$(git rev-parse --short "${git_rev}" 2>/dev/null)
echo "${sha1}"
}
function wait_for_ssh() {
echo "=> Waiting for ssh connection"
while true; do
echo -n "."
if $SSH ubuntu@${server_ip} echo "hello"; then
echo ""
break
fi
sleep 5
done
}
now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${revision}")
if [[ -z "${ami_name}" ]]; then
ami_name="box-${deploy_env}-${pretty_revision}-${now}"
fi
echo "=> Create EC2 instance"
id=$(aws ec2 run-instances --image-id "${image_id}" --instance-type "${INSTANCE_TYPE}" --security-group-ids "${security_group}" --block-device-mappings "${BLOCK_DEVICE}" --key-name "${SSH_KEY_NAME}" --subnet-id "${subnet_id}" --associate-public-ip-address \
| $JSON Instances \
| $JSON 0.InstanceId)
[[ -z "$id" ]] && exit 1
echo "Instance created ID $id"
echo "=> Waiting for instance to get a public IP"
while true; do
server_ip=$(aws ec2 describe-instances --instance-ids ${id} \
| $JSON Reservations.0.Instances \
| $JSON 0.PublicIpAddress)
if [[ ! -z "${server_ip}" ]]; then
echo ""
break
fi
echo -n "."
sleep 1
done
echo "Got public IP ${server_ip}"
wait_for_ssh
echo "=> Fetching cloudron-setup"
while true; do
if $SSH ubuntu@${server_ip} wget "https://cloudron.io/cloudron-setup" -O "cloudron-setup"; then
echo ""
break
fi
echo -n "."
sleep 5
done
echo "=> Running cloudron-setup"
$SSH ubuntu@${server_ip} sudo /bin/bash "cloudron-setup" --env "${deploy_env}" --provider "ami" --skip-reboot
wait_for_ssh
echo "=> Removing ssh key"
$SSH ubuntu@${server_ip} sudo rm /home/ubuntu/.ssh/authorized_keys /root/.ssh/authorized_keys
echo "=> Creating AMI"
image_id=$(aws ec2 create-image --instance-id "${id}" --name "${ami_name}" | $JSON ImageId)
[[ -z "$id" ]] && exit 1
echo "Creating AMI with Id ${image_id}"
echo "=> Waiting for AMI to be created"
while true; do
state=$(aws ec2 describe-images --image-ids ${image_id} \
| $JSON Images \
| $JSON 0.State)
if [[ "${state}" == "available" ]]; then
echo ""
break
fi
echo -n "."
sleep 5
done
if [[ "${destroy_server}" == "yes" ]]; then
echo "=> Deleting EC2 instance"
while true; do
state=$(aws ec2 terminate-instances --instance-id "${id}" \
| $JSON TerminatingInstances \
| $JSON 0.CurrentState.Name)
if [[ "${state}" == "shutting-down" ]]; then
echo ""
break
fi
echo -n "."
sleep 5
done
fi
echo ""
echo "Done."
echo ""
echo "New AMI is: ${image_id}"
echo ""
+261
@@ -0,0 +1,261 @@
#!/bin/bash
if [[ -z "${DIGITAL_OCEAN_TOKEN}" ]]; then
echo "Script requires DIGITAL_OCEAN_TOKEN env to be set"
exit 1
fi
if [[ -z "${JSON}" ]]; then
echo "Script requires JSON env to be set to path of JSON binary"
exit 1
fi
readonly CURL="curl --retry 5 -s -u ${DIGITAL_OCEAN_TOKEN}:"
function debug() {
echo "$@" >&2
}
function get_ssh_key_id() {
id=$($CURL "https://api.digitalocean.com/v2/account/keys" \
| $JSON ssh_keys \
| $JSON -c "this.name === \"$1\"" \
| $JSON 0.id)
[[ -z "$id" ]] && exit 1
echo "$id"
}
function create_droplet() {
local ssh_key_id="$1"
local box_name="$2"
local image_region="sfo2"
local ubuntu_image_slug="ubuntu-16-04-x64"
local box_size="1gb"
local data="{\"name\":\"${box_name}\",\"size\":\"${box_size}\",\"region\":\"${image_region}\",\"image\":\"${ubuntu_image_slug}\",\"ssh_keys\":[ \"${ssh_key_id}\" ],\"backups\":false}"
id=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets" | $JSON droplet.id)
[[ -z "$id" ]] && exit 1
echo "$id"
}
function get_droplet_ip() {
local droplet_id="$1"
ip=$($CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}" | $JSON "droplet.networks.v4[0].ip_address")
[[ -z "$ip" ]] && exit 1
echo "$ip"
}
function get_droplet_id() {
local droplet_name="$1"
id=$($CURL "https://api.digitalocean.com/v2/droplets?per_page=200" | $JSON "droplets" | $JSON -c "this.name === '${droplet_name}'" | $JSON "[0].id")
[[ -z "$id" ]] && exit 1
echo "$id"
}
function power_off_droplet() {
local droplet_id="$1"
local data='{"type":"power_off"}'
local response=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions")
local event_id=`echo "${response}" | $JSON action.id`
if [[ -z "${event_id}" ]]; then
debug "Got no event id, assuming already powered off."
debug "Response: ${response}"
return
fi
debug "Powered off droplet. Event id: ${event_id}"
debug -n "Waiting for droplet to power off"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function power_on_droplet() {
local droplet_id="$1"
local data='{"type":"power_on"}'
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`
debug "Powered on droplet. Event id: ${event_id}"
if [[ -z "${event_id}" ]]; then
debug "Got no event id, assuming already powered on"
return
fi
debug -n "Waiting for droplet to power on"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function get_image_id() {
local snapshot_name="$1"
local image_id=""
if ! response=$($CURL "https://api.digitalocean.com/v2/images?per_page=200"); then
echo "Failed to get image listing. ${response}"
return 1
fi
if ! image_id=$(echo "$response" \
| $JSON images \
| $JSON -c "this.name === \"${snapshot_name}\"" 0.id); then
echo "Failed to parse curl response: ${response}"
return 1
fi
if [[ -z "${image_id}" ]]; then
echo "Failed to get image id of ${snapshot_name}. reponse: ${response}"
return 1
fi
echo "${image_id}"
}
function snapshot_droplet() {
local droplet_id="$1"
local snapshot_name="$2"
local data="{\"type\":\"snapshot\",\"name\":\"${snapshot_name}\"}"
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`
debug "Droplet snapshotted as ${snapshot_name}. Event id: ${event_id}"
debug -n "Waiting for snapshot to complete"
while true; do
if ! response=$($CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}"); then
echo "Could not get action status. ${response}"
continue
fi
if ! event_status=$(echo "${response}" | $JSON action.status); then
echo "Could not parse action.status from response. ${response}"
continue
fi
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug "! done"
if ! image_id=$(get_image_id "${snapshot_name}"); then
return 1
fi
echo "${image_id}"
}
function destroy_droplet() {
local droplet_id="$1"
# TODO: check for 204 status
$CURL -X DELETE "https://api.digitalocean.com/v2/droplets/${droplet_id}"
debug "Droplet destroyed"
debug ""
}
function transfer_image() {
local image_id="$1"
local region_slug="$2"
local data="{\"type\":\"transfer\",\"region\":\"${region_slug}\"}"
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/images/${image_id}/actions" | $JSON action.id`
echo "${event_id}"
}
function wait_for_image_event() {
local image_id="$1"
local event_id="$2"
debug -n "Waiting for ${event_id}"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/images/${image_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function transfer_image_to_all_regions() {
local image_id="$1"
xfer_events=()
image_regions=(ams2) ## sfo1 is where the image is created
for image_region in ${image_regions[@]}; do
xfer_event=$(transfer_image ${image_id} ${image_region})
echo "Image transfer to ${image_region} initiated. Event id: ${xfer_event}"
xfer_events+=("${xfer_event}")
sleep 1
done
echo "Image transfer initiated, but they will take some time to get transferred."
for xfer_event in ${xfer_events[@]}; do
wait_for_image_event "${image_id}" "${xfer_event}"
done
}
if [[ $# -lt 1 ]]; then
debug "<command> <params...>"
exit 1
fi
case $1 in
get_ssh_key_id)
get_ssh_key_id "${@:2}"
;;
create)
create_droplet "${@:2}"
;;
get_id)
get_droplet_id "${@:2}"
;;
get_ip)
get_droplet_ip "${@:2}"
;;
power_on)
power_on_droplet "${@:2}"
;;
power_off)
power_off_droplet "${@:2}"
;;
snapshot)
snapshot_droplet "${@:2}"
;;
destroy)
destroy_droplet "${@:2}"
;;
transfer_image_to_all_regions)
transfer_image_to_all_regions "${@:2}"
;;
*)
echo "Unknown command $1"
exit 1
esac
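A sketch of how this droplet helper might be driven, assuming it is saved as `do-vps.sh` (the filename is not part of this diff). The sub-commands map one-to-one onto the case block above, and both environment variables are checked at the top of the script.

```
# Hypothetical usage -- the filename is a placeholder; sub-commands come from the case block above.
export DIGITAL_OCEAN_TOKEN="dop_v1_..."              # DigitalOcean API token
export JSON="$(pwd)/node_modules/.bin/json"          # path to the json CLI used for parsing responses

ssh_key_id=$(./do-vps.sh get_ssh_key_id "id_rsa_yellowtent")
droplet_id=$(./do-vps.sh create "${ssh_key_id}" "box-test")
./do-vps.sh get_ip "${droplet_id}"                   # print the droplet's public IPv4 address
./do-vps.sh power_off "${droplet_id}"
./do-vps.sh snapshot "${droplet_id}" "box-test-snapshot"
./do-vps.sh destroy "${droplet_id}"
```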
+21 -57
@@ -4,7 +4,8 @@ set -euv -o pipefail
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly arg_infraversionpath="${SOURCE_DIR}/../src"
readonly arg_provider="${1:-generic}"
readonly arg_infraversionpath="${SOURCE_DIR}/${2:-}"
function die {
echo $1
@@ -13,9 +14,6 @@ function die {
export DEBIAN_FRONTEND=noninteractive
readonly ubuntu_codename=$(lsb_release -cs)
readonly ubuntu_version=$(lsb_release -rs)
# hold grub since updating it breaks on some VPS providers. also, dist-upgrade will trigger it
apt-mark hold grub* >/dev/null
apt-get -o Dpkg::Options::="--force-confdef" update -y
@@ -29,57 +27,41 @@ debconf-set-selections <<< 'mysql-server mysql-server/root_password_again passwo
# this enables automatic security upgrades (https://help.ubuntu.com/community/AutomaticSecurityUpdates)
# resolvconf is needed for unbound to work properly after disabling systemd-resolved in 18.04
ubuntu_version=$(lsb_release -rs)
ubuntu_codename=$(lsb_release -cs)
gpg_package=$([[ "${ubuntu_version}" == "16.04" ]] && echo "gnupg" || echo "gpg")
mysql_package=$([[ "${ubuntu_version}" == "20.04" ]] && echo "mysql-server-8.0" || echo "mysql-server-5.7")
apt-get -y install --no-install-recommends \
apt-get -y install \
acl \
apparmor \
build-essential \
cifs-utils \
cron \
curl \
debconf-utils \
dmsetup \
$gpg_package \
ipset \
iptables \
libpython2.7 \
linux-generic \
logrotate \
$mysql_package \
nfs-common \
mysql-server-5.7 \
nginx-full \
openssh-server \
pwgen \
resolvconf \
sshfs \
sudo \
swaks \
tzdata \
unattended-upgrades \
unbound \
unzip \
xfsprogs
echo "==> installing nginx for xenial for TLSv3 support"
curl -sL http://nginx.org/packages/ubuntu/pool/nginx/n/nginx/nginx_1.18.0-2~${ubuntu_codename}_amd64.deb -o /tmp/nginx.deb
# apt install with install deps (as opposed to dpkg -i)
apt install -y /tmp/nginx.deb
rm /tmp/nginx.deb
# on some providers like scaleway the sudo file is changed and we want to keep the old one
apt-get -o Dpkg::Options::="--force-confold" install -y --no-install-recommends sudo
# this ensures that unattended upgades are enabled, if it was disabled during ubuntu install time (see #346)
# debconf-set-selection of unattended-upgrades/enable_auto_updates + dpkg-reconfigure does not work
cp /usr/share/unattended-upgrades/20auto-upgrades /etc/apt/apt.conf.d/20auto-upgrades
echo "==> Installing node.js"
readonly node_version=14.15.4
mkdir -p /usr/local/node-${node_version}
curl -sL https://nodejs.org/dist/v${node_version}/node-v${node_version}-linux-x64.tar.gz | tar zxf - --strip-components=1 -C /usr/local/node-${node_version}
ln -sf /usr/local/node-${node_version}/bin/node /usr/bin/node
ln -sf /usr/local/node-${node_version}/bin/npm /usr/bin/npm
apt-get install -y --no-install-recommends python # Install python which is required for npm rebuild
mkdir -p /usr/local/node-10.15.1
curl -sL https://nodejs.org/dist/v10.15.1/node-v10.15.1-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-10.15.1
ln -sf /usr/local/node-10.15.1/bin/node /usr/bin/node
ln -sf /usr/local/node-10.15.1/bin/npm /usr/bin/npm
apt-get install -y python # Install python which is required for npm rebuild
[[ "$(python --version 2>&1)" == "Python 2.7."* ]] || die "Expecting python version to be 2.7.x"
# https://docs.docker.com/engine/installation/linux/ubuntulinux/
@@ -90,10 +72,9 @@ mkdir -p /etc/systemd/system/docker.service.d
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2" > /etc/systemd/system/docker.service.d/cloudron.conf
# there are 3 packages for docker - containerd, CLI and the daemon
readonly docker_version=20.10.3
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.4.3-1_amd64.deb" -o /tmp/containerd.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.2.2-3_amd64.deb" -o /tmp/containerd.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
# apt install with install deps (as opposed to dpkg -i)
apt install -y /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
rm /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
@@ -106,7 +87,7 @@ fi
# do not upgrade grub because it might prompt user and break this script
echo "==> Enable memory accounting"
apt-get -y --no-upgrade --no-install-recommends install grub2-common
apt-get -y --no-upgrade install grub2-common
sed -e 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
update-grub
@@ -125,37 +106,19 @@ for image in ${images}; do
done
echo "==> Install collectd"
# without this, libnotify4 will install gnome-shell
apt-get install -y libnotify4 --no-install-recommends
if ! apt-get install -y --no-install-recommends libcurl3-gnutls collectd collectd-utils; then
if ! apt-get install -y collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
fi
# https://bugs.launchpad.net/ubuntu/+source/collectd/+bug/1872281
[[ "${ubuntu_version}" == "20.04" ]] && echo -e "\nLD_PRELOAD=/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8.so" >> /etc/default/collectd
# some hosts like atlantic install ntp which conflicts with timedatectl. https://serverfault.com/questions/1024770/ubuntu-20-04-time-sync-problems-and-possibly-incorrect-status-information
echo "==> Configuring host"
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
if systemctl is-active ntp; then
systemctl stop ntp
apt purge -y ntp
fi
timedatectl set-ntp 1
# mysql follows the system timezone
timedatectl set-timezone UTC
echo "==> Adding sshd configuration warning"
sed -e '/Port 22/ i # NOTE: Cloudron only supports moving SSH to port 202. See https://docs.cloudron.io/security/#securing-ssh-access' -i /etc/ssh/sshd_config
# https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068
echo "==> Disabling motd news"
if [ -f "/etc/default/motd-news" ]; then
sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/motd-news
fi
# Disable bind for good measure (on online.net, kimsufi servers these are pre-installed)
# Disable bind for good measure (on online.net, kimsufi servers these are pre-installed and conflict with unbound)
systemctl stop bind9 || true
systemctl disable bind9 || true
@@ -167,7 +130,7 @@ systemctl disable dnsmasq || true
systemctl stop postfix || true
systemctl disable postfix || true
# on ubuntu 18.04 and 20.04, this is the default. this requires resolvconf for DNS to work further after the disable
# on ubuntu 18.04, this is the default. this requires resolvconf for DNS to work further after the disable
systemctl stop systemd-resolved || true
systemctl disable systemd-resolved || true
@@ -176,3 +139,4 @@ systemctl disable systemd-resolved || true
ip6=$([[ -s /proc/net/if_inet6 ]] && echo "yes" || echo "no")
echo -e "server:\n\tinterface: 127.0.0.1\n\tdo-ip6: ${ip6}" > /etc/unbound/unbound.conf.d/cloudron-network.conf
systemctl restart unbound
+52 -49
@@ -2,65 +2,68 @@
'use strict';
// prefix all output with a timestamp
// debug() already prefixes and uses process.stderr NOT console.*
['log', 'info', 'warn', 'debug', 'error'].forEach(function (log) {
var orig = console[log];
console[log] = function () {
orig.apply(console, [new Date().toISOString()].concat(Array.prototype.slice.call(arguments)));
};
});
require('supererror')({ splatchError: true });
let async = require('async'),
dockerProxy = require('./src/dockerproxy.js'),
fs = require('fs'),
config = require('./src/config.js'),
ldap = require('./src/ldap.js'),
paths = require('./src/paths.js'),
proxyAuth = require('./src/proxyauth.js'),
dockerProxy = require('./src/dockerproxy.js'),
server = require('./src/server.js');
const NOOP_CALLBACK = function () { };
function setupLogging(callback) {
if (process.env.BOX_ENV === 'test') return callback();
const logfileStream = fs.createWriteStream(paths.BOX_LOG_FILE, { flags:'a' });
process.stdout.write = process.stderr.write = logfileStream.write.bind(logfileStream);
callback();
}
console.log();
console.log('==========================================');
console.log(' Cloudron will use the following settings ');
console.log('==========================================');
console.log();
console.log(' Environment: ', config.CLOUDRON ? 'CLOUDRON' : 'TEST');
console.log(' Version: ', config.version());
console.log(' Admin Origin: ', config.adminOrigin());
console.log(' Appstore API server origin: ', config.apiServerOrigin());
console.log(' Appstore Web server origin: ', config.webServerOrigin());
console.log(' SysAdmin Port: ', config.get('sysadminPort'));
console.log(' LDAP Server Port: ', config.get('ldapPort'));
console.log(' Docker Proxy Port: ', config.get('dockerProxyPort'));
console.log();
console.log('==========================================');
console.log();
async.series([
setupLogging,
server.start, // do this first since it also inits the database
proxyAuth.start,
server.start,
ldap.start,
dockerProxy.start
], function (error) {
if (error) {
console.log('Error starting server', error);
console.error('Error starting server', error);
process.exit(1);
}
// require those here so that logging handler is already setup
require('supererror');
const debug = require('debug')('box:box');
process.on('SIGINT', function () {
debug('Received SIGINT. Shutting down.');
proxyAuth.stop(NOOP_CALLBACK);
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
process.on('SIGTERM', function () {
debug('Received SIGTERM. Shutting down.');
proxyAuth.stop(NOOP_CALLBACK);
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
process.on('uncaughtException', function (error) {
console.error((error && error.stack) ? error.stack : error);
setTimeout(process.exit.bind(process, 1), 3000);
});
console.log(`Cloudron is up and running. Logs are at ${paths.BOX_LOG_FILE}`); // this goes to journalctl
console.log('Cloudron is up and running');
});
var NOOP_CALLBACK = function () { };
process.on('SIGINT', function () {
console.log('Received SIGINT. Shutting down.');
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
process.on('SIGTERM', function () {
console.log('Received SIGTERM. Shutting down.');
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
@@ -12,6 +12,8 @@ exports.up = function(db, callback) {
db.all('SELECT * FROM users WHERE admin=1', function (error, results) {
if (error) return done(error);
console.dir(results);
async.eachSeries(results, function (r, next) {
db.runSql('INSERT INTO groupMembers (groupId, userId) VALUES (?, ?)', [ ADMIN_GROUP_ID, r.id ], next);
}, done);
@@ -1,6 +1,12 @@
'use strict';
var async = require('async');
var async = require('async'),
crypto = require('crypto'),
fs = require('fs'),
os = require('os'),
path = require('path'),
safe = require('safetydance'),
tldjs = require('tldjs');
exports.up = function(db, callback) {
db.all('SELECT * FROM apps, subdomains WHERE apps.id=subdomains.appId AND type="primary"', function (error, apps) {
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE mail ADD COLUMN dkimSelector VARCHAR(128) NOT NULL DEFAULT "cloudron"', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mail DROP COLUMN dkimSelector', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,14 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP FOREIGN KEY apps_owner_constraint'),
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN ownerId')
], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,29 +0,0 @@
'use strict';
var async = require('async'),
fs = require('fs');
exports.up = function(db, callback) {
if (!fs.existsSync('/etc/cloudron/cloudron.conf')) {
console.log('Unable to locate cloudron.conf');
return callback();
}
const config = JSON.parse(fs.readFileSync('/etc/cloudron/cloudron.conf', 'utf8'));
async.series([
fs.writeFile.bind(null, '/etc/cloudron/PROVIDER', config.provider, 'utf8'),
db.runSql.bind(db, 'START TRANSACTION;'),
// we use replace instead of insert because the cloudron-setup adds api/web_server_origin even for legacy setups
db.runSql.bind(db, 'REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'api_server_origin', config.apiServerOrigin ]),
db.runSql.bind(db, 'REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'web_server_origin', config.webServerOrigin ]),
db.runSql.bind(db, 'REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'admin_domain', config.adminDomain ]),
db.runSql.bind(db, 'REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'admin_fqdn', config.adminFqdn ]),
db.runSql.bind(db, 'REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'demo', config.isDemo ]),
db.runSql.bind(db, 'COMMIT')
], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN active BOOLEAN DEFAULT 1', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN active', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,17 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN taskId INTEGER'),
db.runSql.bind(db, 'ALTER TABLE apps ADD CONSTRAINT apps_task_constraint FOREIGN KEY(taskId) REFERENCES tasks(id)')
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP FOREIGN KEY apps_task_constraint'),
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN taskId'),
], callback);
};
@@ -1,12 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps DROP updateConfigJson, DROP restoreConfigJson, DROP oldConfigJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE installationProgress errorJson TEXT', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE errorJson installationProgress TEXT', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN source VARCHAR(128) DEFAULT ""', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN source', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,26 +0,0 @@
'use strict';
let async = require('async');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE tasks CHANGE errorMessage errorJson TEXT', [], function (error) {
if (error) console.error(error);
// convert error messages into json
db.all('SELECT id, errorJson FROM apps', function (error, apps) {
async.eachSeries(apps, function (app, iteratorDone) {
if (app.errorJson === 'null') return iteratorDone();
if (app.errorJson === null) return iteratorDone();
db.runSql('UPDATE apps SET errorJson = ? WHERE id = ?', [ JSON.stringify({ message: app.errorJson }), app.id ], iteratorDone);
}, callback);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE tasks CHANGE errorJson errorMessage TEXT', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,21 +0,0 @@
'use strict';
var async = require('async');
// imports mailbox entries for existing users
exports.up = function(db, callback) {
db.all('SELECT * FROM mailboxes', function (error, mailboxes) {
if (error) return callback(error);
async.eachSeries(mailboxes, function (mailbox, iteratorDone) {
if (!mailbox.membersJson) return iteratorDone();
let members = JSON.parse(mailbox.membersJson);
members = members.map((m) => m && m.indexOf('@') === -1 ? `${m}@${mailbox.domain}` : m); // only because we don't do things in a transaction
db.runSql('UPDATE mailboxes SET membersJson=? WHERE name=? AND domain=?', [ JSON.stringify(members), mailbox.name, mailbox.domain ], iteratorDone);
}, callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,19 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('UPDATE apps SET runState=? WHERE runState IS NULL', [ 'running' ], function (error) {
if (error) return callback(error);
db.runSql('ALTER TABLE apps MODIFY runState VARCHAR(512) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY runState VARCHAR(512)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,10 +0,0 @@
'use strict';
exports.up = function(db, callback) {
// We clear all demo state in the Cloudron...the demo cloudron needs manual intervention afterwards
db.runSql('REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'demo', '' ], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,30 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN reverseProxyConfigJson TEXT', function (error) {
if (error) return callback(error);
db.all('SELECT id, robotsTxt FROM apps', function (error, apps) {
if (error) return callback(error);
async.eachSeries(apps, function (app, iteratorDone) {
if (!app.robotsTxt) return iteratorDone();
db.runSql('UPDATE apps SET reverseProxyConfigJson=? WHERE id=?', [ JSON.stringify({ robotsTxt: JSON.stringify(app.robotsTxt) }), app.id ], iteratorDone);
}, function (error) {
if (error) return callback(error);
db.runSql('ALTER TABLE apps DROP COLUMN robotsTxt', callback);
});
});
});
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN reverseProxyConfigJson'),
], callback);
};
@@ -1,13 +0,0 @@
'use strict';
var fs = require('fs');
exports.up = function(db, callback) {
let sysinfoConfig = { provider: 'generic' };
db.runSql('REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'sysinfo_config', JSON.stringify(sysinfoConfig) ], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,27 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN mailboxDomain VARCHAR(128)'),
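// default mailboxDomain to the app's own domain for apps that have a primary subdomain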
function setDefaultMailboxDomain(done) {
db.all('SELECT * FROM apps, subdomains WHERE apps.id=subdomains.appId AND type="primary"', function (error, apps) {
if (error) return done(error);
async.eachSeries(apps, function (app, iteratorDone) {
db.runSql('UPDATE apps SET mailboxDomain=? WHERE id=?', [ app.domain, app.id ], iteratorDone);
}, done);
});
},
db.runSql.bind(db, 'ALTER TABLE apps MODIFY COLUMN mailboxDomain VARCHAR(128) NOT NULL'),
db.runSql.bind(db, 'ALTER TABLE apps ADD CONSTRAINT apps_mailDomain_constraint FOREIGN KEY(mailboxDomain) REFERENCES domains(domain)'),
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP FOREIGN KEY apps_mailDomain_constraint'),
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN mailboxDomain'),
], callback);
};
@@ -1,22 +0,0 @@
'use strict';
let async = require('async');
exports.up = function(db, callback) {
db.runSql('SELECT * FROM domains', function (error, domains) {
if (error) return callback(error);
async.eachSeries(domains, function (domain, iteratorCallback) {
if (domain.provider !== 'cloudflare') return iteratorCallback();
let config = JSON.parse(domain.configJson);
config.tokenType = 'GlobalApiKey';
db.runSql('UPDATE domains SET configJson = ? WHERE domain = ?', [ JSON.stringify(config), domain.domain ], iteratorCallback);
}, callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN cpuShares INTEGER DEFAULT 512', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN cpuShares', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,26 +0,0 @@
'use strict';
exports.up = function(db, callback) {
var cmd = 'CREATE TABLE appPasswords(' +
'id VARCHAR(128) NOT NULL UNIQUE,' +
'name VARCHAR(128) NOT NULL,' +
'userId VARCHAR(128) NOT NULL,' +
'identifier VARCHAR(128) NOT NULL,' +
'hashedPassword VARCHAR(1024) NOT NULL,' +
'creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
'FOREIGN KEY(userId) REFERENCES users(id),' +
'UNIQUE (name, userId),' +
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
db.runSql(cmd, function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('DROP TABLE appPasswords', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,22 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('DROP TABLE authcodes', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
var cmd = `CREATE TABLE IF NOT EXISTS authcodes(
authCode VARCHAR(128) NOT NULL UNIQUE,
userId VARCHAR(128) NOT NULL,
clientId VARCHAR(128) NOT NULL,
expiresAt BIGINT NOT NULL,
PRIMARY KEY(authCode)) CHARACTER SET utf8 COLLATE utf8_bin`;
db.runSql(cmd, function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,24 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('DROP TABLE clients', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
var cmd = `CREATE TABLE IF NOT EXISTS clients(
id VARCHAR(128) NOT NULL UNIQUE,
appId VARCHAR(128) NOT NULL,
type VARCHAR(16) NOT NULL,
clientSecret VARCHAR(512) NOT NULL,
redirectURI VARCHAR(512) NOT NULL,
scope VARCHAR(512) NOT NULL,
PRIMARY KEY(id)) CHARACTER SET utf8 COLLATE utf8_bin`;
db.runSql(cmd, function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE domains DROP COLUMN locked', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE domains ADD COLUMN locked BOOLEAN DEFAULT 0', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,40 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'START TRANSACTION;'),
db.runSql.bind(db, 'ALTER TABLE users ADD COLUMN role VARCHAR(32)'),
function migrateAdminFlag(done) {
db.all('SELECT * FROM users ORDER BY createdAt', function (error, results) {
if (error) return done(error);
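// the oldest admin (first by createdAt) becomes the owner; remaining admins keep the admin role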
let ownerFound = false;
async.eachSeries(results, function (user, next) {
let role;
if (!ownerFound && user.admin) {
role = 'owner';
ownerFound = true;
console.log(`Designating ${user.username} ${user.email} ${user.id} as the owner of this cloudron`);
} else {
role = user.admin ? 'admin' : 'user';
}
db.runSql('UPDATE users SET role=? WHERE id=?', [ role, user.id ], next);
}, done);
});
},
db.runSql.bind(db, 'ALTER TABLE users DROP COLUMN admin'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY role VARCHAR(32) NOT NULL'),
db.runSql.bind(db, 'COMMIT')
], callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN role', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN resetTokenCreationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN resetTokenCreationTime', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,28 +0,0 @@
'use strict';
let async = require('async');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY mailboxDomain VARCHAR(128)', [], function (error) { // make it nullable
if (error) console.error(error);
// clear mailboxName/Domain for apps that do not use mail addons
db.all('SELECT * FROM apps', function (error, apps) {
if (error) return callback(error);
async.eachSeries(apps, function (app, iteratorDone) {
var manifest = JSON.parse(app.manifestJson);
if (manifest.addons['sendmail'] || manifest.addons['recvmail']) return iteratorDone();
db.runSql('UPDATE apps SET mailboxName=?, mailboxDomain=? WHERE id=?', [ null, null, app.id ], iteratorDone);
}, callback);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY mailboxDomain VARCHAR(128) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE mailboxes ADD COLUMN membersOnly BOOLEAN DEFAULT 0', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mailboxes DROP COLUMN membersOnly', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,28 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN aliasDomain VARCHAR(128)'),
function setAliasDomain(done) {
db.all('SELECT * FROM mailboxes', function (error, mailboxes) {
if (error) return done(error);
async.eachSeries(mailboxes, function (mailbox, iteratorDone) {
if (!mailbox.aliasTarget) return iteratorDone();
db.runSql('UPDATE mailboxes SET aliasDomain=? WHERE name=? AND domain=?', [ mailbox.domain, mailbox.name, mailbox.domain ], iteratorDone);
}, done);
});
},
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD CONSTRAINT mailboxes_aliasDomain_constraint FOREIGN KEY(aliasDomain) REFERENCES mail(domain)'),
db.runSql.bind(db, 'ALTER TABLE mailboxes CHANGE aliasTarget aliasName VARCHAR(128)')
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP FOREIGN KEY mailboxes_aliasDomain_constraint'),
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP COLUMN aliasDomain'),
db.runSql.bind(db, 'ALTER TABLE mailboxes CHANGE aliasName aliasTarget VARCHAR(128)')
], callback);
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN servicesConfigJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN servicesConfigJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN bindsJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN bindsJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,35 +0,0 @@
'use strict';
const backups = require('../src/backups.js'),
fs = require('fs');
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var backupConfig = JSON.parse(results[0].value);
if (backupConfig.key) {
backupConfig.encryption = backups.generateEncryptionKeysSync(backupConfig.key);
backups.cleanupCacheFilesSync();
fs.writeFileSync('/home/yellowtent/platformdata/BACKUP_PASSWORD',
'This file contains your Cloudron backup password.\nBefore Cloudron v5.2, this was saved in the database.\n' +
'From Cloudron 5.2, this password is not required anymore. We generate strong keys based off this password and use those keys to encrypt the backups.\n' +
'This means that the password is only required at decryption/restore time.\n\n' +
'This file can be safely removed and only exists for the off-chance that you do not remember your backup password.\n\n' +
`Password: ${backupConfig.key}\n`,
'utf8');
} else {
backupConfig.encryption = null;
}
delete backupConfig.key;
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [ JSON.stringify(backupConfig) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE backups CHANGE version packageVersion VARCHAR(128) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE backups CHANGE packageVersion version VARCHAR(128) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,24 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE backups ADD COLUMN encryptionVersion INTEGER', function (error) {
if (error) return callback(error);
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var backupConfig = JSON.parse(results[0].value);
if (!backupConfig.encryption) return callback(null);
// mark old encrypted backups as v1
db.runSql('UPDATE backups SET encryptionVersion=1', callback);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE backups DROP COLUMN encryptionVersion', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,18 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var backupConfig = JSON.parse(results[0].value);
backupConfig.retentionPolicy = { keepWithinSecs: backupConfig.retentionSecs };
delete backupConfig.retentionSecs;
// persist the converted retention policy
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [ JSON.stringify(backupConfig) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,18 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var backupConfig = JSON.parse(results[0].value);
if (backupConfig.provider !== 'minio' && backupConfig.provider !== 's3-v4-compat') return callback();
backupConfig.s3ForcePathStyle = true; // minio is usually self-hosted; for s3-v4-compat we cannot tell, so force path style for both
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [ JSON.stringify(backupConfig) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,17 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
// http://stackoverflow.com/questions/386294/what-is-the-maximum-length-of-a-valid-email-address
async.series([
db.runSql.bind(db, 'ALTER TABLE appPasswords DROP INDEX name'),
db.runSql.bind(db, 'ALTER TABLE appPasswords ADD CONSTRAINT appPasswords_name_userId_identifier UNIQUE (name, userId, identifier)'),
], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE userGroups ADD COLUMN source VARCHAR(128) DEFAULT ""', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE userGroups DROP COLUMN source', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,38 +0,0 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE backups ADD COLUMN identifier VARCHAR(128)', function (error) {
if (error) return callback(error);
db.all('SELECT * FROM backups', function (error, backups) {
if (error) return callback(error);
async.eachSeries(backups, function (backup, next) {
let identifier = 'unknown';
if (backup.type === 'box') {
identifier = 'box';
} else {
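// app backup ids are of the form app_<appId>_<suffix>; extract the app id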
const match = backup.id.match(/app_(.+?)_.+/);
if (match) identifier = match[1];
}
db.runSql('UPDATE backups SET identifier=? WHERE id=?', [ identifier, backup.id ], next);
}, function (error) {
if (error) return callback(error);
db.runSql('ALTER TABLE backups MODIFY COLUMN identifier VARCHAR(128) NOT NULL', callback);
});
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE backups DROP COLUMN identifier', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,16 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
if (error) console.error(error);
db.runSql('ALTER TABLE users DROP COLUMN modifiedAt', callback);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN ts', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,29 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var backupConfig = JSON.parse(results[0].value);
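// schedulePattern is a 6 field cron expression: seconds minutes hours day-of-month month day-of-week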
if (backupConfig.intervalSecs === 6 * 60 * 60) { // every 6 hours
backupConfig.schedulePattern = '00 00 5,11,17,23 * * *';
} else if (backupConfig.intervalSecs === 12 * 60 * 60) { // every 12 hours
backupConfig.schedulePattern = '00 00 5,17 * * *';
} else if (backupConfig.intervalSecs === 24 * 60 * 60) { // every day
backupConfig.schedulePattern = '00 00 23 * * *';
} else if (backupConfig.intervalSecs === 3 * 24 * 60 * 60) { // every 3 days (based on day)
backupConfig.schedulePattern = '00 00 23 * * 1,3,5';
} else if (backupConfig.intervalSecs === 7 * 24 * 60 * 60) { // every week (saturday)
backupConfig.schedulePattern = '00 00 23 * * 6';
} else { // default to everyday
backupConfig.schedulePattern = '00 00 23 * * *';
}
delete backupConfig.intervalSecs;
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [ JSON.stringify(backupConfig) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,23 +0,0 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="admin_domain"', function (error, results) {
if (error || results.length === 0) return callback(error);
const adminDomain = results[0].value;
async.series([
db.runSql.bind(db, 'INSERT INTO settings (name, value) VALUES (?, ?)', [ 'mail_domain', adminDomain ]),
db.runSql.bind(db, 'INSERT INTO settings (name, value) VALUES (?, ?)', [ 'mail_fqdn', `my.${adminDomain}` ])
], callback);
});
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'DELETE FROM settings WHERE name="mail_domain"'),
db.runSql.bind(db, 'DELETE FROM settings WHERE name="mail_fqdn"'),
], callback);
};
@@ -1,22 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
db.runSql('SELECT * FROM settings WHERE name=?', ['app_autoupdate_pattern'], function (error, results) {
if (error || results.length === 0) return callback(error); // will use defaults from box code
var updatePattern = results[0].value; // use the app auto update pattern for the box as well
async.series([
db.runSql.bind(db, 'START TRANSACTION;'),
db.runSql.bind(db, 'DELETE FROM settings WHERE name=? OR name=?', ['app_autoupdate_pattern', 'box_autoupdate_pattern']),
db.runSql.bind(db, 'INSERT settings (name, value) VALUES(?, ?)', ['autoupdate_pattern', updatePattern]),
db.runSql.bind(db, 'COMMIT')
], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE mail ADD COLUMN bannerJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mail DROP COLUMN bannerJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,27 +0,0 @@
'use strict';
const OLD_FIREWALL_CONFIG_JSON = '/home/yellowtent/boxdata/firewall-config.json';
const PORTS_FILE = '/home/yellowtent/boxdata/firewall/ports.json';
const BLOCKLIST_FILE = '/home/yellowtent/boxdata/firewall/blocklist.txt';
const fs = require('fs');
exports.up = function (db, callback) {
if (!fs.existsSync(OLD_FIREWALL_CONFIG_JSON)) return callback();
try {
const dataJson = fs.readFileSync(OLD_FIREWALL_CONFIG_JSON, 'utf8');
const data = JSON.parse(dataJson);
fs.writeFileSync(BLOCKLIST_FILE, data.blocklist.join('\n') + '\n', 'utf8');
fs.writeFileSync(PORTS_FILE, JSON.stringify({ allowed_tcp_ports: data.allowed_tcp_ports }, null, 4), 'utf8');
fs.unlinkSync(OLD_FIREWALL_CONFIG_JSON);
} catch (error) {
console.log('Error migrating old firewall config', error);
}
callback();
};
exports.down = function (db, callback) {
callback();
};
@@ -1,40 +0,0 @@
'use strict';
exports.up = function(db, callback) {
var cmd1 = 'CREATE TABLE volumes(' +
'id VARCHAR(128) NOT NULL UNIQUE,' +
'name VARCHAR(256) NOT NULL UNIQUE,' +
'hostPath VARCHAR(1024) NOT NULL UNIQUE,' +
'creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
var cmd2 = 'CREATE TABLE appMounts(' +
'appId VARCHAR(128) NOT NULL,' +
'volumeId VARCHAR(128) NOT NULL,' +
'readOnly BOOLEAN DEFAULT 1,' +
'UNIQUE KEY appMounts_appId_volumeId (appId, volumeId),' +
'FOREIGN KEY(appId) REFERENCES apps(id),' +
'FOREIGN KEY(volumeId) REFERENCES volumes(id)) CHARACTER SET utf8 COLLATE utf8_bin;';
db.runSql(cmd1, function (error) {
if (error) console.error(error);
db.runSql(cmd2, function (error) {
if (error) console.error(error);
db.runSql('ALTER TABLE apps DROP COLUMN bindsJson', callback);
});
});
};
exports.down = function(db, callback) {
db.runSql('DROP TABLE appMounts', function (error) {
if (error) console.error(error);
db.runSql('DROP TABLE volumes', function (error) {
if (error) console.error(error);
callback(error);
});
});
};
@@ -1,16 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN proxyAuth BOOLEAN DEFAULT 0', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN proxyAuth', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,18 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN ownerType VARCHAR(16)'),
db.runSql.bind(db, 'UPDATE mailboxes SET ownerType=?', [ 'user' ]),
db.runSql.bind(db, 'ALTER TABLE mailboxes MODIFY ownerType VARCHAR(16) NOT NULL'),
], callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mailboxes DROP COLUMN ownerType', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,13 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN httpPort')
], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,29 +0,0 @@
'use strict';
const async = require('async'),
iputils = require('../src/iputils.js');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN containerIp VARCHAR(16) UNIQUE', function (error) {
if (error) console.error(error);
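// assign each existing app a sequential container ip starting at 172.18.16.1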
let baseIp = iputils.intFromIp('172.18.16.0');
db.all('SELECT * FROM apps', function (error, apps) {
if (error) return callback(error);
async.eachSeries(apps, function (app, iteratorDone) {
const nextIp = iputils.ipFromInt(++baseIp);
db.runSql('UPDATE apps SET containerIp=? WHERE id=?', [ nextIp, app.id ], iteratorDone);
}, callback);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN containerIp', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,21 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT * FROM settings WHERE name=?', ['platform_config'], function (error, results) {
let value;
if (error || results.length === 0) {
value = { sftp: { requireAdmin: true } };
} else {
value = JSON.parse(results[0].value);
if (!value.sftp) value.sftp = {};
value.sftp.requireAdmin = true;
}
// existing installations may not even have the key. so use REPLACE instead of UPDATE
db.runSql('REPLACE INTO settings (name, value) VALUES (?, ?)', [ 'platform_config', JSON.stringify(value) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,18 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'CREATE TABLE groupMembers_copy(groupId VARCHAR(128) NOT NULL, userId VARCHAR(128) NOT NULL, FOREIGN KEY(groupId) REFERENCES userGroups(id), FOREIGN KEY(userId) REFERENCES users(id), UNIQUE (groupId, userId)) CHARACTER SET utf8 COLLATE utf8_bin'), // In mysql CREATE TABLE.. LIKE does not copy indexes
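// the GROUP BY drops duplicate (groupId, userId) rows while copying, so the new UNIQUE constraint can be applied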
db.runSql.bind(db, 'INSERT INTO groupMembers_copy SELECT * FROM groupMembers GROUP BY groupId, userId'),
db.runSql.bind(db, 'DROP TABLE groupMembers'),
db.runSql.bind(db, 'ALTER TABLE groupMembers_copy RENAME TO groupMembers')
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE groupMembers DROP INDEX groupMembers_member'),
], callback);
};
@@ -1,51 +0,0 @@
'use strict';
const async = require('async'),
safe = require('safetydance');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE domains ADD COLUMN wellKnownJson TEXT', function (error) {
if (error) return callback(error);
// keep the paths around, so that we don't need to trigger a re-configure. the old nginx config will use the paths
// the new one will proxy calls to the box code
const WELLKNOWN_DIR = '/home/yellowtent/boxdata/well-known';
const output = safe.child_process.execSync('find . -type f -printf "%P\n"', { cwd: WELLKNOWN_DIR, encoding: 'utf8' });
if (!output) return callback();
const paths = output.trim().split('\n');
if (paths.length === 0) return callback(); // user didn't configure any well-known
let wellKnown = {};
for (let path of paths) {
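// files are laid out on disk as <fqdn>/<well-known location>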
const fqdn = path.split('/', 1)[0];
const loc = path.slice(fqdn.length+1);
const doc = safe.fs.readFileSync(`${WELLKNOWN_DIR}/${path}`, { encoding: 'utf8' });
if (!doc) continue;
wellKnown[fqdn] = {};
wellKnown[fqdn][loc] = doc;
}
console.log('Migrating well-known', JSON.stringify(wellKnown, null, 4));
async.eachSeries(Object.keys(wellKnown), function (fqdn, iteratorDone) {
db.runSql('UPDATE domains SET wellKnownJson=? WHERE domain=?', [ JSON.stringify(wellKnown[fqdn]), fqdn ], function (error, result) {
if (error) {
console.error(error); // maybe the domain does not exist anymore
} else if (result.affectedRows === 0) {
console.log(`Could not migrate wellknown as domain ${fqdn} is missing`);
}
iteratorDone();
});
}, function (error) {
callback(error);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE domains DROP COLUMN wellKnownJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,23 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT * FROM settings WHERE name=?', ['platform_config'], function (error, results) {
if (error || results.length === 0) return callback(null);
let value = JSON.parse(results[0].value);
for (const serviceName of Object.keys(value)) {
const service = value[serviceName];
if (!service.memorySwap) continue;
service.memoryLimit = service.memorySwap;
delete service.memorySwap;
delete service.memory;
}
db.runSql('UPDATE settings SET value=? WHERE name=?', [ JSON.stringify(value), 'platform_config' ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,28 +0,0 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
db.all('SELECT * FROM apps', function (error, apps) {
if (error) return callback(error);
async.eachSeries(apps, function (app, iteratorDone) {
if (!app.servicesConfigJson) return iteratorDone();
let servicesConfig = JSON.parse(app.servicesConfigJson);
for (const serviceName of Object.keys(servicesConfig)) {
const service = servicesConfig[serviceName];
if (!service.memorySwap) continue;
service.memoryLimit = service.memorySwap;
delete service.memorySwap;
delete service.memory;
}
db.runSql('UPDATE apps SET servicesConfigJson=? WHERE id=?', [ JSON.stringify(servicesConfig), app.id ], iteratorDone);
}, callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,9 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'services_config', 'platform_config' ], callback);
};
exports.down = function(db, callback) {
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'platform_config', 'services_config' ], callback);
};
@@ -1,10 +0,0 @@
'use strict';
exports.up = function(db, callback) {
/* this contained an invalid migration of OVH URLs from s3 subdomain to storage subdomain. See https://forum.cloudron.io/topic/4584/issue-with-backups-listings-and-saving-backup-config-in-6-2 */
callback();
};
exports.down = function(db, callback) {
callback();
};
@@ -1,16 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="registry_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
var registryConfig = JSON.parse(results[0].value);
if (!registryConfig.provider) registryConfig.provider = 'other';
db.runSql('UPDATE settings SET value=? WHERE name="registry_config"', [ JSON.stringify(registryConfig) ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE tokens ADD COLUMN lastUsedTime TIMESTAMP NULL', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE tokens DROP COLUMN lastUsedTime', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,16 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN enableMailbox BOOLEAN DEFAULT 1', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN enableMailbox', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,17 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE mailboxes ADD COLUMN active BOOLEAN DEFAULT 1', function (error) {
if (error) return callback(error);
callback();
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mailboxes DROP COLUMN active', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,37 +0,0 @@
'use strict';
const async = require('async'),
fs = require('fs'),
path = require('path');
const AVATAR_DIR = '/home/yellowtent/boxdata/profileicons';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN avatar MEDIUMBLOB', function (error) {
if (error) return callback(error);
fs.readdir(AVATAR_DIR, function (error, filenames) {
if (error && error.code === 'ENOENT') return callback();
if (error) return callback(error);
async.eachSeries(filenames, function (filename, iteratorCallback) {
const avatar = fs.readFileSync(path.join(AVATAR_DIR, filename));
const userId = filename;
db.runSql('UPDATE users SET avatar=? WHERE id=?', [ avatar, userId ], iteratorCallback);
}, function (error) {
if (error) return callback(error);
fs.rmdir(AVATAR_DIR, { recursive: true }, callback);
});
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN avatar', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,20 +0,0 @@
'use strict';
const fs = require('fs');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE settings ADD COLUMN valueBlob MEDIUMBLOB', function (error) {
if (error) return callback(error);
fs.readFile('/home/yellowtent/boxdata/avatar.png', { encoding: 'base64' }, function (error, avatar) {
if (error && error.code === 'ENOENT') return callback();
if (error) return callback(error);
db.runSql('INSERT INTO settings (name, valueBlob) VALUES (?, ?)', [ 'cloudron_avatar', avatar ], callback);
});
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN loginLocationsJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN loginLocationsJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,42 +0,0 @@
'use strict';
const async = require('async'),
fs = require('fs'),
path = require('path');
const APPICONS_DIR = '/home/yellowtent/boxdata/appicons';
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN icon MEDIUMBLOB'),
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN appStoreIcon MEDIUMBLOB'),
function migrateIcons(next) {
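// user-uploaded icons are stored as <appId>.user.png; any other icon file is treated as the app store icon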
fs.readdir(APPICONS_DIR, function (error, filenames) {
if (error && error.code === 'ENOENT') return next();
if (error) return next(error);
async.eachSeries(filenames, function (filename, iteratorCallback) {
const icon = fs.readFileSync(path.join(APPICONS_DIR, filename));
const appId = filename.split('.')[0];
if (filename.endsWith('.user.png')) {
db.runSql('UPDATE apps SET icon=? WHERE id=?', [ icon, appId ], iteratorCallback);
} else {
db.runSql('UPDATE apps SET appStoreIcon=? WHERE id=?', [ icon, appId ], iteratorCallback);
}
}, function (error) {
if (error) return next(error);
fs.rmdir(APPICONS_DIR, { recursive: true }, next);
});
});
}
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN icon'),
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN appStoreIcon'),
], callback);
};
@@ -1,15 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,20 +0,0 @@
'use strict';
exports.up = function(db, callback) {
const cmd = 'CREATE TABLE blobs(' +
'id VARCHAR(128) NOT NULL UNIQUE,' +
'value MEDIUMBLOB,' +
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
db.runSql(cmd, function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('DROP TABLE blobs', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,49 +0,0 @@
'use strict';
const async = require('async'),
fs = require('fs'),
safe = require('safetydance');
const BOX_DATA_DIR = '/home/yellowtent/boxdata';
const PLATFORM_DATA_DIR = '/home/yellowtent/platformdata';
exports.up = function (db, callback) {
let funcs = [];
const acmeKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/acme/acme.key`);
if (acmeKey) {
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'acme_account_key', acmeKey ]));
funcs.push(fs.rmdir.bind(fs, `${BOX_DATA_DIR}/acme`, { recursive: true }));
}
const dhparams = safe.fs.readFileSync(`${BOX_DATA_DIR}/dhparams.pem`);
if (dhparams) {
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/dhparams.pem`, dhparams);
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'dhparams', dhparams ]));
// leave the dhparams here for the moment because startup code regenerates box nginx config and reloads nginx. at that point,
// nginx config of apps has not been re-generated yet and the reload fails. post 6.3, this file can be removed in start.sh
// funcs.push(fs.unlink.bind(fs, `${BOX_DATA_DIR}/dhparams.pem`));
}
const turnSecret = safe.fs.readFileSync(`${BOX_DATA_DIR}/addon-turn-secret`);
if (turnSecret) {
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'addon_turn_secret', turnSecret ]));
funcs.push(fs.unlink.bind(fs, `${BOX_DATA_DIR}/addon-turn-secret`));
}
// sftp keys get moved to platformdata in start.sh
const sftpPublicKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/sftp/ssh/ssh_host_rsa_key.pub`);
const sftpPrivateKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`);
if (sftpPublicKey && sftpPrivateKey) {
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key.pub`, sftpPublicKey);
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`, sftpPrivateKey);
safe.fs.chmodSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`, 0o600);
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'sftp_public_key', sftpPublicKey ]));
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'sftp_private_key', sftpPrivateKey ]));
funcs.push(fs.rmdir.bind(fs, `${BOX_DATA_DIR}/sftp`, { recursive: true }));
}
async.series(funcs, callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,31 +0,0 @@
'use strict';
const async = require('async'),
fs = require('fs'),
safe = require('safetydance');
const BOX_DATA_DIR = '/home/yellowtent/boxdata';
const PLATFORM_DATA_DIR = '/home/yellowtent/platformdata';
exports.up = function (db, callback) {
if (!fs.existsSync(`${BOX_DATA_DIR}/firewall`)) return callback();
const ports = safe.fs.readFileSync(`${BOX_DATA_DIR}/firewall/ports.json`);
if (ports) {
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/firewall/ports.json`, ports);
}
const blocklist = safe.fs.readFileSync(`${BOX_DATA_DIR}/firewall/blocklist.txt`);
async.series([
(next) => {
if (!blocklist) return next();
db.runSql('INSERT INTO settings (name, valueBlob) VALUES (?, ?)', [ 'firewall_blocklist', blocklist ], next);
},
fs.writeFile.bind(fs, `${PLATFORM_DATA_DIR}/firewall/blocklist.txt`, blocklist || ''),
fs.rmdir.bind(fs, `${BOX_DATA_DIR}/firewall`, { recursive: true })
], callback);
};
exports.down = function(db, callback) {
callback();
};
@@ -1,38 +0,0 @@
'use strict';
const async = require('async'),
safe = require('safetydance');
const CERTS_DIR = '/home/yellowtent/boxdata/certs',
PLATFORM_CERTS_DIR = '/home/yellowtent/platformdata/nginx/cert';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE domains ADD COLUMN fallbackCertificateJson MEDIUMTEXT', function (error) {
if (error) return callback(error);
db.all('SELECT * FROM domains', [ ], function (error, domains) {
if (error) return callback(error);
async.eachSeries(domains, function (domain, iteratorDone) {
// b94dbf5fa33a6d68d784571721ff44348c2d88aa seems to have moved certs from platformdata to boxdata
let cert = safe.fs.readFileSync(`${CERTS_DIR}/${domain.domain}.host.cert`, 'utf8');
let key = safe.fs.readFileSync(`${CERTS_DIR}/${domain.domain}.host.key`, 'utf8');
if (!cert) {
cert = safe.fs.readFileSync(`${PLATFORM_CERTS_DIR}/${domain.domain}.host.cert`, 'utf8');
key = safe.fs.readFileSync(`${PLATFORM_CERTS_DIR}/${domain.domain}.host.key`, 'utf8');
}
const fallbackCertificate = { cert, key };
db.runSql('UPDATE domains SET fallbackCertificateJson=? WHERE domain=?', [ JSON.stringify(fallbackCertificate), domain.domain ], iteratorDone);
}, callback);
});
});
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE domains DROP COLUMN fallbackCertificateJson')
], callback);
};
@@ -1,34 +0,0 @@
'use strict';
const async = require('async'),
fs = require('fs'),
safe = require('safetydance');
const CERTS_DIR = '/home/yellowtent/boxdata/certs';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE subdomains ADD COLUMN certificateJson MEDIUMTEXT', function (error) {
if (error) return callback(error);
db.all('SELECT * FROM subdomains', [ ], function (error, subdomains) {
if (error) return callback(error);
async.eachSeries(subdomains, function (subdomain, iteratorDone) {
const cert = safe.fs.readFileSync(`${CERTS_DIR}/${subdomain.subdomain}.${subdomain.domain}.user.cert`, 'utf8');
const key = safe.fs.readFileSync(`${CERTS_DIR}/${subdomain.subdomain}.${subdomain.domain}.user.key`, 'utf8');
if (!cert || !key) return iteratorDone();
const certificate = { cert, key };
db.runSql('UPDATE subdomains SET certificateJson=? WHERE domain=? AND subdomain=?', [ JSON.stringify(certificate), subdomain.domain, subdomain.subdomain ], iteratorDone);
}, callback);
});
});
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE subdomains DROP COLUMN certificateJson')
], callback);
};
@@ -1,52 +0,0 @@
'use strict';
const async = require('async'),
child_process = require('child_process'),
fs = require('fs'),
path = require('path'),
safe = require('safetydance');
const OLD_CERTS_DIR = '/home/yellowtent/boxdata/certs';
const NEW_CERTS_DIR = '/home/yellowtent/platformdata/nginx/cert';
exports.up = function(db, callback) {
fs.readdir(OLD_CERTS_DIR, function (error, filenames) {
if (error && error.code === 'ENOENT') return callback();
if (error) return callback(error);
filenames = filenames.filter(f => f.endsWith('.key') && !f.endsWith('.host.key') && !f.endsWith('.user.key')); // ignore fallback and user keys
async.eachSeries(filenames, function (filename, iteratorCallback) {
const privateKeyFile = filename;
const privateKey = fs.readFileSync(path.join(OLD_CERTS_DIR, filename));
const certificateFile = filename.replace(/\.key$/, '.cert');
const certificate = safe.fs.readFileSync(path.join(OLD_CERTS_DIR, certificateFile));
if (!certificate) {
console.log(`${certificateFile} is missing. skipping migration`);
return iteratorCallback();
}
const csrFile = filename.replace(/\.key$/, '.csr');
const csr = safe.fs.readFileSync(path.join(OLD_CERTS_DIR, csrFile));
if (!csr) {
console.log(`${csrFile} is missing. skipping migration`);
return iteratorCallback();
}
async.series([
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${privateKeyFile}`, privateKey),
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${certificateFile}`, certificate),
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${csrFile}`, csr),
], iteratorCallback);
}, function (error) {
if (error) return callback(error);
child_process.execSync(`cp ${OLD_CERTS_DIR}/* ${NEW_CERTS_DIR}`); // this way we copy the non-migrated ones like .host, .user etc as well
fs.rmdir(OLD_CERTS_DIR, { recursive: true }, callback);
});
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,17 +0,0 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE volumes ADD COLUMN mountType VARCHAR(16) DEFAULT "noop"'),
db.runSql.bind(db, 'ALTER TABLE volumes ADD COLUMN mountOptionsJson TEXT')
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE volumes DROP COLUMN mountType'),
db.runSql.bind(db, 'ALTER TABLE volumes DROP COLUMN mountOptionsJson')
], callback);
};
@@ -1,21 +0,0 @@
'use strict';
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE backups ADD INDEX creationTime_index (creationTime)'),
db.runSql.bind(db, 'ALTER TABLE eventlog ADD INDEX creationTime_index (creationTime)'),
db.runSql.bind(db, 'ALTER TABLE notifications ADD INDEX creationTime_index (creationTime)'),
db.runSql.bind(db, 'ALTER TABLE tasks ADD INDEX creationTime_index (creationTime)'),
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE backups DROP INDEX creationTime_index'),
db.runSql.bind(db, 'ALTER TABLE eventlog DROP INDEX creationTime_index'),
db.runSql.bind(db, 'ALTER TABLE notifications DROP INDEX creationTime_index'),
db.runSql.bind(db, 'ALTER TABLE tasks DROP INDEX creationTime_index'),
], callback);
};
@@ -1,33 +0,0 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
if (error) return callback(error);
db.runSql('ALTER TABLE users ADD INDEX creationTime_index (creationTime)', function (error) {
if (error) return callback(error);
db.all('SELECT id, createdAt FROM users', function (error, results) {
if (error) return callback(error);
async.eachSeries(results, function (r, iteratorDone) {
const creationTime = new Date(r.createdAt);
db.runSql('UPDATE users SET creationTime=? WHERE id=?', [ creationTime, r.id ], iteratorDone);
}, function (error) {
if (error) return callback(error);
db.runSql('ALTER TABLE users DROP COLUMN createdAt', callback);
});
});
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN creationTime', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -1,27 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);
const backupConfig = JSON.parse(results[0].value);
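// fold the sshfs, cifs, nfs and external disk providers into the generic 'mountpoint' provider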
if (backupConfig.provider === 'sshfs' || backupConfig.provider === 'cifs' || backupConfig.provider === 'nfs' || backupConfig.externalDisk) {
backupConfig.chown = backupConfig.provider === 'nfs' || backupConfig.provider === 'sshfs' || backupConfig.externalDisk;
backupConfig.preserveAttributes = !!backupConfig.externalDisk;
backupConfig.provider = 'mountpoint';
if (backupConfig.externalDisk) {
backupConfig.mountPoint = backupConfig.backupFolder;
backupConfig.prefix = '';
delete backupConfig.backupFolder;
delete backupConfig.externalDisk;
}
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [JSON.stringify(backupConfig)], callback);
} else {
callback();
}
});
};
exports.down = function(db, callback) {
callback();
};
@@ -1,13 +0,0 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE notifications DROP COLUMN userId', function (error) {
if (error) return callback(error);
db.runSql('DELETE FROM notifications', callback); // just clear notifications table
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE notifications ADD COLUMN userId VARCHAR(128) NOT NULL', callback);
};
+46 -96
View File
@@ -6,7 +6,7 @@
#### Strict mode is enabled
#### VARCHAR - stored as part of table row (use for strings)
#### TEXT - stored offline from table row (use for strings)
#### BLOB (64KB), MEDIUMBLOB (16MB), LONGBLOB (4GB) - stored offline from table row (use for binary data)
#### BLOB - stored offline from table row (use for binary data)
#### https://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
#### Times are stored in the database in UTC. And precision is seconds
@@ -20,35 +20,26 @@ CREATE TABLE IF NOT EXISTS users(
email VARCHAR(254) NOT NULL UNIQUE,
password VARCHAR(1024) NOT NULL,
salt VARCHAR(512) NOT NULL,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
createdAt VARCHAR(512) NOT NULL,
modifiedAt VARCHAR(512) NOT NULL,
displayName VARCHAR(512) DEFAULT "",
fallbackEmail VARCHAR(512) DEFAULT "",
twoFactorAuthenticationSecret VARCHAR(128) DEFAULT "",
twoFactorAuthenticationEnabled BOOLEAN DEFAULT false,
source VARCHAR(128) DEFAULT "",
role VARCHAR(32),
resetToken VARCHAR(128) DEFAULT "",
resetTokenCreationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
active BOOLEAN DEFAULT 1,
avatar MEDIUMBLOB,
locationJson TEXT, // { locations: [{ ip, userAgent, city, country, ts }] }
admin BOOLEAN DEFAULT false,
INDEX creationTime_index (creationTime),
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS userGroups(
id VARCHAR(128) NOT NULL UNIQUE,
name VARCHAR(254) NOT NULL UNIQUE,
source VARCHAR(128) DEFAULT "",
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS groupMembers(
groupId VARCHAR(128) NOT NULL,
userId VARCHAR(128) NOT NULL,
FOREIGN KEY(groupId) REFERENCES userGroups(id),
FOREIGN KEY(userId) REFERENCES users(id),
UNIQUE (groupId, userId));
FOREIGN KEY(userId) REFERENCES users(id));
CREATE TABLE IF NOT EXISTS tokens(
id VARCHAR(128) NOT NULL UNIQUE,
@@ -58,45 +49,53 @@ CREATE TABLE IF NOT EXISTS tokens(
clientId VARCHAR(128),
scope VARCHAR(512) NOT NULL,
expires BIGINT NOT NULL, // FIXME: make this a timestamp
lastUsedTime TIMESTAMP NULL,
PRIMARY KEY(accessToken));
CREATE TABLE IF NOT EXISTS clients(
id VARCHAR(128) NOT NULL UNIQUE, // prefixed with cid- to identify token easily in auth routes
appId VARCHAR(128) NOT NULL, // name of the client (for external apps) or id of app (for built-in apps)
type VARCHAR(16) NOT NULL,
clientSecret VARCHAR(512) NOT NULL,
redirectURI VARCHAR(512) NOT NULL,
scope VARCHAR(512) NOT NULL,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS apps(
id VARCHAR(128) NOT NULL UNIQUE,
appStoreId VARCHAR(128) NOT NULL, // empty for custom apps
installationState VARCHAR(512) NOT NULL, // the active task on the app
runState VARCHAR(512) NOT NULL, // if the app is stopped
appStoreId VARCHAR(128) NOT NULL,
installationState VARCHAR(512) NOT NULL,
installationProgress TEXT,
runState VARCHAR(512),
health VARCHAR(128),
healthTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app last responded
containerId VARCHAR(128),
manifestJson TEXT,
httpPort INTEGER, // this is the nginx proxy port and not manifest.httpPort
location VARCHAR(128) NOT NULL,
domain VARCHAR(128) NOT NULL,
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app was installed
updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the last app update was done
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, // when this db record was updated (useful for UI caching)
memoryLimit BIGINT DEFAULT 0,
cpuShares INTEGER DEFAULT 512,
xFrameOptions VARCHAR(512),
sso BOOLEAN DEFAULT 1, // whether user chose to enable SSO
debugModeJson TEXT, // options for development mode
reverseProxyConfigJson TEXT, // { robotsTxt, csp }
robotsTxt TEXT,
enableBackup BOOLEAN DEFAULT 1, // misnomer: controls automatic daily backups
enableAutomaticUpdate BOOLEAN DEFAULT 1,
enableMailbox BOOLEAN DEFAULT 1, // whether sendmail addon is enabled
mailboxName VARCHAR(128), // mailbox of this app
mailboxDomain VARCHAR(128), // mailbox domain of this app
mailboxName VARCHAR(128), // mailbox of this app. default allocated as '.app'
label VARCHAR(128), // display name
tagsJson VARCHAR(2048), // array of tags
dataDir VARCHAR(256) UNIQUE,
taskId INTEGER, // current task
errorJson TEXT,
servicesConfigJson TEXT, // app services configuration
containerIp VARCHAR(16) UNIQUE, // left nullable because ip allocation can fail; the user can then 'repair'
appStoreIcon MEDIUMBLOB,
icon MEDIUMBLOB,
FOREIGN KEY(mailboxDomain) REFERENCES domains(domain),
FOREIGN KEY(taskId) REFERENCES tasks(id),
// the following fields do not belong here, they can be removed when we use a queue for apptask
restoreConfigJson VARCHAR(256), // used to pass backupId to restore from to apptask
oldConfigJson TEXT, // used to pass old config to apptask (configure, restore)
updateConfigJson TEXT, // used to pass new config to apptask (update)
ownerId VARCHAR(128),
FOREIGN KEY(ownerId) REFERENCES users(id),
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS appPortBindings(
@@ -107,10 +106,16 @@ CREATE TABLE IF NOT EXISTS appPortBindings(
FOREIGN KEY(appId) REFERENCES apps(id),
PRIMARY KEY(hostPort));
CREATE TABLE IF NOT EXISTS authcodes(
authCode VARCHAR(128) NOT NULL UNIQUE,
userId VARCHAR(128) NOT NULL,
clientId VARCHAR(128) NOT NULL,
expiresAt BIGINT NOT NULL, // ## FIXME: make this a timestamp
PRIMARY KEY(authCode));
CREATE TABLE IF NOT EXISTS settings(
name VARCHAR(128) NOT NULL UNIQUE,
value TEXT,
valueBlob MEDIUMBLOB,
PRIMARY KEY(name));
CREATE TABLE IF NOT EXISTS appAddonConfigs(
@@ -129,17 +134,14 @@ CREATE TABLE IF NOT EXISTS appEnvVars(
CREATE TABLE IF NOT EXISTS backups(
id VARCHAR(128) NOT NULL,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
packageVersion VARCHAR(128) NOT NULL, /* app version or box version */
encryptionVersion INTEGER, /* when null, unencrypted backup */
version VARCHAR(128) NOT NULL, /* app version or box version */
type VARCHAR(16) NOT NULL, /* 'box' or 'app' */
identifier VARCHAR(128) NOT NULL, /* 'box' or the app id */
dependsOn TEXT, /* comma-separated list of objects this backup depends on */
state VARCHAR(16) NOT NULL,
manifestJson TEXT, /* to validate if the app can be installed in this version of box */
format VARCHAR(16) DEFAULT "tgz",
preserveSecs INTEGER DEFAULT 0,
INDEX creationTime_index (creationTime),
PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS eventlog(
@@ -149,7 +151,6 @@ CREATE TABLE IF NOT EXISTS eventlog(
data TEXT, /* free flowing json based on action */
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX creationTime_index (creationTime),
PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS domains(
@@ -158,9 +159,7 @@ CREATE TABLE IF NOT EXISTS domains(
provider VARCHAR(16) NOT NULL,
configJson TEXT, /* JSON containing the dns backend provider config */
tlsConfigJson TEXT, /* JSON containing the tls provider config */
wellKnownJson TEXT, /* JSON containing well known docs for this domain */
fallbackCertificateJson MEDIUMTEXT,
locked BOOLEAN,
PRIMARY KEY (domain))
@@ -174,9 +173,6 @@ CREATE TABLE IF NOT EXISTS mail(
mailFromValidation BOOLEAN DEFAULT 1,
catchAllJson TEXT,
relayJson TEXT,
bannerJson TEXT,
dkimSelector VARCHAR(128) NOT NULL DEFAULT "cloudron",
FOREIGN KEY(domain) REFERENCES domains(domain),
PRIMARY KEY(domain))
@@ -194,26 +190,19 @@ CREATE TABLE IF NOT EXISTS mailboxes(
name VARCHAR(128) NOT NULL,
type VARCHAR(16) NOT NULL, /* 'mailbox', 'alias', 'list' */
ownerId VARCHAR(128) NOT NULL, /* user id */
ownerType VARCHAR(16) NOT NULL,
aliasName VARCHAR(128), /* the target name type is an alias */
aliasDomain VARCHAR(128), /* the target domain */
membersJson TEXT, /* members of a group. fully qualified */
membersOnly BOOLEAN DEFAULT false,
aliasTarget VARCHAR(128), /* the target name type is an alias */
membersJson TEXT, /* members of a group */
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
domain VARCHAR(128),
active BOOLEAN DEFAULT 1,
FOREIGN KEY(domain) REFERENCES mail(domain),
FOREIGN KEY(aliasDomain) REFERENCES mail(domain),
UNIQUE (name, domain));
CREATE TABLE IF NOT EXISTS subdomains(
appId VARCHAR(128) NOT NULL,
domain VARCHAR(128) NOT NULL,
subdomain VARCHAR(128) NOT NULL,
type VARCHAR(128) NOT NULL, /* primary or redirect */
certificateJson MEDIUMTEXT,
type VARCHAR(128) NOT NULL,
FOREIGN KEY(domain) REFERENCES domains(domain),
FOREIGN KEY(appId) REFERENCES apps(id),
@@ -224,61 +213,22 @@ CREATE TABLE IF NOT EXISTS tasks(
type VARCHAR(32) NOT NULL,
percent INTEGER DEFAULT 0,
message TEXT,
errorJson TEXT,
resultJson TEXT,
errorMessage TEXT,
result TEXT,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX creationTime_index (creationTime),
PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS notifications(
id int NOT NULL AUTO_INCREMENT,
userId VARCHAR(128) NOT NULL,
eventId VARCHAR(128), // reference to eventlog. can be null
title VARCHAR(512) NOT NULL,
message TEXT,
acknowledged BOOLEAN DEFAULT false,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX creationTime_index (creationTime),
FOREIGN KEY(eventId) REFERENCES eventlog(id),
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS appPasswords(
id VARCHAR(128) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL,
userId VARCHAR(128) NOT NULL,
identifier VARCHAR(128) NOT NULL, /* resourceId: app id or mail or webadmin */
hashedPassword VARCHAR(1024) NOT NULL,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE KEY appPasswords_name_appId_identifier (name, userId, identifier),
FOREIGN KEY(userId) REFERENCES users(id),
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS volumes(
id VARCHAR(128) NOT NULL UNIQUE,
name VARCHAR(256) NOT NULL UNIQUE,
hostPath VARCHAR(1024) NOT NULL UNIQUE,
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
mountType VARCHAR(16) DEFAULT "noop",
mountOptionsJson TEXT,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS appMounts(
appId VARCHAR(128) NOT NULL,
volumeId VARCHAR(128) NOT NULL,
readOnly BOOLEAN DEFAULT 1,
UNIQUE KEY appMounts_appId_volumeId (appId, volumeId),
FOREIGN KEY(appId) REFERENCES apps(id),
FOREIGN KEY(volumeId) REFERENCES volumes(id));
CREATE TABLE IF NOT EXISTS blobs(
id VARCHAR(128) NOT NULL UNIQUE,
value TEXT,
PRIMARY KEY(id));
CHARACTER SET utf8 COLLATE utf8_bin;
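For illustration, a minimal sketch of listing the new notifications table defined above, newest first (the ordering the creationTime index supports). The root/password credentials and the box database simply follow the conventions used by the test and setup scripts elsewhere in this diff; adjust them to your installation.
# Hypothetical query against the notifications table; credentials and the LIMIT are placeholders.
mysql -NB -uroot -ppassword box \
    -e "SELECT id, title, creationTime FROM notifications WHERE acknowledged = false ORDER BY creationTime DESC LIMIT 20"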
+2610 -2147
View File
File diff suppressed because it is too large
+63 -57
View File
@@ -10,80 +10,86 @@
"type": "git",
"url": "https://git.cloudron.io/cloudron/box.git"
},
"engines": {
"node": ">=4.0.0 <=4.1.1"
},
"dependencies": {
"@google-cloud/dns": "^2.1.0",
"@google-cloud/storage": "^5.8.5",
"@sindresorhus/df": "git+https://github.com/cloudron-io/df.git#type",
"async": "^3.2.0",
"aws-sdk": "^2.906.0",
"basic-auth": "^2.0.1",
"body-parser": "^1.19.0",
"cloudron-manifestformat": "^5.10.2",
"connect": "^3.7.0",
"connect-lastmile": "^2.1.0",
"@google-cloud/dns": "^0.9.2",
"@google-cloud/storage": "^2.5.0",
"@sindresorhus/df": "^3.1.0",
"async": "^2.6.2",
"aws-sdk": "^2.441.0",
"body-parser": "^1.18.3",
"cloudron-manifestformat": "^2.14.2",
"connect": "^3.6.6",
"connect-ensure-login": "^0.1.1",
"connect-lastmile": "^1.0.2",
"connect-timeout": "^1.9.0",
"cookie-parser": "^1.4.5",
"cookie-session": "^1.4.0",
"cron": "^1.8.2",
"db-migrate": "^0.11.12",
"db-migrate-mysql": "^2.1.2",
"debug": "^4.3.1",
"delay": "^5.0.0",
"dockerode": "^3.3.0",
"delay": "^5.0.0",
"ejs": "^3.1.6",
"ejs-cli": "^2.2.1",
"express": "^4.17.1",
"ipaddr.js": "^2.0.0",
"js-yaml": "^4.1.0",
"json": "^11.0.0",
"jsonwebtoken": "^8.5.1",
"ldapjs": "^2.2.4",
"lodash": "^4.17.21",
"cookie-parser": "^1.4.4",
"cookie-session": "^1.3.3",
"cron": "^1.7.0",
"csurf": "^1.9.0",
"db-migrate": "^0.11.5",
"db-migrate-mysql": "^1.1.10",
"debug": "^4.1.1",
"dockerode": "^2.5.8",
"ejs": "^2.6.1",
"ejs-cli": "^2.0.1",
"express": "^4.16.4",
"express-session": "^1.16.1",
"js-yaml": "^3.13.1",
"json": "^9.0.6",
"ldapjs": "^1.0.2",
"lodash.chunk": "^4.2.0",
"mime": "^2.5.2",
"moment": "^2.29.1",
"moment-timezone": "^0.5.33",
"morgan": "^1.10.0",
"multiparty": "^4.2.2",
"mustache-express": "^1.3.0",
"mysql": "^2.18.1",
"nodemailer": "^6.6.0",
"mime": "^2.4.2",
"moment-timezone": "^0.5.25",
"morgan": "^1.9.1",
"multiparty": "^4.2.1",
"mysql": "^2.17.1",
"namecheap": "github:joshuakarjala/node-namecheap#464a952",
"nodemailer": "^6.1.1",
"nodemailer-smtp-transport": "^2.7.4",
"oauth2orize": "^1.11.0",
"once": "^1.4.0",
"pretty-bytes": "^5.6.0",
"parse-links": "^0.1.0",
"passport": "^0.4.0",
"passport-http": "^0.3.0",
"passport-http-bearer": "^1.0.1",
"passport-local": "^1.0.0",
"passport-oauth2-client-password": "^0.1.2",
"progress-stream": "^2.0.0",
"proxy-middleware": "^0.15.0",
"qrcode": "^1.4.4",
"readdirp": "^3.6.0",
"request": "^2.88.2",
"rimraf": "^3.0.2",
"qrcode": "^1.3.3",
"readdirp": "^3.0.0",
"request": "^2.88.0",
"rimraf": "^2.6.3",
"s3-block-read-stream": "^0.5.0",
"safetydance": "^2.0.1",
"semver": "^7.3.5",
"safetydance": "^0.7.1",
"semver": "^6.0.0",
"showdown": "^1.9.0",
"speakeasy": "^2.0.0",
"split": "^1.0.1",
"superagent": "^6.1.0",
"superagent": "^5.0.2",
"supererror": "^0.7.2",
"tar-fs": "github:cloudron-io/tar-fs#ignore_stat_error",
"tar-stream": "^2.2.0",
"tar-stream": "^2.0.1",
"tldjs": "^2.3.1",
"ua-parser-js": "^0.7.28",
"underscore": "^1.13.1",
"uuid": "^8.3.2",
"validator": "^13.6.0",
"ws": "^7.4.5",
"xml2js": "^0.4.23"
"underscore": "^1.9.1",
"uuid": "^3.3.2",
"valid-url": "^1.0.9",
"validator": "^10.11.0",
"ws": "^6.2.1"
},
"devDependencies": {
"expect.js": "*",
"hock": "^1.4.1",
"js2xmlparser": "^4.0.1",
"mocha": "^8.4.0",
"hock": "^1.3.3",
"js2xmlparser": "^4.0.0",
"mocha": "^6.1.4",
"mock-aws-s3": "git+https://github.com/cloudron-io/mock-aws-s3.git",
"nock": "^13.0.11",
"node-sass": "^6.0.0",
"recursive-readdir": "^2.2.2"
"nock": "^10.0.6",
"node-sass": "^4.11.0",
"recursive-readdir": "^2.2.2",
"sinon": "^7.3.2"
},
"scripts": {
"test": "./runTests",
+9 -22
View File
@@ -2,11 +2,11 @@
set -eu
readonly source_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly DATA_DIR="${HOME}/.cloudron_test"
readonly DEFAULT_TESTS="./src/test/*-test.js ./src/routes/test/*-test.js"
! "${source_dir}/src/test/checkInstall" && exit 1
! "${SOURCE_dir}/src/test/checkInstall" && exit 1
# cleanup old data dirs some of those docker container data requires sudo to be removed
echo "=> Provide root password to purge any leftover data in ${DATA_DIR} and load apparmor profile:"
@@ -22,29 +22,19 @@ fi
mkdir -p ${DATA_DIR}
cd ${DATA_DIR}
mkdir -p appsdata
mkdir -p boxdata/mail boxdata/certs boxdata/mail/dkim/localhost boxdata/mail/dkim/foobar.com
mkdir -p platformdata/addons/mail/banner platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons platformdata/logrotate.d platformdata/backup platformdata/logs/tasks platformdata/sftp/ssh platformdata/firewall platformdata/update
sudo mkdir -p /mnt/cloudron-test-music /media/cloudron-test-music # volume test
# translations
mkdir -p box/dashboard/dist/translation
cp -r ${source_dir}/../dashboard/dist/translation/* box/dashboard/dist/translation
mkdir -p boxdata/appicons boxdata/mail boxdata/certs boxdata/mail/dkim/localhost boxdata/mail/dkim/foobar.com
mkdir -p platformdata/addons/mail platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons platformdata/logrotate.d platformdata/backup platformdata/logs/tasks
# put cert
echo "=> Generating a localhost selfsigned cert"
openssl req -x509 -newkey rsa:2048 -keyout platformdata/nginx/cert/host.key -out platformdata/nginx/cert/host.cert -days 3650 -subj '/CN=localhost' -nodes -config <(cat /etc/ssl/openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:*.localhost"))
# clear out any containers if FAST is unset
if [[ -z ${FAST+x} ]]; then
echo "=> Delete all docker containers first"
docker ps -qa | xargs --no-run-if-empty docker rm -f
echo "==> To skip this run with: FAST=1 ./runTests"
else
echo "==> WARNING!! Skipping docker container cleanup, the database might not be pristine!"
fi
# clear out any containers
echo "=> Delete all docker containers first"
docker ps -qa | xargs --no-run-if-empty docker rm -f
# create docker network (while the infra code does this, most tests skip infra setup)
docker network create --subnet=172.18.0.0/16 --ip-range=172.18.0.0/20 cloudron || true
docker network create --subnet=172.18.0.0/16 cloudron || true
# create the same mysql server version to test with
OUT=`docker inspect mysql-server` || true
@@ -62,9 +52,6 @@ while ! mysqladmin ping -h"${MYSQL_IP}" --silent; do
sleep 1
done
echo "=> Create iptables blocklist"
sudo ipset create cloudron_blocklist hash:net || true
echo "=> Starting cloudron-syslog"
cloudron-syslog --logdir "${DATA_DIR}/platformdata/logs/" &
@@ -72,7 +59,7 @@ echo "=> Ensure database"
mysql -h"${MYSQL_IP}" -uroot -ppassword -e 'CREATE DATABASE IF NOT EXISTS box'
echo "=> Run database migrations"
cd "${source_dir}"
cd "${SOURCE_dir}"
BOX_ENV=test DATABASE_URL=mysql://root:password@${MYSQL_IP}/box node_modules/.bin/db-migrate up
echo "=> Run tests with mocha"
+106
View File
@@ -0,0 +1,106 @@
#!/bin/bash
set -eu -o pipefail
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400 --http1.1"
ip=""
dns_config=""
tls_cert_file=""
tls_key_file=""
license_file=""
backup_config=""
args=$(getopt -o "" -l "ip:,backup-config:,license:,dns-config:,tls-cert:,tls-key:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--ip) ip="$2"; shift 2;;
--dns-config) dns_config="$2"; shift 2;;
--tls-cert) tls_cert_file="$2"; shift 2;;
--tls-key) tls_key_file="$2"; shift 2;;
--license) license_file="$2"; shift 2;;
--backup-config) backup_config="$2"; shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
# validate arguments in the absence of data
if [[ -z "${ip}" ]]; then
echo "--ip is required"
exit 1
fi
if [[ -z "${dns_config}" ]]; then
echo "--dns-config is required"
exit 1
fi
if [[ ! -f "${license_file}" ]]; then
echo "--license must be a valid license file"
exit 1
fi
function get_status() {
key="$1"
if status=$($curl -q -f -k "https://${ip}/api/v1/cloudron/status" 2>/dev/null); then
currentValue=$(echo "${status}" | python3 -c 'import sys, json; print(json.dumps(json.load(sys.stdin)[sys.argv[1]]))' "${key}")
echo "${currentValue}"
return 0
fi
return 1
}
function wait_for_status() {
key="$1"
expectedValue="$2"
echo "wait_for_status: $key to be $expectedValue"
while true; do
if currentValue=$(get_status "${key}"); then
echo "wait_for_status: $key is current: $currentValue expecting: $expectedValue"
if [[ "${currentValue}" == $expectedValue ]]; then
break
fi
fi
sleep 3
done
}
echo "=> Waiting for cloudron to be ready"
wait_for_status "version" '*'
domain=$(echo "${dns_config}" | python3 -c 'import json,sys;obj=json.load(sys.stdin);print(obj["domain"])')
echo "Provisioning Cloudron ${domain}"
if [[ -n "${tls_cert_file}" && -n "${tls_key_file}" ]]; then
tls_cert=$(cat "${tls_cert_file}" | awk '{printf "%s\\n", $0}')
tls_key=$(cat "${tls_key_file}" | awk '{printf "%s\\n", $0}')
fallback_cert=$(printf '{ "cert": "%s", "key": "%s", "provider": "fallback", "restricted": true }' "${tls_cert}" "${tls_key}")
else
fallback_cert=None
fi
tls_config='{ "provider": "fallback" }'
dns_config=$(echo "${dns_config}" | python3 -c "import json,sys;obj=json.load(sys.stdin);obj.update(tlsConfig=${tls_config});obj.update(fallbackCertficate=${fallback_cert});print(json.dumps(obj))")
license=$(cat "${license_file}")
if [[ -z "${backup_config:-}" ]]; then
backup_config='{ "provider": "filesystem", "backupFolder": "/var/backups", "format": "tgz" }'
fi
setupData=$(printf '{ "dnsConfig": %s, "autoconf": { "appstoreConfig": %s, "backupConfig": %s } }' "${dns_config}" "${license}" "${backup_config}")
if ! setupResult=$($curl -kq -X POST -H "Content-Type: application/json" -d "${setupData}" https://${ip}/api/v1/cloudron/setup); then
echo "Failed to setup with ${setupData} ${setupResult}"
exit 1
fi
wait_for_status "webadminStatus" '*"tls": true*'
echo "Cloudron is ready at https://my-${domain}"
+72 -61
View File
@@ -2,12 +2,6 @@
set -eu -o pipefail
function exitHandler() {
rm -f /etc/update-motd.d/91-cloudron-install-in-progress
}
trap exitHandler EXIT
# change this to a hash when we make an upgrade release
readonly LOG_FILE="/var/log/cloudron-setup.log"
readonly MINIMUM_DISK_SIZE_GB="18" # this is the size of "/" required to fit the docker images; 18 is a safe bet given different reporting of the 20GB minimum
@@ -41,46 +35,41 @@ if [[ "${disk_size_gb}" -lt "${MINIMUM_DISK_SIZE_GB}" ]]; then
exit 1
fi
# do not use is-active in case box service is down and user attempts to re-install
if systemctl cat box.service >/dev/null 2>&1; then
if systemctl -q is-active box; then
echo "Error: Cloudron is already installed. To reinstall, start afresh"
exit 1
fi
initBaseImage="true"
provider="generic"
# provisioning data
provider=""
requestedVersion=""
installServerOrigin="https://api.cloudron.io"
apiServerOrigin="https://api.cloudron.io"
webServerOrigin="https://cloudron.io"
sourceTarballUrl=""
rebootServer="true"
setupToken=""
license=""
args=$(getopt -o "" -l "help,skip-baseimage-init,provider:,version:,env:,skip-reboot,generate-setup-token" -n "$0" -- "$@")
args=$(getopt -o "" -l "help,skip-baseimage-init,provider:,version:,env:,skip-reboot,license:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--help) echo "See https://docs.cloudron.io/installation/ on how to install Cloudron"; exit 0;;
--help) echo "See https://cloudron.io/documentation/installation/ on how to install Cloudron"; exit 0;;
--provider) provider="$2"; shift 2;;
--version) requestedVersion="$2"; shift 2;;
--env)
if [[ "$2" == "dev" ]]; then
apiServerOrigin="https://api.dev.cloudron.io"
webServerOrigin="https://dev.cloudron.io"
installServerOrigin="https://api.dev.cloudron.io"
elif [[ "$2" == "staging" ]]; then
apiServerOrigin="https://api.staging.cloudron.io"
webServerOrigin="https://staging.cloudron.io"
installServerOrigin="https://api.staging.cloudron.io"
elif [[ "$2" == "unstable" ]]; then
installServerOrigin="https://api.dev.cloudron.io"
fi
shift 2;;
--license) license="$2"; shift 2;;
--skip-baseimage-init) initBaseImage="false"; shift;;
--skip-reboot) rebootServer="false"; shift;;
--generate-setup-token) setupToken="$(openssl rand -hex 10)"; shift;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
@@ -94,34 +83,49 @@ fi
# Only --help works with mismatched ubuntu
ubuntu_version=$(lsb_release -rs)
if [[ "${ubuntu_version}" != "16.04" && "${ubuntu_version}" != "18.04" && "${ubuntu_version}" != "20.04" ]]; then
echo "Cloudron requires Ubuntu 16.04, 18.04 or 20.04" > /dev/stderr
if [[ "${ubuntu_version}" != "16.04" && "${ubuntu_version}" != "18.04" ]]; then
echo "Cloudron requires Ubuntu 16.04 or 18.04" > /dev/stderr
exit 1
fi
# Install MOTD file for stack script style installations. This is removed by the trap exit handler. Heredoc quotes prevent parameter expansion
cat > /etc/update-motd.d/91-cloudron-install-in-progress <<'EOF'
#!/bin/bash
printf "**********************************************************************\n\n"
printf "\t\t\tWELCOME TO CLOUDRON\n"
printf "\t\t\t-------------------\n"
printf '\n\e[1;32m%-6s\e[m\n\n' "Cloudron is installing. Run 'tail -f /var/log/cloudron-setup.log' to view progress."
printf "Cloudron overview - https://docs.cloudron.io/ \n"
printf "Cloudron setup - https://docs.cloudron.io/installation/#setup \n"
printf "\nFor help and more information, visit https://forum.cloudron.io\n\n"
printf "**********************************************************************\n"
EOF
chmod +x /etc/update-motd.d/91-cloudron-install-in-progress
# Can only write after we have confirmed script has root access
echo "Running cloudron-setup with args : $@" > "${LOG_FILE}"
# validate arguments in the absence of data
if [[ -z "${provider}" ]]; then
echo "--provider is required (azure, contabo, digitalocean, ec2, exoscale, galaxygate, gce, hetzner, lightsail, linode, netcup, ovh, rosehosting, scaleway, upcloud, vultr or generic)"
exit 1
elif [[ \
"${provider}" != "ami" && \
"${provider}" != "azure" && \
"${provider}" != "caas" && \
"${provider}" != "cloudscale" && \
"${provider}" != "contabo" && \
"${provider}" != "digitalocean" && \
"${provider}" != "digitalocean-mp" && \
"${provider}" != "ec2" && \
"${provider}" != "exoscale" && \
"${provider}" != "galaxygate" && \
"${provider}" != "digitalocean" && \
"${provider}" != "gce" && \
"${provider}" != "hetzner" && \
"${provider}" != "lightsail" && \
"${provider}" != "linode" && \
"${provider}" != "linode-stackscript" && \
"${provider}" != "netcup" && \
"${provider}" != "netcup-image" && \
"${provider}" != "ovh" && \
"${provider}" != "rosehosting" && \
"${provider}" != "scaleway" && \
"${provider}" != "upcloud" && \
"${provider}" != "upcloud-image" && \
"${provider}" != "vultr" && \
"${provider}" != "generic" \
]]; then
echo "--provider must be one of: azure, cloudscale.ch, contabo, digitalocean, ec2, exoscale, galaxygate, gce, hetzner, lightsail, linode, netcup, ovh, rosehosting, scaleway, upcloud, vultr or generic"
exit 1
fi
echo ""
echo "##############################################"
echo " Cloudron Setup (${requestedVersion:-latest})"
@@ -134,20 +138,32 @@ echo " Join us at https://forum.cloudron.io for any questions."
echo ""
if [[ "${initBaseImage}" == "true" ]]; then
echo "=> Installing software-properties-common"
if ! apt-get install -y software-properties-common &>> "${LOG_FILE}"; then
echo "Could not install software-properties-common (for add-apt-repository below). See ${LOG_FILE}"
exit 1
fi
echo "=> Ensure required apt sources"
if ! add-apt-repository universe &>> "${LOG_FILE}"; then
echo "Could not add required apt sources (for nginx-full). See ${LOG_FILE}"
exit 1
fi
echo "=> Updating apt and installing script dependencies"
if ! apt-get update &>> "${LOG_FILE}"; then
echo "Could not update package repositories. See ${LOG_FILE}"
exit 1
fi
if ! DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install --no-install-recommends curl python3 ubuntu-standard software-properties-common -y &>> "${LOG_FILE}"; then
if ! apt-get install curl python3 ubuntu-standard -y &>> "${LOG_FILE}"; then
echo "Could not install setup dependencies (curl). See ${LOG_FILE}"
exit 1
fi
fi
echo "=> Checking version"
if ! releaseJson=$($curl -s "${installServerOrigin}/api/v1/releases?boxVersion=${requestedVersion}"); then
if ! releaseJson=$($curl -s "${apiServerOrigin}/api/v1/releases?boxVersion=${requestedVersion}"); then
echo "Failed to get release information"
exit 1
fi
@@ -173,46 +189,41 @@ fi
if [[ "${initBaseImage}" == "true" ]]; then
echo -n "=> Installing base dependencies and downloading docker images (this takes some time) ..."
# initializeBaseUbuntuImage.sh args (provider, infraversion path) are only to support installation of pre 5.3 Cloudrons
if ! /bin/bash "${box_src_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh" "generic" "../src" &>> "${LOG_FILE}"; then
if ! /bin/bash "${box_src_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh" "${provider}" "../src" &>> "${LOG_FILE}"; then
echo "Init script failed. See ${LOG_FILE} for details"
exit 1
fi
echo ""
fi
# The provider flag is still used for marketplace images
# NOTE: this install script only supports 3.x and above
echo "=> Installing version ${version} (this takes some time) ..."
mkdir -p /etc/cloudron
echo "${provider}" > /etc/cloudron/PROVIDER
[[ ! -z "${setupToken}" ]] && echo "${setupToken}" > /etc/cloudron/SETUP_TOKEN
cat > "/etc/cloudron/cloudron.conf" <<CONF_END
{
"apiServerOrigin": "${apiServerOrigin}",
"webServerOrigin": "${webServerOrigin}",
"provider": "${provider}"
}
CONF_END
[[ -n "${license}" ]] && echo -n "$license" > /etc/cloudron/LICENSE
if ! /bin/bash "${box_src_tmp_dir}/scripts/installer.sh" &>> "${LOG_FILE}"; then
echo "Failed to install cloudron. See ${LOG_FILE} for details"
exit 1
fi
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('api_server_origin', '${apiServerOrigin}');" 2>/dev/null
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('web_server_origin', '${webServerOrigin}');" 2>/dev/null
echo -n "=> Waiting for cloudron to be ready (this takes some time) ..."
while true; do
echo -n "."
if status=$($curl -s -f "http://localhost:3000/api/v1/cloudron/status" 2>/dev/null); then
if status=$($curl -q -f "http://localhost:3000/api/v1/cloudron/status" 2>/dev/null); then
break # we are up and running
fi
sleep 10
done
if ! ip=$(curl -s --fail --connect-timeout 2 --max-time 2 https://api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p'); then
ip='<IP>'
fi
if [[ -z "${setupToken}" ]]; then
url="https://${ip}"
else
url="https://${ip}/?setupToken=${setupToken}"
fi
echo -e "\n\n${GREEN}After reboot, visit ${url} and accept the self-signed certificate to finish setup.${DONE}\n"
echo -e "\n\n${GREEN}Visit https://<IP> and accept the self-signed certificate to finish setup.${DONE}\n"
if [[ "${rebootServer}" == "true" ]]; then
systemctl stop box mysql # sometimes mysql ends up having corrupt privilege tables
@@ -220,7 +231,7 @@ if [[ "${rebootServer}" == "true" ]]; then
read -p "The server has to be rebooted to apply all the settings. Reboot now ? [Y/n] " yn
yn=${yn:-y}
case $yn in
[Yy]* ) exitHandler; systemctl reboot;;
[Yy]* ) systemctl reboot;;
* ) exit;;
esac
fi
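For context, a sketch of how the updated flags of this setup script are typically combined; the script name matches the log file path above, and the flags are the ones parsed by getopt:
# Hypothetical invocation; must be run as root on a supported Ubuntu release.
sudo ./cloudron-setup --skip-reboot --generate-setup-token
# The generated token is written to /etc/cloudron/SETUP_TOKEN and appended to the setup URL printed at the end;
# --provider now defaults to "generic" and can usually be omitted.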
+31 -60
View File
@@ -1,7 +1,5 @@
#!/bin/bash
set -eu -o pipefail
# This script collects diagnostic information to help debug server related issues
# It also enables SSH access for the cloudron support team
@@ -13,39 +11,25 @@ HELP_MESSAGE="
This script collects diagnostic information to help debug server related issues
Options:
--owner-login Login as owner
--enable-ssh Enable SSH access for the Cloudron support team
--help Show this message
"
# We require root
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root. Run with sudo"
echo "This script should be run as root." > /dev/stderr
exit 1
fi
enableSSH="false"
args=$(getopt -o "" -l "help,enable-ssh,admin-login,owner-login" -n "$0" -- "$@")
args=$(getopt -o "" -l "help,enable-ssh" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--help) echo -e "${HELP_MESSAGE}"; exit 0;;
--enable-ssh) enableSSH="true"; shift;;
--admin-login)
# fall through
;&
--owner-login)
admin_username=$(mysql -NB -uroot -ppassword -e "SELECT username FROM box.users WHERE role='owner' AND username IS NOT NULL ORDER BY createdAt LIMIT 1" 2>/dev/null)
admin_password=$(pwgen -1s 12)
dashboard_domain=$(mysql -NB -uroot -ppassword -e "SELECT value FROM box.settings WHERE name='admin_fqdn'" 2>/dev/null)
ghost_file=/home/yellowtent/platformdata/cloudron_ghost.json
printf '{"%s":"%s"}\n' "${admin_username}" "${admin_password}" > "${ghost_file}"
chown yellowtent:yellowtent "${ghost_file}" && chmod o-r,g-r "${ghost_file}"
echo "Login at https://${dashboard_domain} as ${admin_username} / ${admin_password} . This password may only be used once. ${ghost_file} will be automatically removed after use."
exit 0
;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
@@ -58,7 +42,7 @@ if [[ "`df --output="avail" / | sed -n 2p`" -lt "10240" ]]; then
echo ""
df -h
echo ""
echo "To recover from a full disk, follow the guide at https://docs.cloudron.io/troubleshooting/#recovery-after-disk-full"
echo "To recover from a full disk, follow the guide at https://cloudron.io/documentation/server/#recovery-after-disk-full"
exit 1
fi
@@ -74,11 +58,23 @@ echo -n "Generating Cloudron Support stats..."
# clear file
rm -rf $OUT
echo -e $LINE"DASHBOARD DOMAIN"$LINE >> $OUT
mysql -NB -uroot -ppassword -e "SELECT value FROM box.settings WHERE name='admin_fqdn'" &>> $OUT 2>/dev/null || true
ssh_port=$(cat /etc/ssh/sshd_config | grep "Port " | sed -e "s/.*Port //")
if [[ $SUDO_USER == "" ]]; then
ssh_user="root"
ssh_folder="/root/.ssh/"
authorized_key_file="${ssh_folder}/authorized_keys"
else
ssh_user="$SUDO_USER"
ssh_folder="/home/$SUDO_USER/.ssh/"
authorized_key_file="${ssh_folder}/authorized_keys"
fi
echo -e $LINE"PROVIDER"$LINE >> $OUT
cat /etc/cloudron/PROVIDER &>> $OUT || true
echo -e $LINE"SSH"$LINE >> $OUT
echo "Username: ${ssh_user}" >> $OUT
echo "Port: ${ssh_port}" >> $OUT
echo -e $LINE"cloudron.conf"$LINE >> $OUT
cat /etc/cloudron/cloudron.conf &>> $OUT
echo -e $LINE"Docker container"$LINE >> $OUT
if ! timeout --kill-after 10s 15s docker ps -a &>> $OUT 2>&1; then
@@ -89,64 +85,39 @@ echo -e $LINE"Filesystem stats"$LINE >> $OUT
df -h &>> $OUT
echo -e $LINE"Appsdata stats"$LINE >> $OUT
du -hcsL /home/yellowtent/appsdata/* &>> $OUT || true
du -hcsL /home/yellowtent/appsdata/* &>> $OUT
echo -e $LINE"Boxdata stats"$LINE >> $OUT
du -hcsL /home/yellowtent/boxdata/* &>> $OUT
echo -e $LINE"Backup stats (possibly misleading)"$LINE >> $OUT
du -hcsL /var/backups/* &>> $OUT || true
du -hcsL /var/backups/* &>> $OUT
echo -e $LINE"System daemon status"$LINE >> $OUT
systemctl status --lines=100 box mysql unbound cloudron-syslog nginx collectd docker &>> $OUT
systemctl status --lines=100 cloudron.target box mysql unbound cloudron-syslog nginx collectd docker &>> $OUT
echo -e $LINE"Box logs"$LINE >> $OUT
tail -n 100 /home/yellowtent/platformdata/logs/box.log &>> $OUT
echo -e $LINE"Interface Info"$LINE >> $OUT
ip addr &>> $OUT
echo -e $LINE"Firewall chains"$LINE >> $OUT
iptables -L &>> $OUT
echo "Done"
if [[ "${enableSSH}" == "true" ]]; then
ssh_port=$(cat /etc/ssh/sshd_config | grep "Port " | sed -e "s/.*Port //")
permit_root_login=$(grep -q ^PermitRootLogin.*yes /etc/ssh/sshd_config && echo "yes" || echo "no")
# support.js uses similar logic
if [[ -d /home/ubuntu ]]; then
ssh_user="ubuntu"
keys_file="/home/ubuntu/.ssh/authorized_keys"
else
ssh_user="root"
keys_file="/root/.ssh/authorized_keys"
fi
echo -e $LINE"SSH"$LINE >> $OUT
echo "Username: ${ssh_user}" >> $OUT
echo "Port: ${ssh_port}" >> $OUT
echo "PermitRootLogin: ${permit_root_login}" >> $OUT
echo "Key file: ${keys_file}" >> $OUT
echo -n "Enabling ssh access for the Cloudron support team..."
mkdir -p $(dirname "${keys_file}") # .ssh does not exist sometimes
touch "${keys_file}" # required for concat to work
if ! grep -q "${CLOUDRON_SUPPORT_PUBLIC_KEY}" "${keys_file}"; then
echo -e "\n${CLOUDRON_SUPPORT_PUBLIC_KEY}" >> "${keys_file}"
chmod 600 "${keys_file}"
chown "${ssh_user}" "${keys_file}"
fi
echo "Done"
fi
echo -n "Uploading information..."
# for some reason, not using $(cat $OUT) results in output without newlines!?
paste_key=$(curl -X POST ${PASTEBIN}/documents --silent -d "$(cat $OUT)" | python3 -c "import sys, json; print(json.load(sys.stdin)['key'])")
echo "Done"
if [[ "${enableSSH}" == "true" ]]; then
echo -n "Enabling ssh access for the Cloudron support team..."
mkdir -p "${ssh_folder}"
echo -e "\n${CLOUDRON_SUPPORT_PUBLIC_KEY}" >> ${authorized_key_file}
chown -R ${ssh_user} "${ssh_folder}"
chmod 600 "${authorized_key_file}"
echo "Done"
fi
echo ""
echo "Please email the following link to support@cloudron.io"
echo ""
-31
View File
@@ -1,31 +0,0 @@
#!/bin/bash
set -eu -o pipefail
# This script downloads new translation data from weblate at https://translate.cloudron.io
OUT="/home/yellowtent/box/dashboard/dist/translation"
# We require root
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root. Run with sudo"
exit 1
fi
echo "=> Downloading new translation files..."
curl https://translate.cloudron.io/download/cloudron/dashboard/?format=zip -o /tmp/lang.zip
echo "=> Unpacking..."
unzip -jo /tmp/lang.zip -d $OUT
chown -R yellowtent:yellowtent $OUT
# unzip puts very restrictive permissions
chmod ua+r $OUT/*
echo "=> Cleanup..."
rm /tmp/lang.zip
echo "=> Done"
echo ""
echo "Reload the dashboard to see the new translations"
echo ""
+2 -2
View File
@@ -41,8 +41,8 @@ if ! $(cd "${SOURCE_DIR}/../dashboard" && git diff --exit-code >/dev/null); then
exit 1
fi
if [[ "$(node --version)" != "v14.15.4" ]]; then
echo "This script requires node 14.15.4"
if [[ "$(node --version)" != "v10.15.1" ]]; then
echo "This script requires node 10.15.1"
exit 1
fi
+44 -70
View File
@@ -11,12 +11,9 @@ if [[ ${EUID} -ne 0 ]]; then
exit 1
fi
function log() {
echo -e "$(date +'%Y-%m-%dT%H:%M:%S')" "==> installer: $1"
}
readonly user=yellowtent
readonly box_src_dir=/home/${user}/box
readonly USER=yellowtent
readonly BOX_SRC_DIR=/home/${USER}/box
readonly BASE_DATA_DIR=/home/${USER}
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400"
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -25,70 +22,47 @@ readonly box_src_tmp_dir="$(realpath ${script_dir}/..)"
readonly ubuntu_version=$(lsb_release -rs)
readonly ubuntu_codename=$(lsb_release -cs)
readonly is_update=$(systemctl is-active -q box && echo "yes" || echo "no")
readonly is_update=$(systemctl is-active box && echo "yes" || echo "no")
log "Updating from $(cat $box_src_dir/VERSION) to $(cat $box_src_tmp_dir/VERSION)"
echo "==> installer: updating docker"
log "updating docker"
readonly docker_version=20.10.3
if [[ $(docker version --format {{.Client.Version}}) != "${docker_version}" ]]; then
if [[ $(docker version --format {{.Client.Version}}) != "18.09.2" ]]; then
# there are 3 packages for docker - containerd, CLI and the daemon
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.4.3-1_amd64.deb" -o /tmp/containerd.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.2.2-3_amd64.deb" -o /tmp/containerd.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
log "Waiting for all dpkg tasks to finish..."
echo "==> installer: Waiting for all dpkg tasks to finish..."
while fuser /var/lib/dpkg/lock; do
sleep 1
done
while ! dpkg --force-confold --configure -a; do
log "Failed to fix packages. Retry"
echo "==> installer: Failed to fix packages. Retry"
sleep 1
done
# the latest docker might need newer packages
while ! apt update -y; do
log "Failed to update packages. Retry"
echo "==> installer: Failed to update packages. Retry"
sleep 1
done
while ! apt install -y /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb; do
log "Failed to install docker. Retry"
echo "==> installer: Failed to install docker. Retry"
sleep 1
done
rm /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
fi
readonly nginx_version=$(nginx -v 2>&1)
if [[ "${nginx_version}" != *"1.18."* ]]; then
log "installing nginx 1.18"
$curl -sL http://nginx.org/packages/ubuntu/pool/nginx/n/nginx/nginx_1.18.0-2~${ubuntu_codename}_amd64.deb -o /tmp/nginx.deb
# apt install with install deps (as opposed to dpkg -i)
apt install -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --force-yes /tmp/nginx.deb
rm /tmp/nginx.deb
fi
if ! which mount.nfs; then
log "installing nfs-common"
apt install -y nfs-common
fi
if ! which sshfs; then
log "installing sshfs"
apt install -y sshfs
fi
log "updating node"
readonly node_version=14.15.4
if [[ "$(node --version)" != "v${node_version}" ]]; then
mkdir -p /usr/local/node-${node_version}
$curl -sL https://nodejs.org/dist/v${node_version}/node-v${node_version}-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-${node_version}
ln -sf /usr/local/node-${node_version}/bin/node /usr/bin/node
ln -sf /usr/local/node-${node_version}/bin/npm /usr/bin/npm
rm -rf /usr/local/node-10.18.1
echo "==> installer: updating node"
if [[ "$(node --version)" != "v10.15.1" ]]; then
mkdir -p /usr/local/node-10.15.1
$curl -sL https://nodejs.org/dist/v10.15.1/node-v10.15.1-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-10.15.1
ln -sf /usr/local/node-10.15.1/bin/node /usr/bin/node
ln -sf /usr/local/node-10.15.1/bin/npm /usr/bin/npm
rm -rf /usr/local/node-8.11.2 /usr/local/node-8.9.3
fi
# this is here (and not in updater.js) because rebuild requires the above node
@@ -99,31 +73,31 @@ for try in `seq 1 10`; do
# however by default npm drops privileges for npm rebuild
# https://docs.npmjs.com/misc/config#unsafe-perm
if cd "${box_src_tmp_dir}" && npm rebuild --unsafe-perm; then break; fi
log "Failed to rebuild, trying again"
echo "==> installer: Failed to rebuild, trying again"
sleep 5
done
if [[ ${try} -eq 10 ]]; then
log "npm rebuild failed, giving up"
echo "==> installer: npm rebuild failed, giving up"
exit 4
fi
log "downloading new addon images"
echo "==> installer: downloading new addon images"
images=$(node -e "var i = require('${box_src_tmp_dir}/src/infra_version.js'); console.log(i.baseImages.map(function (x) { return x.tag; }).join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")
log "\tPulling docker images: ${images}"
echo -e "\tPulling docker images: ${images}"
for image in ${images}; do
while ! docker pull "${image}"; do # this pulls the image using the sha256
log "Could not pull ${image}"
sleep 5
done
while ! docker pull "${image%@sha256:*}"; do # this will tag the image for readability
log "Could not pull ${image%@sha256:*}"
sleep 5
done
if ! docker pull "${image}"; then # this pulls the image using the sha256
echo "==> installer: Could not pull ${image}"
exit 5
fi
if ! docker pull "${image%@sha256:*}"; then # this will tag the image for readability
echo "==> installer: Could not pull ${image%@sha256:*}"
exit 6
fi
done
log "update cloudron-syslog"
echo "==> installer: update cloudron-syslog"
CLOUDRON_SYSLOG_DIR=/usr/local/cloudron-syslog
CLOUDRON_SYSLOG="${CLOUDRON_SYSLOG_DIR}/bin/cloudron-syslog"
CLOUDRON_SYSLOG_VERSION="1.0.3"
@@ -131,26 +105,26 @@ while [[ ! -f "${CLOUDRON_SYSLOG}" || "$(${CLOUDRON_SYSLOG} --version)" != ${CLO
rm -rf "${CLOUDRON_SYSLOG_DIR}"
mkdir -p "${CLOUDRON_SYSLOG_DIR}"
if npm install --unsafe-perm -g --prefix "${CLOUDRON_SYSLOG_DIR}" cloudron-syslog@${CLOUDRON_SYSLOG_VERSION}; then break; fi
log "Failed to install cloudron-syslog, trying again"
echo "===> installer: Failed to install cloudron-syslog, trying again"
sleep 5
done
if ! id "${user}" 2>/dev/null; then
useradd "${user}" -m
if ! id "${USER}" 2>/dev/null; then
useradd "${USER}" -m
fi
if [[ "${is_update}" == "yes" ]]; then
log "stop box service for update"
${box_src_dir}/setup/stop.sh
echo "==> installer: stop cloudron.target service for update"
${BOX_SRC_DIR}/setup/stop.sh
fi
# ensure we are not inside the source directory, which we will remove now
cd /root
log "switching the box code"
rm -rf "${box_src_dir}"
mv "${box_src_tmp_dir}" "${box_src_dir}"
chown -R "${user}:${user}" "${box_src_dir}"
echo "==> installer: switching the box code"
rm -rf "${BOX_SRC_DIR}"
mv "${box_src_tmp_dir}" "${BOX_SRC_DIR}"
chown -R "${USER}:${USER}" "${BOX_SRC_DIR}"
log "calling box setup script"
"${box_src_dir}/setup/start.sh"
echo "==> installer: calling box setup script"
"${BOX_SRC_DIR}/setup/start.sh"
+59 -96
View File
@@ -5,54 +5,41 @@ set -eu -o pipefail
# This script is run after the box code is switched. This means that this script
# should pretty much always succeed. No network logic/download code here.
function log() {
echo -e "$(date +'%Y-%m-%dT%H:%M:%S')" "==> start: $1"
}
log "Cloudron Start"
echo "==> Cloudron Start"
readonly USER="yellowtent"
readonly HOME_DIR="/home/${USER}"
readonly BOX_SRC_DIR="${HOME_DIR}/box"
readonly PLATFORM_DATA_DIR="${HOME_DIR}/platformdata"
readonly APPS_DATA_DIR="${HOME_DIR}/appsdata"
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata"
readonly MAIL_DATA_DIR="${HOME_DIR}/boxdata/mail"
readonly PLATFORM_DATA_DIR="${HOME_DIR}/platformdata" # platform data
readonly APPS_DATA_DIR="${HOME_DIR}/appsdata" # app data
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata" # box data
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly json="$(realpath ${script_dir}/../node_modules/.bin/json)"
readonly ubuntu_version=$(lsb_release -rs)
cp -f "${script_dir}/../scripts/cloudron-support" /usr/bin/cloudron-support
cp -f "${script_dir}/../scripts/cloudron-translation-update" /usr/bin/cloudron-translation-update
# this needs to match the cloudron/base:2.0.0 gid
if ! getent group media; then
addgroup --gid 500 --system media
fi
log "Configuring docker"
echo "==> Configuring docker"
cp "${script_dir}/start/docker-cloudron-app.apparmor" /etc/apparmor.d/docker-cloudron-app
systemctl enable apparmor
systemctl restart apparmor
usermod ${USER} -a -G docker
# unbound (which starts after box code) relies on this interface to exist. dockerproxy also relies on this.
docker network create --subnet=172.18.0.0/16 --ip-range=172.18.0.0/20 cloudron || true
docker network create --subnet=172.18.0.0/16 cloudron || true
mkdir -p "${BOX_DATA_DIR}"
mkdir -p "${APPS_DATA_DIR}"
mkdir -p "${MAIL_DATA_DIR}/dkim"
# keep these in sync with paths.js
log "Ensuring directories"
echo "==> Ensuring directories"
mkdir -p "${PLATFORM_DATA_DIR}/graphite"
mkdir -p "${PLATFORM_DATA_DIR}/mysql"
mkdir -p "${PLATFORM_DATA_DIR}/postgresql"
mkdir -p "${PLATFORM_DATA_DIR}/mongodb"
mkdir -p "${PLATFORM_DATA_DIR}/redis"
mkdir -p "${PLATFORM_DATA_DIR}/addons/mail/banner"
mkdir -p "${PLATFORM_DATA_DIR}/addons/mail"
mkdir -p "${PLATFORM_DATA_DIR}/collectd/collectd.conf.d"
mkdir -p "${PLATFORM_DATA_DIR}/logrotate.d"
mkdir -p "${PLATFORM_DATA_DIR}/acme"
@@ -60,21 +47,19 @@ mkdir -p "${PLATFORM_DATA_DIR}/backup"
mkdir -p "${PLATFORM_DATA_DIR}/logs/backup" \
"${PLATFORM_DATA_DIR}/logs/updater" \
"${PLATFORM_DATA_DIR}/logs/tasks" \
"${PLATFORM_DATA_DIR}/logs/crash" \
"${PLATFORM_DATA_DIR}/logs/collectd"
"${PLATFORM_DATA_DIR}/logs/crash"
mkdir -p "${PLATFORM_DATA_DIR}/update"
mkdir -p "${PLATFORM_DATA_DIR}/sftp/ssh" # sftp keys
mkdir -p "${PLATFORM_DATA_DIR}/firewall"
mkdir -p "${BOX_DATA_DIR}/appicons"
mkdir -p "${BOX_DATA_DIR}/certs"
mkdir -p "${BOX_DATA_DIR}/acme" # acme keys
mkdir -p "${BOX_DATA_DIR}/mail/dkim"
# ensure backups folder exists and is writeable
mkdir -p /var/backups
chmod 777 /var/backups
# can be removed after 6.3
[[ -f "${BOX_DATA_DIR}/updatechecker.json" ]] && mv "${BOX_DATA_DIR}/updatechecker.json" "${PLATFORM_DATA_DIR}/update/updatechecker.json"
rm -rf "${BOX_DATA_DIR}/well-known"
log "Configuring journald"
echo "==> Configuring journald"
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
-i /etc/systemd/journald.conf
@@ -92,29 +77,24 @@ systemctl daemon-reload
systemctl restart systemd-journald
setfacl -n -m u:${USER}:r /var/log/journal/*/system.journal
# Give user access to nginx logs (uses adm group)
usermod -a -G adm ${USER}
log "Setting up unbound"
echo "==> Setting up unbound"
# DO uses Google nameservers by default. This causes RBL queries to fail (host 2.0.0.127.zen.spamhaus.org)
# We do not use dnsmasq because it is not a recursive resolver and defaults to the value in the interfaces file (which is Google DNS!)
# We listen on 0.0.0.0 because there is no way to control the ordering of docker (which creates the 172.18.0.0/16) and unbound
# If IP6 is not enabled, dns queries seem to fail on some hosts. -s returns false if file missing or 0 size
ip6=$([[ -s /proc/net/if_inet6 ]] && echo "yes" || echo "no")
cp -f "${script_dir}/start/unbound.conf" /etc/unbound/unbound.conf.d/cloudron-network.conf
echo -e "server:\n\tinterface: 0.0.0.0\n\tdo-ip6: ${ip6}\n\taccess-control: 127.0.0.1 allow\n\taccess-control: 172.18.0.1/16 allow\n\tcache-max-negative-ttl: 30\n\tcache-max-ttl: 300\n\t#logfile: /var/log/unbound.log\n\t#verbosity: 10" > /etc/unbound/unbound.conf.d/cloudron-network.conf
# update the root anchor after an out-of-disk-space situation (see #269)
unbound-anchor -a /var/lib/unbound/root.key
log "Adding systemd services"
echo "==> Adding systemd services"
cp -r "${script_dir}/start/systemd/." /etc/systemd/system/
[[ "${ubuntu_version}" == "16.04" ]] && sed -e 's/MemoryMax/MemoryLimit/g' -i /etc/systemd/system/box.service
[[ "${ubuntu_version}" == "16.04" ]] && sed -e 's/Type=notify/Type=simple/g' -i /etc/systemd/system/unbound.service
systemctl daemon-reload
systemctl enable --now cloudron-syslog
systemctl enable unbound
systemctl enable box
systemctl enable cloudron-syslog
systemctl enable cloudron.target
systemctl enable cloudron-firewall
systemctl enable --now cloudron-disable-thp
# update firewall rules
systemctl restart cloudron-firewall
@@ -128,36 +108,35 @@ systemctl restart unbound
# ensure cloudron-syslog runs
systemctl restart cloudron-syslog
log "Configuring sudoers"
$json -f /etc/cloudron/cloudron.conf -I -e "delete this.edition" # can be removed after 4.0
echo "==> Clearing custom.yml"
rm -f /etc/cloudron/custom.yml
echo "==> Configuring sudoers"
rm -f /etc/sudoers.d/${USER}
cp "${script_dir}/start/sudoers" /etc/sudoers.d/${USER}
log "Configuring collectd"
rm -rf /etc/collectd /var/log/collectd.log
echo "==> Configuring collectd"
rm -rf /etc/collectd
ln -sfF "${PLATFORM_DATA_DIR}/collectd" /etc/collectd
cp "${script_dir}/start/collectd/collectd.conf" "${PLATFORM_DATA_DIR}/collectd/collectd.conf"
if [[ "${ubuntu_version}" == "20.04" ]]; then
# https://bugs.launchpad.net/ubuntu/+source/collectd/+bug/1872281
if ! grep -q LD_PRELOAD /etc/default/collectd; then
echo -e "\nLD_PRELOAD=/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8.so" >> /etc/default/collectd
fi
fi
systemctl restart collectd
log "Configuring logrotate"
echo "==> Configuring logrotate"
if ! grep -q "^include ${PLATFORM_DATA_DIR}/logrotate.d" /etc/logrotate.conf; then
echo -e "\ninclude ${PLATFORM_DATA_DIR}/logrotate.d\n" >> /etc/logrotate.conf
fi
rm -f "${PLATFORM_DATA_DIR}/logrotate.d/"*
cp "${script_dir}/start/logrotate/"* "${PLATFORM_DATA_DIR}/logrotate.d/"
rm -f "${PLATFORM_DATA_DIR}/logrotate.d/box-logrotate" "${PLATFORM_DATA_DIR}/logrotate.d/app-logrotate" # remove pre 3.6 config files
# logrotate files have to be owned by root, this is here to fixup existing installations where we were resetting the owner to yellowtent
chown root:root "${PLATFORM_DATA_DIR}/logrotate.d/"
log "Adding motd message for admins"
echo "==> Adding motd message for admins"
cp "${script_dir}/start/cloudron-motd" /etc/update-motd.d/92-cloudron
log "Configuring nginx"
echo "==> Configuring nginx"
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${PLATFORM_DATA_DIR}/nginx" /etc/nginx
@@ -168,15 +147,8 @@ cp "${script_dir}/start/nginx/mime.types" "${PLATFORM_DATA_DIR}/nginx/mime.types
if ! grep -q "^Restart=" /etc/systemd/system/multi-user.target.wants/nginx.service; then
# default nginx service file does not restart on crash
echo -e "\n[Service]\nRestart=always\n" >> /etc/systemd/system/multi-user.target.wants/nginx.service
systemctl daemon-reload
fi
# worker_rlimit_nofile in nginx config can be at most this number
mkdir -p /etc/systemd/system/nginx.service.d
if ! grep -q "^LimitNOFILE=" /etc/systemd/system/nginx.service.d/cloudron.conf; then
echo -e "[Service]\nLimitNOFILE=16384\n" > /etc/systemd/system/nginx.service.d/cloudron.conf
fi
systemctl daemon-reload
systemctl start nginx
# restart mysql to make sure it has latest config
@@ -185,63 +157,54 @@ if [[ ! -f /etc/mysql/mysql.cnf ]] || ! diff -q "${script_dir}/start/mysql.cnf"
cp "${script_dir}/start/mysql.cnf" /etc/mysql/mysql.cnf
while true; do
if ! systemctl list-jobs | grep mysql; then break; fi
log "Waiting for mysql jobs..."
echo "Waiting for mysql jobs..."
sleep 1
done
log "Stopping mysql"
systemctl stop mysql
while mysqladmin ping 2>/dev/null; do
log "Waiting for mysql to stop..."
while true; do
if systemctl restart mysql; then break; fi
echo "Restarting MySql again after sometime since this fails randomly"
sleep 1
done
else
systemctl start mysql
fi
# the start/stop of mysql is separate to make sure it got reloaded with latest config and it's up and running before we start the new box code
# when using 'systemctl restart mysql', it seems to restart much later and the box code loses connection during platform startup (dangerous!)
log "Starting mysql"
systemctl start mysql
while ! mysqladmin ping 2>/dev/null; do
log "Waiting for mysql to start..."
sleep 1
done
readonly mysql_root_password="password"
mysqladmin -u root -ppassword password password # reset default root password
if [[ "${ubuntu_version}" == "20.04" ]]; then
# mysql 8 added a new caching_sha2_password scheme which mysqljs does not support
mysql -u root -p${mysql_root_password} -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${mysql_root_password}';"
fi
mysql -u root -p${mysql_root_password} -e 'CREATE DATABASE IF NOT EXISTS box'
# set HOME explicitly, because it's not set when the installer calls it. this is done because
# paths.js uses this env var and some of the migrate code requires box code
log "Migrating data"
cd "${BOX_SRC_DIR}"
if ! HOME=${HOME_DIR} BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@127.0.0.1/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up; then
log "DB migration failed"
exit 1
echo "==> Migrating data"
(cd "${BOX_SRC_DIR}" && BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@127.0.0.1/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up)
if [[ ! -f "${BOX_DATA_DIR}/dhparams.pem" ]]; then
echo "==> Generating dhparams (takes forever)"
openssl dhparam -out "${BOX_DATA_DIR}/dhparams.pem" 2048
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
else
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
fi
rm -f /etc/cloudron/cloudron.conf
# old installations used to create appdata/<app>/redis which is now part of old backups and prevents restore
echo "==> Cleaning up stale redis directories"
find "${APPS_DATA_DIR}" -maxdepth 2 -type d -name redis -exec rm -rf {} +
log "Changing ownership"
# note, change ownership after db migrate. this allows db migrate to move files around as root and then we can fix it up here
echo "==> Changing ownership"
# be careful of what is chown'ed here. subdirs like mysql,redis etc are owned by the containers and will stop working if perms change
chown -R "${USER}" /etc/cloudron
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme" "${PLATFORM_DATA_DIR}/backup" "${PLATFORM_DATA_DIR}/logs" "${PLATFORM_DATA_DIR}/update" "${PLATFORM_DATA_DIR}/sftp" "${PLATFORM_DATA_DIR}/firewall"
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme" "${PLATFORM_DATA_DIR}/backup" "${PLATFORM_DATA_DIR}/logs" "${PLATFORM_DATA_DIR}/update"
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}/INFRA_VERSION" 2>/dev/null || true
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}"
chown "${USER}:${USER}" "${APPS_DATA_DIR}"
# do not chown the boxdata/mail directory; dovecot gets upset
chown "${USER}:${USER}" "${BOX_DATA_DIR}"
find "${BOX_DATA_DIR}" -mindepth 1 -maxdepth 1 -not -path "${MAIL_DATA_DIR}" -exec chown -R "${USER}:${USER}" {} \;
chown "${USER}:${USER}" "${MAIL_DATA_DIR}"
chown "${USER}:${USER}" -R "${MAIL_DATA_DIR}/dkim" # this is owned by box currently since it generates the keys
find "${BOX_DATA_DIR}" -mindepth 1 -maxdepth 1 -not -path "${BOX_DATA_DIR}/mail" -exec chown -R "${USER}:${USER}" {} \;
chown "${USER}:${USER}" "${BOX_DATA_DIR}/mail"
chown "${USER}:${USER}" -R "${BOX_DATA_DIR}/mail/dkim" # this is owned by box currently since it generates the keys
log "Starting Cloudron"
systemctl start box
echo "==> Starting Cloudron"
systemctl start cloudron.target
sleep 2 # give systemd sometime to start the processes
log "Almost done"
echo "==> Almost done"
-14
View File
@@ -1,14 +0,0 @@
#!/bin/bash
set -eu
echo "==> Disabling THP"
# https://docs.couchbase.com/server/current/install/thp-disable.html
if [[ -d /sys/kernel/mm/transparent_hugepage ]]; then
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
else
echo "==> kernel does not have THP"
fi
+11 -38
View File
@@ -6,40 +6,11 @@ echo "==> Setting up firewall"
iptables -t filter -N CLOUDRON || true
iptables -t filter -F CLOUDRON # empty any existing rules
# first setup any user IP block lists
ipset create cloudron_blocklist hash:net || true
/home/yellowtent/box/src/scripts/setblocklist.sh
iptables -t filter -A CLOUDRON -m set --match-set cloudron_blocklist src -j DROP
# the DOCKER-USER chain is not cleared on docker restart
if ! iptables -t filter -C DOCKER-USER -m set --match-set cloudron_blocklist src -j DROP; then
iptables -t filter -I DOCKER-USER 1 -m set --match-set cloudron_blocklist src -j DROP
fi
# allow related and established connections
iptables -t filter -A CLOUDRON -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t filter -A CLOUDRON -p tcp -m tcp -m multiport --dports 22,80,202,443 -j ACCEPT # 202 is the alternate ssh port
# whitelist any user ports. we used to use --dports but it has a 15 port limit (XT_MULTI_PORTS)
ports_json="/home/yellowtent/platformdata/firewall/ports.json"
if allowed_tcp_ports=$(node -e "console.log(JSON.parse(fs.readFileSync('${ports_json}', 'utf8')).allowed_tcp_ports.join(','))" 2>/dev/null); then
IFS=',' arr=(${allowed_tcp_ports})
for p in "${arr[@]}"; do
iptables -A CLOUDRON -p tcp -m tcp --dport "${p}" -j ACCEPT
done
fi
if allowed_udp_ports=$(node -e "console.log(JSON.parse(fs.readFileSync('${ports_json}', 'utf8')).allowed_udp_ports.join(','))" 2>/dev/null); then
IFS=',' arr=(${allowed_udp_ports})
for p in "${arr[@]}"; do
iptables -A CLOUDRON -p udp -m udp --dport "${p}" -j ACCEPT
done
fi
# turn and stun service
iptables -t filter -A CLOUDRON -p tcp -m multiport --dports 3478,5349 -j ACCEPT
iptables -t filter -A CLOUDRON -p udp -m multiport --dports 3478,5349 -j ACCEPT
iptables -t filter -A CLOUDRON -p udp -m multiport --dports 50000:51000 -j ACCEPT
# NOTE: keep these in sync with src/apps.js validatePortBindings
# allow ssh, http, https, ping, dns
iptables -t filter -I CLOUDRON -m state --state RELATED,ESTABLISHED -j ACCEPT
# ssh is also allowed on the alternate port 202
iptables -A CLOUDRON -p tcp -m tcp -m multiport --dports 22,25,80,202,443,587,993,4190 -j ACCEPT
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-request -j ACCEPT
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-reply -j ACCEPT
@@ -70,12 +41,14 @@ for port in 80 443; do
iptables -A CLOUDRON_RATELIMIT -p tcp --syn --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done
# ssh
# ssh smtp ssh msa imap sieve
for port in 22 202; do
iptables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --set --name "public-${port}"
iptables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --update --name "public-${port}" --seconds 10 --hitcount 5 -j CLOUDRON_RATELIMIT_LOG
done
# TODO: move docker platform rules to platform.js so it can be specialized to rate limit only when destination is the mail container
# docker translates (dnat) 25, 587, 993, 4190 in the PREROUTING step
for port in 2525 4190 9993; do
iptables -A CLOUDRON_RATELIMIT -p tcp --syn ! -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 50 -j CLOUDRON_RATELIMIT_LOG
@@ -91,12 +64,12 @@ for port in 3306 5432 6379 27017; do
iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done
# For ssh, http, https
if ! iptables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null; then
iptables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
fi
# Workaround issue where Docker insists on adding itself first in FORWARD table
# For smtp, imap etc routed via docker/nat
# Workaroud issue where Docker insists on adding itself first in FORWARD table
iptables -D FORWARD -j CLOUDRON_RATELIMIT || true
iptables -I FORWARD 1 -j CLOUDRON_RATELIMIT
echo "==> Setting up firewall done"
+4 -20
View File
@@ -1,30 +1,14 @@
#!/bin/bash
[[ -f /etc/update-motd.d/91-cloudron-install-in-progress ]] && exit
printf "**********************************************************************\n\n"
if [[ -z "$(ls -A /home/yellowtent/boxdata/mail/dkim)" ]]; then
if [[ -f /tmp/.cloudron-motd-cache ]]; then
ip=$(cat /tmp/.cloudron-motd-cache)
elif ! ip=$(curl --fail --connect-timeout 2 --max-time 2 -q https://api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p'); then
ip='<IP>'
fi
echo "${ip}" > /tmp/.cloudron-motd-cache
if [[ ! -f /etc/cloudron/SETUP_TOKEN ]]; then
url="https://${ip}"
else
setupToken="$(cat /etc/cloudron/SETUP_TOKEN)"
url="https://${ip}/?setupToken=${setupToken}"
fi
printf "\t\t\tWELCOME TO CLOUDRON\n"
printf "\t\t\t-------------------\n"
printf '\n\e[1;32m%-6s\e[m\n\n' "Visit ${url} on your browser and accept the self-signed certificate to finish setup."
printf "Cloudron overview - https://docs.cloudron.io/ \n"
printf "Cloudron setup - https://docs.cloudron.io/installation/#setup \n"
printf '\n\e[1;32m%-6s\e[m\n\n' "Visit https://<IP> on your browser and accept the self-signed certificate to finish setup."
printf "Cloudron overview - https://cloudron.io/documentation/ \n"
printf "Cloudron setup - https://cloudron.io/documentation/installation/#setup \n"
else
printf "\t\t\tNOTE TO CLOUDRON ADMINS\n"
printf "\t\t\t-----------------------\n"
@@ -32,7 +16,7 @@ else
printf "Cloudron relies on and may break your installation. Ubuntu security updates\n"
printf "are automatically installed on this server every night.\n"
printf "\n"
printf "Read more at https://docs.cloudron.io/security/#os-updates\n"
printf "Read more at https://cloudron.io/documentation/security/#os-updates\n"
fi
printf "\nFor help and more information, visit https://forum.cloudron.io\n\n"
-5
View File
@@ -4,11 +4,6 @@ set -eu -o pipefail
readonly APPS_SWAP_FILE="/apps.swap"
if [[ -f "${APPS_SWAP_FILE}" ]]; then
echo "Swap file already exists at /apps.swap . Skipping"
exit
fi
# all sizes are in mb
readonly physical_memory=$(LC_ALL=C free -m | awk '/Mem:/ { print $2 }')
readonly swap_size=$((${physical_memory} > 4096 ? 4096 : ${physical_memory})) # min(RAM, 4GB) if you change this, fix enoughResourcesAvailable() in client.js
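The remainder of this script is not part of the diff, so purely as a hedged sketch, this is how a swap file of the computed size is typically allocated and enabled:
# Assumed continuation only; the actual script may differ.
fallocate -l "${swap_size}M" "${APPS_SWAP_FILE}"   # sizes above are in MB
chmod 600 "${APPS_SWAP_FILE}"
mkswap "${APPS_SWAP_FILE}"
swapon "${APPS_SWAP_FILE}"
echo "${APPS_SWAP_FILE} none swap sw 0 0" >> /etc/fstab   # persist across reboots (assumption)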
+37 -23
View File
@@ -57,7 +57,7 @@ LoadPlugin logfile
<Plugin logfile>
LogLevel "info"
File "/home/yellowtent/platformdata/logs/collectd/collectd.log"
File "/var/log/collectd.log"
Timestamp true
PrintSeverity false
</Plugin>
@@ -121,7 +121,7 @@ LoadPlugin memory
#LoadPlugin netlink
#LoadPlugin network
#LoadPlugin nfs
#LoadPlugin nginx
LoadPlugin nginx
#LoadPlugin notify_desktop
#LoadPlugin notify_email
#LoadPlugin ntpd
@@ -149,7 +149,7 @@ LoadPlugin memory
#LoadPlugin statsd
LoadPlugin swap
#LoadPlugin table
#LoadPlugin tail
LoadPlugin tail
#LoadPlugin tail_csv
#LoadPlugin tcpconns
#LoadPlugin teamspeak2
@@ -164,9 +164,7 @@ LoadPlugin swap
#LoadPlugin vmem
#LoadPlugin vserver
#LoadPlugin wireless
<LoadPlugin write_graphite>
FlushInterval 20
</LoadPlugin>
LoadPlugin write_graphite
#LoadPlugin write_http
#LoadPlugin write_riemann
@@ -199,11 +197,42 @@ LoadPlugin swap
IgnoreSelected false
</Plugin>
<Plugin nginx>
URL "http://127.0.0.1/nginx_status"
</Plugin>
<Plugin swap>
ReportByDevice false
ReportBytes true
</Plugin>
<Plugin "tail">
<File "/var/log/nginx/error.log">
Instance "nginx"
<Match>
Regex ".*"
DSType "CounterInc"
Type counter
Instance "errors"
</Match>
</File>
<File "/var/log/nginx/access.log">
Instance "nginx"
<Match>
Regex ".*"
DSType "CounterInc"
Type counter
Instance "requests"
</Match>
<Match>
Regex " \".*\" [0-9]+ [0-9]+ ([0-9]+)"
DSType GaugeAverage
Type delay
Instance "response"
</Match>
</File>
</Plugin>
<Plugin python>
# https://blog.dbrgn.ch/2017/3/10/write-a-collectd-python-plugin/
ModulePath "/home/yellowtent/box/setup/start/collectd/"
@@ -211,23 +240,8 @@ LoadPlugin swap
Interactive false
Import "df"
Import "du"
<Module du>
<Path>
Instance maildata
Dir "/home/yellowtent/boxdata/mail"
</Path>
<Path>
Instance boxdata
Dir "/home/yellowtent/boxdata"
Exclude "mail"
</Path>
<Path>
Instance platformdata
Dir "/home/yellowtent/platformdata"
</Path>
</Module>
# <Module df>
# </Module>
</Plugin>
<Plugin write_graphite>
Some files were not shown because too many files have changed in this diff