Compare commits
4 Commits
| Author | SHA1 | Date |
|---|---|---|
| | f4407f3a43 | |
| | 56a82ef808 | |
| | ecce897b5a | |
| | c5c8b1e299 | |
+1
-1
@@ -5,7 +5,7 @@
},
"extends": "eslint:recommended",
"parserOptions": {
"ecmaVersion": 2020
"ecmaVersion": 8
},
"rules": {
"indent": [
@@ -1,6 +1,5 @@
node_modules/
coverage/
.nyc_output/
webadmin/dist/
installer/src/certs/server.key
@@ -2007,538 +2007,3 @@
* redis: Set maxmemory and maxmemory-policy
* Add mlock capability to manifest (for vault app)

[5.3.3]
* Fix issue where some postinstall messages were causing angular to infinite loop

[5.3.4]
* Fix issue in database error handling

[5.4.0]
* Update nginx to 1.18 for various security fixes
* Add ping capability (for statping app)
* Fix bug where aliases were displayed incorrectly in SOGo
* Add univention as LDAP provider
* Bump max_connection for postgres addon to 200
* mail: Add pagination to mailing list API
* Allow admin to lock email and display name of users
* Allow admin to ensure all users have 2FA setup
* ami: fix regression where we didn't send provider as part of get status call
* nginx: hide version
* backups: add b2 provider
* Add filemanager webinterface
* Add darkmode
* Add note that password reset and invite links expire in 24 hours

[5.4.1]
* Update nginx to 1.18 for various security fixes
* Add ping capability (for statping app)
* Fix bug where aliases were displayed incorrectly in SOGo
* Add univention as LDAP provider
* Bump max_connection for postgres addon to 200
* mail: Add pagination to mailing list API
* Allow admin to lock email and display name of users
* Allow admin to ensure all users have 2FA setup
* ami: fix regression where we didn't send provider as part of get status call
* nginx: hide version
* backups: add b2 provider
* Add filemanager webinterface
* Add darkmode
* Add note that password reset and invite links expire in 24 hours

[5.5.0]
* postgresql: update to PostgreSQL 11
* postgresql: add citext extension to whitelist for loomio
* postgresql: add btree_gist,postgres_fdw,pg_stat_statements,plpgsql extensions for gitlab
* SFTP/Filebrowser: fix access of external data directories
* Fix contrast issues in dark mode
* Add option to delete mailbox data when mailbox is deleted
* Allow days/hours of backups and updates to be configurable
* backup cleaner: fix issue where referenced backups were not counted against time periods
* route53: fix issue where verification failed if user had more than 100 zones
* rework task workers to run them in a separate cgroup
* backups: now much faster thanks to reworking of task worker
* When custom fallback cert is set, make sure it's used over LE certs
* mongodb: update to MongoDB 4.0.19
* List groups ordered by name
* Invite links are now valid for a week
* Update release GPG key
* Add pre-defined variables ($CLOUDRON_APPID) for better post install messages
* filemanager: show folder first

[5.6.0]
* Remove IP nginx configuration that redirects to dashboard after activation
* dashboard: looks for search string in app title as well
* Add vaapi caps for transcoding
* Fix issue where the long mongodb database names were causing app indices of rocket.chat to overflow (> 127)
* Do not resize swap if swap file exists. This means that users can now control how swap is allocated on their own.
* SFTP: fix issue where parallel rebuilds would cause an error
* backups: make part size configurable
* mail: set max email size
* mail: allow mail server location to be set
* spamassassin: custom configs and wl/bl
* Do not automatically update to unstable release
* scheduler: reduce container churn
* mail: add API to set banner
* Fix bug where systemd 237 ignores --nice value in systemd-run
* postgresql: enable uuid-ossp extension
* firewall: add blocklist
* HTTP URLs now redirect directly to the HTTPS of the final domain
* linode: Add singapore region
* ovh: add sydney region
* s3: makes multi-part copies in parallel

[5.6.1]
* Blocklists are now stored in a text file instead of json
* regenerate nginx configs

[5.6.2]
* Update docker to 19.03.12
* Fix sorting of user listing in the UI
* namecheap: fix crash when server returns invalid response
* unlink ghost file automatically on successful login
* Bump mysql addon connection limit to 200
* Fix install issue where `/dev/dri` may not be present
* import: when importing filesystem backups, the input box is a path
* firewall: fix race condition where blocklist was not added in correct position in the FORWARD chain
* services: fix issue where services were scaled up/down too fast
* turn: realm variable was not updated properly on dashboard change
* nginx: add splash pages for IP based browser access
* Give services panel a separate top-level view
* Add app state filter
* gcs: copy concurrency was not used
* Mention why an app update cannot be applied and provide shortcut to start the app if stopped
* Move version from footer into the settings view
* postgresql: set collation order explicitly when creating database to C.UTF-8 (for confluence)
* rsync: fix error when file goes missing while syncing
* Pre-select app domain by default in the redirection drop down
* robots: preserve leading and trailing whitespaces/newlines

[5.6.3]
* Fix postgres locale issue

[6.0.0]
* Focal support
* Reduce duration of self-signed certs to 800 days
* Better backup config filename when downloading
* branding: footer can have template variables like %YEAR% and %VERSION%
* sftp: secure the API with a token
* filemanager: Add extract context menu item
* Do not download docker images if present locally
* sftp: disable access to non-admins by default
* postgresql: whitelist pgcrypto extension for loomio
* filemanager: Add new file creation action and collapse new and upload actions
* rsync: add warning to remove lifecycle rules
* Add volume management
* backups: adjust node's heap size based on memory limit
* s3: disable per-chunk timeout
* logs: more descriptive log file names on download
* collectd: remove collectd config when app stopped (and add it back when started)
* Apps can optionally request an authwall to be installed in front of them
* mailbox can now be owned by a group
* linode: enable dns provider in setup view
* dns: apps can now use the dns port
* httpPaths: allow apps to specify forwarding from custom paths to container ports (for OLS)
* add elasticemail smtp relay option
* mail: add option for fts using solr
* mail: change the namespace separator of new installations to /
* mail: enable acl
* Disable THP
* filemanager: allow downloading dirs as zip files
* aws: add china region
* security: fix issue where apps could send with any username (but valid password)
* i18n support

[6.0.1]
* app: add export route
* mail: on location change, fix lock up when one or more domains have invalid credentials
* mail: fix crash because of write after timeout closure
* scaleway: fix installation issue where THP is not enabled in kernel

[6.1.0]
* mail: update haraka to 2.8.27. this fixes zero-length queue file crash
* update: set/unset appStoreId from the update route
* proxyauth: Do not follow redirects
* proxyauth: add 2FA
* appstore: add category translations
* appstore: add media category
* prepend the version to assets when sourcing to avoid cache hits on update
* filemanager: list volumes of the app
* Display upload size and size progress
* nfs: chown the backups for hardlinks to work
* remove user add/remove/role change email notifications
* persist update indicator across restarts
* cloudron-setup: add --generate-setup-token
* dashboard: pass accessToken query param to automatically login
* wellknown: add a way to set well known docs
* oom: notification mails have links to dashboard
* collectd: do not install xorg* packages
* apptask: backup/restore tasks now use the backup memory limit configuration
* eventlog: add logout event
* mailbox: include alias in mailbox search
* proxyAuth: add path exclusion
* turn: fix for CVE-2020-26262
* app password: fix regression where apps are not listed anymore in the UI
* Support for multiDomain apps (domain aliases)
* netcup: add dns provider
* Container swap size is now dynamically determined based on system RAM/swap ratio

[6.1.1]
* Fix bug where platform does not start if memory limits could not be applied

[6.1.2]
* App disk usage was not shown in graphs
* Email autoconfig
* Fix SOGo login

[6.2.0]
* ovh: object storage URL has changed from s3 to storage subdomain
* ionos: add profit bricks object storage
* update node to 14.15.4
* update docker to 20.10.3
* new base image 3.0.0
* postgresql updated to 12.5
* redis updated to 5.0.7
* dovecot updated to 2.3.7
* proxyAuth: fix docker UA detection
* registry config: add UI to disable it
* update solr to 8.8.1
* firewall: fix issue where script errored when having more than 15 wl/bl ports
* If groups are used, do not allow app installation without choosing the access settings
* tls addon
* Do not overwrite existing DMARC record
* Sync dns records
* Dry run restore
* linode: show cloudron is installing when user SSHs
* mysql: disable bin logs
* Show cancel task button if task is still running after 2 minutes
* filemanager: fix various bugs involving file names with spaces
* Change Referrer-policy default to 'same-origin'
* rsync: preserve and restore symlinks
* Clean up backups function now removes missing backups

[6.2.1]
* Avoid updown notifications on full restore
* Add retries to downloader logic in installer

[6.2.2]
* Fix ENOBUFS issue with backups when collecting fs metadata

[6.2.3]
* Fix addon crashes with missing databases
* Update mail container for LMTP cert fix
* Fix services view showing yellow icon

[6.2.4]
* Another addon crash fix

[6.2.5]
* update: set memory limit properly
* Fix bug where renew certs button did not work
* sftp: fix rebuild condition
* Fix display of user management/dashboard visibility for email apps
* graphite: disable tagdb and reduce log noise

[6.2.6]
* Fix issue where collectd is restarted too quickly before graphite

[6.2.7]
* redis: backup before upgrade

[6.2.8]
* linode object storage: update aws sdk to make it work again
* Fix crash in blocklist setting when source and list have mixed ip versions
* mysql: bump connection limit to 200
* namecheap: fix issue where DNS updates and deletes were not working
* turn: turn off verbose logging
* Fix crash when parsing df output (set LC_ALL for box service)

[6.3.0]
* mail: allow TLS from internal hosts
* tokens: add lastUsedTime
* update: set memory limit properly
* addons: better error handling
* filemanager: various enhancements
* sftp: fix rebuild condition
* app mailbox is now optional
* Fix display of user management/dashboard visibility for email apps
* graphite: disable tagdb and reduce log noise
* hsts: change max-age to 2 years
* clone: copy over redis memory limit
* namecheap: fix bug where records were not removed
* add UI to disable 2FA of a user
* mail: add active flag to mailboxes and lists
* Implement OCSP stapling
* security: send new browser login location notification email
* backups: add fqdn to the backup filename
* import all boxdata settings into the database
* volumes: generate systemd mount configs based on type
* postgresql: set max conn limit per db
* ubuntu 16: add alert about EOL
* clone: save and restore app config
* app import: restore icon, tag, label, proxy configs etc
* sieve: fix redirects to not do SRS
* notifications are now system level instead of per-user
* vultr DNS
* vultr object storage
* mail: do not forward spam to mailing lists

[6.3.1]
* Fix cert migration issues

[6.3.2]
* Avatar was migrated as base64 instead of binary
* Fix issue where filemanager came up empty for CIFS mounts

[6.3.3]
* volumes: add filesystem volume type for shared folders
* mail: enable sieve extension editheader
* mail: update solr to 8.9.0

[6.3.4]
* Fix issue where old nginx configs were not removed before upgrade

[6.3.5]
* Fix permission issues with sshfs
* filemanager: reset selection if directory has changed
* branding: fix error highlight with empty cloudron name
* better text instead of "Cloudron in the wild"
* Make sso login hint translatable
* Give unread notifications a small left border
* Fix issue where clicking update indicator opened app in new tab
* Ensure notifications are only fetched and shown for users that are at least admins
* setupaccount: Show input field errors below input field
* Set focus automatically for new alias or redirect
* eventlog: fix issue where old events are not periodically removed
* sshfs: fix chown

[6.3.6]
* Fix broken reboot button
* app updated notification shown despite failure
* Update translation for sso login information
* Hide groups/tags/state filter in app listing for normal users
* filemanager: Ensure breadcrumbs and hash are correctly updated on folder navigation
* cloudron-setup: check if nginx/docker is already installed
* Use the addresses of all available interfaces for port 53 binding
* refresh config on appstore login
* password reset: check 2fa when enabled

[7.0.0]
* Ubuntu 16 is not supported anymore
* Do not use Gravatar as the default but only as an option
* redis: suppress password warning
* setup UI: fix dark mode
* wellknown: respond to .well-known/matrix/client
* purpose field is not required anymore during appstore signup
* sftp: fix symlink deletion
* Show correct/new app version info in update finished notification
* Make new login email translatable
* Hide ticket form if cloudron.io mail is not verified
* Refactor code to use async/await
* postgresql: bump shm size and disable parallel queries
* update nodejs to 14.17.6
* external ldap: If we detect a local user with the same username as found on LDAP/AD we map it
* add basic eventlog for apps in app view
* Enable sshfs/cifs/nfs in app import UI
* Require password for fallback email change
* Make password reset logic translatable
* support: only verified email address can open support tickets
* Logout users without 2FA when mandatory 2fa is enabled
* notifications: better oom message for redis
* Add way to impersonate users for presetup
* mail: open up port 465 for mail submission (TLS)
* Implement operator role for apps
* sftp: normal users do not have SFTP access anymore. Use operator role instead
* eventlog: add service rebuild/restart/configure events
* upcloud: add object storage integration
* Each app can now have a custom crontab
* services: add recovery mode
* postgresql: fix restore issue with long table names
* recvmail: make the addon work again
* mail: update solr to 8.10.0
* mail: POP3 support
* update docker to 20.10.7
* volumes: add remount button
* mail: add spam eventlog filter type
* mail: configure dnsbl
* mail: add duplication detection for lists
* mail: add SRS for Sieve Forwarding

[7.0.1]
* Fix matrix wellKnown client migration

[7.0.2]
* mail: POP3 flag was not returned correctly
* external ldap: fix crash preventing users from logging in
* volumes: ensure we don't crash if mount status is unexpected
* backups: set default backup memory limit to 800
* users: allow admins to specify password recovery email
* retry startup tasks on database error

[7.0.3]
* support: fix remote support not working for 'root' user
* Fix cog icon on app grid item hover for darkmode
* Disable password reset and impersonate button for self user instead of hiding them
* pop3: fix crash with auth of non-existent mailbox
* mail: fix direction field in eventlog of deferred mails
* mail: fix eventlog search
* mail: save message-id in eventlog
* backups: fix issue which resulted in incomplete backups when an app has backups disabled
* restore: do not redirect until mail data has been restored
* proxyauth: set viewport meta tag in login view

[7.0.4]
* Add password reveal button to login pages
* appstore: fix crash if account already registered
* Do not nuke all the logrotate configs on update
* Remove unused httpPaths from manifest
* cloudron-support: add option to reset cloudron.io account
* Fix flicker in login page
* Fix LE account key re-use issue in DO 1-click image
* mail: add non-tls ports for recvmail addon
* backups: fix issue where mail backups were not cleaned up
* notifications: fix automatic app update notifications

[7.1.0]
* Add mail manager role
* mailbox: app can be set as owner when recvmail addon enabled
* domains: add well known config UI (for jitsi configuration)
* Prefix email addon variables with CLOUDRON_EMAIL instead of CLOUDRON_MAIL
* remove support for manifest version 1
* Add option to enable/disable mailbox sharing
* base image 3.2.0
* Update node to 16.13.1
* mongodb: update to 4.4
* Add `upstreamVersion` to manifest
* Add `logPaths` to manifest
* Add cifs seal support for backup and volume mounts
* add a way for admins to set username when profiles are locked
* Add support for secondary domains
* postgresql: enable postgis
* remove nginx config of stopped apps
* mail: use port25check.cloudron.io to check outbound port 25 connectivity
* Add import/export of mailboxes and users
* LDAP server can now be exposed
* Update monaco-editor to 0.32.1
* Update xterm.js to 4.17.0
* Update docker to 20.10.12
* IPv6 support

[7.1.1]
* Fix issue where dkimKey of a mail domain is sometimes null
* firewall: add retry for xtables lock
* redis: fix issue where protected mode was enabled with no password

[7.1.2]
* Fix crash in cloudron-firewall when ports are whitelisted
* eventlog: add event for certificate cleanup
* eventlog: log event for mailbox alias update
* backups: fix incorrect mountpoint check with managed mounts

[7.1.3]
* Fix security issue where an admin can impersonate an owner
* block list: can upload up to 2MB
* dns: fix issue where link local address was picked up for ipv6
* setup: ufw may not be installed
* mysql: fix default collation of databases

[7.1.4]
* wildcard dns: fix handling of ENODATA
* cloudflare: fix error handling
* openvpn: ipv6 support
* dyndns: fix issue where eventlog was getting filled with empty entries
* mandatory 2fa: Fix typo in 2FA check

[7.2.0]
* mail: hide log button for non-superadmins
* firewall: do not add duplicate ldap redirect rules
* ldap: respond to RootDSE
* Check if CNAME record exists and remove it if overwrite is set
* cifs: use credentials file for better password support
* installer: rework script to fix DNS resolution issues
* backup cleaner: do not clean if not mounted
* restore: fix sftp private key perms
* support: add a separate system user named cloudron-support
* sshfs: fix bug where sshfs mounts were generated without unbound dependency
* cloudron-setup: add --setup-token
* notifications: add installation event
* backups: set label of backup and control its retention
* wasabi: add new regions (London, Frankfurt, Paris, Toronto)
* docker: update to 20.10.14
* Ensure LDAP usernames are always treated lowercase
* Add a way to make LDAP users local
* proxyAuth: set X-Remote-User (rfc3875)
* GoDaddy: there is now a delete API
* nginx: use ubuntu packages for ubuntu 20.04 and 22.04
* Ubuntu 22.04 LTS support
* Add Hetzner DNS
* cron: add support for extensions (@reboot, @weekly etc)
* Add profile backgroundImage api
* exec: rework API to get exit code
* Add update available filter

[7.2.1]
* Refactor backup code to use async/await
* mongodb: fix bug where a small timeout prevented import of large backups
* Add update available filter
* exec: rework API to get exit code
* Add profile backgroundImage api
* cron: add support for extensions (@reboot, @weekly etc)

[7.2.2]
* Update cloudron-manifestformat for new scheduler patterns
* collectd: FQDNLookup causes collectd install to fail

[7.2.3]
* appstore: allow re-registration on server side delete
* transfer ownership route is not used anymore
* graphite: fix issue where disk names with '.' do not render
* dark mode fixes
* sendmail: mail from display name
* Use volumes for app data instead of raw path
* initial xfs support

[7.2.4]
* volumes: Ensure long volume names do not overflow the table
* Move all appstore filters to the left
* app data: allow old and new dir to be the same

[7.2.5]
* Fix storage volume migration
* Fix issue where only 25 group members were returned
* Fix eventlog display

[7.3.0]
* Proxied apps
* Applinks - app bookmarks in dashboard
* backups: optional encryption of backup file names
* eventlog: add event for impersonated user login
* ldap & user directory: Remove virtual user and admin groups
* Randomize certificate generation cronjob to lighten load on Let's Encrypt servers
* mail: catch all address can be any domain
* mail: accept only STARTTLS servers for relay
* graphs: cgroup v2 support
* mail: fix issue where signature was appended to text attachments
* redis: restart button will now rebuild if the container is missing
* backups: allow space in label name
* mail: fix crash when solr is enabled on Ubuntu 22 (cgroup v2 detection fix)
* mail: fix issue where certificate renewal did not restart the mail container properly
* notification: Fix crash when backupId is null
* IPv6: initial support for ipv6 only server
* User directory: Cloudron connector uses 2FA auth
* port bindings: add read only flag
* mail: add storage quota support
* mail: allow aliases to have wildcard
* proxyAuth: add supportsBearerAuth flag
* backups: Fix precondition check which was not erroring if mount is missing
* mail: add queue management API and UI
* graphs: show app disk usage graphs
* UI: fix issue where mailbox display name was not initialized correctly
* wasabi: add singapore and sydney regions
* filemanager: add split view
* nginx: fix zero length certs when out of disk space
* read only API tokens

[7.3.1]
* Add Cloudflare R2

@@ -1,5 +1,5 @@
The Cloudron Subscription license
Copyright (c) 2022 Cloudron UG
Copyright (c) 2020 Cloudron UG

With regard to the Cloudron Software:

@@ -1,5 +1,3 @@

# Cloudron

[Cloudron](https://cloudron.io) is the best way to run apps on your server.

@@ -31,9 +29,9 @@ anyone to effortlessly host web applications on their server on their own terms.
* Trivially migrate to another server keeping your apps and data (for example, switch your
infrastructure provider or move to a bigger server).

* Comprehensive [REST API](https://docs.cloudron.io/api/).
* Comprehensive [REST API](https://cloudron.io/documentation/developer/api/).

* [CLI](https://docs.cloudron.io/custom-apps/cli/) to configure apps.
* [CLI](https://cloudron.io/documentation/cli/) to configure apps.

* Alerts, audit logs, graphs, dns management ... and much more

@@ -43,42 +41,15 @@ Try our demo at https://my.demo.cloudron.io (username: cloudron password: cloudr

## Installing

[Install script](https://docs.cloudron.io/installation/) - [Pricing](https://cloudron.io/pricing.html)
[Install script](https://cloudron.io/documentation/installation/) - [Pricing](https://cloudron.io/pricing.html)

**Note:** This repo is a small part of what gets installed on your server - there is
the dashboard, database addons, graph container, base image etc. Cloudron also relies
on external services such as the App Store for apps to be installed. As such, don't
clone this repo and npm install and expect something to work.

## Development

This is the backend code of Cloudron. The frontend code is [here](https://git.cloudron.io/cloudron/dashboard).

The way to develop is to first install a full instance of Cloudron in a VM. Then you can use the [hotfix](https://git.cloudron.io/cloudron/cloudron-machine)
tool to patch the VM with the latest code.

```
SSH_PASSPHRASE=sshkeypassword cloudron-machine hotfix --cloudron my.example.com --release 6.0.0 --ssh-key keyname
```

## License

Please note that the Cloudron code is under a source-available license. This is not the same as an
open source license, but it ensures the code is available for introspection (and hacking!).

## Contributions

Just to give a heads up: we are a bit restrictive in merging changes. We are a small team and
would like to keep our maintenance burden low. It might be best to discuss features first in the [forum](https://forum.cloudron.io),
also to figure out how many other people would use a feature to justify maintaining it.

# Localization

## Support

* [Documentation](https://docs.cloudron.io/)
* [Documentation](https://cloudron.io/documentation/)
* [Forum](https://forum.cloudron.io/)

Executable
+193
@@ -0,0 +1,193 @@
#!/bin/bash

set -eu -o pipefail

assertNotEmpty() {
    : "${!1:? "$1 is not set."}"
}
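# Note: ${!1:?msg} is bash indirect expansion — it looks up the variable *named*
# by $1 and aborts the script with "msg" if that variable is unset or empty.
# Hypothetical use:
#   assertNotEmpty AWS_ACCESS_KEY   # exits with "AWS_ACCESS_KEY is not set." when unset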

readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"

INSTANCE_TYPE="t2.micro"
BLOCK_DEVICE="DeviceName=/dev/sda1,Ebs={VolumeSize=20,DeleteOnTermination=true,VolumeType=gp2}"
SSH_KEY_NAME="id_rsa_yellowtent"

revision=$(git rev-parse HEAD)
ami_name=""
server_id=""
server_ip=""
destroy_server="yes"
deploy_env="prod"
image_id=""

args=$(getopt -o "" -l "revision:,name:,no-destroy,env:,region:" -n "$0" -- "$@")
eval set -- "${args}"

while true; do
    case "$1" in
    --env) deploy_env="$2"; shift 2;;
    --revision) revision="$2"; shift 2;;
    --name) ami_name="$2"; shift 2;;
    --no-destroy) destroy_server="no"; shift;; # flag takes no argument, so shift only once
    --region)
        case "$2" in
        "us-east-1")
            image_id="ami-6edd3078"
            security_group="sg-a5e17fd9"
            subnet_id="subnet-b8fbc0f1"
            ;;
        "eu-central-1")
            image_id="ami-5aee2235"
            security_group="sg-19f5a770" # everything open on eu-central-1
            subnet_id=""
            ;;
        *)
            echo "Unknown aws region $2"
            exit 1
            ;;
        esac
        export AWS_DEFAULT_REGION="$2" # used by the aws cli tool
        shift 2
        ;;
    --) break;;
    *) echo "Unknown option $1"; exit 1;;
    esac
done
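# A hypothetical invocation (script name assumed; flags per the getopt spec above):
#   ./create-ami.sh --region us-east-1 --env staging --revision HEAD --no-destroy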

# TODO fix this
export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY}"
export AWS_SECRET_ACCESS_KEY="${AWS_ACCESS_SECRET}"

readonly ssh_keys="${HOME}/.ssh/id_rsa_yellowtent"
readonly SSH="ssh -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
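# Host key checking is disabled above because every run targets a freshly
# created instance whose host key cannot be known in advance; the throwaway
# known-hosts file keeps it from polluting ~/.ssh/known_hosts.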

if [[ ! -f "${ssh_keys}" ]]; then
    echo "caas ssh key is missing at ${ssh_keys} (pick it up from secrets repo)"
    exit 1
fi

if [[ -z "${image_id}" ]]; then
    echo "--region is required (us-east-1 or eu-central-1)"
    exit 1
fi

function get_pretty_revision() {
    local git_rev="$1"
    local sha1=$(git rev-parse --short "${git_rev}" 2>/dev/null)

    echo "${sha1}"
}

function wait_for_ssh() {
    echo "=> Waiting for ssh connection"
    while true; do
        echo -n "."

        if $SSH ubuntu@${server_ip} echo "hello"; then
            echo ""
            break
        fi

        sleep 5
    done
}

now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${revision}")

if [[ -z "${ami_name}" ]]; then
    ami_name="box-${deploy_env}-${pretty_revision}-${now}"
fi

echo "=> Create EC2 instance"
id=$(aws ec2 run-instances --image-id "${image_id}" --instance-type "${INSTANCE_TYPE}" --security-group-ids "${security_group}" --block-device-mappings "${BLOCK_DEVICE}" --key-name "${SSH_KEY_NAME}" --subnet-id "${subnet_id}" --associate-public-ip-address \
    | $JSON Instances \
    | $JSON 0.InstanceId)

[[ -z "$id" ]] && exit 1
echo "Instance created ID $id"

echo "=> Waiting for instance to get a public IP"
while true; do
    server_ip=$(aws ec2 describe-instances --instance-ids ${id} \
        | $JSON Reservations.0.Instances \
        | $JSON 0.PublicIpAddress)

    if [[ ! -z "${server_ip}" ]]; then
        echo ""
        break
    fi

    echo -n "."
    sleep 1
done

echo "Got public IP ${server_ip}"

wait_for_ssh

echo "=> Fetching cloudron-setup"
while true; do

    if $SSH ubuntu@${server_ip} wget "https://cloudron.io/cloudron-setup" -O "cloudron-setup"; then
        echo ""
        break
    fi

    echo -n "."
    sleep 5
done

echo "=> Running cloudron-setup"
$SSH ubuntu@${server_ip} sudo /bin/bash "cloudron-setup" --env "${deploy_env}" --provider "ami" --skip-reboot

wait_for_ssh

echo "=> Removing ssh key"
$SSH ubuntu@${server_ip} sudo rm /home/ubuntu/.ssh/authorized_keys /root/.ssh/authorized_keys

echo "=> Creating AMI"
image_id=$(aws ec2 create-image --instance-id "${id}" --name "${ami_name}" | $JSON ImageId)
[[ -z "$image_id" ]] && exit 1 # check the AMI id, not the stale instance id
echo "Creating AMI with Id ${image_id}"

echo "=> Waiting for AMI to be created"
while true; do
    state=$(aws ec2 describe-images --image-ids ${image_id} \
        | $JSON Images \
        | $JSON 0.State)

    if [[ "${state}" == "available" ]]; then
        echo ""
        break
    fi

    echo -n "."
    sleep 5
done

if [[ "${destroy_server}" == "yes" ]]; then
    echo "=> Deleting EC2 instance"

    while true; do
        state=$(aws ec2 terminate-instances --instance-id "${id}" \
            | $JSON TerminatingInstances \
            | $JSON 0.CurrentState.Name)

        if [[ "${state}" == "shutting-down" ]]; then
            echo ""
            break
        fi

        echo -n "."
        sleep 5
    done
fi

echo ""
echo "Done."
echo ""
echo "New AMI is: ${image_id}"
echo ""
@@ -0,0 +1,261 @@
#!/bin/bash

if [[ -z "${DIGITAL_OCEAN_TOKEN}" ]]; then
    echo "Script requires DIGITAL_OCEAN_TOKEN env to be set"
    exit 1
fi

if [[ -z "${JSON}" ]]; then
    echo "Script requires JSON env to be set to path of JSON binary"
    exit 1
fi

readonly CURL="curl --retry 5 -s -u ${DIGITAL_OCEAN_TOKEN}:"
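# The trailing ':' makes curl send the token as the HTTP basic-auth username
# with an empty password, a form of token auth the DigitalOcean v2 API accepts.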

function debug() {
    echo "$@" >&2
}

function get_ssh_key_id() {
    id=$($CURL "https://api.digitalocean.com/v2/account/keys" \
        | $JSON ssh_keys \
        | $JSON -c "this.name === \"$1\"" \
        | $JSON 0.id)
    [[ -z "$id" ]] && exit 1
    echo "$id"
}

function create_droplet() {
    local ssh_key_id="$1"
    local box_name="$2"

    local image_region="sfo2"
    local ubuntu_image_slug="ubuntu-16-04-x64"
    local box_size="1gb"

    local data="{\"name\":\"${box_name}\",\"size\":\"${box_size}\",\"region\":\"${image_region}\",\"image\":\"${ubuntu_image_slug}\",\"ssh_keys\":[ \"${ssh_key_id}\" ],\"backups\":false}"

    id=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets" | $JSON droplet.id)
    [[ -z "$id" ]] && exit 1
    echo "$id"
}

function get_droplet_ip() {
    local droplet_id="$1"
    ip=$($CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}" | $JSON "droplet.networks.v4[0].ip_address")
    [[ -z "$ip" ]] && exit 1
    echo "$ip"
}

function get_droplet_id() {
    local droplet_name="$1"
    id=$($CURL "https://api.digitalocean.com/v2/droplets?per_page=200" | $JSON "droplets" | $JSON -c "this.name === '${droplet_name}'" | $JSON "[0].id")
    [[ -z "$id" ]] && exit 1
    echo "$id"
}

function power_off_droplet() {
    local droplet_id="$1"
    local data='{"type":"power_off"}'
    local response=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions")
    local event_id=`echo "${response}" | $JSON action.id`

    if [[ -z "${event_id}" ]]; then
        debug "Got no event id, assuming already powered off."
        debug "Response: ${response}"
        return
    fi

    debug "Powered off droplet. Event id: ${event_id}"
    debug -n "Waiting for droplet to power off"

    while true; do
        local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
        if [[ "${event_status}" == "completed" ]]; then
            break
        fi
        debug -n "."
        sleep 10
    done
    debug ""
}

function power_on_droplet() {
    local droplet_id="$1"
    local data='{"type":"power_on"}'
    local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`

    debug "Powered on droplet. Event id: ${event_id}"

    if [[ -z "${event_id}" ]]; then
        debug "Got no event id, assuming already powered on"
        return
    fi

    debug -n "Waiting for droplet to power on"

    while true; do
        local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
        if [[ "${event_status}" == "completed" ]]; then
            break
        fi
        debug -n "."
        sleep 10
    done
    debug ""
}

function get_image_id() {
    local snapshot_name="$1"
    local image_id=""

    if ! response=$($CURL "https://api.digitalocean.com/v2/images?per_page=200"); then
        echo "Failed to get image listing. ${response}"
        return 1
    fi

    if ! image_id=$(echo "$response" \
        | $JSON images \
        | $JSON -c "this.name === \"${snapshot_name}\"" 0.id); then
        echo "Failed to parse curl response: ${response}"
        return 1
    fi

    if [[ -z "${image_id}" ]]; then
        echo "Failed to get image id of ${snapshot_name}. response: ${response}"
        return 1
    fi

    echo "${image_id}"
}

function snapshot_droplet() {
    local droplet_id="$1"
    local snapshot_name="$2"
    local data="{\"type\":\"snapshot\",\"name\":\"${snapshot_name}\"}"
    local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`

    debug "Droplet snapshotted as ${snapshot_name}. Event id: ${event_id}"
    debug -n "Waiting for snapshot to complete"

    while true; do
        if ! response=$($CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}"); then
            echo "Could not get action status. ${response}"
            continue
        fi
        if ! event_status=$(echo "${response}" | $JSON action.status); then
            echo "Could not parse action.status from response. ${response}"
            continue
        fi
        if [[ "${event_status}" == "completed" ]]; then
            break
        fi
        debug -n "."
        sleep 10
    done
    debug "! done"

    if ! image_id=$(get_image_id "${snapshot_name}"); then
        return 1
    fi
    echo "${image_id}"
}

function destroy_droplet() {
    local droplet_id="$1"
    # TODO: check for 204 status
    $CURL -X DELETE "https://api.digitalocean.com/v2/droplets/${droplet_id}"
    debug "Droplet destroyed"
    debug ""
}

function transfer_image() {
    local image_id="$1"
    local region_slug="$2"
    local data="{\"type\":\"transfer\",\"region\":\"${region_slug}\"}"
    local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/images/${image_id}/actions" | $JSON action.id`
    echo "${event_id}"
}

function wait_for_image_event() {
    local image_id="$1"
    local event_id="$2"

    debug -n "Waiting for ${event_id}"

    while true; do
        local event_status=`$CURL "https://api.digitalocean.com/v2/images/${image_id}/actions/${event_id}" | $JSON action.status`
        if [[ "${event_status}" == "completed" ]]; then
            break
        fi
        debug -n "."
        sleep 10
    done
    debug ""
}

function transfer_image_to_all_regions() {
    local image_id="$1"

    xfer_events=()
    image_regions=(ams2) ## sfo2 is where the image is created
    for image_region in ${image_regions[@]}; do
        xfer_event=$(transfer_image ${image_id} ${image_region})
        echo "Image transfer to ${image_region} initiated. Event id: ${xfer_event}"
        xfer_events+=("${xfer_event}")
        sleep 1
    done

    echo "Image transfer initiated, but they will take some time to get transferred."

    for xfer_event in ${xfer_events[@]}; do
        wait_for_image_event "${image_id}" "${xfer_event}"
    done
}

if [[ $# -lt 1 ]]; then
    debug "<command> <params...>"
    exit 1
fi

case $1 in
get_ssh_key_id)
    get_ssh_key_id "${@:2}"
    ;;

create)
    create_droplet "${@:2}"
    ;;

get_id)
    get_droplet_id "${@:2}"
    ;;

get_ip)
    get_droplet_ip "${@:2}"
    ;;

power_on)
    power_on_droplet "${@:2}"
    ;;

power_off)
    power_off_droplet "${@:2}"
    ;;

snapshot)
    snapshot_droplet "${@:2}"
    ;;

destroy)
    destroy_droplet "${@:2}"
    ;;

transfer_image_to_all_regions)
    transfer_image_to_all_regions "${@:2}"
    ;;

*)
    echo "Unknown command $1"
    exit 1
esac
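# Hypothetical usage (script name assumed; each command dispatches to a function above):
#   export DIGITAL_OCEAN_TOKEN=... JSON=node_modules/.bin/json
#   id=$(./droplet.sh get_id my-box)      # look up a droplet id by name
#   ./droplet.sh power_off "$id" && ./droplet.sh snapshot "$id" "my-box-snapshot"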
Executable
+163
@@ -0,0 +1,163 @@
#!/bin/bash

set -euv -o pipefail

readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

readonly arg_infraversionpath="${SOURCE_DIR}/../src"

function die {
    echo "$1"
    exit 1
}

export DEBIAN_FRONTEND=noninteractive

# hold grub since updating it breaks on some VPS providers. also, dist-upgrade will trigger it
apt-mark hold grub* >/dev/null
apt-get -o Dpkg::Options::="--force-confdef" update -y
apt-get -o Dpkg::Options::="--force-confdef" upgrade -y
apt-mark unhold grub* >/dev/null

echo "==> Installing required packages"

debconf-set-selections <<< 'mysql-server mysql-server/root_password password password'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password password'

# this enables automatic security upgrades (https://help.ubuntu.com/community/AutomaticSecurityUpdates)
# resolvconf is needed for unbound to work properly after disabling systemd-resolved in 18.04
ubuntu_version=$(lsb_release -rs)
ubuntu_codename=$(lsb_release -cs)
gpg_package=$([[ "${ubuntu_version}" == "16.04" ]] && echo "gnupg" || echo "gpg")
apt-get -y install \
    acl \
    build-essential \
    cifs-utils \
    cron \
    curl \
    debconf-utils \
    dmsetup \
    $gpg_package \
    iptables \
    libpython2.7 \
    linux-generic \
    logrotate \
    mysql-server-5.7 \
    openssh-server \
    pwgen \
    resolvconf \
    swaks \
    tzdata \
    unattended-upgrades \
    unbound \
    xfsprogs

if [[ "${ubuntu_version}" == "16.04" ]]; then
    echo "==> installing nginx for xenial for TLSv1.3 support"

    curl -sL http://nginx.org/packages/ubuntu/pool/nginx/n/nginx/nginx_1.14.0-1~xenial_amd64.deb -o /tmp/nginx.deb
    # apt install with install deps (as opposed to dpkg -i)
    apt install -y /tmp/nginx.deb
    rm /tmp/nginx.deb
else
    apt install -y nginx-full
fi

# on some providers like scaleway the sudo file is changed and we want to keep the old one
apt-get -o Dpkg::Options::="--force-confold" install -y sudo

# this ensures that unattended upgrades are enabled, if it was disabled during ubuntu install time (see #346)
# debconf-set-selections of unattended-upgrades/enable_auto_updates + dpkg-reconfigure does not work
cp /usr/share/unattended-upgrades/20auto-upgrades /etc/apt/apt.conf.d/20auto-upgrades

echo "==> Installing node.js"
mkdir -p /usr/local/node-10.18.1
curl -sL https://nodejs.org/dist/v10.18.1/node-v10.18.1-linux-x64.tar.gz | tar zxf - --strip-components=1 -C /usr/local/node-10.18.1
ln -sf /usr/local/node-10.18.1/bin/node /usr/bin/node
ln -sf /usr/local/node-10.18.1/bin/npm /usr/bin/npm
apt-get install -y python # Install python which is required for npm rebuild
[[ "$(python --version 2>&1)" == "Python 2.7."* ]] || die "Expecting python version to be 2.7.x"

# https://docs.docker.com/engine/installation/linux/ubuntulinux/
echo "==> Installing Docker"

# create systemd drop-in file. if you change options here, be sure to fixup installer.sh as well
mkdir -p /etc/systemd/system/docker.service.d
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2" > /etc/systemd/system/docker.service.d/cloudron.conf

# there are 3 packages for docker - containerd, CLI and the daemon
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.2.2-3_amd64.deb" -o /tmp/containerd.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
# apt install with install deps (as opposed to dpkg -i)
apt install -y /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
rm /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb

storage_driver=$(docker info | grep "Storage Driver" | sed 's/.*: //')
if [[ "${storage_driver}" != "overlay2" ]]; then
    echo "Docker is using ${storage_driver} instead of overlay2"
    exit 1
fi

# do not upgrade grub because it might prompt user and break this script
echo "==> Enable memory accounting"
apt-get -y --no-upgrade install grub2-common
sed -e 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
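# cgroup_enable=memory and swapaccount=1 turn on the kernel's memory/swap
# accounting needed for container limits; panic_on_oops=1 makes an oops trigger
# a panic, and panic=5 reboots the machine 5 seconds later instead of hanging.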
update-grub

echo "==> Downloading docker images"
if [ ! -f "${arg_infraversionpath}/infra_version.js" ]; then
    echo "No infra_version.js found"
    exit 1
fi

images=$(node -e "var i = require('${arg_infraversionpath}/infra_version.js'); console.log(i.baseImages.map(function (x) { return x.tag; }).join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")
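# Judging by the expression above, infra_version.js is assumed (sketch, not
# verbatim) to export something like:
#   exports.baseImages = [ { tag: 'cloudron/base:1.0.0@sha256:...' } ];
#   exports.images = { mysql: { tag: 'cloudron/mysql:1.0.0@sha256:...' }, ... };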

echo -e "\tPulling docker images: ${images}"
for image in ${images}; do
    docker pull "${image}"
    docker pull "${image%@sha256:*}" # this will tag the image for readability
done

echo "==> Install collectd"
if ! apt-get install -y libcurl3-gnutls collectd collectd-utils; then
    # FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
    echo "Failed to install collectd. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
    sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
fi

echo "==> Configuring host"
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
timedatectl set-ntp 1
# mysql follows the system timezone
timedatectl set-timezone UTC

echo "==> Adding sshd configuration warning"
sed -e '/Port 22/ i # NOTE: Cloudron only supports moving SSH to port 202. See https://cloudron.io/documentation/security/#securing-ssh-access' -i /etc/ssh/sshd_config

# https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068
echo "==> Disabling motd news"
sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/motd-news

# Disable bind for good measure (on online.net, kimsufi servers these are pre-installed and conflicts with unbound)
systemctl stop bind9 || true
systemctl disable bind9 || true

# on ovh images dnsmasq seems to run by default
systemctl stop dnsmasq || true
systemctl disable dnsmasq || true

# on ssdnodes postfix seems to run by default
systemctl stop postfix || true
systemctl disable postfix || true

# on ubuntu 18.04, this is the default. this requires resolvconf for DNS to work further after the disable
systemctl stop systemd-resolved || true
systemctl disable systemd-resolved || true

# ubuntu's default config for unbound does not work if ipv6 is disabled. this config is overwritten in start.sh
# we need unbound to work as this is required for installer.sh to do any DNS requests
ip6=$([[ -s /proc/net/if_inet6 ]] && echo "yes" || echo "no")
echo -e "server:\n\tinterface: 127.0.0.1\n\tdo-ip6: ${ip6}" > /etc/unbound/unbound.conf.d/cloudron-network.conf
systemctl restart unbound

@@ -2,76 +2,57 @@

'use strict';

const fs = require('fs'),
ldap = require('./src/ldap.js'),
paths = require('./src/paths.js'),
proxyAuth = require('./src/proxyauth.js'),
safe = require('safetydance'),
server = require('./src/server.js'),
settings = require('./src/settings.js'),
directoryServer = require('./src/directoryserver.js');

let logFd;

async function setupLogging() {
if (process.env.BOX_ENV === 'test') return;

logFd = fs.openSync(paths.BOX_LOG_FILE, 'a');
// we used to write using a stream before but it caches internally and there is no way to flush it when things crash
process.stdout.write = process.stderr.write = function (...args) {
const callback = typeof args[args.length-1] === 'function' ? args.pop() : function () {}; // callback is required for fs.write
fs.write.apply(fs, [logFd, ...args, callback]);
// prefix all output with a timestamp
// debug() already prefixes and uses process.stderr NOT console.*
['log', 'info', 'warn', 'debug', 'error'].forEach(function (log) {
var orig = console[log];
console[log] = function () {
orig.apply(console, [new Date().toISOString()].concat(Array.prototype.slice.call(arguments)));
};
}
});

// this is also used as the 'uncaughtException' handler which can only have synchronous functions
function exitSync(status) {
if (status.error) fs.write(logFd, status.error.stack + '\n', function () {});
fs.fsyncSync(logFd);
fs.closeSync(logFd);
process.exit(status.code);
}
require('supererror')({ splatchError: true });

async function startServers() {
await setupLogging();
await server.start(); // do this first since it also inits the database
await proxyAuth.start();
await ldap.start();
let async = require('async'),
constants = require('./src/constants.js'),
dockerProxy = require('./src/dockerproxy.js'),
ldap = require('./src/ldap.js'),
server = require('./src/server.js');

const conf = await settings.getDirectoryServerConfig();
if (conf.enabled) await directoryServer.start();
}
console.log();
console.log('==========================================');
console.log(` Cloudron ${constants.VERSION} `);
console.log('==========================================');
console.log();

async function main() {
const [error] = await safe(startServers());
if (error) return exitSync({ error: new Error(`Error starting server: ${JSON.stringify(error)}`), code: 1 });
async.series([
server.start,
ldap.start,
dockerProxy.start
], function (error) {
if (error) {
console.error('Error starting server', error);
process.exit(1);
}
console.log('Cloudron is up and running');
});

// require this here so that logging handler is already setup
const debug = require('debug')('box:box');
var NOOP_CALLBACK = function () { };

process.on('SIGINT', async function () {
debug('Received SIGINT. Shutting down.');
process.on('SIGINT', function () {
console.log('Received SIGINT. Shutting down.');

await proxyAuth.stop();
await server.stop();
await directoryServer.stop();
await ldap.stop();
setTimeout(process.exit.bind(process), 3000);
});
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});

process.on('SIGTERM', async function () {
debug('Received SIGTERM. Shutting down.');
process.on('SIGTERM', function () {
console.log('Received SIGTERM. Shutting down.');

await proxyAuth.stop();
await server.stop();
await directoryServer.stop();
await ldap.stop();
setTimeout(process.exit.bind(process), 3000);
});

process.on('uncaughtException', (error) => exitSync({ error, code: 1 }));

console.log(`Cloudron is up and running. Logs are at ${paths.BOX_LOG_FILE}`); // this goes to journalctl
}

main();
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
dockerProxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});

+12
-6
@@ -2,21 +2,27 @@

'use strict';

const database = require('./src/database.js');
var database = require('./src/database.js');

const crashNotifier = require('./src/crashnotifier.js');
var crashNotifier = require('./src/crashnotifier.js');

// This is triggered by systemd with the crashed unit name as argument
async function main() {
function main() {
if (process.argv.length !== 3) return console.error('Usage: crashnotifier.js <unitName>');

const unitName = process.argv[2];
var unitName = process.argv[2];
console.log('Started crash notifier for', unitName);

// eventlog api needs the db
await database.initialize();
database.initialize(function (error) {
if (error) return console.error('Cannot connect to database. Unable to send crash log.', error);

await crashNotifier.sendFailureLogs(unitName);
crashNotifier.sendFailureLogs(unitName, function (error) {
if (error) console.error(error);

process.exit();
});
});
}

main();

@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
db.runSql('ALTER TABLE users DROP COLUMN modifiedAt', callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN ts', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,29 +0,0 @@
'use strict';

exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
if (error || results.length === 0) return callback(error);

var backupConfig = JSON.parse(results[0].value);
if (backupConfig.intervalSecs === 6 * 60 * 60) { // every 6 hours
backupConfig.schedulePattern = '00 00 5,11,17,23 * * *';
} else if (backupConfig.intervalSecs === 12 * 60 * 60) { // every 12 hours
backupConfig.schedulePattern = '00 00 5,17 * * *';
} else if (backupConfig.intervalSecs === 24 * 60 * 60) { // every day
backupConfig.schedulePattern = '00 00 23 * * *';
} else if (backupConfig.intervalSecs === 3 * 24 * 60 * 60) { // every 3 days (based on day)
backupConfig.schedulePattern = '00 00 23 * * 1,3,5';
} else if (backupConfig.intervalSecs === 7 * 24 * 60 * 60) { // every week (saturday)
backupConfig.schedulePattern = '00 00 23 * * 6';
} else { // default to everyday
backupConfig.schedulePattern = '00 00 23 * * *';
}

delete backupConfig.intervalSecs;
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [ JSON.stringify(backupConfig) ], callback);
});
};

exports.down = function(db, callback) {
callback();
};
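The schedulePattern values written above appear to be six-field cron expressions with a leading seconds field (sec min hour day-of-month month day-of-week); for example, '00 00 5,11,17,23 * * *' fires at the top of hours 5, 11, 17 and 23. A tiny illustrative breakdown:

// Illustrative only: split one of the patterns above into its six fields.
const [ sec, min, hour, dayOfMonth, month, dayOfWeek ] = '00 00 5,11,17,23 * * *'.split(' ');
console.log({ sec, min, hour, dayOfMonth, month, dayOfWeek }); // hour '5,11,17,23' -> every 6 hours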
@@ -1,23 +0,0 @@
'use strict';

const async = require('async');

exports.up = function(db, callback) {
db.all('SELECT value FROM settings WHERE name="admin_domain"', function (error, results) {
if (error || results.length === 0) return callback(error);

const adminDomain = results[0].value;

async.series([
db.runSql.bind(db, 'INSERT INTO settings (name, value) VALUES (?, ?)', [ 'mail_domain', adminDomain ]),
db.runSql.bind(db, 'INSERT INTO settings (name, value) VALUES (?, ?)', [ 'mail_fqdn', `my.${adminDomain}` ])
], callback);
});
};

exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'DELETE FROM settings WHERE name="mail_domain"'),
db.runSql.bind(db, 'DELETE FROM settings WHERE name="mail_fqdn"'),
], callback);
};
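The async.series calls above pass db.runSql.bind(db, sql, args) so each entry is still a function expecting the final callback that async.series supplies. A self-contained sketch of that idiom (step is a made-up stand-in for db.runSql):

// Sketch of the bind + async.series idiom used throughout these migrations.
const async = require('async');

function step(label, done) {               // stand-in for db.runSql(sql, args, callback)
    console.log('running', label);
    done();
}

async.series([
    step.bind(null, 'first statement'),    // bind pre-fills the leading arguments,
    step.bind(null, 'second statement')    // leaving the trailing callback for async.series
], function (error) {
    if (error) console.error(error);
});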
@@ -1,22 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('SELECT * FROM settings WHERE name=?', ['app_autoupdate_pattern'], function (error, results) {
|
||||
if (error || results.length === 0) return callback(error); // will use defaults from box code
|
||||
|
||||
var updatePattern = results[0].value; // use app auto update patter for the box as well
|
||||
|
||||
async.series([
|
||||
db.runSql.bind(db, 'START TRANSACTION;'),
|
||||
db.runSql.bind(db, 'DELETE FROM settings WHERE name=? OR name=?', ['app_autoupdate_pattern', 'box_autoupdate_pattern']),
|
||||
db.runSql.bind(db, 'INSERT settings (name, value) VALUES(?, ?)', ['autoupdate_pattern', updatePattern]),
|
||||
db.runSql.bind(db, 'COMMIT')
|
||||
], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mail ADD COLUMN bannerJson TEXT', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mail DROP COLUMN bannerJson', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,27 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const OLD_FIREWALL_CONFIG_JSON = '/home/yellowtent/boxdata/firewall-config.json';
|
||||
const PORTS_FILE = '/home/yellowtent/boxdata/firewall/ports.json';
|
||||
const BLOCKLIST_FILE = '/home/yellowtent/boxdata/firewall/blocklist.txt';
|
||||
|
||||
const fs = require('fs');
|
||||
|
||||
exports.up = function (db, callback) {
|
||||
if (!fs.existsSync(OLD_FIREWALL_CONFIG_JSON)) return callback();
|
||||
|
||||
try {
|
||||
const dataJson = fs.readFileSync(OLD_FIREWALL_CONFIG_JSON, 'utf8');
|
||||
const data = JSON.parse(dataJson);
|
||||
fs.writeFileSync(BLOCKLIST_FILE, data.blocklist.join('\n') + '\n', 'utf8');
|
||||
fs.writeFileSync(PORTS_FILE, JSON.stringify({ allowed_tcp_ports: data.allowed_tcp_ports }, null, 4), 'utf8');
|
||||
fs.unlinkSync(OLD_FIREWALL_CONFIG_JSON);
|
||||
} catch (error) {
|
||||
console.log('Error migrating old firewall config', error);
|
||||
}
|
||||
|
||||
callback();
|
||||
};
|
||||
|
||||
exports.down = function (db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,40 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
var cmd1 = 'CREATE TABLE volumes(' +
|
||||
'id VARCHAR(128) NOT NULL UNIQUE,' +
|
||||
'name VARCHAR(256) NOT NULL UNIQUE,' +
|
||||
'hostPath VARCHAR(1024) NOT NULL UNIQUE,' +
|
||||
'creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
|
||||
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
|
||||
|
||||
var cmd2 = 'CREATE TABLE appMounts(' +
|
||||
'appId VARCHAR(128) NOT NULL,' +
|
||||
'volumeId VARCHAR(128) NOT NULL,' +
|
||||
'readOnly BOOLEAN DEFAULT 1,' +
|
||||
'UNIQUE KEY appMounts_appId_volumeId (appId, volumeId),' +
|
||||
'FOREIGN KEY(appId) REFERENCES apps(id),' +
|
||||
'FOREIGN KEY(volumeId) REFERENCES volumes(id)) CHARACTER SET utf8 COLLATE utf8_bin;';
|
||||
|
||||
db.runSql(cmd1, function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
db.runSql(cmd2, function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN bindsJson', callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('DROP TABLE appMounts', function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
db.runSql('DROP TABLE volumes', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN proxyAuth BOOLEAN DEFAULT 0', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN proxyAuth', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,18 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN ownerType VARCHAR(16)'),
|
||||
db.runSql.bind(db, 'UPDATE mailboxes SET ownerType=?', [ 'user' ]),
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes MODIFY ownerType VARCHAR(16) NOT NULL'),
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mailboxes DROP COLUMN ownerType', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,13 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN httpPort')
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,29 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
iputils = require('../src/iputils.js');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN containerIp VARCHAR(16) UNIQUE', function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
let baseIp = iputils.intFromIp('172.18.16.0');
|
||||
|
||||
db.all('SELECT * FROM apps', function (error, apps) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(apps, function (app, iteratorDone) {
|
||||
const nextIp = iputils.ipFromInt(++baseIp);
|
||||
db.runSql('UPDATE apps SET containerIp=? WHERE id=?', [ nextIp, app.id ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN containerIp', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,21 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM settings WHERE name=?', ['platform_config'], function (error, results) {
|
||||
let value;
|
||||
if (error || results.length === 0) {
|
||||
value = { sftp: { requireAdmin: true } };
|
||||
} else {
|
||||
value = JSON.parse(results[0].value);
|
||||
if (!value.sftp) value.sftp = {};
|
||||
value.sftp.requireAdmin = true;
|
||||
}
|
||||
|
||||
// existing installations may not even have the key. so use REPLACE instead of UPDATE
|
||||
db.runSql('REPLACE INTO settings (name, value) VALUES (?, ?)', [ 'platform_config', JSON.stringify(value) ], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,18 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'CREATE TABLE groupMembers_copy(groupId VARCHAR(128) NOT NULL, userId VARCHAR(128) NOT NULL, FOREIGN KEY(groupId) REFERENCES userGroups(id), FOREIGN KEY(userId) REFERENCES users(id), UNIQUE (groupId, userId)) CHARACTER SET utf8 COLLATE utf8_bin'), // In mysql CREATE TABLE.. LIKE does not copy indexes
|
||||
db.runSql.bind(db, 'INSERT INTO groupMembers_copy SELECT * FROM groupMembers GROUP BY groupId, userId'),
|
||||
db.runSql.bind(db, 'DROP TABLE groupMembers'),
|
||||
db.runSql.bind(db, 'ALTER TABLE groupMembers_copy RENAME TO groupMembers')
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE groupMembers DROP INDEX groupMembers_member'),
|
||||
], callback);
|
||||
};
|
||||
@@ -1,51 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
safe = require('safetydance');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE domains ADD COLUMN wellKnownJson TEXT', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
// keep the paths around, so that we don't need to trigger a re-configure. the old nginx config will use the paths
|
||||
// the new one will proxy calls to the box code
|
||||
const WELLKNOWN_DIR = '/home/yellowtent/boxdata/well-known';
|
||||
const output = safe.child_process.execSync('find . -type f -printf "%P\n"', { cwd: WELLKNOWN_DIR, encoding: 'utf8' });
|
||||
if (!output) return callback();
|
||||
const paths = output.trim().split('\n');
|
||||
if (paths.length === 0) return callback(); // user didn't configure any well-known
|
||||
|
||||
let wellKnown = {};
|
||||
for (let path of paths) {
|
||||
const fqdn = path.split('/', 1)[0];
|
||||
const loc = path.slice(fqdn.length+1);
|
||||
const doc = safe.fs.readFileSync(`${WELLKNOWN_DIR}/${path}`, { encoding: 'utf8' });
|
||||
if (!doc) continue;
|
||||
|
||||
wellKnown[fqdn] = {};
|
||||
wellKnown[fqdn][loc] = doc;
|
||||
}
|
||||
|
||||
console.log('Migrating well-known', JSON.stringify(wellKnown, null, 4));
|
||||
|
||||
async.eachSeries(Object.keys(wellKnown), function (fqdn, iteratorDone) {
|
||||
db.runSql('UPDATE domains SET wellKnownJson=? WHERE domain=?', [ JSON.stringify(wellKnown[fqdn]), fqdn ], function (error, result) {
|
||||
if (error) {
|
||||
console.error(error); // maybe the domain does not exist anymore
|
||||
} else if (result.affectedRows === 0) {
|
||||
console.log(`Could not migrate wellknown as domain ${fqdn} is missing`);
|
||||
}
|
||||
iteratorDone();
|
||||
});
|
||||
}, function (error) {
|
||||
callback(error);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE domains DROP COLUMN wellKnownJson', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,23 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM settings WHERE name=?', ['platform_config'], function (error, results) {
|
||||
if (error || results.length === 0) return callback(null);
|
||||
|
||||
let value = JSON.parse(results[0].value);
|
||||
|
||||
for (const serviceName of Object.keys(value)) {
|
||||
const service = value[serviceName];
|
||||
if (!service.memorySwap) continue;
|
||||
service.memoryLimit = service.memorySwap;
|
||||
delete service.memorySwap;
|
||||
delete service.memory;
|
||||
}
|
||||
|
||||
db.runSql('UPDATE settings SET value=? WHERE name=?', [ JSON.stringify(value), 'platform_config' ], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,28 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM apps', function (error, apps) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(apps, function (app, iteratorDone) {
|
||||
if (!app.servicesConfigJson) return iteratorDone();
|
||||
|
||||
let servicesConfig = JSON.parse(app.servicesConfigJson);
|
||||
for (const serviceName of Object.keys(servicesConfig)) {
|
||||
const service = servicesConfig[serviceName];
|
||||
if (!service.memorySwap) continue;
|
||||
service.memoryLimit = service.memorySwap;
|
||||
delete service.memorySwap;
|
||||
delete service.memory;
|
||||
}
|
||||
|
||||
db.runSql('UPDATE apps SET servicesConfigJson=? WHERE id=?', [ JSON.stringify(servicesConfig), app.id ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'services_config', 'platform_config' ], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'platform_config', 'services_config' ], callback);
|
||||
};
|
||||
@@ -1,10 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
/* this contained an invalid migration of OVH URLs from s3 subdomain to storage subdomain. See https://forum.cloudron.io/topic/4584/issue-with-backups-listings-and-saving-backup-config-in-6-2 */
|
||||
callback();
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT value FROM settings WHERE name="registry_config"', function (error, results) {
|
||||
if (error || results.length === 0) return callback(error);
|
||||
|
||||
var registryConfig = JSON.parse(results[0].value);
|
||||
if (!registryConfig.provider) registryConfig.provider = 'other';
|
||||
|
||||
db.runSql('UPDATE settings SET value=? WHERE name="registry_config"', [ JSON.stringify(registryConfig) ], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE tokens ADD COLUMN lastUsedTime TIMESTAMP NULL', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE tokens DROP COLUMN lastUsedTime', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN enableMailbox BOOLEAN DEFAULT 1', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN enableMailbox', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,17 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mailboxes ADD COLUMN active BOOLEAN DEFAULT 1', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
callback();
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mailboxes DROP COLUMN active', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,37 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
path = require('path');
|
||||
|
||||
const AVATAR_DIR = '/home/yellowtent/boxdata/profileicons';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN avatar MEDIUMBLOB', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
fs.readdir(AVATAR_DIR, function (error, filenames) {
|
||||
if (error && error.code === 'ENOENT') return callback();
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(filenames, function (filename, iteratorCallback) {
|
||||
const avatar = fs.readFileSync(path.join(AVATAR_DIR, filename));
|
||||
const userId = filename;
|
||||
|
||||
db.runSql('UPDATE users SET avatar=? WHERE id=?', [ avatar, userId ], iteratorCallback);
|
||||
}, function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
fs.rmdir(AVATAR_DIR, { recursive: true }, callback);
|
||||
});
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN avatar', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,20 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const fs = require('fs');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE settings ADD COLUMN valueBlob MEDIUMBLOB', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
fs.readFile('/home/yellowtent/boxdata/avatar.png', function (error, avatar) {
|
||||
if (error && error.code === 'ENOENT') return callback();
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('INSERT INTO settings (name, valueBlob) VALUES (?, ?)', [ 'cloudron_avatar', avatar ], callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN loginLocationsJson TEXT', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN loginLocationsJson', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,42 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
path = require('path');
|
||||
|
||||
const APPICONS_DIR = '/home/yellowtent/boxdata/appicons';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN icon MEDIUMBLOB'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN appStoreIcon MEDIUMBLOB'),
|
||||
function migrateIcons(next) {
|
||||
fs.readdir(APPICONS_DIR, function (error, filenames) {
|
||||
if (error && error.code === 'ENOENT') return next();
|
||||
if (error) return next(error);
|
||||
|
||||
async.eachSeries(filenames, function (filename, iteratorCallback) {
|
||||
const icon = fs.readFileSync(path.join(APPICONS_DIR, filename));
|
||||
const appId = filename.split('.')[0];
|
||||
|
||||
if (filename.endsWith('.user.png')) {
|
||||
db.runSql('UPDATE apps SET icon=? WHERE id=?', [ icon, appId ], iteratorCallback);
|
||||
} else {
|
||||
db.runSql('UPDATE apps SET appStoreIcon=? WHERE id=?', [ icon, appId ], iteratorCallback);
|
||||
}
|
||||
}, function (error) {
|
||||
if (error) return next(error);
|
||||
|
||||
fs.rmdir(APPICONS_DIR, { recursive: true }, next);
|
||||
});
|
||||
});
|
||||
}
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN icon'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN appStoreIcon'),
|
||||
], callback);
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps MODIFY ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps MODIFY ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,20 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
const cmd = 'CREATE TABLE blobs(' +
|
||||
'id VARCHAR(128) NOT NULL UNIQUE,' +
|
||||
'value MEDIUMBLOB,' +
|
||||
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
|
||||
|
||||
db.runSql(cmd, function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('DROP TABLE blobs', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,49 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const BOX_DATA_DIR = '/home/yellowtent/boxdata';
|
||||
const PLATFORM_DATA_DIR = '/home/yellowtent/platformdata';
|
||||
|
||||
exports.up = function (db, callback) {
|
||||
let funcs = [];
|
||||
|
||||
const acmeKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/acme/acme.key`);
|
||||
if (acmeKey) {
|
||||
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'acme_account_key', acmeKey ]));
|
||||
funcs.push(fs.rmdir.bind(fs, `${BOX_DATA_DIR}/acme`, { recursive: true }));
|
||||
}
|
||||
const dhparams = safe.fs.readFileSync(`${BOX_DATA_DIR}/dhparams.pem`);
|
||||
if (dhparams) {
|
||||
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/dhparams.pem`, dhparams);
|
||||
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'dhparams', dhparams ]));
|
||||
// leave the dhparms here for the moment because startup code regenerates box nginx config and reloads nginx. at that point,
|
||||
// nginx config of apps has not been re-generated yet and the reload fails. post 6.3, this file can be removed in start.sh
|
||||
// funcs.push(fs.unlink.bind(fs, `${BOX_DATA_DIR}/dhparams.pem`));
|
||||
}
|
||||
const turnSecret = safe.fs.readFileSync(`${BOX_DATA_DIR}/addon-turn-secret`);
|
||||
if (turnSecret) {
|
||||
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'addon_turn_secret', turnSecret ]));
|
||||
funcs.push(fs.unlink.bind(fs, `${BOX_DATA_DIR}/addon-turn-secret`));
|
||||
}
|
||||
|
||||
// sftp keys get moved to platformdata in start.sh
|
||||
const sftpPublicKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/sftp/ssh/ssh_host_rsa_key.pub`);
|
||||
const sftpPrivateKey = safe.fs.readFileSync(`${BOX_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`);
|
||||
if (sftpPublicKey) {
|
||||
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key.pub`, sftpPublicKey);
|
||||
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`, sftpPrivateKey);
|
||||
safe.fs.chmodSync(`${PLATFORM_DATA_DIR}/sftp/ssh/ssh_host_rsa_key`, 0o600);
|
||||
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'sftp_public_key', sftpPublicKey ]));
|
||||
funcs.push(db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'sftp_private_key', sftpPrivateKey ]));
|
||||
funcs.push(fs.rmdir.bind(fs, `${BOX_DATA_DIR}/sftp`, { recursive: true }));
|
||||
}
|
||||
|
||||
async.series(funcs, callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,31 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const BOX_DATA_DIR = '/home/yellowtent/boxdata';
|
||||
const PLATFORM_DATA_DIR = '/home/yellowtent/platformdata';
|
||||
|
||||
exports.up = function (db, callback) {
|
||||
if (!fs.existsSync(`${BOX_DATA_DIR}/firewall`)) return callback();
|
||||
|
||||
const ports = safe.fs.readFileSync(`${BOX_DATA_DIR}/firewall/ports.json`);
|
||||
if (ports) {
|
||||
safe.fs.writeFileSync(`${PLATFORM_DATA_DIR}/firewall/ports.json`, ports);
|
||||
}
|
||||
|
||||
const blocklist = safe.fs.readFileSync(`${BOX_DATA_DIR}/firewall/blocklist.txt`);
|
||||
async.series([
|
||||
(next) => {
|
||||
if (!blocklist) return next();
|
||||
db.runSql('INSERT INTO settings (name, valueBlob) VALUES (?, ?)', [ 'firewall_blocklist', blocklist ], next);
|
||||
},
|
||||
fs.writeFile.bind(fs, `${PLATFORM_DATA_DIR}/firewall/blocklist.txt`, blocklist || ''),
|
||||
fs.rmdir.bind(fs, `${BOX_DATA_DIR}/firewall`, { recursive: true })
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,38 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const CERTS_DIR = '/home/yellowtent/boxdata/certs',
|
||||
PLATFORM_CERTS_DIR = '/home/yellowtent/platformdata/nginx/cert';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE domains ADD COLUMN fallbackCertificateJson MEDIUMTEXT', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.all('SELECT * FROM domains', [ ], function (error, domains) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(domains, function (domain, iteratorDone) {
|
||||
// b94dbf5fa33a6d68d784571721ff44348c2d88aa seems to have moved certs from platformdata to boxdata
|
||||
let cert = safe.fs.readFileSync(`${CERTS_DIR}/${domain.domain}.host.cert`, 'utf8');
|
||||
let key = safe.fs.readFileSync(`${CERTS_DIR}/${domain.domain}.host.key`, 'utf8');
|
||||
|
||||
if (!cert) {
|
||||
cert = safe.fs.readFileSync(`${PLATFORM_CERTS_DIR}/${domain.domain}.host.cert`, 'utf8');
|
||||
key = safe.fs.readFileSync(`${PLATFORM_CERTS_DIR}/${domain.domain}.host.key`, 'utf8');
|
||||
}
|
||||
|
||||
const fallbackCertificate = { cert, key };
|
||||
|
||||
db.runSql('UPDATE domains SET fallbackCertificateJson=? WHERE domain=?', [ JSON.stringify(fallbackCertificate), domain.domain ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.run(db, 'ALTER TABLE domains DROP COLUMN fallbackCertificateJson')
|
||||
], callback);
|
||||
};
|
||||
@@ -1,34 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const CERTS_DIR = '/home/yellowtent/boxdata/certs';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE subdomains ADD COLUMN certificateJson MEDIUMTEXT', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.all('SELECT * FROM subdomains', [ ], function (error, subdomains) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(subdomains, function (subdomain, iteratorDone) {
|
||||
const cert = safe.fs.readFileSync(`${CERTS_DIR}/${subdomain.subdomain}.${subdomain.domain}.user.cert`, 'utf8');
|
||||
const key = safe.fs.readFileSync(`${CERTS_DIR}/${subdomain.subdomain}.${subdomain.domain}.user.key`, 'utf8');
|
||||
|
||||
if (!cert || !key) return iteratorDone();
|
||||
|
||||
const certificate = { cert, key };
|
||||
|
||||
db.runSql('UPDATE subdomains SET certificateJson=? WHERE domain=? AND subdomain=?', [ JSON.stringify(certificate), subdomain.domain, subdomain.subdomain ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.run(db, 'ALTER TABLE subdomains DROP COLUMN certificateJson')
|
||||
], callback);
|
||||
};
|
||||
@@ -1,52 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
child_process = require('child_process'),
|
||||
fs = require('fs'),
|
||||
path = require('path'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const OLD_CERTS_DIR = '/home/yellowtent/boxdata/certs';
|
||||
const NEW_CERTS_DIR = '/home/yellowtent/platformdata/nginx/cert';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
fs.readdir(OLD_CERTS_DIR, function (error, filenames) {
|
||||
if (error && error.code === 'ENOENT') return callback();
|
||||
if (error) return callback(error);
|
||||
|
||||
filenames = filenames.filter(f => f.endsWith('.key') && !f.endsWith('.host.key') && !f.endsWith('.user.key')); // ignore fallback and user keys
|
||||
|
||||
async.eachSeries(filenames, function (filename, iteratorCallback) {
|
||||
const privateKeyFile = filename;
|
||||
const privateKey = fs.readFileSync(path.join(OLD_CERTS_DIR, filename));
|
||||
const certificateFile = filename.replace(/\.key$/, '.cert');
|
||||
const certificate = safe.fs.readFileSync(path.join(OLD_CERTS_DIR, certificateFile));
|
||||
if (!certificate) {
|
||||
console.log(`${certificateFile} is missing. skipping migration`);
|
||||
return iteratorCallback();
|
||||
}
|
||||
const csrFile = filename.replace(/\.key$/, '.csr');
|
||||
const csr = safe.fs.readFileSync(path.join(OLD_CERTS_DIR, csrFile));
|
||||
if (!csr) {
|
||||
console.log(`${csrFile} is missing. skipping migration`);
|
||||
return iteratorCallback();
|
||||
}
|
||||
|
||||
async.series([
|
||||
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${privateKeyFile}`, privateKey),
|
||||
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${certificateFile}`, certificate),
|
||||
db.runSql.bind(db, 'INSERT INTO blobs (id, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value=VALUES(value)', `cert-${csrFile}`, csr),
|
||||
], iteratorCallback);
|
||||
}, function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
child_process.execSync(`cp ${OLD_CERTS_DIR}/* ${NEW_CERTS_DIR}`); // this way we copy the non-migrated ones like .host, .user etc as well
|
||||
fs.rmdir(OLD_CERTS_DIR, { recursive: true }, callback);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
|
||||
@@ -1,17 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE volumes ADD COLUMN mountType VARCHAR(16) DEFAULT "noop"'),
|
||||
db.runSql.bind(db, 'ALTER TABLE volumes ADD COLUMN mountOptionsJson TEXT')
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE volumes DROP COLUMN mountType'),
|
||||
db.runSql.bind(db, 'ALTER TABLE volumes DROP COLUMN mountOptionsJson')
|
||||
], callback);
|
||||
};
|
||||
@@ -1,21 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE backups ADD INDEX creationTime_index (creationTime)'),
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog ADD INDEX creationTime_index (creationTime)'),
|
||||
db.runSql.bind(db, 'ALTER TABLE notifications ADD INDEX creationTime_index (creationTime)'),
|
||||
db.runSql.bind(db, 'ALTER TABLE tasks ADD INDEX creationTime_index (creationTime)'),
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE backups DROP INDEX creationTime_index'),
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog DROP INDEX creationTime_index'),
|
||||
db.runSql.bind(db, 'ALTER TABLE notifications DROP INDEX creationTime_index'),
|
||||
db.runSql.bind(db, 'ALTER TABLE tasks DROP INDEX creationTime_index'),
|
||||
], callback);
|
||||
};
|
||||
@@ -1,33 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('ALTER TABLE users ADD INDEX creationTime_index (creationTime)', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.all('SELECT id, createdAt FROM users', function (error, results) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(results, function (r, iteratorDone) {
|
||||
const creationTime = new Date(r.createdAt);
|
||||
db.runSql('UPDATE users SET creationTime=? WHERE id=?', [ creationTime, r.id ], iteratorDone);
|
||||
}, function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('ALTER TABLE users DROP COLUMN createdAt', callback);
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN creationTime', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,27 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT value FROM settings WHERE name="backup_config"', function (error, results) {
|
||||
if (error || results.length === 0) return callback(error);
|
||||
|
||||
const backupConfig = JSON.parse(results[0].value);
|
||||
if (backupConfig.provider === 'sshfs' || backupConfig.provider === 'cifs' || backupConfig.provider === 'nfs' || backupConfig.externalDisk) {
|
||||
backupConfig.chown = backupConfig.provider === 'nfs' || backupConfig.provider === 'sshfs' || backupConfig.externalDisk;
|
||||
backupConfig.preserveAttributes = !!backupConfig.externalDisk;
|
||||
backupConfig.provider = 'mountpoint';
|
||||
if (backupConfig.externalDisk) {
|
||||
backupConfig.mountPoint = backupConfig.backupFolder;
|
||||
backupConfig.prefix = '';
|
||||
delete backupConfig.backupFolder;
|
||||
delete backupConfig.externalDisk;
|
||||
}
|
||||
db.runSql('UPDATE settings SET value=? WHERE name="backup_config"', [JSON.stringify(backupConfig)], callback);
|
||||
} else {
|
||||
callback();
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,13 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE notifications DROP COLUMN userId', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('DELETE FROM notifications', callback); // just clear notifications table
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE notifications ADD COLUMN userId VARCHAR(128) NOT NULL', callback);
|
||||
};
|
||||
@@ -1,26 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
safe = require('safetydance');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM volumes', function (error, volumes) {
|
||||
if (error || volumes.length === 0) return callback(error);
|
||||
|
||||
async.eachSeries(volumes, function (volume, iteratorDone) {
|
||||
if (volume.mountType !== 'noop') return iteratorDone();
|
||||
|
||||
let mountType;
|
||||
if (safe.child_process.execSync(`mountpoint -q -- ${volume.hostPath}`)) {
|
||||
mountType = 'mountpoint';
|
||||
} else {
|
||||
mountType = 'filesystem';
|
||||
}
|
||||
db.runSql('UPDATE volumes SET mountType=? WHERE id=?', [ mountType, volume.id ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,13 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('UPDATE users SET avatar="gravatar" WHERE avatar IS NULL', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('ALTER TABLE users MODIFY avatar MEDIUMBLOB NOT NULL', callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users MODIFY avatar MEDIUMBLOB', callback);
|
||||
};
|
||||
@@ -1,30 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
safe = require('safetydance');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * from domains', [], function (error, results) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(results, function (r, iteratorDone) {
|
||||
if (!r.wellKnownJson) return iteratorDone();
|
||||
|
||||
const wellKnown = safe.JSON.parse(r.wellKnownJson);
|
||||
if (!wellKnown || !wellKnown['matrix/server']) return iteratorDone();
|
||||
const matrixHostname = JSON.parse(wellKnown['matrix/server'])['m.server'];
|
||||
|
||||
wellKnown['matrix/client'] = JSON.stringify({
|
||||
'm.homeserver': {
|
||||
'base_url': 'https://' + matrixHostname
|
||||
}
|
||||
});
|
||||
|
||||
db.runSql('UPDATE domains SET wellKnownJson=? WHERE domain=?', [ JSON.stringify(wellKnown), r.domain ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
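The migration above derives a matrix/client well-known document from the existing matrix/server entry. For a hypothetical host, the resulting JSON served under /.well-known/matrix/client would look like this:

// Illustrative output only; 'matrix.example.com' is a made-up hostname.
const matrixHostname = 'matrix.example.com';
const clientDoc = JSON.stringify({ 'm.homeserver': { 'base_url': 'https://' + matrixHostname } });
console.log(clientDoc); // {"m.homeserver":{"base_url":"https://matrix.example.com"}}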
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE appAddonConfigs MODIFY value TEXT NOT NULL', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE appAddonConfigs MODIFY value VARCHAR(512)', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users MODIFY loginLocationsJson MEDIUMTEXT', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users MODIFY loginLocationsJson TEXT', [], function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN operatorsJson TEXT', callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN operatorsJson', callback);
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN crontab TEXT', callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN crontab', callback);
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN inviteToken VARCHAR(128) DEFAULT ""', callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN inviteToken', callback);
|
||||
};
|
||||
@@ -1,19 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
var async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN enableInbox BOOLEAN DEFAULT 0'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN inboxName VARCHAR(128)'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps ADD COLUMN inboxDomain VARCHAR(128)'),
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN enableInbox'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN inboxName'),
|
||||
db.runSql.bind(db, 'ALTER TABLE apps DROP COLUMN inboxDomain'),
|
||||
], callback);
|
||||
};
|
||||
@@ -1,35 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
reverseProxy = require('../src/reverseproxy.js'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const NGINX_CERT_DIR = '/home/yellowtent/platformdata/nginx/cert';
|
||||
|
||||
// ensure fallbackCertificate of domains are present in database and the cert dir. it seems a bad migration lost them.
|
||||
// https://forum.cloudron.io/topic/5683/data-argument-must-be-of-type-received-null-error-during-restore-process
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM domains', [ ], function (error, domains) {
|
||||
if (error) return callback(error);
|
||||
|
||||
// this code is br0ken since async 3.x since async functions won't get iteratorDone anymore
|
||||
// no point fixing this migration though since it won't run again in old cloudrons. and in new cloudron domains will be empty
|
||||
async.eachSeries(domains, async function (domain, iteratorDone) {
|
||||
let fallbackCertificate = safe.JSON.parse(domain.fallbackCertificateJson);
|
||||
if (!fallbackCertificate || !fallbackCertificate.cert || !fallbackCertificate.key) {
|
||||
let error;
|
||||
[error, fallbackCertificate] = await safe(reverseProxy.generateFallbackCertificate(domain.domain));
|
||||
if (error) return iteratorDone(error);
|
||||
}
|
||||
|
||||
if (!safe.fs.writeFileSync(`${NGINX_CERT_DIR}/${domain.domain}.host.cert`, fallbackCertificate.cert, 'utf8')) return iteratorDone(safe.error);
|
||||
if (!safe.fs.writeFileSync(`${NGINX_CERT_DIR}/${domain.domain}.host.key`, fallbackCertificate.key, 'utf8')) return iteratorDone(safe.error);
|
||||
|
||||
db.runSql('UPDATE domains SET fallbackCertificateJson=? WHERE domain=?', [ JSON.stringify(fallbackCertificate), domain.domain ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mailboxes ADD COLUMN enablePop3 BOOLEAN DEFAULT 0', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mailboxes DROP COLUMN enablePop3', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,44 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
fs = require('fs'),
|
||||
path = require('path'),
|
||||
safe = require('safetydance');
|
||||
|
||||
const MAIL_DATA_DIR = '/home/yellowtent/boxdata/mail';
|
||||
const DKIM_DIR = `${MAIL_DATA_DIR}/dkim`;
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE mail ADD COLUMN dkimKeyJson MEDIUMTEXT', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
fs.readdir(DKIM_DIR, function (error, filenames) {
|
||||
if (error && error.code === 'ENOENT') return callback();
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(filenames, function (filename, iteratorCallback) {
|
||||
const domain = filename;
|
||||
const publicKey = safe.fs.readFileSync(path.join(DKIM_DIR, domain, 'public'), 'utf8');
|
||||
const privateKey = safe.fs.readFileSync(path.join(DKIM_DIR, domain, 'private'), 'utf8');
|
||||
if (!publicKey || !privateKey) return iteratorCallback();
|
||||
|
||||
const dkimKey = {
|
||||
publicKey,
|
||||
privateKey
|
||||
};
|
||||
|
||||
db.runSql('UPDATE mail SET dkimKeyJson=? WHERE domain=?', [ JSON.stringify(dkimKey), domain ], iteratorCallback);
|
||||
}, function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
fs.rmdir(DKIM_DIR, { recursive: true }, callback);
|
||||
});
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.run(db, 'ALTER TABLE mail DROP COLUMN dkimKeyJson')
|
||||
], callback);
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('DELETE FROM blobs WHERE id=?', [ 'dhparams' ], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,17 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog CHANGE source sourceJson TEXT', []),
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog CHANGE data dataJson TEXT', []),
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog CHANGE sourceJson source TEXT', []),
|
||||
db.runSql.bind(db, 'ALTER TABLE eventlog CHANGE dataJson data TEXT', []),
|
||||
], callback);
|
||||
};
|
||||
@@ -1,17 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('SELECT value FROM settings WHERE name=?', [ 'sysinfo_config' ], function (error, result) {
|
||||
if (error || result.length === 0) return callback(error);
|
||||
const sysinfoConfig = JSON.parse(result[0].value);
|
||||
if (sysinfoConfig.provider !== 'fixed' || !sysinfoConfig.ip) return callback();
|
||||
sysinfoConfig.ipv4 = sysinfoConfig.ip;
|
||||
delete sysinfoConfig.ip;
|
||||
|
||||
db.runSql('REPLACE INTO settings (name, value) VALUES(?, ?)', [ 'sysinfo_config', JSON.stringify(sysinfoConfig) ], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'directory_config', 'profile_config' ], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'profile_config', 'directory_config' ], callback);
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE subdomains ADD COLUMN environmentVariable VARCHAR(128)', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE subdomains DROP COLUMN environmentVariable', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,19 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const safe = require('safetydance');
|
||||
|
||||
const PROXY_AUTH_TOKEN_SECRET_FILE = '/home/yellowtent/platformdata/proxy-auth-token-secret';
|
||||
|
||||
exports.up = function (db, callback) {
|
||||
const token = safe.fs.readFileSync(PROXY_AUTH_TOKEN_SECRET_FILE);
|
||||
if (!token) return callback();
|
||||
db.runSql('INSERT INTO blobs (id, value) VALUES (?, ?)', [ 'proxy_auth_token_secret', token ], function (error) {
|
||||
if (error) return callback(error);
|
||||
safe.fs.unlinkSync(PROXY_AUTH_TOKEN_SECRET_FILE);
|
||||
callback();
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,12 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('RENAME TABLE subdomains TO locations', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,27 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
mail = require('../src/mail.js'),
|
||||
safe = require('safetydance'),
|
||||
util = require('util');
|
||||
|
||||
// it seems some mail domains do not have dkimKey in the database for some reason because of some previous bad migration
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM mail', [ ], function (error, mailDomains) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(mailDomains, function (mailDomain, iteratorDone) {
|
||||
let dkimKey = safe.JSON.parse(mailDomain.dkimKeyJson);
|
||||
if (dkimKey && dkimKey.publicKey && dkimKey.privateKey) return iteratorDone();
|
||||
console.log(`${mailDomain.domain} has no dkim key in the database. generating a new one`);
|
||||
util.callbackify(mail.generateDkimKey)(function (error, dkimKey) {
|
||||
if (error) return iteratorDone(error);
|
||||
db.runSql('UPDATE mail SET dkimKeyJson=? WHERE domain=?', [ JSON.stringify(dkimKey), mailDomain.domain ], iteratorDone);
|
||||
});
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
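The migration above bridges the promise-returning generateDkimKey into callback code via util.callbackify. A minimal sketch with a stand-in generator (not the real mail module):

// Sketch: util.callbackify (Node core) turns an async function into an (error, result) callback API.
const util = require('util');

async function generateKey() {             // stand-in, not the real mail.generateDkimKey
    return { publicKey: 'PUB', privateKey: 'PRIV' };
}

util.callbackify(generateKey)(function (error, dkimKey) {
    if (error) return console.error(error);
    console.log('generated key', dkimKey.publicKey);
});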
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('DELETE FROM settings WHERE name=?', [ 'license_key' ], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('UPDATE settings SET name=? WHERE name=?', [ 'appstore_api_token', 'cloudron_token' ], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,37 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const superagent = require('superagent');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT value FROM settings WHERE name="api_server_origin"', function (error, results) {
|
||||
if (error || results.length === 0) return callback(error);
|
||||
const apiServerOrigin = results[0].value;
|
||||
|
||||
db.all('SELECT value FROM settings WHERE name="appstore_api_token"', function (error, results) {
|
||||
if (error || results.length === 0) return callback(error);
|
||||
const apiToken = results[0].value;
|
||||
|
||||
console.log(`Getting appstore web token from ${apiServerOrigin}`);
|
||||
|
||||
superagent.post(`${apiServerOrigin}/api/v1/user_token`)
|
||||
.send({})
|
||||
.query({ accessToken: apiToken })
|
||||
.timeout(30 * 1000).end(function (error, response) {
|
||||
if (error && !error.response) {
|
||||
console.log('Network error getting web token', error);
|
||||
return callback();
|
||||
}
|
||||
if (response.statusCode !== 201 || !response.body.accessToken) {
|
||||
console.log(`Bad status getting web token: ${response.status} ${response.text}`);
|
||||
return callback();
|
||||
}
|
||||
|
||||
db.runSql('INSERT settings (name, value) VALUES(?, ?)', [ 'appstore_web_token', response.body.accessToken ], callback);
|
||||
});
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,16 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE backups ADD COLUMN label VARCHAR(128) DEFAULT ""', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE backups DROP COLUMN label', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,51 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async'),
|
||||
hat = require('../src/hat.js');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * from backups', function (error, allBackups) {
|
||||
if (error) return callback(error);
|
||||
|
||||
console.log(`Fixing up ${allBackups.length} backup entries`);
|
||||
const idMap = {};
|
||||
allBackups.forEach(b => {
|
||||
b.remotePath = b.id;
|
||||
b.id = `${b.type}_${b.identifier}_v${b.packageVersion}_${hat(256)}`; // id is used by the UI to derive dependent packages. making this a UUID will require a lot of db querying
|
||||
idMap[b.remotePath] = b.id;
|
||||
});
|
||||
|
||||
db.runSql('ALTER TABLE backups ADD COLUMN remotePath VARCHAR(256)', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('ALTER TABLE backups CHANGE COLUMN dependsOn dependsOnJson TEXT', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(allBackups, function (backup, iteratorDone) {
|
||||
const dependsOnPaths = backup.dependsOn ? backup.dependsOn.split(',') : []; // previously, it was paths
|
||||
let dependsOnIds = [];
|
||||
dependsOnPaths.forEach(p => { if (idMap[p]) dependsOnIds.push(idMap[p]); });
|
||||
|
||||
db.runSql('UPDATE backups SET id = ?, remotePath = ?, dependsOnJson = ? WHERE id = ?', [ backup.id, backup.remotePath, JSON.stringify(dependsOnIds), backup.remotePath ], iteratorDone);
|
||||
}, function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
db.runSql('ALTER TABLE backups MODIFY COLUMN remotePath VARCHAR(256) NOT NULL UNIQUE', callback);
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE backups DROP COLUMN remotePath', function (error) {
|
||||
if (error) console.error(error);
|
||||
|
||||
db.runSql('ALTER TABLE backups RENAME COLUMN dependsOnJson to dependsOn', function (error) {
|
||||
if (error) return callback(error);
|
||||
|
||||
callback(error);
|
||||
});
|
||||
});
|
||||
};
|
||||
|
||||
@@ -1,22 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM apps', function (error, apps) {
|
||||
if (error) return callback(error);
|
||||
|
||||
async.eachSeries(apps, function (app, iteratorDone) {
|
||||
const manifest = JSON.parse(app.manifestJson);
|
||||
const hasSso = !!manifest.addons['proxyAuth'] || !!manifest.addons['ldap'];
|
||||
if (hasSso || !app.sso) return iteratorDone();
|
||||
|
||||
console.log(`Unsetting sso flag of ${app.id}`);
|
||||
db.runSql('UPDATE apps SET sso=? WHERE id=?', [ 0, app.id ], iteratorDone);
|
||||
}, callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,20 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.all('SELECT * FROM settings WHERE name = ?', [ 'api_server_origin' ], function (error, result) {
|
||||
if (error || result.length === 0) return callback(error);
|
||||
|
||||
let consoleOrigin;
|
||||
switch (result[0].value) {
|
||||
case 'https://api.dev.cloudron.io': consoleOrigin = 'https://console.dev.cloudron.io'; break;
|
||||
case 'https://api.staging.cloudron.io': consoleOrigin = 'https://console.staging.cloudron.io'; break;
|
||||
default: consoleOrigin = 'https://console.cloudron.io'; break;
|
||||
}
|
||||
|
||||
db.runSql('REPLACE INTO settings (name, value) VALUES (?, ?)', [ 'console_server_origin', consoleOrigin ], callback);
|
||||
});
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
callback();
|
||||
};
|
||||
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users ADD COLUMN backgroundImage MEDIUMBLOB', callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE users DROP COLUMN backgroundImage', callback);
|
||||
};
|
||||
@@ -1,12 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps ADD COLUMN mailboxDisplayName VARCHAR(128) DEFAULT "" NOT NULL', [], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
db.runSql('ALTER TABLE apps DROP COLUMN mailboxDisplayName', function (error) {
|
||||
if (error) console.error(error);
|
||||
callback(error);
|
||||
});
|
||||
};
|
||||
@@ -1,53 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const path = require('path'),
|
||||
safe = require('safetydance'),
|
||||
uuid = require('uuid');
|
||||
|
||||
function getMountPoint(dataDir) {
|
||||
const output = safe.child_process.execSync(`df --output=target "${dataDir}" | tail -1`, { encoding: 'utf8' });
|
||||
if (!output) return dataDir;
|
||||
const mountPoint = output.trim();
|
||||
if (mountPoint === '/') return dataDir;
|
||||
return mountPoint;
|
||||
}
|
||||
|
||||
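To make the mountpoint/prefix split below concrete, a short note with hypothetical paths (not values taken from this repo):

// If df reports /mnt/volume1 as the mount containing an app's dataDir:
//   getMountPoint('/mnt/volume1/surfer-data')                  -> '/mnt/volume1'   (volume hostPath)
//   path.relative('/mnt/volume1', '/mnt/volume1/surfer-data')  -> 'surfer-data'    (storageVolumePrefix)
// If the containing mount is '/', getMountPoint returns the dataDir itself, so dataDir === mountPoint,
// the prefix becomes '' and the migration records a 'filesystem' volume rather than one rooted at '/'.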
exports.up = async function(db) {
|
||||
// use safe() here because this migration failed midway in 7.2.4
|
||||
await safe(db.runSql('ALTER TABLE apps ADD storageVolumeId VARCHAR(128), ADD FOREIGN KEY(storageVolumeId) REFERENCES volumes(id)'));
|
||||
await safe(db.runSql('ALTER TABLE apps ADD storageVolumePrefix VARCHAR(128)'));
|
||||
await safe(db.runSql('ALTER TABLE apps ADD CONSTRAINT apps_storageVolume UNIQUE (storageVolumeId, storageVolumePrefix)'));
|
||||
|
||||
const apps = await db.runSql('SELECT * FROM apps WHERE dataDir IS NOT NULL');
|
||||
|
||||
for (const app of apps) {
|
||||
const allVolumes = await db.runSql('SELECT * FROM volumes'); // re-queried each iteration so volumes created below are visible
|
||||
|
||||
console.log(`data-dir (${app.id}): migrating data dir ${app.dataDir}`);
|
||||
|
||||
const mountPoint = getMountPoint(app.dataDir);
|
||||
const prefix = path.relative(mountPoint, app.dataDir);
|
||||
|
||||
console.log(`data-dir (${app.id}): migrating to mountpoint ${mountPoint} and prefix ${prefix}`);
|
||||
|
||||
const volume = allVolumes.find(v => v.hostPath === mountPoint);
|
||||
if (volume) {
|
||||
console.log(`data-dir (${app.id}): using existing volume ${volume.id}`);
|
||||
await db.runSql('UPDATE apps SET storageVolumeId=?, storageVolumePrefix=? WHERE id=?', [ volume.id, prefix, app.id ]);
|
||||
continue;
|
||||
}
|
||||
|
||||
const id = uuid.v4().replace(/-/g, ''); // to make systemd mount file names more readable
|
||||
const name = `appdata-${id}`;
|
||||
const type = app.dataDir === mountPoint ? 'filesystem' : 'mountpoint';
|
||||
|
||||
console.log(`data-dir (${app.id}): creating new volume ${id}`);
|
||||
await db.runSql('INSERT INTO volumes (id, name, hostPath, mountType, mountOptionsJson) VALUES (?, ?, ?, ?, ?)', [ id, name, mountPoint, type, JSON.stringify({}) ]);
|
||||
await db.runSql('UPDATE apps SET storageVolumeId=?, storageVolumePrefix=? WHERE id=?', [ id, prefix, app.id ]);
|
||||
}
|
||||
|
||||
await db.runSql('ALTER TABLE apps DROP COLUMN dataDir');
|
||||
};
|
||||
|
||||
exports.down = async function(/*db*/) {
|
||||
};
|
||||
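To sanity check the end state of this migration, a minimal sketch (with hypothetical rows) of how an app's old dataDir is recovered from the new volume row plus prefix:

'use strict';

const path = require('path');

// after the migration, the effective data directory is the volume hostPath joined with the
// app's storageVolumePrefix; an empty prefix means the hostPath itself
function effectiveDataDir(volume, app) {
    return app.storageVolumePrefix ? path.join(volume.hostPath, app.storageVolumePrefix) : volume.hostPath;
}

const volume = { id: 'abc123', hostPath: '/mnt/volume1', mountType: 'mountpoint' };
const app = { id: 'app1', storageVolumeId: 'abc123', storageVolumePrefix: 'surfer-data' };

console.log(effectiveDataDir(volume, app)); // /mnt/volume1/surfer-data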
@@ -1,9 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = async function (db) {
|
||||
await db.runSql('ALTER TABLE apps ADD COLUMN upstreamUri VARCHAR(256) DEFAULT ""');
|
||||
};
|
||||
|
||||
exports.down = async function (db) {
|
||||
await db.runSql('ALTER TABLE apps DROP COLUMN upstreamUri');
|
||||
};
|
||||
@@ -1,15 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = async function(db) {
|
||||
const result = await db.runSql('SELECT * FROM settings WHERE name=?', [ 'backup_config' ]);
|
||||
if (!result.length) return;
|
||||
|
||||
const backupConfig = JSON.parse(result[0].value);
|
||||
|
||||
if (backupConfig.encryption && backupConfig.format === 'rsync') backupConfig.encryptedFilenames = true;
|
||||
|
||||
await db.runSql('UPDATE settings SET value=? WHERE name=?', [ JSON.stringify(backupConfig), 'backup_config', ]);
|
||||
};
|
||||
|
||||
exports.down = async function(/* db */) {
|
||||
};
|
||||
@@ -1,22 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = async function (db) {
|
||||
var cmd = 'CREATE TABLE applinks(' +
|
||||
'id VARCHAR(128) NOT NULL UNIQUE,' +
|
||||
'accessRestrictionJson TEXT,' +
|
||||
'creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
|
||||
'updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
|
||||
'ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,' +
|
||||
'label VARCHAR(128),' +
|
||||
'tagsJson VARCHAR(2048),' +
|
||||
'icon MEDIUMBLOB,' +
|
||||
'upstreamUri VARCHAR(256) DEFAULT "",' +
|
||||
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
|
||||
|
||||
await db.runSql(cmd);
|
||||
};
|
||||
|
||||
exports.down = async function (db) {
|
||||
await db.runSql('DROP TABLE applinks');
|
||||
};
|
||||
|
||||
@@ -1,17 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const async = require('async');
|
||||
|
||||
exports.up = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN storageQuota BIGINT DEFAULT 0'),
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN messagesQuota BIGINT DEFAULT 0'),
|
||||
], callback);
|
||||
};
|
||||
|
||||
exports.down = function(db, callback) {
|
||||
async.series([
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP COLUMN storageQuota'),
|
||||
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP COLUMN messagesQuota')
|
||||
], callback);
|
||||
};
|
||||
@@ -1,18 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
const safe = require('safetydance');
|
||||
|
||||
exports.up = async function (db) {
|
||||
const mailDomains = await db.runSql('SELECT * FROM mail', []);
|
||||
|
||||
for (const mailDomain of mailDomains) {
|
||||
let catchAll = safe.JSON.parse(mailDomain.catchAllJson) || [];
|
||||
if (catchAll.length === 0) continue;
|
||||
|
||||
catchAll = catchAll.map(a => `${a}@${mailDomain.domain}`);
|
||||
await db.runSql('UPDATE mail SET catchAllJson = ? WHERE domain = ?', [ JSON.stringify(catchAll), mailDomain.domain ]);
|
||||
}
|
||||
};
|
||||
|
||||
exports.down = async function( /* db */) {
|
||||
};
|
||||
@@ -1,13 +0,0 @@
|
||||
'use strict';
|
||||
|
||||
exports.up = async function (db) {
|
||||
await db.runSql('ALTER TABLE tokens DROP COLUMN scope');
|
||||
await db.runSql('ALTER TABLE tokens ADD COLUMN scopeJson TEXT');
|
||||
|
||||
await db.runSql('UPDATE tokens SET scopeJson = ?', [ JSON.stringify({'*':'rw'})]);
|
||||
};
|
||||
|
||||
exports.down = async function (db) {
|
||||
await db.runSql('ALTER TABLE tokens ADD COLUMN scope VARCHAR(512) NOT NULL DEFAULT ""');
|
||||
await db.runSql('ALTER TABLE tokens DROP COLUMN scopeJson');
|
||||
};
|
||||
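A minimal sketch of how the scopeJson object written above ({ '*': 'rw' }, i.e. read/write on everything) could be evaluated; the helper is illustrative and not the repo's actual API:

'use strict';

// scopeJson maps a resource name (or '*' for all resources) to 'r' or 'rw'
function hasScope(scopeJson, resource, access) {
    const scopes = JSON.parse(scopeJson);
    const granted = scopes[resource] || scopes['*'] || '';
    return granted.includes(access);
}

console.log(hasScope(JSON.stringify({ '*': 'rw' }), 'apps', 'w'));   // true
console.log(hasScope(JSON.stringify({ apps: 'r' }), 'apps', 'w'));   // false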
+19
-95
@@ -6,7 +6,7 @@
|
||||
#### Strict mode is enabled
|
||||
#### VARCHAR - stored as part of table row (use for strings)
|
||||
#### TEXT - stored outside the table row (use for strings)
|
||||
#### BLOB (64KB), MEDIUMBLOB (16MB), LONGBLOB (4GB) - stored outside the table row (use for binary data)
|
||||
#### BLOB - stored outside the table row (use for binary data)
|
||||
#### https://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
|
||||
#### Times are stored in the database in UTC, with second precision
|
||||
|
||||
@@ -20,23 +20,18 @@ CREATE TABLE IF NOT EXISTS users(
|
||||
email VARCHAR(254) NOT NULL UNIQUE,
|
||||
password VARCHAR(1024) NOT NULL,
|
||||
salt VARCHAR(512) NOT NULL,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
|
||||
createdAt VARCHAR(512) NOT NULL,
|
||||
modifiedAt VARCHAR(512) NOT NULL,
|
||||
displayName VARCHAR(512) DEFAULT "",
|
||||
fallbackEmail VARCHAR(512) DEFAULT "",
|
||||
twoFactorAuthenticationSecret VARCHAR(128) DEFAULT "",
|
||||
twoFactorAuthenticationEnabled BOOLEAN DEFAULT false,
|
||||
source VARCHAR(128) DEFAULT "",
|
||||
role VARCHAR(32),
|
||||
inviteToken VARCHAR(128) DEFAULT "",
|
||||
resetToken VARCHAR(128) DEFAULT "",
|
||||
resetTokenCreationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
active BOOLEAN DEFAULT 1,
|
||||
avatar MEDIUMBLOB NOT NULL,
|
||||
backgroundImage MEDIUMBLOB,
|
||||
loginLocationsJson MEDIUMTEXT, // { locations: [{ ip, userAgent, city, country, ts }] }
|
||||
|
||||
INDEX creationTime_index (creationTime),
|
||||
PRIMARY KEY(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS userGroups(
|
||||
@@ -49,8 +44,7 @@ CREATE TABLE IF NOT EXISTS groupMembers(
|
||||
groupId VARCHAR(128) NOT NULL,
|
||||
userId VARCHAR(128) NOT NULL,
|
||||
FOREIGN KEY(groupId) REFERENCES userGroups(id),
|
||||
FOREIGN KEY(userId) REFERENCES users(id),
|
||||
UNIQUE (groupId, userId));
|
||||
FOREIGN KEY(userId) REFERENCES users(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS tokens(
|
||||
id VARCHAR(128) NOT NULL UNIQUE,
|
||||
@@ -58,9 +52,8 @@ CREATE TABLE IF NOT EXISTS tokens(
|
||||
accessToken VARCHAR(128) NOT NULL UNIQUE,
|
||||
identifier VARCHAR(128) NOT NULL, // resourceId: app id or user id
|
||||
clientId VARCHAR(128),
|
||||
scopeJson TEXT,
|
||||
scope VARCHAR(512) NOT NULL,
|
||||
expires BIGINT NOT NULL, // FIXME: make this a timestamp
|
||||
lastUsedTime TIMESTAMP NULL,
|
||||
PRIMARY KEY(accessToken));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS apps(
|
||||
@@ -72,6 +65,9 @@ CREATE TABLE IF NOT EXISTS apps(
|
||||
healthTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app last responded
|
||||
containerId VARCHAR(128),
|
||||
manifestJson TEXT,
|
||||
httpPort INTEGER, // this is the nginx proxy port and not manifest.httpPort
|
||||
location VARCHAR(128) NOT NULL,
|
||||
domain VARCHAR(128) NOT NULL,
|
||||
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app was installed
|
||||
updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the last app update was done
|
||||
@@ -84,30 +80,17 @@ CREATE TABLE IF NOT EXISTS apps(
|
||||
reverseProxyConfigJson TEXT, // { robotsTxt, csp }
|
||||
enableBackup BOOLEAN DEFAULT 1, // misnomer: controls automatic daily backups
|
||||
enableAutomaticUpdate BOOLEAN DEFAULT 1,
|
||||
enableMailbox BOOLEAN DEFAULT 1, // whether sendmail addon is enabled
|
||||
mailboxName VARCHAR(128), // mailbox of this app
|
||||
mailboxDomain VARCHAR(128), // mailbox domain of this app
|
||||
mailboxDisplayName VARCHAR(128), // mailbox display name
|
||||
enableInbox BOOLEAN DEFAULT 0, // whether recvmail addon is enabled
|
||||
inboxName VARCHAR(128), // mailbox of this app
|
||||
inboxDomain VARCHAR(128), // mailbox domain of this app
|
||||
mailboxDomain VARCHAR(128), // mailbox domain of this app
|
||||
label VARCHAR(128), // display name
|
||||
tagsJson VARCHAR(2048), // array of tags
|
||||
storageVolumeId VARCHAR(128),
|
||||
storageVolumePrefix VARCHAR(128),
|
||||
dataDir VARCHAR(256) UNIQUE,
|
||||
taskId INTEGER, // current task
|
||||
errorJson TEXT,
|
||||
servicesConfigJson TEXT, // app services configuration
|
||||
containerIp VARCHAR(16) UNIQUE, // nullable because ip allocation can fail; user can then 'repair'
|
||||
appStoreIcon MEDIUMBLOB,
|
||||
icon MEDIUMBLOB,
|
||||
crontab TEXT,
|
||||
upstreamUri VARCHAR(256) DEFAULT "",
|
||||
bindsJson TEXT, // bind mounts
|
||||
|
||||
FOREIGN KEY(mailboxDomain) REFERENCES domains(domain),
|
||||
FOREIGN KEY(taskId) REFERENCES tasks(id),
|
||||
FOREIGN KEY(storageVolumeId) REFERENCES volumes(id),
|
||||
UNIQUE (storageVolumeId, storageVolumePrefix),
|
||||
PRIMARY KEY(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS appPortBindings(
|
||||
@@ -121,14 +104,13 @@ CREATE TABLE IF NOT EXISTS appPortBindings(
|
||||
CREATE TABLE IF NOT EXISTS settings(
|
||||
name VARCHAR(128) NOT NULL UNIQUE,
|
||||
value TEXT,
|
||||
valueBlob MEDIUMBLOB,
|
||||
PRIMARY KEY(name));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS appAddonConfigs(
|
||||
appId VARCHAR(128) NOT NULL,
|
||||
addonId VARCHAR(32) NOT NULL,
|
||||
name VARCHAR(128) NOT NULL,
|
||||
value TEXT NOT NULL,
|
||||
value VARCHAR(512) NOT NULL,
|
||||
FOREIGN KEY(appId) REFERENCES apps(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS appEnvVars(
|
||||
@@ -139,30 +121,26 @@ CREATE TABLE IF NOT EXISTS appEnvVars(
|
||||
|
||||
CREATE TABLE IF NOT EXISTS backups(
|
||||
id VARCHAR(128) NOT NULL,
|
||||
remotePath VARCHAR(256) NOT NULL UNIQUE,
|
||||
label VARCHAR(128) DEFAULT "",
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
packageVersion VARCHAR(128) NOT NULL, /* app version or box version */
|
||||
encryptionVersion INTEGER, /* when null, unencrypted backup */
|
||||
type VARCHAR(16) NOT NULL, /* 'box' or 'app' */
|
||||
identifier VARCHAR(128) NOT NULL, /* 'box' or the app id */
|
||||
dependsOnJson TEXT, /* JSON array of backup ids this backup depends on */
|
||||
dependsOn TEXT, /* comma separated list of objects this backup depends on */
|
||||
state VARCHAR(16) NOT NULL,
|
||||
manifestJson TEXT, /* to validate if the app can be installed in this version of box */
|
||||
format VARCHAR(16) DEFAULT "tgz",
|
||||
preserveSecs INTEGER DEFAULT 0,
|
||||
|
||||
INDEX creationTime_index (creationTime),
|
||||
PRIMARY KEY (id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS eventlog(
|
||||
id VARCHAR(128) NOT NULL,
|
||||
action VARCHAR(128) NOT NULL,
|
||||
sourceJson TEXT, /* { userId, username, ip }. userId can be null for cron,sysadmin */
|
||||
dataJson TEXT, /* free-form JSON based on the action */
|
||||
source TEXT, /* { userId, username, ip }. userId can be null for cron,sysadmin */
|
||||
data TEXT, /* free-form JSON based on the action */
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
|
||||
INDEX creationTime_index (creationTime),
|
||||
PRIMARY KEY (id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS domains(
|
||||
@@ -171,9 +149,6 @@ CREATE TABLE IF NOT EXISTS domains(
|
||||
provider VARCHAR(16) NOT NULL,
|
||||
configJson TEXT, /* JSON containing the dns backend provider config */
|
||||
tlsConfigJson TEXT, /* JSON containing the tls provider config */
|
||||
wellKnownJson TEXT, /* JSON containing well known docs for this domain */
|
||||
|
||||
fallbackCertificateJson MEDIUMTEXT,
|
||||
|
||||
PRIMARY KEY (domain))
|
||||
|
||||
@@ -187,9 +162,7 @@ CREATE TABLE IF NOT EXISTS mail(
|
||||
mailFromValidation BOOLEAN DEFAULT 1,
|
||||
catchAllJson TEXT,
|
||||
relayJson TEXT,
|
||||
bannerJson TEXT,
|
||||
|
||||
dkimKeyJson MEDIUMTEXT,
|
||||
dkimSelector VARCHAR(128) NOT NULL DEFAULT "cloudron",
|
||||
|
||||
FOREIGN KEY(domain) REFERENCES domains(domain),
|
||||
@@ -208,30 +181,22 @@ CREATE TABLE IF NOT EXISTS mailboxes(
|
||||
name VARCHAR(128) NOT NULL,
|
||||
type VARCHAR(16) NOT NULL, /* 'mailbox', 'alias', 'list' */
|
||||
ownerId VARCHAR(128) NOT NULL, /* user id */
|
||||
ownerType VARCHAR(16) NOT NULL,
|
||||
aliasName VARCHAR(128), /* the target name when type is an alias */
|
||||
aliasDomain VARCHAR(128), /* the target domain */
|
||||
membersJson TEXT, /* members of a group. fully qualified */
|
||||
membersOnly BOOLEAN DEFAULT false,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
domain VARCHAR(128),
|
||||
active BOOLEAN DEFAULT 1,
|
||||
enablePop3 BOOLEAN DEFAULT 0,
|
||||
storageQuota BIGINT DEFAULT 0,
|
||||
messagesQuota BIGINT DEFAULT 0,
|
||||
|
||||
FOREIGN KEY(domain) REFERENCES mail(domain),
|
||||
FOREIGN KEY(aliasDomain) REFERENCES mail(domain),
|
||||
UNIQUE (name, domain));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS locations(
|
||||
CREATE TABLE IF NOT EXISTS subdomains(
|
||||
appId VARCHAR(128) NOT NULL,
|
||||
domain VARCHAR(128) NOT NULL,
|
||||
subdomain VARCHAR(128) NOT NULL,
|
||||
type VARCHAR(128) NOT NULL, /* primary, secondary, redirect, alias */
|
||||
environmentVariable VARCHAR(128), /* only set for secondary */
|
||||
|
||||
certificateJson MEDIUMTEXT,
|
||||
type VARCHAR(128) NOT NULL, /* primary or redirect */
|
||||
|
||||
FOREIGN KEY(domain) REFERENCES domains(domain),
|
||||
FOREIGN KEY(appId) REFERENCES apps(id),
|
||||
@@ -240,27 +205,23 @@ CREATE TABLE IF NOT EXISTS locations(
|
||||
CREATE TABLE IF NOT EXISTS tasks(
|
||||
id int NOT NULL AUTO_INCREMENT,
|
||||
type VARCHAR(32) NOT NULL,
|
||||
argsJson TEXT,
|
||||
percent INTEGER DEFAULT 0,
|
||||
message TEXT,
|
||||
errorJson TEXT,
|
||||
resultJson TEXT,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
|
||||
|
||||
INDEX creationTime_index (creationTime),
|
||||
PRIMARY KEY (id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS notifications(
|
||||
id int NOT NULL AUTO_INCREMENT,
|
||||
userId VARCHAR(128) NOT NULL,
|
||||
eventId VARCHAR(128), // reference to eventlog. can be null
|
||||
title VARCHAR(512) NOT NULL,
|
||||
message TEXT,
|
||||
acknowledged BOOLEAN DEFAULT false,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
|
||||
INDEX creationTime_index (creationTime),
|
||||
FOREIGN KEY(eventId) REFERENCES eventlog(id),
|
||||
UNIQUE KEY appPasswords_name_appId_identifier (name, userId, identifier),
|
||||
PRIMARY KEY (id)
|
||||
);
|
||||
|
||||
@@ -271,46 +232,9 @@ CREATE TABLE IF NOT EXISTS appPasswords(
|
||||
identifier VARCHAR(128) NOT NULL, // resourceId: app id or mail or webadmin
|
||||
hashedPassword VARCHAR(1024) NOT NULL,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
UNIQUE KEY appPasswords_name_appId_identifier (name, userId, identifier)
|
||||
FOREIGN KEY(userId) REFERENCES users(id),
|
||||
|
||||
PRIMARY KEY (id)
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS volumes(
|
||||
id VARCHAR(128) NOT NULL UNIQUE,
|
||||
name VARCHAR(256) NOT NULL UNIQUE,
|
||||
hostPath VARCHAR(1024) NOT NULL UNIQUE,
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
mountType VARCHAR(16) DEFAULT "noop",
|
||||
mountOptionsJson TEXT,
|
||||
PRIMARY KEY (id)
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS appMounts(
|
||||
appId VARCHAR(128) NOT NULL,
|
||||
volumeId VARCHAR(128) NOT NULL,
|
||||
readOnly BOOLEAN DEFAULT 1,
|
||||
UNIQUE KEY appMounts_appId_volumeId (appId, volumeId),
|
||||
FOREIGN KEY(appId) REFERENCES apps(id),
|
||||
FOREIGN KEY(volumeId) REFERENCES volumes(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS blobs(
|
||||
id VARCHAR(128) NOT NULL UNIQUE,
|
||||
value MEDIUMBLOB,
|
||||
PRIMARY KEY(id));
|
||||
|
||||
CREATE TABLE IF NOT EXISTS appLinks(
|
||||
id VARCHAR(128) NOT NULL UNIQUE,
|
||||
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
|
||||
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app was installed
|
||||
updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the last app update was done
|
||||
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, // when this db record was updated (useful for UI caching)
|
||||
label VARCHAR(128), // display name
|
||||
tagsJson VARCHAR(2048), // array of tags
|
||||
icon MEDIUMBLOB,
|
||||
upstreamUri VARCHAR(256) DEFAULT "",
|
||||
|
||||
PRIMARY KEY(id));
|
||||
|
||||
CHARACTER SET utf8 COLLATE utf8_bin;
|
||||
|
||||
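A recurring convention in this schema is that structured values live in columns suffixed Json (TEXT/MEDIUMTEXT holding serialized JSON). A minimal sketch of the defensive read pattern this implies; the row below is a hypothetical result object, not actual data:

'use strict';

// parse a *Json column defensively: NULL or corrupt values fall back to a default
function parseJsonColumn(value, fallback) {
    if (!value) return fallback;
    try {
        return JSON.parse(value);
    } catch (e) {
        return fallback;
    }
}

// hypothetical apps row as a driver might return it
const row = { id: 'app1', accessRestrictionJson: '{"users":["uid-1"],"groups":[]}', tagsJson: null };

console.log(parseJsonColumn(row.accessRestrictionJson, null)); // { users: [ 'uid-1' ], groups: [] }
console.log(parseJsonColumn(row.tagsJson, []));                // []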
Generated
+2473
-12050
File diff suppressed because it is too large
+53
-44
@@ -10,68 +10,77 @@
|
||||
"type": "git",
|
||||
"url": "https://git.cloudron.io/cloudron/box.git"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=4.0.0 <=4.1.1"
|
||||
},
|
||||
"dependencies": {
|
||||
"@google-cloud/dns": "^2.2.4",
|
||||
"@google-cloud/storage": "^5.19.2",
|
||||
"@google-cloud/dns": "^1.1.0",
|
||||
"@google-cloud/storage": "^2.5.0",
|
||||
"@sindresorhus/df": "git+https://github.com/cloudron-io/df.git#type",
|
||||
"async": "^3.2.3",
|
||||
"aws-sdk": "^2.1115.0",
|
||||
"basic-auth": "^2.0.1",
|
||||
"body-parser": "^1.20.0",
|
||||
"cloudron-manifestformat": "^5.18.0",
|
||||
"async": "^2.6.3",
|
||||
"aws-sdk": "^2.685.0",
|
||||
"body-parser": "^1.19.0",
|
||||
"cloudron-manifestformat": "^5.4.0",
|
||||
"connect": "^3.7.0",
|
||||
"connect-lastmile": "^2.1.1",
|
||||
"connect-lastmile": "^2.0.0",
|
||||
"connect-timeout": "^1.9.0",
|
||||
"cookie-parser": "^1.4.6",
|
||||
"cookie-session": "^2.0.0",
|
||||
"cookie-session": "^1.4.0",
|
||||
"cron": "^1.8.2",
|
||||
"db-migrate": "^0.11.13",
|
||||
"db-migrate-mysql": "^2.2.0",
|
||||
"debug": "^4.3.4",
|
||||
"dockerode": "^3.3.1",
|
||||
"ejs": "^3.1.6",
|
||||
"ejs-cli": "^2.2.3",
|
||||
"express": "^4.17.3",
|
||||
"ipaddr.js": "^2.0.1",
|
||||
"js-yaml": "^4.1.0",
|
||||
"jsdom": "^20.0.0",
|
||||
"json": "^11.0.0",
|
||||
"jsonwebtoken": "^8.5.1",
|
||||
"ldapjs": "^2.3.2",
|
||||
"lodash": "^4.17.21",
|
||||
"moment": "^2.29.2",
|
||||
"moment-timezone": "^0.5.34",
|
||||
"db-migrate": "^0.11.11",
|
||||
"db-migrate-mysql": "^1.1.10",
|
||||
"debug": "^4.1.1",
|
||||
"dockerode": "^2.5.8",
|
||||
"ejs": "^2.6.1",
|
||||
"ejs-cli": "^2.2.0",
|
||||
"express": "^4.17.1",
|
||||
"js-yaml": "^3.14.0",
|
||||
"json": "^9.0.6",
|
||||
"ldapjs": "^1.0.2",
|
||||
"lodash": "^4.17.15",
|
||||
"lodash.chunk": "^4.2.0",
|
||||
"mime": "^2.4.6",
|
||||
"moment": "^2.26.0",
|
||||
"moment-timezone": "^0.5.31",
|
||||
"morgan": "^1.10.0",
|
||||
"multiparty": "^4.2.3",
|
||||
"multiparty": "^4.2.1",
|
||||
"mysql": "^2.18.1",
|
||||
"nodemailer": "^6.7.3",
|
||||
"nodemailer": "^6.4.6",
|
||||
"nodemailer-smtp-transport": "^2.7.4",
|
||||
"once": "^1.4.0",
|
||||
"parse-links": "^0.1.0",
|
||||
"pretty-bytes": "^5.3.0",
|
||||
"progress-stream": "^2.0.0",
|
||||
"qrcode": "^1.5.0",
|
||||
"readdirp": "^3.6.0",
|
||||
"safetydance": "^2.2.0",
|
||||
"semver": "^7.3.7",
|
||||
"proxy-middleware": "^0.15.0",
|
||||
"qrcode": "^1.4.4",
|
||||
"readdirp": "^3.4.0",
|
||||
"request": "^2.88.2",
|
||||
"rimraf": "^2.6.3",
|
||||
"s3-block-read-stream": "^0.5.0",
|
||||
"safetydance": "^1.1.1",
|
||||
"semver": "^6.1.1",
|
||||
"showdown": "^1.9.1",
|
||||
"speakeasy": "^2.0.0",
|
||||
"split": "^1.0.1",
|
||||
"superagent": "^7.1.1",
|
||||
"superagent": "^5.2.2",
|
||||
"supererror": "^0.7.2",
|
||||
"tar-fs": "github:cloudron-io/tar-fs#ignore_stat_error",
|
||||
"tar-stream": "^2.2.0",
|
||||
"tar-stream": "^2.1.2",
|
||||
"tldjs": "^2.3.1",
|
||||
"ua-parser-js": "^1.0.2",
|
||||
"underscore": "^1.13.2",
|
||||
"uuid": "^8.3.2",
|
||||
"validator": "^13.7.0",
|
||||
"ws": "^8.5.0",
|
||||
"underscore": "^1.10.2",
|
||||
"uuid": "^3.4.0",
|
||||
"validator": "^11.0.0",
|
||||
"ws": "^7.3.0",
|
||||
"xml2js": "^0.4.23"
|
||||
},
|
||||
"devDependencies": {
|
||||
"expect.js": "*",
|
||||
"hock": "^1.4.1",
|
||||
"js2xmlparser": "^4.0.2",
|
||||
"mocha": "^9.2.2",
|
||||
"js2xmlparser": "^4.0.1",
|
||||
"mocha": "^6.1.4",
|
||||
"mock-aws-s3": "git+https://github.com/cloudron-io/mock-aws-s3.git",
|
||||
"nock": "^13.2.4",
|
||||
"node-sass": "^7.0.1",
|
||||
"nyc": "^15.1.0"
|
||||
"nock": "^10.0.6",
|
||||
"node-sass": "^4.14.1",
|
||||
"recursive-readdir": "^2.2.2"
|
||||
},
|
||||
"scripts": {
|
||||
"test": "./runTests",
|
||||
|
||||
@@ -2,11 +2,11 @@
|
||||
|
||||
set -eu
|
||||
|
||||
readonly source_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
readonly SOURCE_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
readonly DATA_DIR="${HOME}/.cloudron_test"
|
||||
readonly DEFAULT_TESTS="./src/test/*-test.js ./src/routes/test/*-test.js"
|
||||
|
||||
! "${source_dir}/src/test/checkInstall" && exit 1
|
||||
! "${SOURCE_dir}/src/test/checkInstall" && exit 1
|
||||
|
||||
# cleanup old data dirs; some of the docker container data requires sudo to remove
|
||||
echo "=> Provide root password to purge any leftover data in ${DATA_DIR} and load apparmor profile:"
|
||||
@@ -22,30 +22,19 @@ fi
|
||||
mkdir -p ${DATA_DIR}
|
||||
cd ${DATA_DIR}
|
||||
mkdir -p appsdata
|
||||
mkdir -p boxdata/box boxdata/mail boxdata/certs boxdata/mail/dkim/localhost boxdata/mail/dkim/foobar.com
|
||||
mkdir -p platformdata/addons/mail/banner platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons platformdata/logrotate.d platformdata/backup platformdata/logs/tasks platformdata/sftp/ssh platformdata/firewall platformdata/update
|
||||
sudo mkdir -p /mnt/cloudron-test-music /media/cloudron-test-music # volume test
|
||||
|
||||
# translations
|
||||
mkdir -p box/dashboard/dist/translation
|
||||
cp -r ${source_dir}/../dashboard/dist/translation/* box/dashboard/dist/translation
|
||||
mkdir -p boxdata/profileicons boxdata/appicons boxdata/mail boxdata/certs boxdata/mail/dkim/localhost boxdata/mail/dkim/foobar.com
|
||||
mkdir -p platformdata/addons/mail platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons platformdata/logrotate.d platformdata/backup platformdata/logs/tasks
|
||||
|
||||
# put cert
|
||||
echo "=> Generating a localhost selfsigned cert"
|
||||
openssl req -x509 -newkey rsa:2048 -keyout platformdata/nginx/cert/host.key -out platformdata/nginx/cert/host.cert -days 3650 -subj '/CN=localhost' -nodes -config <(cat /etc/ssl/openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:*.localhost"))
|
||||
|
||||
# clear out any containers if FAST is unset
|
||||
if [[ -z ${FAST+x} ]]; then
|
||||
echo "=> Delete all docker containers first"
|
||||
docker ps -qa --filter "label=isCloudronManaged" | xargs --no-run-if-empty docker rm -f
|
||||
docker rm -f mysql-server
|
||||
echo "==> To skip this run with: FAST=1 ./runTests"
|
||||
else
|
||||
echo "==> WARNING!! Skipping docker container cleanup, the database might not be pristine!"
|
||||
fi
|
||||
# clear out any containers
|
||||
echo "=> Delete all docker containers first"
|
||||
docker ps -qa | xargs --no-run-if-empty docker rm -f
|
||||
|
||||
# create docker network (while the infra code does this, most tests skip infra setup)
|
||||
docker network create --subnet=172.18.0.0/16 --ip-range=172.18.0.0/20 --gateway 172.18.0.1 cloudron --ipv6 --subnet=fd00:c107:d509::/64 || true
|
||||
docker network create --subnet=172.18.0.0/16 cloudron || true
|
||||
|
||||
# create the same mysql server version to test with
|
||||
OUT=`docker inspect mysql-server` || true
|
||||
@@ -63,12 +52,6 @@ while ! mysqladmin ping -h"${MYSQL_IP}" --silent; do
|
||||
sleep 1
|
||||
done
|
||||
|
||||
echo "=> Ensure local base image"
|
||||
docker pull cloudron/base:3.0.0@sha256:455c70428723e3a823198c57472785437eb6eab082e79b3ff04ea584faf46e92
|
||||
|
||||
echo "=> Create iptables blocklist"
|
||||
sudo ipset create cloudron_blocklist hash:net || true
|
||||
|
||||
echo "=> Starting cloudron-syslog"
|
||||
cloudron-syslog --logdir "${DATA_DIR}/platformdata/logs/" &
|
||||
|
||||
@@ -76,18 +59,13 @@ echo "=> Ensure database"
|
||||
mysql -h"${MYSQL_IP}" -uroot -ppassword -e 'CREATE DATABASE IF NOT EXISTS box'
|
||||
|
||||
echo "=> Run database migrations"
|
||||
cd "${source_dir}"
|
||||
cd "${SOURCE_dir}"
|
||||
BOX_ENV=test DATABASE_URL=mysql://root:password@${MYSQL_IP}/box node_modules/.bin/db-migrate up
|
||||
|
||||
echo "=> Run tests with mocha"
|
||||
TESTS=${DEFAULT_TESTS}
|
||||
if [[ $# -gt 0 ]]; then
|
||||
TESTS="$*"
|
||||
fi
|
||||
|
||||
if [[ -z ${COVERAGE+x} ]]; then
|
||||
echo "=> Run tests with mocha"
|
||||
BOX_ENV=test ./node_modules/.bin/mocha --bail --no-timeouts --exit -R spec ${TESTS}
|
||||
else
|
||||
echo "=> Run tests with mocha and coverage"
|
||||
BOX_ENV=test ./node_modules/.bin/nyc --reporter=html ./node_modules/.bin/mocha --no-timeouts --exit -R spec ${TESTS}
|
||||
fi
|
||||
BOX_ENV=test ./node_modules/mocha/bin/_mocha --bail --no-timeouts --exit -R spec ${TESTS}
|
||||
|
||||
+48
-119
@@ -2,16 +2,10 @@
|
||||
|
||||
set -eu -o pipefail
|
||||
|
||||
function exitHandler() {
|
||||
rm -f /etc/update-motd.d/91-cloudron-install-in-progress
|
||||
}
|
||||
|
||||
trap exitHandler EXIT
|
||||
|
||||
# change this to a hash when we make an upgrade release
|
||||
readonly LOG_FILE="/var/log/cloudron-setup.log"
|
||||
readonly MINIMUM_DISK_SIZE_GB="18" # size of "/" required to fit the docker images; 18 is a safe bet given how differently a 20GB minimum is reported
|
||||
readonly MINIMUM_MEMORY="960" # this is mostly reported for 1GB main memory (DO 992, EC2 967, Linode 989, Serverdiscounter.com 974)
|
||||
readonly MINIMUM_MEMORY="974" # this is mostly reported for 1GB main memory (DO 992, EC2 990, Linode 989, Serverdiscounter.com 974)
|
||||
|
||||
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400"
|
||||
|
||||
@@ -26,8 +20,8 @@ readonly GREEN='\033[32m'
|
||||
readonly DONE='\033[m'
|
||||
|
||||
# verify the system has minimum requirements met
|
||||
if [[ "${rootfs_type}" != "ext4" && "${rootfs_type}" != "xfs" ]]; then
|
||||
echo "Error: Cloudron requires '/' to be ext4 or xfs" # see #364
|
||||
if [[ "${rootfs_type}" != "ext4" ]]; then
|
||||
echo "Error: Cloudron requires '/' to be ext4" # see #364
|
||||
exit 1
|
||||
fi
|
||||
|
||||
@@ -41,61 +35,38 @@ if [[ "${disk_size_gb}" -lt "${MINIMUM_DISK_SIZE_GB}" ]]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ "$(uname -m)" != "x86_64" ]]; then
|
||||
echo "Error: Cloudron only supports amd64/x86_64"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if cvirt=$(systemd-detect-virt --container); then
|
||||
echo "Error: Cloudron does not support ${cvirt}, only runs on bare metal or with full hardware virtualization"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# do not use is-active in case box service is down and user attempts to re-install
|
||||
if systemctl cat box.service >/dev/null 2>&1; then
|
||||
if systemctl -q is-active box; then
|
||||
echo "Error: Cloudron is already installed. To reinstall, start afresh"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
initBaseImage="true"
|
||||
provider="generic"
|
||||
requestedVersion=""
|
||||
installServerOrigin="https://api.cloudron.io"
|
||||
apiServerOrigin="https://api.cloudron.io"
|
||||
webServerOrigin="https://cloudron.io"
|
||||
consoleServerOrigin="https://console.cloudron.io"
|
||||
sourceTarballUrl=""
|
||||
rebootServer="true"
|
||||
setupToken="" # this is a OTP for securing an installation (https://forum.cloudron.io/topic/6389/add-password-for-initial-configuration)
|
||||
appstoreSetupToken=""
|
||||
redo="false"
|
||||
|
||||
args=$(getopt -o "" -l "help,provider:,version:,env:,skip-reboot,generate-setup-token,setup-token:,redo" -n "$0" -- "$@")
|
||||
args=$(getopt -o "" -l "help,skip-baseimage-init,provider:,version:,env:,skip-reboot" -n "$0" -- "$@")
|
||||
eval set -- "${args}"
|
||||
|
||||
while true; do
|
||||
case "$1" in
|
||||
--help) echo "See https://docs.cloudron.io/installation/ on how to install Cloudron"; exit 0;;
|
||||
--help) echo "See https://cloudron.io/documentation/installation/ on how to install Cloudron"; exit 0;;
|
||||
--provider) provider="$2"; shift 2;;
|
||||
--version) requestedVersion="$2"; shift 2;;
|
||||
--env)
|
||||
if [[ "$2" == "dev" ]]; then
|
||||
apiServerOrigin="https://api.dev.cloudron.io"
|
||||
webServerOrigin="https://dev.cloudron.io"
|
||||
consoleServerOrigin="https://console.dev.cloudron.io"
|
||||
installServerOrigin="https://api.dev.cloudron.io"
|
||||
elif [[ "$2" == "staging" ]]; then
|
||||
apiServerOrigin="https://api.staging.cloudron.io"
|
||||
webServerOrigin="https://staging.cloudron.io"
|
||||
consoleServerOrigin="https://console.staging.cloudron.io"
|
||||
installServerOrigin="https://api.staging.cloudron.io"
|
||||
elif [[ "$2" == "unstable" ]]; then
|
||||
installServerOrigin="https://api.dev.cloudron.io"
|
||||
fi
|
||||
shift 2;;
|
||||
--skip-baseimage-init) initBaseImage="false"; shift;;
|
||||
--skip-reboot) rebootServer="false"; shift;;
|
||||
--redo) redo="true"; shift;;
|
||||
--setup-token) appstoreSetupToken="$2"; shift 2;;
|
||||
--generate-setup-token) setupToken="$(openssl rand -hex 10)"; shift;;
|
||||
--) break;;
|
||||
*) echo "Unknown option $1"; exit 1;;
|
||||
esac
|
||||
@@ -109,38 +80,11 @@ fi
|
||||
|
||||
# Only --help works with mismatched ubuntu
|
||||
ubuntu_version=$(lsb_release -rs)
|
||||
if [[ "${ubuntu_version}" != "16.04" && "${ubuntu_version}" != "18.04" && "${ubuntu_version}" != "20.04" && "${ubuntu_version}" != "22.04" ]]; then
|
||||
echo "Cloudron requires Ubuntu 18.04, 20.04, 22.04" > /dev/stderr
|
||||
if [[ "${ubuntu_version}" != "16.04" && "${ubuntu_version}" != "18.04" ]]; then
|
||||
echo "Cloudron requires Ubuntu 16.04 or 18.04" > /dev/stderr
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if which nginx >/dev/null || which docker >/dev/null || which node > /dev/null; then
|
||||
if [[ "${redo}" == "false" ]]; then
|
||||
echo "Error: Some packages like nginx/docker/nodejs are already installed. Cloudron requires specific versions of these packages and will install them as part of it's installation. Please start with a fresh Ubuntu install and run this script again." > /dev/stderr
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# Install MOTD file for stack-script style installations. This is removed by the exit trap handler. Heredoc quoting prevents parameter expansion
|
||||
cat > /etc/update-motd.d/91-cloudron-install-in-progress <<'EOF'
|
||||
#!/bin/bash
|
||||
|
||||
printf "**********************************************************************\n\n"
|
||||
|
||||
printf "\t\t\tWELCOME TO CLOUDRON\n"
|
||||
printf "\t\t\t-------------------\n"
|
||||
|
||||
printf '\n\e[1;32m%-6s\e[m\n\n' "Cloudron is installing. Run 'tail -f /var/log/cloudron-setup.log' to view progress."
|
||||
|
||||
printf "Cloudron overview - https://docs.cloudron.io/ \n"
|
||||
printf "Cloudron setup - https://docs.cloudron.io/installation/#setup \n"
|
||||
|
||||
printf "\nFor help and more information, visit https://forum.cloudron.io\n\n"
|
||||
|
||||
printf "**********************************************************************\n"
|
||||
EOF
|
||||
chmod +x /etc/update-motd.d/91-cloudron-install-in-progress
|
||||
|
||||
# Can only write after we have confirmed script has root access
|
||||
echo "Running cloudron-setup with args : $@" > "${LOG_FILE}"
|
||||
|
||||
@@ -155,19 +99,33 @@ echo ""
|
||||
echo " Join us at https://forum.cloudron.io for any questions."
|
||||
echo ""
|
||||
|
||||
echo "=> Updating apt and installing script dependencies"
|
||||
if ! apt-get update &>> "${LOG_FILE}"; then
|
||||
echo "Could not update package repositories. See ${LOG_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
if [[ "${initBaseImage}" == "true" ]]; then
|
||||
echo "=> Installing software-properties-common"
|
||||
if ! apt-get install -y software-properties-common &>> "${LOG_FILE}"; then
|
||||
echo "Could not install software-properties-common (for add-apt-repository below). See ${LOG_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install --no-install-recommends curl python3 ubuntu-standard software-properties-common -y &>> "${LOG_FILE}"; then
|
||||
echo "Could not install setup dependencies (curl). See ${LOG_FILE}"
|
||||
exit 1
|
||||
echo "=> Ensure required apt sources"
|
||||
if ! add-apt-repository universe &>> "${LOG_FILE}"; then
|
||||
echo "Could not add required apt sources (for nginx-full). See ${LOG_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "=> Updating apt and installing script dependencies"
|
||||
if ! apt-get update &>> "${LOG_FILE}"; then
|
||||
echo "Could not update package repositories. See ${LOG_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install curl python3 ubuntu-standard -y &>> "${LOG_FILE}"; then
|
||||
echo "Could not install setup dependencies (curl). See ${LOG_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "=> Checking version"
|
||||
if ! releaseJson=$($curl -s "${installServerOrigin}/api/v1/releases?boxVersion=${requestedVersion}"); then
|
||||
if ! releaseJson=$($curl -s "${apiServerOrigin}/api/v1/releases?boxVersion=${requestedVersion}"); then
|
||||
echo "Failed to get release information"
|
||||
exit 1
|
||||
fi
|
||||
@@ -183,7 +141,7 @@ if ! sourceTarballUrl=$(echo "${releaseJson}" | python3 -c 'import json,sys;obj=
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "=> Downloading Cloudron version ${version} ..."
|
||||
echo "=> Downloading version ${version} ..."
|
||||
box_src_tmp_dir=$(mktemp -dt box-src-XXXXXX)
|
||||
|
||||
if ! $curl -sL "${sourceTarballUrl}" | tar -zxf - -C "${box_src_tmp_dir}"; then
|
||||
@@ -191,19 +149,20 @@ if ! $curl -sL "${sourceTarballUrl}" | tar -zxf - -C "${box_src_tmp_dir}"; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -n "=> Installing base dependencies and downloading docker images (this takes some time) ..."
|
||||
init_ubuntu_script=$(test -f "${box_src_tmp_dir}/scripts/init-ubuntu.sh" && echo "${box_src_tmp_dir}/scripts/init-ubuntu.sh" || echo "${box_src_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh")
|
||||
if ! /bin/bash "${init_ubuntu_script}" &>> "${LOG_FILE}"; then
|
||||
echo "Init script failed. See ${LOG_FILE} for details"
|
||||
exit 1
|
||||
if [[ "${initBaseImage}" == "true" ]]; then
|
||||
echo -n "=> Installing base dependencies and downloading docker images (this takes some time) ..."
|
||||
# initializeBaseUbuntuImage.sh args (provider, infraversion path) are only to support installation of pre 5.3 Cloudrons
|
||||
if ! /bin/bash "${box_src_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh" "generic" "../src" &>> "${LOG_FILE}"; then
|
||||
echo "Init script failed. See ${LOG_FILE} for details"
|
||||
exit 1
|
||||
fi
|
||||
echo ""
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# The provider flag is still used for marketplace images
|
||||
echo "=> Installing Cloudron version ${version} (this takes some time) ..."
|
||||
echo "=> Installing version ${version} (this takes some time) ..."
|
||||
mkdir -p /etc/cloudron
|
||||
echo "${provider}" > /etc/cloudron/PROVIDER
|
||||
[[ ! -z "${setupToken}" ]] && echo "${setupToken}" > /etc/cloudron/SETUP_TOKEN
|
||||
|
||||
if ! /bin/bash "${box_src_tmp_dir}/scripts/installer.sh" &>> "${LOG_FILE}"; then
|
||||
echo "Failed to install cloudron. See ${LOG_FILE} for details"
|
||||
@@ -212,20 +171,6 @@ fi
|
||||
|
||||
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('api_server_origin', '${apiServerOrigin}');" 2>/dev/null
|
||||
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('web_server_origin', '${webServerOrigin}');" 2>/dev/null
|
||||
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('console_server_origin', '${consoleServerOrigin}');" 2>/dev/null
|
||||
|
||||
if [[ -n "${appstoreSetupToken}" ]]; then
|
||||
if ! setupResponse=$(curl -sX POST -H "Content-type: application/json" --data "{\"setupToken\": \"${appstoreSetupToken}\"}" "${apiServerOrigin}/api/v1/cloudron_setup_done"); then
|
||||
echo "Could not complete setup. See ${LOG_FILE} for details"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cloudronId=$(echo "${setupResponse}" | python3 -c 'import json,sys;obj=json.load(sys.stdin);print(obj["cloudronId"])')
|
||||
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('cloudron_id', '${cloudronId}');" 2>/dev/null
|
||||
|
||||
appstoreApiToken=$(echo "${setupResponse}" | python3 -c 'import json,sys;obj=json.load(sys.stdin);print(obj["cloudronToken"])')
|
||||
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('appstore_api_token', '${appstoreApiToken}');" 2>/dev/null
|
||||
fi
|
||||
|
||||
echo -n "=> Waiting for cloudron to be ready (this takes some time) ..."
|
||||
while true; do
|
||||
@@ -236,34 +181,18 @@ while true; do
|
||||
sleep 10
|
||||
done
|
||||
|
||||
ip4=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
|
||||
ip6=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv6.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
|
||||
|
||||
url4=""
|
||||
url6=""
|
||||
fallbackUrl=""
|
||||
if [[ -z "${setupToken}" ]]; then
|
||||
[[ -n "${ip4}" ]] && url4="https://${ip4}"
|
||||
[[ -n "${ip6}" ]] && url6="https://[${ip6}]"
|
||||
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>"
|
||||
else
|
||||
[[ -n "${ip4}" ]] && url4="https://${ip4}/?setupToken=${setupToken}"
|
||||
[[ -n "${ip6}" ]] && url6="https://[${ip6}]/?setupToken=${setupToken}"
|
||||
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>?setupToken=${setupToken}"
|
||||
if ! ip=$(curl -s --fail --connect-timeout 2 --max-time 2 https://api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p'); then
|
||||
ip='<IP>'
|
||||
fi
|
||||
echo -e "\n\n${GREEN}After reboot, visit one of the following URLs and accept the self-signed certificate to finish setup.${DONE}\n"
|
||||
[[ -n "${url4}" ]] && echo -e " * ${GREEN}${url4}${DONE}"
|
||||
[[ -n "${url6}" ]] && echo -e " * ${GREEN}${url6}${DONE}"
|
||||
[[ -n "${fallbackUrl}" ]] && echo -e " * ${GREEN}${fallbackUrl}${DONE}"
|
||||
echo -e "\n\n${GREEN}Visit https://${ip} and accept the self-signed certificate to finish setup.${DONE}\n"
|
||||
|
||||
if [[ "${rebootServer}" == "true" ]]; then
|
||||
systemctl stop box mysql # sometimes mysql ends up having corrupt privilege tables
|
||||
|
||||
# https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#ANSI_002dC-Quoting
|
||||
read -p $'\n'"The server has to be rebooted to apply all the settings. Reboot now ? [Y/n] " yn
|
||||
read -p "The server has to be rebooted to apply all the settings. Reboot now ? [Y/n] " yn
|
||||
yn=${yn:-y}
|
||||
case $yn in
|
||||
[Yy]* ) exitHandler; systemctl reboot;;
|
||||
[Yy]* ) systemctl reboot;;
|
||||
* ) exit;;
|
||||
esac
|
||||
fi
|
||||
|
||||
+47
-52
@@ -8,15 +8,14 @@ set -eu -o pipefail
|
||||
PASTEBIN="https://paste.cloudron.io"
|
||||
OUT="/tmp/cloudron-support.log"
|
||||
LINE="\n========================================================\n"
|
||||
CLOUDRON_SUPPORT_PUBLIC_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGWS+930b8QdzbchGljt3KSljH9wRhYvht8srrtQHdzg support@cloudron.io"
|
||||
CLOUDRON_SUPPORT_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQVilclYAIu+ioDp/sgzzFz6YU0hPcRYY7ze/LiF/lC7uQqK062O54BFXTvQ3ehtFZCx3bNckjlT2e6gB8Qq07OM66De4/S/g+HJW4TReY2ppSPMVNag0TNGxDzVH8pPHOysAm33LqT2b6L/wEXwC6zWFXhOhHjcMqXvi8Ejaj20H1HVVcf/j8qs5Thkp9nAaFTgQTPu8pgwD8wDeYX1hc9d0PYGesTADvo6HF4hLEoEnefLw7PaStEbzk2fD3j7/g5r5HcgQQXBe74xYZ/1gWOX2pFNuRYOBSEIrNfJEjFJsqk3NR1+ZoMGK7j+AZBR4k0xbrmncQLcQzl6MMDzkp support@cloudron.io"
|
||||
HELP_MESSAGE="
|
||||
This script collects diagnostic information to help debug server related issues.
|
||||
This script collects diagnostic information to help debug server related issues
|
||||
|
||||
Options:
|
||||
--owner-login Login as owner
|
||||
--enable-ssh Enable SSH access for the Cloudron support team
|
||||
--reset-appstore-account Reset associated cloudron.io account
|
||||
--help Show this message
|
||||
--owner-login Login as owner
|
||||
--enable-ssh Enable SSH access for the Cloudron support team
|
||||
--help Show this message
|
||||
"
|
||||
|
||||
# We require root
|
||||
@@ -27,7 +26,7 @@ fi
|
||||
|
||||
enableSSH="false"
|
||||
|
||||
args=$(getopt -o "" -l "help,enable-ssh,admin-login,owner-login,reset-appstore-account" -n "$0" -- "$@")
|
||||
args=$(getopt -o "" -l "help,enable-ssh,admin-login,owner-login" -n "$0" -- "$@")
|
||||
eval set -- "${args}"
|
||||
|
||||
while true; do
|
||||
@@ -38,20 +37,12 @@ while true; do
|
||||
# fall through
|
||||
;&
|
||||
--owner-login)
|
||||
admin_username=$(mysql -NB -uroot -ppassword -e "SELECT username FROM box.users WHERE role='owner' AND username IS NOT NULL ORDER BY creationTime LIMIT 1" 2>/dev/null)
|
||||
admin_username=$(mysql -NB -uroot -ppassword -e "SELECT username FROM box.users WHERE role='owner' LIMIT 1" 2>/dev/null)
|
||||
admin_password=$(pwgen -1s 12)
|
||||
dashboard_domain=$(mysql -NB -uroot -ppassword -e "SELECT value FROM box.settings WHERE name='admin_fqdn'" 2>/dev/null)
|
||||
mysql -NB -uroot -ppassword -e "INSERT INTO box.settings (name, value) VALUES ('ghosts_config', '{\"${admin_username}\":\"${admin_password}\"}') ON DUPLICATE KEY UPDATE name='ghosts_config', value='{\"${admin_username}\":\"${admin_password}\"}'" 2>/dev/null
|
||||
echo "Login at https://${dashboard_domain} as ${admin_username} / ${admin_password} . This password may only be used once."
|
||||
exit 0
|
||||
;;
|
||||
--reset-appstore-account)
|
||||
echo -e "This will reset the Cloudron.io account associated with this Cloudron. Once reset, you can re-login with a different account in the Cloudron Dashboard. See https://docs.cloudron.io/appstore/#change-account for more information.\n"
|
||||
read -e -p "Reset the Cloudron.io account? [y/N] " choice
|
||||
[[ "$choice" != [Yy]* ]] && exit 1
|
||||
mysql -uroot -ppassword -e "DELETE FROM box.settings WHERE name='cloudron_token';" 2>/dev/null
|
||||
dashboard_domain=$(mysql -NB -uroot -ppassword -e "SELECT value FROM box.settings WHERE name='admin_fqdn'" 2>/dev/null)
|
||||
echo "Account reset. Please re-login at https://${dashboard_domain}/#/appstore"
|
||||
ghost_file=/home/yellowtent/platformdata/cloudron_ghost.json
|
||||
printf '{"%s":"%s"}\n' "${admin_username}" "${admin_password}" > "${ghost_file}"
|
||||
chown yellowtent:yellowtent "${ghost_file}" && chmod o-r,g-r "${ghost_file}"
|
||||
echo "Login as ${admin_username} / ${admin_password} . Remove ${ghost_file} when done."
|
||||
exit 0
|
||||
;;
|
||||
--) break;;
|
||||
@@ -66,7 +57,7 @@ if [[ "`df --output="avail" / | sed -n 2p`" -lt "10240" ]]; then
|
||||
echo ""
|
||||
df -h
|
||||
echo ""
|
||||
echo "To recover from a full disk, follow the guide at https://docs.cloudron.io/troubleshooting/#recovery-after-disk-full"
|
||||
echo "To recover from a full disk, follow the guide at https://cloudron.io/documentation/troubleshooting/#recovery-after-disk-full"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
@@ -77,39 +68,11 @@ if [[ "`df --output="avail" /tmp | sed -n 2p`" -lt "5120" ]]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ "${enableSSH}" == "true" ]]; then
|
||||
ssh_port=$(cat /etc/ssh/sshd_config | grep "Port " | sed -e "s/.*Port //")
|
||||
|
||||
ssh_user="cloudron-support"
|
||||
keys_file="/home/cloudron-support/.ssh/authorized_keys"
|
||||
|
||||
echo -e $LINE"SSH"$LINE >> $OUT
|
||||
echo "Username: ${ssh_user}" >> $OUT
|
||||
echo "Port: ${ssh_port}" >> $OUT
|
||||
echo "Key file: ${keys_file}" >> $OUT
|
||||
|
||||
echo -n "Enabling ssh access for the Cloudron support team..."
|
||||
mkdir -p $(dirname "${keys_file}") # .ssh does not exist sometimes
|
||||
touch "${keys_file}" # required for concat to work
|
||||
if ! grep -q "${CLOUDRON_SUPPORT_PUBLIC_KEY}" "${keys_file}"; then
|
||||
echo -e "\n${CLOUDRON_SUPPORT_PUBLIC_KEY}" >> "${keys_file}"
|
||||
chmod 600 "${keys_file}"
|
||||
chown "${ssh_user}" "${keys_file}"
|
||||
fi
|
||||
|
||||
echo "Done"
|
||||
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo -n "Generating Cloudron Support stats..."
|
||||
|
||||
# clear file
|
||||
rm -rf $OUT
|
||||
|
||||
echo -e $LINE"DASHBOARD DOMAIN"$LINE >> $OUT
|
||||
mysql -NB -uroot -ppassword -e "SELECT value FROM box.settings WHERE name='admin_fqdn'" &>> $OUT 2>/dev/null || true
|
||||
|
||||
echo -e $LINE"PROVIDER"$LINE >> $OUT
|
||||
cat /etc/cloudron/PROVIDER &>> $OUT || true
|
||||
|
||||
@@ -131,12 +94,12 @@ echo -e $LINE"Backup stats (possibly misleading)"$LINE >> $OUT
|
||||
du -hcsL /var/backups/* &>> $OUT || true
|
||||
|
||||
echo -e $LINE"System daemon status"$LINE >> $OUT
|
||||
systemctl status --lines=100 box mysql unbound cloudron-syslog nginx collectd docker &>> $OUT
|
||||
systemctl status --lines=100 cloudron.target box mysql unbound cloudron-syslog nginx collectd docker &>> $OUT
|
||||
|
||||
echo -e $LINE"Box logs"$LINE >> $OUT
|
||||
tail -n 100 /home/yellowtent/platformdata/logs/box.log &>> $OUT
|
||||
|
||||
echo -e $LINE"Interface Info"$LINE >> $OUT
|
||||
echo -e $LINE"Firewall chains"$LINE >> $OUT
|
||||
ip addr &>> $OUT
|
||||
|
||||
echo -e $LINE"Firewall chains"$LINE >> $OUT
|
||||
@@ -144,8 +107,40 @@ iptables -L &>> $OUT
|
||||
|
||||
echo "Done"
|
||||
|
||||
if [[ "${enableSSH}" == "true" ]]; then
|
||||
ssh_port=$(cat /etc/ssh/sshd_config | grep "Port " | sed -e "s/.*Port //")
|
||||
permit_root_login=$(grep -q ^PermitRootLogin.*yes /etc/ssh/sshd_config && echo "yes" || echo "no")
|
||||
|
||||
# support.js uses similar logic
|
||||
if [[ -d /home/ubuntu ]]; then
|
||||
ssh_user="ubuntu"
|
||||
keys_file="/home/ubuntu/.ssh/authorized_keys"
|
||||
else
|
||||
ssh_user="root"
|
||||
keys_file="/root/.ssh/authorized_keys"
|
||||
fi
|
||||
|
||||
echo -e $LINE"SSH"$LINE >> $OUT
|
||||
echo "Username: ${ssh_user}" >> $OUT
|
||||
echo "Port: ${ssh_port}" >> $OUT
|
||||
echo "PermitRootLogin: ${permit_root_login}" >> $OUT
|
||||
echo "Key file: ${keys_file}" >> $OUT
|
||||
|
||||
echo -n "Enabling ssh access for the Cloudron support team..."
|
||||
mkdir -p $(dirname "${keys_file}") # .ssh does not exist sometimes
|
||||
touch "${keys_file}" # required for concat to work
|
||||
if ! grep -q "${CLOUDRON_SUPPORT_PUBLIC_KEY}" "${keys_file}"; then
|
||||
echo -e "\n${CLOUDRON_SUPPORT_PUBLIC_KEY}" >> "${keys_file}"
|
||||
chmod 600 "${keys_file}"
|
||||
chown "${ssh_user}" "${keys_file}"
|
||||
fi
|
||||
|
||||
echo "Done"
|
||||
fi
|
||||
|
||||
echo -n "Uploading information..."
|
||||
paste_key=$(curl -X POST ${PASTEBIN}/documents --silent --data-binary "@$OUT" | python3 -c "import sys, json; print(json.load(sys.stdin)['key'])")
|
||||
# for some reason not using $(cat $OUT) will not contain newlines!?
|
||||
paste_key=$(curl -X POST ${PASTEBIN}/documents --silent -d "$(cat $OUT)" | python3 -c "import sys, json; print(json.load(sys.stdin)['key'])")
|
||||
echo "Done"
|
||||
|
||||
echo ""
|
||||
|
||||
@@ -1,31 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -eu -o pipefail
|
||||
|
||||
# This script downloads new translation data from weblate at https://translate.cloudron.io
|
||||
|
||||
OUT="/home/yellowtent/box/dashboard/dist/translation"
|
||||
|
||||
# We require root
|
||||
if [[ ${EUID} -ne 0 ]]; then
|
||||
echo "This script should be run as root. Run with sudo"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "=> Downloading new translation files..."
|
||||
curl https://translate.cloudron.io/download/cloudron/dashboard/?format=zip -o /tmp/lang.zip
|
||||
|
||||
echo "=> Unpacking..."
|
||||
unzip -jo /tmp/lang.zip -d $OUT
|
||||
chown -R yellowtent:yellowtent $OUT
|
||||
# unzip puts very restrictive permissions on the extracted files
|
||||
chmod ua+r $OUT/*
|
||||
|
||||
echo "=> Cleanup..."
|
||||
rm /tmp/lang.zip
|
||||
|
||||
echo "=> Done"
|
||||
|
||||
echo ""
|
||||
echo "Reload the dashboard to see the new translations"
|
||||
echo ""
|
||||
@@ -41,8 +41,8 @@ if ! $(cd "${SOURCE_DIR}/../dashboard" && git diff --exit-code >/dev/null); then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ "$(node --version)" != "v16.13.1" ]]; then
|
||||
echo "This script requires node 16.13.1"
|
||||
if [[ "$(node --version)" != "v10.18.1" ]]; then
|
||||
echo "This script requires node 10.18.1"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
@@ -1,199 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# This script is run on the base ubuntu. Put things here which are managed by ubuntu
|
||||
# This script is also run after ubuntu upgrade
|
||||
|
||||
set -euv -o pipefail
|
||||
|
||||
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
|
||||
readonly arg_infraversionpath="${SOURCE_DIR}/../src"
|
||||
|
||||
function die {
|
||||
echo $1
|
||||
exit 1
|
||||
}
|
||||
|
||||
export DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
readonly ubuntu_codename=$(lsb_release -cs)
|
||||
readonly ubuntu_version=$(lsb_release -rs)
|
||||
|
||||
# hold grub since updating it breaks on some VPS providers. also, dist-upgrade will trigger it
|
||||
apt-mark hold grub* >/dev/null
|
||||
apt-get -o Dpkg::Options::="--force-confdef" update -y
|
||||
apt-get -o Dpkg::Options::="--force-confdef" upgrade -y
|
||||
apt-mark unhold grub* >/dev/null
|
||||
|
||||
echo "==> Installing required packages"
|
||||
|
||||
debconf-set-selections <<< 'mysql-server mysql-server/root_password password password'
|
||||
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password password'
|
||||
|
||||
# this enables automatic security upgrades (https://help.ubuntu.com/community/AutomaticSecurityUpdates)
|
||||
# resolvconf is needed for unbound to work properly after disabling systemd-resolved in 18.04
|
||||
case "${ubuntu_version}" in
|
||||
16.04)
|
||||
gpg_package="gnupg"
|
||||
mysql_package="mysql-server-5.7"
|
||||
ntpd_package=""
|
||||
python_package="python2.7"
|
||||
nginx_package="" # we use custom package for TLS v1.3 support
|
||||
;;
|
||||
18.04)
|
||||
gpg_package="gpg"
|
||||
mysql_package="mysql-server-5.7"
|
||||
ntpd_package=""
|
||||
python_package="python2.7"
|
||||
nginx_package="" # we use custom package for TLS v1.3 support
|
||||
;;
|
||||
20.04)
|
||||
gpg_package="gpg"
|
||||
mysql_package="mysql-server-8.0"
|
||||
ntpd_package="systemd-timesyncd"
|
||||
python_package="python3.8"
|
||||
nginx_package="nginx-full"
|
||||
;;
|
||||
22.04)
|
||||
gpg_package="gpg"
|
||||
mysql_package="mysql-server-8.0"
|
||||
ntpd_package="systemd-timesyncd"
|
||||
python_package="python3.10"
|
||||
nginx_package="nginx-full"
|
||||
;;
|
||||
esac
|
||||
|
||||
apt-get -y install --no-install-recommends \
|
||||
acl \
|
||||
apparmor \
|
||||
build-essential \
|
||||
cifs-utils \
|
||||
cron \
|
||||
curl \
|
||||
debconf-utils \
|
||||
dmsetup \
|
||||
$gpg_package \
|
||||
ipset \
|
||||
iptables \
|
||||
lib${python_package} \
|
||||
linux-generic \
|
||||
logrotate \
|
||||
$mysql_package \
|
||||
nfs-common \
|
||||
$nginx_package \
|
||||
$ntpd_package \
|
||||
openssh-server \
|
||||
pwgen \
|
||||
resolvconf \
|
||||
sshfs \
|
||||
swaks \
|
||||
tzdata \
|
||||
unattended-upgrades \
|
||||
unbound \
|
||||
unzip \
|
||||
xfsprogs
|
||||
|
||||
# on some providers like scaleway the sudo file is changed and we want to keep the old one
|
||||
apt-get -o Dpkg::Options::="--force-confold" install -y --no-install-recommends sudo
|
||||
|
||||
# this ensures that unattended upgrades are enabled, in case they were disabled at ubuntu install time (see #346)
|
||||
# debconf-set-selection of unattended-upgrades/enable_auto_updates + dpkg-reconfigure does not work
|
||||
cp /usr/share/unattended-upgrades/20auto-upgrades /etc/apt/apt.conf.d/20auto-upgrades
|
||||
|
||||
apt-get install -y --no-install-recommends $python_package # Install python which is required for npm rebuild
|
||||
|
||||
# do not upgrade grub because it might prompt user and break this script
|
||||
echo "==> Enable memory accounting"
|
||||
apt-get -y --no-upgrade --no-install-recommends install grub2-common
|
||||
sed -e 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
|
||||
update-grub

echo "==> Install collectd"
# without this, libnotify4 will install gnome-shell
apt-get install -y libnotify4 libcurl3-gnutls --no-install-recommends
# https://bugs.launchpad.net/ubuntu/+source/collectd/+bug/1872281
if [[ "${ubuntu_version}" == "22.04" ]]; then
readonly launchpad="https://launchpad.net/ubuntu/+source/collectd/5.12.0-9/+build/23189375/+files"
cd /tmp && wget -q "${launchpad}/collectd_5.12.0-9_amd64.deb" "${launchpad}/collectd-utils_5.12.0-9_amd64.deb" "${launchpad}/collectd-core_5.12.0-9_amd64.deb" "${launchpad}/libcollectdclient1_5.12.0-9_amd64.deb"
cd /tmp && apt install -y --no-install-recommends ./libcollectdclient1_5.12.0-9_amd64.deb ./collectd-core_5.12.0-9_amd64.deb ./collectd_5.12.0-9_amd64.deb ./collectd-utils_5.12.0-9_amd64.deb && rm -f /tmp/collectd_*.deb
echo -e "\nLD_PRELOAD=/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/libpython3.10.so" >> /etc/default/collectd
else
if ! apt-get install -y --no-install-recommends collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd, continuing anyway. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
fi

if [[ "${ubuntu_version}" == "20.04" ]]; then
echo -e "\nLD_PRELOAD=/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8.so" >> /etc/default/collectd
fi
fi
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf

# some hosts like atlantic install ntp which conflicts with timedatectl. https://serverfault.com/questions/1024770/ubuntu-20-04-time-sync-problems-and-possibly-incorrect-status-information
echo "==> Configuring host"
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
if systemctl is-active ntp; then
systemctl stop ntp
apt purge -y ntp
fi
timedatectl set-ntp 1
# mysql follows the system timezone
timedatectl set-timezone UTC
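On hosts that shipped with ntp preinstalled, a hedged way to confirm that systemd-timesyncd really took over (not something the installer runs) is:

# optional operator check, not executed by this script
timedatectl status | grep -i "ntp"        # expect the NTP service to be reported as active on recent Ubuntu releases
systemctl is-active systemd-timesyncd     # expect "active"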

echo "==> Adding sshd configuration warning"
sed -e '/Port 22/ i # NOTE: Cloudron only supports moving SSH to port 202. See https://docs.cloudron.io/security/#securing-ssh-access' -i /etc/ssh/sshd_config

# https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068
echo "==> Disabling motd news"
if [[ -f "/etc/default/motd-news" ]]; then
sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/motd-news
fi

# If privacy extensions are not disabled on server, this breaks IPv6 detection
# https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1068756
if [[ ! -f /etc/sysctl.d/99-cloudimg-ipv6.conf ]]; then
echo "==> Disable temporary address (IPv6)"
echo -e "# See https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1068756\nnet.ipv6.conf.all.use_tempaddr = 0\nnet.ipv6.conf.default.use_tempaddr = 0\n\n" > /etc/sysctl.d/99-cloudimg-ipv6.conf
fi

# Disable exim4 (1blu.de)
systemctl stop exim4 || true
systemctl disable exim4 || true

# Disable bind for good measure (on online.net, kimsufi servers these are pre-installed)
systemctl stop bind9 || true
systemctl disable bind9 || true

# on ovh images dnsmasq seems to run by default
systemctl stop dnsmasq || true
systemctl disable dnsmasq || true

# on ssdnodes postfix seems to run by default
systemctl stop postfix || true
systemctl disable postfix || true

# on ubuntu 18.04 and 20.04, systemd-resolved is the default. resolvconf is required for DNS to keep working after it is disabled
systemctl stop systemd-resolved || true
systemctl disable systemd-resolved || true

# on vultr, ufw is enabled by default. we have our own firewall
ufw disable || true

# we need unbound to work as this is required for installer.sh to do any DNS requests
echo -e "server:\n\tinterface: 127.0.0.1\n" > /etc/unbound/unbound.conf.d/cloudron-network.conf
systemctl restart unbound
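If DNS breaks later in installer.sh, a simple hedged check that unbound is answering on loopback (assuming dig from the dnsutils package is available) looks like:

# optional check; not part of the script
dig +short @127.0.0.1 api.cloudron.io || echo "unbound is not resolving on 127.0.0.1" >&2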

# Ubuntu 22 has private home directories by default (https://discourse.ubuntu.com/t/private-home-directories-for-ubuntu-21-04-onwards/)
sed -e 's/^HOME_MODE\([[:space:]]\+\).*$/HOME_MODE\10755/' -i /etc/login.defs

# create the yellowtent user. system user has different numeric range, no password aging and won't show in login/gdm UI
# the nologin will also disable su/login
if ! id yellowtent 2>/dev/null; then
useradd --system --comment "Cloudron Box" --create-home --shell /usr/sbin/nologin yellowtent
fi

# add support user (no password, sudo)
if ! id cloudron-support 2>/dev/null; then
useradd --system --comment "Cloudron Support (support@cloudron.io)" --create-home --no-user-group --shell /bin/bash cloudron-support
fi
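Both accounts can be spot-checked afterwards; this is just an illustration, not part of the script:

# yellowtent should have the nologin shell, cloudron-support a regular bash shell
getent passwd yellowtent cloudron-support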

+65
-126
@@ -11,52 +11,6 @@ if [[ ${EUID} -ne 0 ]]; then
exit 1
fi

function log() {
echo -e "$(date +'%Y-%m-%dT%H:%M:%S')" "==> installer: $1"
}

apt_ready="no"
function prepare_apt_once() {
[[ "${apt_ready}" == "yes" ]] && return

log "Making sure apt is in a good state"

log "Waiting for all dpkg tasks to finish..."
while fuser /var/lib/dpkg/lock; do
sleep 1
done

# it's unclear what needs to be run first or whether both these commands should be run. so keep trying both
for count in {1..3}; do
# alternative to apt-install -y --fix-missing ?
if ! dpkg --force-confold --configure -a; then
log "dpkg reconfigure failed (try $count)"
dpkg_configure="no"
else
dpkg_configure="yes"
fi

if ! apt update -y; then
log "apt update failed (try $count)"
apt_update="no"
else
apt_update="yes"
fi

[[ "${dpkg_configure}" == "yes" && "${apt_update}" == "yes" ]] && break

sleep 1
done

apt_ready="yes"

if [[ "${dpkg_configure}" == "yes" && "${apt_update}" == "yes" ]]; then
log "apt is ready"
else
log "apt is not ready but proceeding anyway"
fi
}

readonly user=yellowtent
readonly box_src_dir=/home/${user}/box

@@ -67,64 +21,61 @@ readonly box_src_tmp_dir="$(realpath ${script_dir}/..)"
readonly ubuntu_version=$(lsb_release -rs)
readonly ubuntu_codename=$(lsb_release -cs)

readonly is_update=$(systemctl is-active -q box && echo "yes" || echo "no")
readonly is_update=$(systemctl is-active box && echo "yes" || echo "no")

log "Updating from $(cat $box_src_dir/VERSION 2>/dev/null) to $(cat $box_src_tmp_dir/VERSION 2>/dev/null)"
echo "==> installer: Updating from $(cat $box_src_dir/VERSION) to $(cat $box_src_tmp_dir/VERSION) <=="

# https://docs.docker.com/engine/installation/linux/ubuntulinux/
readonly docker_version=20.10.14
if ! which docker 2>/dev/null || [[ $(docker version --format {{.Client.Version}}) != "${docker_version}" ]]; then
log "installing/updating docker"

# create systemd drop-in file already to make sure images are with correct driver
mkdir -p /etc/systemd/system/docker.service.d
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables" > /etc/systemd/system/docker.service.d/cloudron.conf
echo "==> installer: updating docker"

if [[ $(docker version --format {{.Client.Version}}) != "18.09.2" ]]; then
# there are 3 packages for docker - containerd, CLI and the daemon
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.5.11-1_amd64.deb" -o /tmp/containerd.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_${docker_version}~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/containerd.io_1.2.2-3_amd64.deb" -o /tmp/containerd.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce-cli_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker-ce-cli.deb
$curl -sL "https://download.docker.com/linux/ubuntu/dists/${ubuntu_codename}/pool/stable/amd64/docker-ce_18.09.2~3-0~ubuntu-${ubuntu_codename}_amd64.deb" -o /tmp/docker.deb

echo "==> installer: Waiting for all dpkg tasks to finish..."
while fuser /var/lib/dpkg/lock; do
sleep 1
done

while ! dpkg --force-confold --configure -a; do
echo "==> installer: Failed to fix packages. Retry"
sleep 1
done

# the latest docker might need newer packages
while ! apt update -y; do
echo "==> installer: Failed to update packages. Retry"
sleep 1
done

while ! apt install -y /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb; do
echo "==> installer: Failed to install docker. Retry"
sleep 1
done

log "installing docker"
prepare_apt_once
apt install -y /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
rm /tmp/containerd.deb /tmp/docker-ce-cli.deb /tmp/docker.deb
fi
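A hedged post-install verification (not in installer.sh) that the pinned client version and the overlay2/journald options from the systemd drop-in took effect could look like:

# optional check after the packages and the systemd drop-in are in place
docker version --format '{{.Client.Version}}'          # expected to match ${docker_version}
docker info --format '{{.Driver}} {{.LoggingDriver}}'  # expected: overlay2 journald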

# we want at least nginx 1.14 for TLS v1.3 support. Ubuntu 20/22 already has nginx 1.18
# Ubuntu 18 OpenSSL does not have TLS v1.3 support, so we use the upstream nginx packages
readonly nginx_version=$(nginx -v 2>&1)
if [[ "${ubuntu_version}" == "20.04" ]]; then
if [[ "${nginx_version}" == *"Ubuntu"* ]]; then
log "switching nginx to ubuntu package"
prepare_apt_once
apt remove -y nginx
apt install -y nginx-full
fi
elif [[ "${ubuntu_version}" == "18.04" ]]; then
if [[ "${nginx_version}" != *"1.18."* ]]; then
log "installing/updating nginx 1.18"
$curl -sL http://nginx.org/packages/ubuntu/pool/nginx/n/nginx/nginx_1.18.0-2~${ubuntu_codename}_amd64.deb -o /tmp/nginx.deb

prepare_apt_once

# apt install with install deps (as opposed to dpkg -i)
apt install -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --force-yes /tmp/nginx.deb
rm /tmp/nginx.deb
fi
readonly nginx_version=$(nginx -v)
if [[ "${nginx_version}" != *"1.14."* && "${ubuntu_version}" == "16.04" ]]; then
echo "==> installer: installing nginx for xenial for TLS v1.3 support"
curl -sL http://nginx.org/packages/ubuntu/pool/nginx/n/nginx/nginx_1.14.0-1~xenial_amd64.deb -o /tmp/nginx.deb
# apt install with install deps (as opposed to dpkg -i)
apt install -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --force-yes /tmp/nginx.deb
rm /tmp/nginx.deb
fi

readonly node_version=16.14.2
if ! which node 2>/dev/null || [[ "$(node --version)" != "v${node_version}" ]]; then
log "installing/updating node ${node_version}"
mkdir -p /usr/local/node-${node_version}
$curl -sL https://nodejs.org/dist/v${node_version}/node-v${node_version}-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-${node_version}
ln -sf /usr/local/node-${node_version}/bin/node /usr/bin/node
ln -sf /usr/local/node-${node_version}/bin/npm /usr/bin/npm
rm -rf /usr/local/node-16.13.1
echo "==> installer: updating node"
if [[ "$(node --version)" != "v10.18.1" ]]; then
mkdir -p /usr/local/node-10.18.1
$curl -sL https://nodejs.org/dist/v10.18.1/node-v10.18.1-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-10.18.1
ln -sf /usr/local/node-10.18.1/bin/node /usr/bin/node
ln -sf /usr/local/node-10.18.1/bin/npm /usr/bin/npm
rm -rf /usr/local/node-10.15.1
fi

# note that rebuild requires the above node
# this is here (and not in updater.js) because rebuild requires the above node
for try in `seq 1 10`; do
# for reasons unknown, the dtrace package will fail. but rebuilding second time will work

@@ -132,70 +83,58 @@ for try in `seq 1 10`; do
# however by default npm drops privileges for npm rebuild
# https://docs.npmjs.com/misc/config#unsafe-perm
if cd "${box_src_tmp_dir}" && npm rebuild --unsafe-perm; then break; fi
log "Failed to rebuild, trying again"
echo "==> installer: Failed to rebuild, trying again"
sleep 5
done

if [[ ${try} -eq 10 ]]; then
log "npm rebuild failed, giving up"
echo "==> installer: npm rebuild failed, giving up"
exit 4
fi

log "downloading new addon images"
images=$(node -e "let i = require('${box_src_tmp_dir}/src/infra_version.js'); console.log(i.baseImages.map(function (x) { return x.tag; }).join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")

log "\tPulling docker images: ${images}"
if ! curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip; then
docker_registry=registry.ipv6.docker.com
else
docker_registry=registry-1.docker.io
fi
echo "==> installer: downloading new addon images"
images=$(node -e "var i = require('${box_src_tmp_dir}/src/infra_version.js'); console.log(i.baseImages.map(function (x) { return x.tag; }).join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")

echo -e "\tPulling docker images: ${images}"
for image in ${images}; do
while ! docker pull "${docker_registry}/${image}"; do # this pulls the image using the sha256
log "Could not pull ${image}"
sleep 5
done
while ! docker pull "${docker_registry}/${image%@sha256:*}"; do # this will tag the image for readability
log "Could not pull ${image%@sha256:*}"
sleep 5
done
if ! docker pull "${image}"; then # this pulls the image using the sha256
echo "==> installer: Could not pull ${image}"
exit 5
fi
if ! docker pull "${image%@sha256:*}"; then # this will tag the image for readability
echo "==> installer: Could not pull ${image%@sha256:*}"
exit 6
fi
done

log "update cloudron-syslog"
echo "==> installer: update cloudron-syslog"
CLOUDRON_SYSLOG_DIR=/usr/local/cloudron-syslog
CLOUDRON_SYSLOG="${CLOUDRON_SYSLOG_DIR}/bin/cloudron-syslog"
CLOUDRON_SYSLOG_VERSION="1.1.0"
CLOUDRON_SYSLOG_VERSION="1.0.3"
while [[ ! -f "${CLOUDRON_SYSLOG}" || "$(${CLOUDRON_SYSLOG} --version)" != ${CLOUDRON_SYSLOG_VERSION} ]]; do
rm -rf "${CLOUDRON_SYSLOG_DIR}"
mkdir -p "${CLOUDRON_SYSLOG_DIR}"
# verbatim is not needed in node 18 since that is the default there. in node 16, ipv4 is preferred and this breaks on ipv6 only servers
if NODE_OPTIONS="--dns-result-order=verbatim" npm install --unsafe-perm -g --prefix "${CLOUDRON_SYSLOG_DIR}" cloudron-syslog@${CLOUDRON_SYSLOG_VERSION}; then break; fi
log "Failed to install cloudron-syslog, trying again"
if npm install --unsafe-perm -g --prefix "${CLOUDRON_SYSLOG_DIR}" cloudron-syslog@${CLOUDRON_SYSLOG_VERSION}; then break; fi
echo "===> installer: Failed to install cloudron-syslog, trying again"
sleep 5
done
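The --dns-result-order=verbatim flag only matters for the npm download step; a hedged way to see which address node would resolve first on an IPv6-only host is:

# illustration only: prints the first address and family node picks for the npm registry
NODE_OPTIONS="--dns-result-order=verbatim" \
    node -e "require('dns').lookup('registry.npmjs.org', (err, addr, family) => console.log(addr, family))"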

log "creating cloudron-support user"
if ! id cloudron-support 2>/dev/null; then
useradd --system --comment "Cloudron Support (support@cloudron.io)" --create-home --no-user-group --shell /bin/bash cloudron-support
if ! id "${user}" 2>/dev/null; then
useradd "${user}" -m
fi

log "locking the ${user} account"
usermod --shell /usr/sbin/nologin "${user}"
passwd --lock "${user}"

if [[ "${is_update}" == "yes" ]]; then
log "stop box service for update"
echo "==> installer: stop cloudron.target service for update"
${box_src_dir}/setup/stop.sh
fi

# ensure we are not inside the source directory, which we will remove now
cd /root

log "switching the box code"
echo "==> installer: switching the box code"
rm -rf "${box_src_dir}"
mv "${box_src_tmp_dir}" "${box_src_dir}"
chown -R "${user}:${user}" "${box_src_dir}"

log "calling box setup script"
echo "==> installer: calling box setup script"
"${box_src_dir}/setup/start.sh"

@@ -1,32 +0,0 @@
#!/bin/bash

set -eu -o pipefail

readonly logfile="/home/yellowtent/platformdata/logs/box.log"

if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi

echo "This will re-create all the containers. Services will go down for a bit."

read -p "Do you want to proceed? (y/N) " -n 1 -r choice
echo

if [[ ! $choice =~ ^[Yy]$ ]]; then
exit 1
fi

echo -n "Re-creating addon containers (this takes a while) ."
line_count=$(cat /home/yellowtent/platformdata/logs/box.log | wc -l)
sed -e 's/"version": ".*",/"version":"48.0.0",/' -i /home/yellowtent/platformdata/INFRA_VERSION
systemctl restart box

while ! tail -n "+${line_count}" "${logfile}" | grep -q "platform is ready"; do
echo -n "."
sleep 2
done

echo -e "\nDone.\nThe Cloudron dashboard will say 'Configuring (Queued)' for each app. The apps will come up in a short while."

+73
-87
@@ -5,60 +5,46 @@ set -eu -o pipefail
# This script is run after the box code is switched. This means that this script
# should pretty much always succeed. No network logic/download code here.

function log() {
echo -e "$(date +'%Y-%m-%dT%H:%M:%S')" "==> start: $1"
}

log "Cloudron Start"
echo "==> Cloudron Start"

readonly USER="yellowtent"
readonly HOME_DIR="/home/${USER}"
readonly BOX_SRC_DIR="${HOME_DIR}/box"
readonly PLATFORM_DATA_DIR="${HOME_DIR}/platformdata"
readonly APPS_DATA_DIR="${HOME_DIR}/appsdata"
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata/box"
readonly MAIL_DATA_DIR="${HOME_DIR}/boxdata/mail"
readonly PLATFORM_DATA_DIR="${HOME_DIR}/platformdata" # platform data
readonly APPS_DATA_DIR="${HOME_DIR}/appsdata" # app data
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata" # box data

readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly json="$(realpath ${script_dir}/../node_modules/.bin/json)"
readonly ubuntu_version=$(lsb_release -rs)

cp -f "${script_dir}/../scripts/cloudron-support" /usr/bin/cloudron-support
cp -f "${script_dir}/../scripts/cloudron-translation-update" /usr/bin/cloudron-translation-update

# this needs to match the cloudron/base:2.0.0 gid
if ! getent group media; then
addgroup --gid 500 --system media
fi

log "Configuring docker"
echo "==> Configuring docker"
cp "${script_dir}/start/docker-cloudron-app.apparmor" /etc/apparmor.d/docker-cloudron-app
systemctl enable apparmor
systemctl restart apparmor

usermod ${USER} -a -G docker

if ! grep -q ip6tables /etc/systemd/system/docker.service.d/cloudron.conf; then
log "Adding ip6tables flag to docker" # https://github.com/moby/moby/pull/41622
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables" > /etc/systemd/system/docker.service.d/cloudron.conf
systemctl daemon-reload
systemctl restart docker
fi
docker network create --subnet=172.18.0.0/16 cloudron || true

mkdir -p "${BOX_DATA_DIR}"
mkdir -p "${APPS_DATA_DIR}"
mkdir -p "${MAIL_DATA_DIR}"

# keep these in sync with paths.js
log "Ensuring directories"
echo "==> Ensuring directories"

mkdir -p "${PLATFORM_DATA_DIR}/graphite"
mkdir -p "${PLATFORM_DATA_DIR}/mysql"
mkdir -p "${PLATFORM_DATA_DIR}/postgresql"
mkdir -p "${PLATFORM_DATA_DIR}/mongodb"
mkdir -p "${PLATFORM_DATA_DIR}/redis"
mkdir -p "${PLATFORM_DATA_DIR}/addons/mail/banner" \
"${PLATFORM_DATA_DIR}/addons/mail/dkim"
mkdir -p "${PLATFORM_DATA_DIR}/addons/mail"
mkdir -p "${PLATFORM_DATA_DIR}/collectd/collectd.conf.d"
mkdir -p "${PLATFORM_DATA_DIR}/logrotate.d"
mkdir -p "${PLATFORM_DATA_DIR}/acme"
@@ -69,16 +55,19 @@ mkdir -p "${PLATFORM_DATA_DIR}/logs/backup" \
"${PLATFORM_DATA_DIR}/logs/crash" \
"${PLATFORM_DATA_DIR}/logs/collectd"
mkdir -p "${PLATFORM_DATA_DIR}/update"
mkdir -p "${PLATFORM_DATA_DIR}/sftp/ssh" # sftp keys
mkdir -p "${PLATFORM_DATA_DIR}/firewall"
mkdir -p "${PLATFORM_DATA_DIR}/sshfs"
mkdir -p "${PLATFORM_DATA_DIR}/cifs"

mkdir -p "${BOX_DATA_DIR}/appicons"
mkdir -p "${BOX_DATA_DIR}/profileicons"
mkdir -p "${BOX_DATA_DIR}/certs"
mkdir -p "${BOX_DATA_DIR}/acme" # acme keys
mkdir -p "${BOX_DATA_DIR}/mail/dkim"
mkdir -p "${BOX_DATA_DIR}/well-known" # .well-known documents

# ensure backups folder exists and is writeable
mkdir -p /var/backups
chmod 777 /var/backups

log "Configuring journald"
echo "==> Configuring journald"
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
-i /etc/systemd/journald.conf
@@ -88,35 +77,37 @@ sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
sed -e "s/^WatchdogSec=.*$/WatchdogSec=3min/" \
-i /lib/systemd/system/systemd-journald.service

usermod -a -G systemd-journal ${USER} # Give user access to system logs
if [[ ! -d /var/log/journal ]]; then # in some images, this directory is not created making system log to /run/systemd instead
mkdir -p /var/log/journal
chown root:systemd-journal /var/log/journal
chmod g+s /var/log/journal # setgid bit so new journal files inherit the group
fi
# Give user access to system logs
usermod -a -G systemd-journal ${USER}
mkdir -p /var/log/journal # in some images, this directory is not created making system log to /run/systemd instead
chown root:systemd-journal /var/log/journal
systemctl daemon-reload
systemctl restart systemd-journald
setfacl -n -m u:${USER}:r /var/log/journal/*/system.journal
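To confirm the journald limits and the persistent journal afterwards, something along these lines works (operator check, not part of start.sh):

# optional checks: disk usage should stay near the 100M cap and the journal should be persistent
journalctl --disk-usage
test -d /var/log/journal && echo "persistent journal enabled"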

# Give user access to nginx logs (uses adm group)
usermod -a -G adm ${USER}

log "Setting up unbound"
echo "==> Setting up unbound"
# DO uses Google nameservers by default. This causes RBL queries to fail (host 2.0.0.127.zen.spamhaus.org)
# We do not use dnsmasq because it is not a recursive resolver and defaults to the value in the interfaces file (which is Google DNS!)
# We listen on 0.0.0.0 because there is no way to control the ordering of docker (which creates the 172.18.0.0/16) and unbound
# If IP6 is not enabled, dns queries seem to fail on some hosts. -s returns false if file missing or 0 size
ip6=$([[ -s /proc/net/if_inet6 ]] && echo "yes" || echo "no")
cp -f "${script_dir}/start/unbound.conf" /etc/unbound/unbound.conf.d/cloudron-network.conf
# update the root anchor after an out-of-disk-space situation (see #269)
unbound-anchor -a /var/lib/unbound/root.key

log "Adding systemd services"
echo "==> Adding systemd services"
cp -r "${script_dir}/start/systemd/." /etc/systemd/system/
[[ "${ubuntu_version}" == "16.04" ]] && sed -e 's/MemoryMax/MemoryLimit/g' -i /etc/systemd/system/box.service
[[ "${ubuntu_version}" == "16.04" ]] && sed -e 's/Type=notify/Type=simple/g' -i /etc/systemd/system/unbound.service
systemctl daemon-reload
systemctl enable --now cloudron-syslog
systemctl enable unbound
systemctl enable box
systemctl enable cloudron-syslog
systemctl enable cloudron.target
systemctl enable cloudron-firewall
systemctl enable --now cloudron-disable-thp

# update firewall rules. this must be done after docker has created its rules
# update firewall rules
systemctl restart cloudron-firewall

# For logrotate
@@ -128,38 +119,30 @@ systemctl restart unbound
# ensure cloudron-syslog runs
systemctl restart cloudron-syslog

log "Configuring sudoers"
rm -f /etc/sudoers.d/${USER} /etc/sudoers.d/cloudron
cp "${script_dir}/start/sudoers" /etc/sudoers.d/cloudron
echo "==> Configuring sudoers"
rm -f /etc/sudoers.d/${USER}
cp "${script_dir}/start/sudoers" /etc/sudoers.d/${USER}

log "Configuring collectd"
echo "==> Configuring collectd"
rm -rf /etc/collectd /var/log/collectd.log
ln -sfF "${PLATFORM_DATA_DIR}/collectd" /etc/collectd
cp "${script_dir}/start/collectd/collectd.conf" "${PLATFORM_DATA_DIR}/collectd/collectd.conf"
systemctl restart collectd

log "Configuring sysctl"
# If privacy extensions are not disabled on server, this breaks IPv6 detection
# https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1068756
if [[ ! -f /etc/sysctl.d/99-cloudimg-ipv6.conf ]]; then
echo "==> Disable temporary address (IPv6)"
echo -e "# See https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1068756\nnet.ipv6.conf.all.use_tempaddr = 0\nnet.ipv6.conf.default.use_tempaddr = 0\n\n" > /etc/sysctl.d/99-cloudimg-ipv6.conf
sysctl -p
fi

log "Configuring logrotate"
echo "==> Configuring logrotate"
if ! grep -q "^include ${PLATFORM_DATA_DIR}/logrotate.d" /etc/logrotate.conf; then
echo -e "\ninclude ${PLATFORM_DATA_DIR}/logrotate.d\n" >> /etc/logrotate.conf
fi
rm -f "${PLATFORM_DATA_DIR}/logrotate.d/"*
cp "${script_dir}/start/logrotate/"* "${PLATFORM_DATA_DIR}/logrotate.d/"

# logrotate files have to be owned by root, this is here to fix up existing installations where we were resetting the owner to yellowtent
chown root:root "${PLATFORM_DATA_DIR}/logrotate.d/"

log "Adding motd message for admins"
echo "==> Adding motd message for admins"
cp "${script_dir}/start/cloudron-motd" /etc/update-motd.d/92-cloudron

log "Configuring nginx"
echo "==> Configuring nginx"
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${PLATFORM_DATA_DIR}/nginx" /etc/nginx
@@ -174,7 +157,7 @@ fi

# worker_rlimit_nofile in nginx config can be max this number
mkdir -p /etc/systemd/system/nginx.service.d
if ! grep -q "^LimitNOFILE=" /etc/systemd/system/nginx.service.d/cloudron.conf 2>/dev/null; then
if ! grep -q "^LimitNOFILE=" /etc/systemd/system/nginx.service.d/cloudron.conf; then
echo -e "[Service]\nLimitNOFILE=16384\n" > /etc/systemd/system/nginx.service.d/cloudron.conf
fi

@@ -187,62 +170,65 @@ if [[ ! -f /etc/mysql/mysql.cnf ]] || ! diff -q "${script_dir}/start/mysql.cnf"
cp "${script_dir}/start/mysql.cnf" /etc/mysql/mysql.cnf
while true; do
if ! systemctl list-jobs | grep mysql; then break; fi
log "Waiting for mysql jobs..."
echo "Waiting for mysql jobs..."
sleep 1
done
log "Stopping mysql"
systemctl stop mysql
while mysqladmin ping 2>/dev/null; do
log "Waiting for mysql to stop..."
while true; do
if systemctl restart mysql; then break; fi
echo "Restarting MySql again after some time since this fails randomly"
sleep 1
done
else
systemctl start mysql
fi

# the start/stop of mysql is separate to make sure it got reloaded with latest config and it's up and running before we start the new box code
# when using 'system restart mysql', it seems to restart much later and the box code loses connection during platform startup (dangerous!)
log "Starting mysql"
systemctl start mysql
while ! mysqladmin ping 2>/dev/null; do
log "Waiting for mysql to start..."
sleep 1
done

readonly mysql_root_password="password"
mysqladmin -u root -ppassword password password # reset default root password
readonly mysqlVersion=$(mysql -NB -u root -p${mysql_root_password} -e 'SELECT VERSION()' 2>/dev/null)
if [[ "${mysqlVersion}" == "8.0."* ]]; then
# mysql 8 added a new caching_sha2_password scheme which mysqljs does not support
mysql -u root -p${mysql_root_password} -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${mysql_root_password}';"
fi
mysql -u root -p${mysql_root_password} -e 'CREATE DATABASE IF NOT EXISTS box'
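Whether the root account really ended up on mysql_native_password (the plugin mysqljs can speak) can be double-checked with a query like the following; a hedged illustration, not part of the script:

# expect plugin = mysql_native_password for root@localhost on MySQL 8
mysql -NB -u root -p"${mysql_root_password}" \
    -e "SELECT user, host, plugin FROM mysql.user WHERE user='root' AND host='localhost';"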

# set HOME explicitly, because it's not set when the installer calls it. this is done because
# paths.js uses this env var and some of the migrate code requires box code
log "Migrating data"
echo "==> Migrating data"
cd "${BOX_SRC_DIR}"
if ! HOME=${HOME_DIR} BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@127.0.0.1/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up; then
log "DB migration failed"
echo "DB migration failed"
exit 1
fi

rm -f /etc/cloudron/cloudron.conf

log "Changing ownership"
# note, change ownership after db migrate. this allows db-migrate to move files around as root and then we can fix it up here
if [[ ! -f "${BOX_DATA_DIR}/dhparams.pem" ]]; then
echo "==> Generating dhparams (takes forever)"
openssl dhparam -out "${BOX_DATA_DIR}/dhparams.pem" 2048
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
else
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
fi
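If there is any doubt about an existing dhparams.pem (for example after a crash during generation), openssl can validate it; again a hedged operator check rather than part of the script:

# -check verifies the DH parameters, -noout suppresses re-printing them
openssl dhparam -check -noout -in "${BOX_DATA_DIR}/dhparams.pem"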

# old installations used to create appdata/<app>/redis which is now part of old backups and prevents restore
echo "==> Cleaning up stale redis directories"
find "${APPS_DATA_DIR}" -maxdepth 2 -type d -name redis -exec rm -rf {} +

echo "==> Cleaning up old logs"
rm -f /home/yellowtent/platformdata/logs/*/*.log.* || true

echo "==> Changing ownership"
# be careful of what is chown'ed here. subdirs like mysql,redis etc are owned by the containers and will stop working if perms change
chown -R "${USER}" /etc/cloudron
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme" "${PLATFORM_DATA_DIR}/backup" "${PLATFORM_DATA_DIR}/logs" "${PLATFORM_DATA_DIR}/update" "${PLATFORM_DATA_DIR}/sftp" "${PLATFORM_DATA_DIR}/firewall" "${PLATFORM_DATA_DIR}/sshfs" "${PLATFORM_DATA_DIR}/cifs"
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme" "${PLATFORM_DATA_DIR}/backup" "${PLATFORM_DATA_DIR}/logs" "${PLATFORM_DATA_DIR}/update"
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}/INFRA_VERSION" 2>/dev/null || true
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}"
chown "${USER}:${USER}" "${APPS_DATA_DIR}"

chown "${USER}:${USER}" -R "${BOX_DATA_DIR}"
# do not chown the boxdata/mail directory entirely; dovecot gets upset
chown "${USER}:${USER}" "${MAIL_DATA_DIR}"
# do not chown the boxdata/mail directory; dovecot gets upset
chown "${USER}:${USER}" "${BOX_DATA_DIR}"
find "${BOX_DATA_DIR}" -mindepth 1 -maxdepth 1 -not -path "${BOX_DATA_DIR}/mail" -exec chown -R "${USER}:${USER}" {} \;
chown "${USER}:${USER}" "${BOX_DATA_DIR}/mail"
chown "${USER}:${USER}" -R "${BOX_DATA_DIR}/mail/dkim" # this is owned by box currently since it generates the keys

log "Starting Cloudron"
systemctl start box
echo "==> Starting Cloudron"
systemctl start cloudron.target

sleep 2 # give systemd some time to start the processes

log "Almost done"
echo "==> Almost done"

@@ -1,14 +0,0 @@
#!/bin/bash

set -eu

echo "==> Disabling THP"

# https://docs.couchbase.com/server/current/install/thp-disable.html
if [[ -d /sys/kernel/mm/transparent_hugepage ]]; then
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
else
echo "==> kernel does not have THP"
fi

@@ -3,154 +3,78 @@
set -eu -o pipefail

echo "==> Setting up firewall"
iptables -t filter -N CLOUDRON || true
iptables -t filter -F CLOUDRON # empty any existing rules

has_ipv6=$(cat /proc/net/if_inet6 >/dev/null 2>&1 && echo "yes" || echo "no")

# wait for 120 seconds for xtables lock, checking every 1 second
readonly iptables="iptables --wait 120 --wait-interval 1"
readonly ip6tables="ip6tables --wait 120 --wait-interval 1"

function ipxtables() {
$iptables "$@"
[[ "${has_ipv6}" == "yes" ]] && $ip6tables "$@"
}

ipxtables -t filter -N CLOUDRON || true
ipxtables -t filter -F CLOUDRON # empty any existing rules

# first setup any user IP block lists
ipset create cloudron_blocklist hash:net || true
ipset create cloudron_blocklist6 hash:net family inet6 || true

/home/yellowtent/box/src/scripts/setblocklist.sh

$iptables -t filter -A CLOUDRON -m set --match-set cloudron_blocklist src -j DROP
# the DOCKER-USER chain is not cleared on docker restart
if ! $iptables -t filter -C DOCKER-USER -m set --match-set cloudron_blocklist src -j DROP; then
$iptables -t filter -I DOCKER-USER 1 -m set --match-set cloudron_blocklist src -j DROP
fi

$ip6tables -t filter -A CLOUDRON -m set --match-set cloudron_blocklist6 src -j DROP
# there is no DOCKER-USER chain in ip6tables, bug?
$ip6tables -D FORWARD -m set --match-set cloudron_blocklist6 src -j DROP || true
$ip6tables -I FORWARD 1 -m set --match-set cloudron_blocklist6 src -j DROP

# allow related and established connections
ipxtables -t filter -A CLOUDRON -m state --state RELATED,ESTABLISHED -j ACCEPT
ipxtables -t filter -A CLOUDRON -p tcp -m tcp -m multiport --dports 22,80,202,443 -j ACCEPT # 202 is the alternate ssh port

# whitelist any user ports. we used to use --dports but it has a 15 port limit (XT_MULTI_PORTS)
ports_json="/home/yellowtent/platformdata/firewall/ports.json"
if allowed_tcp_ports=$(node -e "console.log(JSON.parse(fs.readFileSync('${ports_json}', 'utf8')).allowed_tcp_ports.join(' '))" 2>/dev/null); then
for p in $allowed_tcp_ports; do
ipxtables -A CLOUDRON -p tcp -m tcp --dport "${p}" -j ACCEPT
done
fi

if allowed_udp_ports=$(node -e "console.log(JSON.parse(fs.readFileSync('${ports_json}', 'utf8')).allowed_udp_ports.join(' '))" 2>/dev/null); then
for p in $allowed_udp_ports; do
ipxtables -A CLOUDRON -p udp -m udp --dport "${p}" -j ACCEPT
done
fi

# LDAP user directory allow list
ipset create cloudron_ldap_allowlist hash:net || true
ipset flush cloudron_ldap_allowlist

ipset create cloudron_ldap_allowlist6 hash:net family inet6 || true
ipset flush cloudron_ldap_allowlist6

ldap_allowlist_json="/home/yellowtent/platformdata/firewall/ldap_allowlist.txt"
# delete any existing redirect rule
$iptables -t nat -D PREROUTING -p tcp --dport 636 -j REDIRECT --to-ports 3004 2>/dev/null || true
$ip6tables -t nat -D PREROUTING -p tcp --dport 636 -j REDIRECT --to-ports 3004 >/dev/null || true
if [[ -f "${ldap_allowlist_json}" ]]; then
# without the -n check, any last line without a trailing newline won't be read!
while read -r line || [[ -n "$line" ]]; do
[[ -z "${line}" ]] && continue # ignore empty lines
[[ "$line" =~ ^#.*$ ]] && continue # ignore lines starting with #
if [[ "$line" == *":"* ]]; then
ipset add -! cloudron_ldap_allowlist6 "${line}" # the -! ignore duplicates
else
ipset add -! cloudron_ldap_allowlist "${line}" # the -! ignore duplicates
fi
done < "${ldap_allowlist_json}"

# ldap server we expose 3004 and also redirect from standard ldaps port 636
$iptables -t nat -I PREROUTING -p tcp --dport 636 -j REDIRECT --to-ports 3004
$iptables -t filter -A CLOUDRON -m set --match-set cloudron_ldap_allowlist src -p tcp --dport 3004 -j ACCEPT

$ip6tables -t nat -I PREROUTING -p tcp --dport 636 -j REDIRECT --to-ports 3004
$ip6tables -t filter -A CLOUDRON -m set --match-set cloudron_ldap_allowlist6 src -p tcp --dport 3004 -j ACCEPT
fi
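The effect of the allowlist can be inspected after the script runs; a hedged illustration of the relevant commands:

# entries loaded from ldap_allowlist.txt and the 636 -> 3004 redirect
ipset list cloudron_ldap_allowlist
iptables -t nat -S PREROUTING | grep -- "--to-ports 3004"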
# NOTE: keep these in sync with src/apps.js validatePortBindings
# allow ssh, http, https, ping, dns
iptables -t filter -I CLOUDRON -m state --state RELATED,ESTABLISHED -j ACCEPT
# ssh is allowed alternately on port 202
iptables -A CLOUDRON -p tcp -m tcp -m multiport --dports 22,25,80,202,443,587,993,4190 -j ACCEPT

# turn and stun service
ipxtables -t filter -A CLOUDRON -p tcp -m multiport --dports 3478,5349 -j ACCEPT
ipxtables -t filter -A CLOUDRON -p udp -m multiport --dports 3478,5349 -j ACCEPT
ipxtables -t filter -A CLOUDRON -p udp -m multiport --dports 50000:51000 -j ACCEPT
iptables -t filter -A CLOUDRON -p tcp -m multiport --dports 3478,5349 -j ACCEPT
iptables -t filter -A CLOUDRON -p udp -m multiport --dports 3478,5349 -j ACCEPT
iptables -t filter -A CLOUDRON -p udp -m multiport --dports 50000:51000 -j ACCEPT

# ICMPv6 is very fundamental to IPv6 connectivity unlike ICMPv4
$iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-request -j ACCEPT
$iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-reply -j ACCEPT
$ip6tables -t filter -A CLOUDRON -p ipv6-icmp -j ACCEPT

ipxtables -t filter -A CLOUDRON -p udp --sport 53 -j ACCEPT
$iptables -t filter -A CLOUDRON -s 172.18.0.0/16 -j ACCEPT # required to accept any connections from apps to our IP:<public port>
ipxtables -t filter -A CLOUDRON -i lo -j ACCEPT # required for localhost connections (mysql)
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-request -j ACCEPT
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-reply -j ACCEPT
iptables -t filter -A CLOUDRON -p udp --sport 53 -j ACCEPT
iptables -t filter -A CLOUDRON -s 172.18.0.0/16 -j ACCEPT # required to accept any connections from apps to our IP:<public port>
iptables -t filter -A CLOUDRON -i lo -j ACCEPT # required for localhost connections (mysql)

# log dropped incoming. keep this at the end of all the rules
ipxtables -t filter -A CLOUDRON -m limit --limit 2/min -j LOG --log-prefix "Packet dropped: " --log-level 7
ipxtables -t filter -A CLOUDRON -j DROP
iptables -t filter -A CLOUDRON -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
iptables -t filter -A CLOUDRON -j DROP

# prepend our chain to the filter table
$iptables -t filter -C INPUT -j CLOUDRON 2>/dev/null || $iptables -t filter -I INPUT -j CLOUDRON
$ip6tables -t filter -C INPUT -j CLOUDRON 2>/dev/null || $ip6tables -t filter -I INPUT -j CLOUDRON
if ! iptables -t filter -C INPUT -j CLOUDRON 2>/dev/null; then
iptables -t filter -I INPUT -j CLOUDRON
fi

# Setup rate limit chain (the recent info is at /proc/net/xt_recent)
ipxtables -t filter -N CLOUDRON_RATELIMIT || true
ipxtables -t filter -F CLOUDRON_RATELIMIT # empty any existing rules
iptables -t filter -N CLOUDRON_RATELIMIT || true
iptables -t filter -F CLOUDRON_RATELIMIT # empty any existing rules

# log dropped incoming. keep this at the end of all the rules
ipxtables -t filter -N CLOUDRON_RATELIMIT_LOG || true
ipxtables -t filter -F CLOUDRON_RATELIMIT_LOG # empty any existing rules
ipxtables -t filter -A CLOUDRON_RATELIMIT_LOG -m limit --limit 2/min -j LOG --log-prefix "IPTables RateLimit: " --log-level 7
ipxtables -t filter -A CLOUDRON_RATELIMIT_LOG -j DROP
iptables -t filter -N CLOUDRON_RATELIMIT_LOG || true
iptables -t filter -F CLOUDRON_RATELIMIT_LOG # empty any existing rules
iptables -t filter -A CLOUDRON_RATELIMIT_LOG -m limit --limit 2/min -j LOG --log-prefix "IPTables RateLimit: " --log-level 7
iptables -t filter -A CLOUDRON_RATELIMIT_LOG -j DROP

# http https
for port in 80 443; do
ipxtables -A CLOUDRON_RATELIMIT -p tcp --syn --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
iptables -A CLOUDRON_RATELIMIT -p tcp --syn --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done

# ssh
# ssh smtp ssh msa imap sieve
for port in 22 202; do
ipxtables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --set --name "public-${port}"
ipxtables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --update --name "public-${port}" --seconds 10 --hitcount 5 -j CLOUDRON_RATELIMIT_LOG
iptables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --set --name "public-${port}"
iptables -A CLOUDRON_RATELIMIT -p tcp --dport ${port} -m state --state NEW -m recent --update --name "public-${port}" --seconds 10 --hitcount 5 -j CLOUDRON_RATELIMIT_LOG
done
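The comment above points at /proc/net/xt_recent; a hedged way to watch the ssh rate-limit state that the recent module keeps is:

# source IPs recently seen on port 22/202; the files appear once the first NEW connection hits the rule
cat /proc/net/xt_recent/public-22 2>/dev/null
cat /proc/net/xt_recent/public-202 2>/dev/null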

# ldaps
for port in 636 3004; do
ipxtables -A CLOUDRON_RATELIMIT -p tcp --syn --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done
# TODO: move docker platform rules to platform.js so it can be specialized to rate limit only when destination is the mail container

# docker translates (dnat) 25, 587, 993, 4190 in the PREROUTING step
for port in 2525 4190 9993; do
$iptables -A CLOUDRON_RATELIMIT -p tcp --syn ! -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 50 -j CLOUDRON_RATELIMIT_LOG
iptables -A CLOUDRON_RATELIMIT -p tcp --syn ! -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 50 -j CLOUDRON_RATELIMIT_LOG
done

# msa, ldap, imap, sieve, pop3
for port in 2525 3002 4190 9993 9995; do
$iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 500 -j CLOUDRON_RATELIMIT_LOG
# msa, ldap, imap, sieve
for port in 2525 3002 4190 9993; do
iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 500 -j CLOUDRON_RATELIMIT_LOG
done

# cloudron docker network: mysql postgresql redis mongodb
for port in 3306 5432 6379 27017; do
$iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done

# Add the rate limit chain to input chain
$iptables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null || $iptables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
$ip6tables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null || $ip6tables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
# For ssh, http, https
if ! iptables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null; then
iptables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
fi

# Workaround issue where Docker insists on adding itself first in FORWARD table
ipxtables -D FORWARD -j CLOUDRON_RATELIMIT || true
ipxtables -I FORWARD 1 -j CLOUDRON_RATELIMIT
# For smtp, imap etc routed via docker/nat
# Workaround issue where Docker insists on adding itself first in FORWARD table
iptables -D FORWARD -j CLOUDRON_RATELIMIT || true
iptables -I FORWARD 1 -j CLOUDRON_RATELIMIT

+10
-44
@@ -1,55 +1,21 @@
#!/bin/bash

[[ -f /etc/update-motd.d/91-cloudron-install-in-progress ]] && exit

printf "**********************************************************************\n\n"

readonly cache_file4="/var/cache/cloudron-motd-cache4"
readonly cache_file6="/var/cache/cloudron-motd-cache6"

url4=""
url6=""
fallbackUrl=""

function detectIp() {
if [[ ! -f "${cache_file4}" ]]; then
ip4=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
[[ -n "${ip4}" ]] && echo "${ip4}" > "${cache_file4}"
else
ip4=$(cat "${cache_file4}")
if [[ -z "$(ls -A /home/yellowtent/boxdata/mail/dkim)" ]]; then
if [[ -f /tmp/.cloudron-motd-cache ]]; then
ip=$(cat /tmp/.cloudron-motd-cache)
elif ! ip=$(curl --fail --connect-timeout 2 --max-time 2 -q https://api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p'); then
ip='<IP>'
fi

if [[ ! -f "${cache_file6}" ]]; then
ip6=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv6.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
[[ -n "${ip6}" ]] && echo "${ip6}" > "${cache_file6}"
else
ip6=$(cat "${cache_file6}")
fi

if [[ ! -f /etc/cloudron/SETUP_TOKEN ]]; then
[[ -n "${ip4}" ]] && url4="https://${ip4}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>"
else
setupToken="$(cat /etc/cloudron/SETUP_TOKEN)"
[[ -n "${ip4}" ]] && url4="https://${ip4}/?setupToken=${setupToken}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]/?setupToken=${setupToken}"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>?setupToken=${setupToken}"
fi
}

if [[ -z "$(ls -A /home/yellowtent/platformdata/addons/mail/dkim)" ]]; then
detectIp
echo "${ip}" > /tmp/.cloudron-motd-cache

printf "\t\t\tWELCOME TO CLOUDRON\n"
printf "\t\t\t-------------------\n"

printf '\n\e[1;32m%-6s\e[m\n' "Visit one of the following URLs on your browser and accept the self-signed certificate to finish setup."
[[ -n "${url4}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${url4}"
[[ -n "${url6}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${url6}"
[[ -n "${fallbackUrl}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${fallbackUrl}"
printf "\nCloudron overview - https://docs.cloudron.io/ \n"
printf "Cloudron setup - https://docs.cloudron.io/installation/#setup \n"
printf '\n\e[1;32m%-6s\e[m\n\n' "Visit https://${ip} on your browser and accept the self-signed certificate to finish setup."
printf "Cloudron overview - https://cloudron.io/documentation/ \n"
printf "Cloudron setup - https://cloudron.io/documentation/installation/#setup \n"
else
printf "\t\t\tNOTE TO CLOUDRON ADMINS\n"
printf "\t\t\t-----------------------\n"
@@ -57,7 +23,7 @@ else
printf "Cloudron relies on and may break your installation. Ubuntu security updates\n"
printf "are automatically installed on this server every night.\n"
printf "\n"
printf "Read more at https://docs.cloudron.io/security/#os-updates\n"
printf "Read more at https://cloudron.io/documentation/security/#os-updates\n"
fi

printf "\nFor help and more information, visit https://forum.cloudron.io\n\n"
