Compare commits


216 Commits

Author SHA1 Message Date
Girish Ramakrishnan 54af286fcd app proxy: workaround for nginx not starting if upstream is down
https://sandro-keil.de/blog/let-nginx-start-if-upstream-host-is-unavailable-or-down/

without a resolver, dns names do not resolve
2022-09-30 10:36:44 +02:00
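The pattern from the linked article can be sketched as follows (server names and addresses here are illustrative, not Cloudron's actual proxy template). With a literal hostname in proxy_pass, nginx resolves it at startup and refuses to start if the upstream is down; assigning the hostname to a variable defers resolution to request time, but that only works if a resolver is configured:

```nginx
server {
    listen 80;
    server_name app.example.com;

    # local resolver; without this, variable-based proxy_pass cannot resolve names
    resolver 127.0.0.1 valid=10s;

    location / {
        # using a variable makes nginx resolve the name per request
        # instead of once at startup
        set $upstream http://myapp.internal:8080;
        proxy_pass $upstream;
    }
}
```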
Girish Ramakrishnan 7b5df02a0e app proxy: validate uri 2022-09-29 18:56:10 +02:00
Girish Ramakrishnan 4f0e0706b2 backups: fix id
avoid box_box_ and mail_mail_ in backup ids
2022-09-29 18:01:19 +02:00
Girish Ramakrishnan 1f74febdb0 mail: do not clear eventlog on restart 2022-09-28 22:16:32 +02:00
Girish Ramakrishnan 49bf333355 merge these changelog entries 2022-09-28 18:22:00 +02:00
Girish Ramakrishnan c4af06dd66 remove duplicate changelog entry 2022-09-28 18:21:12 +02:00
Johannes Zellner f5f9a8e520 Send 404 if applink icon does not exist 2022-09-28 15:18:05 +02:00
Johannes Zellner ae376774e4 Ensure we don't put empty applink icon buffers in db 2022-09-28 15:10:17 +02:00
Johannes Zellner ff8c2184f6 Convert applink ts to timestamp 2022-09-28 14:59:30 +02:00
Johannes Zellner a7b056a84c Some tweaks for better app link detection logic 2022-09-28 14:23:45 +02:00
Girish Ramakrishnan 131d456329 Add cloudflare R2 2022-09-27 19:44:20 +02:00
Girish Ramakrishnan d4bba93dbf cloudron-setup: Fix display on newline 2022-09-27 11:25:11 +02:00
Johannes Zellner e332ad96e4 Remove duplicate changes for 7.3.0 2022-09-26 17:42:39 +02:00
Girish Ramakrishnan c455325875 More changes 2022-09-26 09:37:49 +02:00
Girish Ramakrishnan 88e9f751ea mail: update for logging changes 2022-09-26 09:37:36 +02:00
Johannes Zellner 8677e86ace Add authorization to all routes 2022-09-24 21:27:43 +02:00
Johannes Zellner cde22cd0a3 Add token scope tests in routes 2022-09-24 20:56:43 +02:00
Johannes Zellner 6d7f7fbc9a Add some more token scope tests 2022-09-24 18:52:41 +02:00
Johannes Zellner 858c85ee85 Fixup more tests 2022-09-24 18:26:31 +02:00
Johannes Zellner 15d473d506 Fixup some token tests and error handling 2022-09-24 17:29:42 +02:00
Johannes Zellner 70d3040135 Validate token scopes 2022-09-23 13:09:07 +02:00
Johannes Zellner 56c567ac86 Add token scopes 2022-09-22 22:28:59 +02:00
Girish Ramakrishnan 1f5831b79e rename queue route 2022-09-22 19:48:20 +02:00
Girish Ramakrishnan 6382216dc5 mail: proxy queue routes correctly 2022-09-20 20:02:54 +02:00
Johannes Zellner 81b59eae36 improve applink businesslogic tests and fixup api 2022-09-19 21:00:44 +02:00
Girish Ramakrishnan bc3cb6acb5 more changes 2022-09-19 20:56:28 +02:00
Johannes Zellner fa768ad305 Support secureserver.net nameservers from GoDaddy 2022-09-19 19:58:52 +02:00
Johannes Zellner 5184e017c9 Error out if the task being waited for fails in tests 2022-09-19 18:20:27 +02:00
Johannes Zellner d2ea6b2002 Fixup appstore tests 2022-09-19 17:21:55 +02:00
Johannes Zellner 3fcc3ea1aa Fixup reverseproxy tests 2022-09-19 17:04:44 +02:00
Girish Ramakrishnan 15877f45b8 more changes 2022-09-19 10:42:19 +02:00
Girish Ramakrishnan 0a514323a9 Update 7.3 changes 2022-09-19 10:41:48 +02:00
Johannes Zellner 1c07ec219c Do not query disk usage for apps without localstorage 2022-09-16 17:10:07 +02:00
Girish Ramakrishnan 82142f3f31 mail: fix issue where signature was appended to text attachments 2022-09-16 12:40:33 +02:00
Johannes Zellner 554dec640a Rework system graphs api 2022-09-15 16:07:08 +02:00
Girish Ramakrishnan d176ff2582 graphs: move system graph queries to the backend 2022-09-15 12:40:52 +02:00
Girish Ramakrishnan bd7ee437a8 collectd: fix memory stat collection configuration
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/memory.html#usage-in-bytes says
this is the most efficient approach for v1, but that RSS+CACHE(+SWAP) is the more accurate value.
Elsewhere, the note in https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/memory.html#stat-file
says "rss + mapped_file will give you resident set size of cgroup". Overall, it's not clear how
to compute the values, so we just use the file.

https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html is better. https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#memory
says the values are separated out.
2022-09-14 18:15:26 +02:00
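The cgroup v1 accounting described above can be sketched like this (field names are from the kernel docs; this is not the actual collectd configuration, and the function name is illustrative). memory.usage_in_bytes is efficient but fuzzy; summing rss + cache (+ swap) from memory.stat gives the more accurate value:

```javascript
'use strict';

// compute memory usage from a cgroup v1 memory.stat text blob
function memoryUsageFromStat(statText) {
    const fields = {};
    for (const line of statText.trim().split('\n')) {
        const [ key, value ] = line.split(' ');
        fields[key] = parseInt(value, 10);
    }
    // RSS+CACHE(+SWAP) per the cgroup v1 memory docs
    return (fields.rss || 0) + (fields.cache || 0) + (fields.swap || 0);
}

// example memory.stat excerpt
const sample = 'cache 4096\nrss 8192\nswap 0\nmapped_file 1024';
console.log(memoryUsageFromStat(sample)); // 12288
```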
Girish Ramakrishnan 0250661402 Revert spurious change 2022-09-14 17:59:44 +02:00
Girish Ramakrishnan 9cef08aa6a mail relay: do not accept TLS servers
haraka can only relay via STARTTLS
2022-09-14 17:42:21 +02:00
Johannes Zellner bead9589a1 Move app graphs graphite query to backend 2022-09-14 14:39:28 +02:00
Girish Ramakrishnan c5b631c0e5 mail: catch all is already fully qualified 2022-09-11 13:49:20 +02:00
Girish Ramakrishnan 4e75694ac6 mail: require catch all to be absolute 2022-09-11 12:56:58 +02:00
Girish Ramakrishnan 2a93c703ef mailserver: add queue routes 2022-08-31 08:45:18 +02:00
Johannes Zellner 3c92971665 If backup storage precondition is not met we want to throw 2022-08-29 22:54:23 +02:00
Johannes Zellner 563391c2f1 remove PermitRootLogin check as we now use cloudron-support user 2022-08-25 18:53:09 +02:00
Girish Ramakrishnan d4555886f4 add note on the reason for the flag 2022-08-25 16:36:57 +02:00
Girish Ramakrishnan a584fad278 proxyAuth: add supportsBearerAuth flag
required for firefly-iii
2022-08-25 16:12:42 +02:00
Girish Ramakrishnan e21f39bc0b Update mail container for quota support 2022-08-23 18:48:06 +02:00
Johannes Zellner 84ca85b315 Ensure app services like redis are also started on restart if previously stopped 2022-08-23 11:41:08 +02:00
Girish Ramakrishnan d1bdb80c72 Update mail container for quota support 2022-08-22 19:03:47 +02:00
Johannes Zellner d20f8d5e75 Fix acme refactoring 2022-08-22 12:55:43 +02:00
Johannes Zellner b2de6624fd Make email actions buttons 2022-08-21 12:22:53 +02:00
Girish Ramakrishnan 1591541c7f mail: allow aliases to have wildcard
this came out of https://forum.cloudron.io/topic/6350/disposable-email-prefixes-for-existing-mailboxes/
2022-08-18 15:22:00 +02:00
Girish Ramakrishnan 6124323d52 improve mailbox.update 2022-08-18 12:38:46 +02:00
Girish Ramakrishnan b23189b45c mail: quota support 2022-08-18 11:31:40 +02:00
Girish Ramakrishnan 1c18c16e38 typo 2022-08-15 21:09:25 +02:00
Girish Ramakrishnan d07b1c7280 directoryServer: move out start/stop from cron 2022-08-15 21:08:22 +02:00
Girish Ramakrishnan 20d722f076 Fix test 2022-08-15 20:45:55 +02:00
Girish Ramakrishnan bb3be9f380 style 2022-08-15 20:45:55 +02:00
Girish Ramakrishnan edd284fe0b rename user directory to directory server 2022-08-15 20:45:51 +02:00
Girish Ramakrishnan b5cc7d90a9 Fix crash when cron seed file is missing 2022-08-10 22:07:05 +02:00
Girish Ramakrishnan 251c1f9757 add readOnly attribute check for port bindings 2022-08-10 14:22:31 +02:00
Girish Ramakrishnan 03cd9bcc7c Update readOnly flag to tcpPorts and udpPorts 2022-08-10 13:57:00 +02:00
Johannes Zellner fc8572c2af Raise alert for when an app cannot be autoupdated 2022-08-10 12:19:54 +02:00
Johannes Zellner a913660aeb Ensure we have a BoxError here 2022-08-10 12:19:54 +02:00
Girish Ramakrishnan 9c82765512 parseInt returns NaN on failure 2022-08-08 20:33:41 +02:00
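The gotcha named in this commit: parseInt() never throws, it returns NaN on failure, and NaN compares unequal to everything including itself, so a failure check must use Number.isNaN() rather than an equality test (this is a generic sketch, not the patched Cloudron code):

```javascript
'use strict';

const port = parseInt('not-a-number', 10);

console.log(port === NaN);        // always false, even when port is NaN -- the wrong check
console.log(Number.isNaN(port));  // true -- the right check
```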
Johannes Zellner ace96bd228 Fix stringification for debug of taskError object if set 2022-08-08 13:12:53 +02:00
Johannes Zellner 02d95810a6 Do not include proxy apps in graphs 2022-08-05 14:38:57 +02:00
Johannes Zellner 0fcb202364 Expose groups as memberof in ldap and userdirectory 2022-08-04 11:22:16 +02:00
Johannes Zellner 88eb809c6e For ldap users created on first login, make sure we also check 2fa if enabled 2022-08-03 18:20:43 +02:00
Johannes Zellner 1534eaf6f7 Fixup applink tests 2022-08-03 14:57:58 +02:00
Johannes Zellner a2a60ff426 Add support for LDAP cn=...+totptoken=.. 2022-08-02 15:27:34 +02:00
Johannes Zellner afc70ac332 Expose twoFactorAuthenticationEnabled state of users via user directory 2022-08-02 15:27:34 +02:00
Girish Ramakrishnan d5e5b64df2 cloudron-setup/motd: show ipv4 or ipv6 setup link 2022-08-01 18:32:07 +02:00
Girish Ramakrishnan 4a18ecc0ef unbound: enable ip6 2022-08-01 14:15:09 +02:00
Girish Ramakrishnan f355403412 npm: make it work with ipv6 only servers 2022-08-01 14:15:09 +02:00
Girish Ramakrishnan 985320d355 switch registry url based on ipv6 availability 2022-08-01 14:15:09 +02:00
Girish Ramakrishnan 26c9d8bc88 notification: Fix crash when backupId is null 2022-08-01 14:15:09 +02:00
Girish Ramakrishnan 2b81163179 add to changes 2022-07-30 13:16:19 +02:00
Johannes Zellner 6715efca50 Distinguish ghost/impersonate logins from others 2022-07-29 20:39:18 +02:00
Johannes Zellner 612b1d6030 Also remove the virtual user and admin groups for userdirectory 2022-07-29 11:17:31 +02:00
Johannes Zellner b71254a0c3 Remove virtual user and admin groups to ldap user records 2022-07-29 11:11:53 +02:00
Johannes Zellner c0e5f60592 Also stash random minute cron tick in seed file 2022-07-29 09:15:42 +02:00
Girish Ramakrishnan 64243425ce installer: suppress VERSION not found error 2022-07-27 06:16:27 +02:00
Girish Ramakrishnan 9ad7fda3cd ubuntu: do not explicitly disable ipv6
IIRC, we had this because unbound will not start up on servers with IPv6 disabled (in the kernel).
Maybe this is a thing of the past by now.
2022-07-27 06:16:03 +02:00
Girish Ramakrishnan c0eedc97ac collectd: always disable FQDNLookup 2022-07-25 17:01:49 +02:00
Johannes Zellner 5b4a1e0ec1 Make certificate cron job more predictable with persistent hourly seed 2022-07-25 15:40:49 +02:00
Johannes Zellner 5b31486dc9 Randomize certificate renewal check over a whole day 2022-07-22 19:32:43 +02:00
Girish Ramakrishnan 116cde19f9 constants: location -> subdomain 2022-07-14 15:18:17 +05:30
Girish Ramakrishnan 14fc089f05 Fixup user and acme cert syncing 2022-07-14 15:04:45 +05:30
Girish Ramakrishnan 885d60f7cc reverseproxy: add setUserCertificate 2022-07-14 13:25:41 +05:30
Girish Ramakrishnan d33fd7b886 do not use bundle terminology
apparently, bundle is also like a cert chain
2022-07-14 12:39:41 +05:30
Girish Ramakrishnan ba067a959c reverseproxy: per location user certificates 2022-07-14 12:21:30 +05:30
Girish Ramakrishnan a246cb7e73 return location certificates 2022-07-14 11:57:04 +05:30
Girish Ramakrishnan f0abd7edc8 certificateJson can be null 2022-07-14 10:52:31 +05:30
Girish Ramakrishnan 127470ae59 domains: fix error handling 2022-07-14 10:35:59 +05:30
Girish Ramakrishnan efac46e40e verifyDomainConfig: just throw the error 2022-07-14 10:32:30 +05:30
Girish Ramakrishnan 6ab237034d remove superfluous validation 2022-07-13 12:06:48 +05:30
Girish Ramakrishnan 2af29fd844 cleanupCerts: add progress 2022-07-13 11:22:47 +05:30
Girish Ramakrishnan 1549f6a4d0 fix various terminology in code
subdomain, domain - strings
location - { subdomain, domain }
bundle - { cert, key }
bundlePath - { certFilePath, keyFilePath }

vhost is really just for virtual hosting
fqdn for others
2022-07-13 10:15:09 +05:30
Girish Ramakrishnan 5d16aca8f4 add script to recreate containers 2022-07-12 20:51:51 +05:30
Johannes Zellner 2facc6774b applinks icon improvements 2022-07-08 18:07:52 +02:00
Johannes Zellner e800c7d282 Only list applinks a user has access to 2022-07-08 15:14:48 +02:00
Johannes Zellner a58228952a Support accessRestriction for visibility of applinks 2022-07-07 19:44:59 +02:00
Johannes Zellner 3511856a7c support applink tags 2022-07-07 19:11:47 +02:00
Johannes Zellner 006a53dc7a Do not spam the logs on get queries 2022-07-07 18:56:21 +02:00
Johannes Zellner 45c73798b9 Fixup typo 2022-07-07 18:53:52 +02:00
Johannes Zellner c704884b10 Ensure applink label is a string 2022-07-07 18:53:27 +02:00
Johannes Zellner b54113ade3 Improve applink meta info detection 2022-07-07 18:19:53 +02:00
Johannes Zellner ac00225a75 Support applink update 2022-07-07 16:53:06 +02:00
Johannes Zellner f43fd21929 Better applink icon support 2022-07-07 16:06:04 +02:00
Johannes Zellner 741c21b368 Fixup applink routes 2022-07-07 13:01:23 +02:00
Johannes Zellner 5a26fe7361 Add applinks.js to routes/index 2022-07-07 12:44:12 +02:00
Johannes Zellner 1185dc7f79 Attempt to fetch applink icon and label from page 2022-07-07 12:36:53 +02:00
Johannes Zellner e1ac2b7b00 Add initial applink support 2022-07-06 20:37:52 +02:00
Girish Ramakrishnan e2c6672a5c better wording 2022-07-02 17:16:47 +05:30
Johannes Zellner 5c50534e21 Improve backup cleanup progress message 2022-07-01 14:18:50 +02:00
Girish Ramakrishnan 55e2139c69 restore: encrypted filenames 2022-06-27 09:49:58 -07:00
Johannes Zellner 34ff3462e9 Fixup backup_config migration script 2022-06-27 17:16:04 +02:00
Girish Ramakrishnan 104bdaf76b mail: cgroup v2 detection fix
there is a crash in the mail container when fts/solr is enabled
2022-06-26 14:28:22 -07:00
Girish Ramakrishnan c9f7b9a8a6 backups: make filename encryption optional 2022-06-26 09:37:22 -07:00
Girish Ramakrishnan 2e5d89be6b allow space in backup label 2022-06-24 09:18:51 -07:00
Girish Ramakrishnan bcf474aab6 redis: rebuild 2022-06-23 15:52:59 -07:00
Girish Ramakrishnan dea74f05ab remove bogus logic
db-migrate always runs a migration in a transaction. so no volume
was created in case of a failure
2022-06-23 10:31:13 -07:00
Girish Ramakrishnan 69e0f2f727 7.2.5 changes
(cherry picked from commit 131f823e57)
2022-06-23 10:27:57 -07:00
Girish Ramakrishnan 080f701f33 hetzner: debug typo 2022-06-22 22:12:19 -07:00
Girish Ramakrishnan 94a196bfa0 Fix issue where only 25 group members were returned
This is because GROUP_CONCAT defaults to 1024. uuid is 40 chars.
1024/40 = ~25
2022-06-22 17:54:52 -07:00
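The arithmetic above: GROUP_CONCAT is silently truncated at group_concat_max_len, which defaults to 1024 bytes, and with 40-character uuids plus separators that cuts the list off after roughly 1024 / 40 ≈ 25 members. One way out is to raise the limit (table and column names here are illustrative, not the actual fix):

```sql
-- raise the GROUP_CONCAT truncation limit for this session
SET SESSION group_concat_max_len = 1000000;

SELECT groupId, GROUP_CONCAT(userId) AS memberIds
    FROM groupMembers
    GROUP BY groupId;
```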
Girish Ramakrishnan 3a63158763 rename function to setMembers 2022-06-22 17:36:19 -07:00
Girish Ramakrishnan d9c47efe1f Fix storage volume migration
Patch the migration so it runs again properly in 7.2.5

https://forum.cloudron.io/topic/7256/app-data-regression-in-v7-2-4
(cherry picked from commit c2fdb9ae3f)
2022-06-22 17:16:47 -07:00
Johannes Zellner e818e5f7d5 Reload volumes in case one was created in the for loop 2022-06-22 15:32:50 +02:00
Girish Ramakrishnan cac0933334 typo 2022-06-13 13:55:04 -07:00
Girish Ramakrishnan b74f01bb9e cloudron-setup: memory keeps going lower 2022-06-13 10:58:55 -07:00
Girish Ramakrishnan 1f2d596a4a 7.2.4 changes
(cherry picked from commit 61a1ac6983)
2022-06-10 13:31:46 -07:00
Girish Ramakrishnan ce06b2e150 Fix upstreamUri validation 2022-06-10 11:23:58 -07:00
Girish Ramakrishnan 9bd9b72e5d apphealthmonitor: Fix crash 2022-06-10 11:09:41 -07:00
Girish Ramakrishnan a32166bc9d data dir: allow sameness of old and new dir
this makes it easy to migrate to a new volume setup
2022-06-09 17:49:33 -07:00
Johannes Zellner f382b8f1f5 Set real upstreamUri for healthcheck 2022-06-09 15:04:09 +02:00
Johannes Zellner fbc7fcf04b Put healthcheck errors in app logs 2022-06-09 14:56:40 +02:00
Johannes Zellner 11d7dfa071 Accept upstreamUri as string for proxy app install 2022-06-09 14:35:05 +02:00
Johannes Zellner 923a9f6560 Rename RELAY_APPSTORE_ID to PROXY_APP_APPSTORE_ID 2022-06-09 13:57:57 +02:00
Johannes Zellner 25f44f58e3 Start task also needs to skip container starting for proxy app 2022-06-09 10:48:54 +02:00
Johannes Zellner d55a6a5eec Update reverse proxy app config on upstreamUri change 2022-06-09 10:48:54 +02:00
Johannes Zellner f854d86986 Use upstreamUri in reverseproxy config 2022-06-09 10:48:54 +02:00
Johannes Zellner 6a7379e64c Add apps.upstreamUri support 2022-06-09 10:48:54 +02:00
Johannes Zellner a955457ee7 Support proxy app 2022-06-09 10:48:54 +02:00
Girish Ramakrishnan 67801020ed mailboxDisplayName is optional 2022-06-08 14:25:16 -07:00
Girish Ramakrishnan 037f4195da guard against two level subdir moves
this has never worked since the -wholename check only works for
one level deep
2022-06-08 12:24:11 -07:00
Girish Ramakrishnan 8cf0922401 Fix container creation when migrating data dir 2022-06-08 11:52:22 -07:00
Girish Ramakrishnan 6311c78bcd Fix quoting 2022-06-08 11:25:20 -07:00
Girish Ramakrishnan 544ca6e1f4 initial xfs support 2022-06-08 10:58:00 -07:00
Girish Ramakrishnan 6de198eaad sendmail: check for supportsDisplayName
it seems quite a few apps don't support this. So, we need a way for the
UI to hide the field so that users are not confused.
2022-06-08 09:43:58 -07:00
Girish Ramakrishnan 6c67f13d90 Use bind mount instead of volume
see also c76b211ce0
2022-06-06 15:59:59 -07:00
Girish Ramakrishnan 7598cf2baf consolidate storage validation logic 2022-06-06 12:50:21 -07:00
Girish Ramakrishnan 7dba294961 storage: check volume status 2022-06-03 10:43:59 -07:00
Girish Ramakrishnan 4bee30dd83 fix more typos 2022-06-03 09:10:37 -07:00
Girish Ramakrishnan 7952a67ed2 guess the volume type better 2022-06-03 07:54:16 -07:00
Johannes Zellner 50b2eabfde Also fixup userdirectory tests 2022-06-03 13:59:21 +02:00
Johannes Zellner 591067ee22 Fixup ldap group search tests 2022-06-03 13:54:31 +02:00
Johannes Zellner 88f78c01ba Remove virtual groups users and admin exposed via ldap 2022-06-03 13:32:35 +02:00
Girish Ramakrishnan dddc5a1994 migrate app dataDir to volumes 2022-06-02 16:29:01 -07:00
Girish Ramakrishnan 8fc8128957 Make apps.getDataDir async 2022-06-02 11:19:33 -07:00
Girish Ramakrishnan c76b211ce0 localstorage: remove usage of docker volumes
just move to bind mounts. The initial idea was to use docker volume backends,
but we have no plans for this. In addition, using volumes means that
files get copied from the image into the volume on first run, which is
not desired. People are putting /app/data stuff into images, which ideally
should break.
2022-06-02 11:09:27 -07:00
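The volume-vs-bind-mount distinction above can be illustrated with a compose snippet (hypothetical, not Cloudron's actual container config): a named volume copies whatever the image ships in /app/data into the volume on first run, while a bind mount starts from the host directory as-is.

```yaml
services:
  app:
    image: example/app
    volumes:
      - /mnt/appdata/myapp:/app/data   # bind mount: host path, no copy-on-first-run
      # - appdata:/app/data            # named volume: image's /app/data copied in on first run

# volumes:
#   appdata:
```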
Girish Ramakrishnan 0c13504928 Bump version 2022-06-02 11:02:06 -07:00
Girish Ramakrishnan 26ab7f2767 add mailbox display name to schema 2022-06-01 22:06:34 -07:00
Girish Ramakrishnan f78dabbf7e mail: add display name validation tests 2022-06-01 22:04:36 -07:00
Girish Ramakrishnan 39c5c44ac3 cloudron-firewall: fix spurious line 2022-06-01 09:28:50 -07:00
Girish Ramakrishnan 2dea7f8fe9 sendmail: restrict few characters in the display name 2022-06-01 08:13:19 -07:00
Girish Ramakrishnan 85af0d96d2 sendmail: allow display name to be set 2022-06-01 01:38:16 -07:00
Girish Ramakrishnan 176e917f51 update 7.2.3 changes 2022-05-31 13:27:00 -07:00
Girish Ramakrishnan 534c8f9c3f collectd: on one system, localhost was missing in /etc/hosts 2022-05-27 16:10:38 -07:00
Girish Ramakrishnan 5ee9feb0d2 If disk name has '.', replace with '_'
graphite uses . as the separator between different metric parts

see #348
2022-05-27 16:00:08 -07:00
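Since graphite treats '.' as the metric path separator, a disk name like "loop0.backing" would otherwise split into extra path components. A sketch of the mangling (the function name is illustrative, not the actual collectd/box code):

```javascript
'use strict';

// graphite metric path components must not contain '.', so replace with '_'
function graphiteSafeName(diskName) {
    return diskName.replace(/\./g, '_');
}

console.log(graphiteSafeName('loop0.backing')); // loop0_backing
```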
Girish Ramakrishnan 723453dd1c 7.2.3 changes 2022-05-27 12:04:01 -07:00
Girish Ramakrishnan 45c9ddeacf appstore: allow re-registration on server side delete 2022-05-26 22:27:58 -07:00
Girish Ramakrishnan 5b075e3918 transfer ownership is not used anymore 2022-05-26 14:30:32 -07:00
Girish Ramakrishnan c9916c4107 Really disable FQDNLookup 2022-05-25 15:48:25 -07:00
Girish Ramakrishnan c7956872cb Add to changes 2022-05-25 15:14:01 -07:00
Girish Ramakrishnan 3adf8b5176 collectd: FQDNLookup causes collectd install to fail
this is on ubuntu 20

https://forum.cloudron.io/topic/7091/aws-ubuntu-20-04-installation-issue
2022-05-25 15:10:55 -07:00
Girish Ramakrishnan 40eae601da Update cloudron-manifestformat for new scheduler patterns 2022-05-23 11:02:04 -07:00
Girish Ramakrishnan 3eead2fdbe Fix possible duplicate key issue
console_server_origin is injected by the new setup script even for
7.1.x
2022-05-22 20:48:29 -07:00
Girish Ramakrishnan 9fcd6f9c0a cron: add @service which is probably clearer than @reboot in app context 2022-05-20 10:57:44 -07:00
Girish Ramakrishnan 17910584ca cron: add extensions
https://www.man7.org/linux/man-pages/man5/crontab.5.html#EXTENSIONS
2022-05-20 10:53:30 -07:00
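The crontab(5) extensions referenced here are shorthands for common schedules; the script paths below are just placeholders:

```
@reboot    /usr/local/bin/on-start.sh   # once, at startup
@yearly    /usr/local/bin/report.sh     # 0 0 1 1 *  (@annually is the same)
@monthly   /usr/local/bin/rotate.sh     # 0 0 1 * *
@weekly    /usr/local/bin/cleanup.sh    # 0 0 * * 0
@daily     /usr/local/bin/backup.sh     # 0 0 * * *  (@midnight is the same)
@hourly    /usr/local/bin/poll.sh       # 0 * * * *
```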
Girish Ramakrishnan d9a02faf7a make the globals const 2022-05-20 09:38:22 -07:00
Girish Ramakrishnan d366f3107d net_admin: enable IPv6 forwarding in the container 2022-05-19 17:10:05 -07:00
Girish Ramakrishnan 2596afa7b3 appstore: set utmSource during user registration 2022-05-19 00:00:48 -07:00
Johannes Zellner aa1e8dc930 Give the dashboard a way to check backgroundImage availability 2022-05-17 15:25:44 +02:00
Johannes Zellner f3c66056b5 Allow to unset background image 2022-05-17 13:17:05 +02:00
Girish Ramakrishnan 93bacd00da Fix exec web socket/upload/download 2022-05-16 11:46:28 -07:00
Girish Ramakrishnan b5c2a0ff44 exec: rework API to get exit code 2022-05-16 11:23:58 -07:00
Johannes Zellner 6bd478b8b0 Add profile backgroundImage api 2022-05-15 12:08:11 +02:00
Girish Ramakrishnan c5c62ff294 Add to changes 2022-05-14 09:36:56 -07:00
Girish Ramakrishnan 7ed8678d50 mongodb: fix import timeout 2022-05-09 17:20:16 -07:00
Girish Ramakrishnan e19e5423f0 cloudron-support: Remove unused var 2022-05-07 19:25:06 -07:00
Girish Ramakrishnan 622ba01c7a ubuntu 22: collectd disappeared
https://bugs.launchpad.net/ubuntu/+source/collectd/+bug/1971093

also, remove the ubuntu 16 hack
2022-05-06 20:02:02 -07:00
Girish Ramakrishnan 935da3ed15 vultr: set ttl to 120
https://www.vultr.com/docs/introduction-to-vultr-dns/#Limitations
2022-05-06 12:29:12 -07:00
Girish Ramakrishnan ce054820a6 add migration to add consoleServerOrigin 2022-05-05 09:59:22 -07:00
Johannes Zellner a7668624b4 Ensure we also set the new console server origin during installation 2022-05-05 16:52:11 +02:00
Girish Ramakrishnan 01b36bb37e proxyAuth: make the POST to /logout redirect
for firefly-III
2022-05-03 18:19:22 -07:00
Girish Ramakrishnan 5d1aaf6bc6 cloudron-setup: silent 2022-05-03 10:20:19 -07:00
Girish Ramakrishnan 7ceb307110 Add 7.2.1 changes 2022-05-03 09:15:21 -07:00
Girish Ramakrishnan 6371b7c20d dns: add hetzner 2022-05-02 22:33:30 -07:00
Girish Ramakrishnan 7ec648164e Remove usage of util 2022-05-02 21:32:10 -07:00
Girish Ramakrishnan 6e98f5f36c backuptask: make upload/download async 2022-04-30 16:42:14 -07:00
Girish Ramakrishnan a098c6da34 noop: removeDir is async 2022-04-30 16:35:39 -07:00
Girish Ramakrishnan 94e70aca33 storage: downloadDir is not part of interface 2022-04-30 16:24:49 -07:00
Girish Ramakrishnan ea01586b52 storage: make copy async 2022-04-30 16:24:45 -07:00
Girish Ramakrishnan 8ceb80dc44 hush: return BoxError everywhere 2022-04-29 19:02:59 -07:00
Girish Ramakrishnan 2280b7eaf5 Add S3MultipartDownloadStream
This extends the modern Readable class
2022-04-29 18:23:56 -07:00
Girish Ramakrishnan 1c1d247a24 cloudron-support: update key 2022-04-29 12:39:42 -07:00
Girish Ramakrishnan 90a6ad8cf5 support: new keys (ed25519)
rsa keys are slowly going away
2022-04-29 12:37:27 -07:00
Girish Ramakrishnan 80d91e5540 Add missing changelog 2022-04-29 09:58:17 -07:00
Girish Ramakrishnan 26cf084e1c tarPack/tarExtract do not need a callback 2022-04-28 21:58:00 -07:00
Girish Ramakrishnan 8ef730ad9c backuptask: make upload/download async 2022-04-28 21:37:08 -07:00
Girish Ramakrishnan 7123ec433c split up backupformat logic into separate files 2022-04-28 19:10:57 -07:00
Girish Ramakrishnan c67d9fd082 move crypto code to hush.js 2022-04-28 18:12:17 -07:00
Girish Ramakrishnan dd8f710605 Fix failing test 2022-04-28 18:03:36 -07:00
Girish Ramakrishnan e097b79f65 godaddy: do not remove all the records of type 2022-04-28 17:46:03 -07:00
129 changed files with 4652 additions and 2257 deletions
@@ -2470,4 +2470,75 @@
* proxyAuth: set X-Remote-User (rfc3875)
* GoDaddy: there is now a delete API
* nginx: use ubuntu packages for ubuntu 20.04 and 22.04
* Ubuntu 22.04 LTS support
* Add Hetzner DNS
* cron: add support for extensions (@reboot, @weekly etc)
* Add profile backgroundImage api
* exec: rework API to get exit code
* Add update available filter
[7.2.1]
* Refactor backup code to use async/await
* mongodb: fix bug where a small timeout prevented import of large backups
* Add update available filter
* exec: rework API to get exit code
* Add profile backgroundImage api
* cron: add support for extensions (@reboot, @weekly etc)
[7.2.2]
* Update cloudron-manifestformat for new scheduler patterns
* collectd: FQDNLookup causes collectd install to fail
[7.2.3]
* appstore: allow re-registration on server side delete
* transfer ownership route is not used anymore
* graphite: fix issue where disk names with '.' do not render
* dark mode fixes
* sendmail: mail from display name
* Use volumes for app data instead of raw path
* initial xfs support
[7.2.4]
* volumes: Ensure long volume names do not overflow the table
* Move all appstore filter to the left
* app data: allow sameness of old and new dir
[7.2.5]
* Fix storage volume migration
* Fix issue where only 25 group members were returned
* Fix eventlog display
[7.3.0]
* Proxied apps
* Applinks - app bookmarks in dashboard
* backups: optional encryption of backup file names
* eventlog: add event for impersonated user login
* ldap & user directory: Remove virtual user and admin groups
* Randomize certificate generation cronjob to lighten load on Let's Encrypt servers
* mail: catch all address can be any domain
* mail: accept only STARTTLS servers for relay
* graphs: cgroup v2 support
* mail: fix issue where signature was appended to text attachments
* redis: restart button will now rebuild if the container is missing
* backups: allow space in label name
* mail: fix crash when solr is enabled on Ubuntu 22 (cgroup v2 detection fix)
* mail: fix issue where certificate renewal did not restart the mail container properly
* notification: Fix crash when backupId is null
* IPv6: initial support for ipv6 only server
* User directory: Cloudron connector uses 2FA auth
* port bindings: add read only flag
* mail: add storage quota support
* mail: allow aliases to have wildcard
* proxyAuth: add supportsBearerAuth flag
* backups: Fix precondition check which was not erroring if mount is missing
* mail: add queue management API and UI
* graphs: show app disk usage graphs
* UI: fix issue where mailbox display name was not init correctly
* wasabi: add singapore and sydney regions
* filemanager: add split view
* nginx: fix zero length certs when out of disk space
* read only API tokens
[7.3.1]
* Add cloudflare R2
@@ -9,7 +9,7 @@ const fs = require('fs'),
     safe = require('safetydance'),
     server = require('./src/server.js'),
     settings = require('./src/settings.js'),
-    userdirectory = require('./src/userdirectory.js');
+    directoryServer = require('./src/directoryserver.js');
 
 let logFd;
@@ -38,8 +38,8 @@ async function startServers() {
     await proxyAuth.start();
     await ldap.start();
 
-    const conf = await settings.getUserDirectoryConfig();
-    if (conf.enabled) await userdirectory.start();
+    const conf = await settings.getDirectoryServerConfig();
+    if (conf.enabled) await directoryServer.start();
 }
 
 async function main() {
@@ -54,7 +54,7 @@ async function main() {
         await proxyAuth.stop();
         await server.stop();
-        await userdirectory.stop();
+        await directoryServer.stop();
         await ldap.stop();
         setTimeout(process.exit.bind(process), 3000);
     });
@@ -64,7 +64,7 @@ async function main() {
         await proxyAuth.stop();
         await server.stop();
-        await userdirectory.stop();
+        await directoryServer.stop();
         await ldap.stop();
         setTimeout(process.exit.bind(process), 3000);
     });
@@ -0,0 +1,20 @@
'use strict';
exports.up = function(db, callback) {
db.all('SELECT * FROM settings WHERE name = ?', [ 'api_server_origin' ], function (error, result) {
if (error || result.length === 0) return callback(error);
let consoleOrigin;
switch (result[0].value) {
case 'https://api.dev.cloudron.io': consoleOrigin = 'https://console.dev.cloudron.io'; break;
case 'https://api.staging.cloudron.io': consoleOrigin = 'https://console.staging.cloudron.io'; break;
default: consoleOrigin = 'https://console.cloudron.io'; break;
}
db.runSql('REPLACE INTO settings (name, value) VALUES (?, ?)', [ 'console_server_origin', consoleOrigin ], callback);
});
};
exports.down = function(db, callback) {
callback();
};
@@ -0,0 +1,9 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN backgroundImage MEDIUMBLOB', callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN backgroundImage', callback);
};
@@ -0,0 +1,12 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN mailboxDisplayName VARCHAR(128) DEFAULT "" NOT NULL', [], callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN mailboxDisplayName', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,53 @@
'use strict';
const path = require('path'),
safe = require('safetydance'),
uuid = require('uuid');
function getMountPoint(dataDir) {
const output = safe.child_process.execSync(`df --output=target "${dataDir}" | tail -1`, { encoding: 'utf8' });
if (!output) return dataDir;
const mountPoint = output.trim();
if (mountPoint === '/') return dataDir;
return mountPoint;
}
exports.up = async function(db) {
// use safe() here because this migration failed midway in 7.2.4
await safe(db.runSql('ALTER TABLE apps ADD storageVolumeId VARCHAR(128), ADD FOREIGN KEY(storageVolumeId) REFERENCES volumes(id)'));
await safe(db.runSql('ALTER TABLE apps ADD storageVolumePrefix VARCHAR(128)'));
await safe(db.runSql('ALTER TABLE apps ADD CONSTRAINT apps_storageVolume UNIQUE (storageVolumeId, storageVolumePrefix)'));
const apps = await db.runSql('SELECT * FROM apps WHERE dataDir IS NOT NULL');
for (const app of apps) {
const allVolumes = await db.runSql('SELECT * FROM volumes');
console.log(`data-dir (${app.id}): migrating data dir ${app.dataDir}`);
const mountPoint = getMountPoint(app.dataDir);
const prefix = path.relative(mountPoint, app.dataDir);
console.log(`data-dir (${app.id}): migrating to mountpoint ${mountPoint} and prefix ${prefix}`);
const volume = allVolumes.find(v => v.hostPath === mountPoint);
if (volume) {
console.log(`data-dir (${app.id}): using existing volume ${volume.id}`);
await db.runSql('UPDATE apps SET storageVolumeId=?, storageVolumePrefix=? WHERE id=?', [ volume.id, prefix, app.id ]);
continue;
}
const id = uuid.v4().replace(/-/g, ''); // to make systemd mount file names more readable
const name = `appdata-${id}`;
const type = app.dataDir === mountPoint ? 'filesystem' : 'mountpoint';
console.log(`data-dir (${app.id}): creating new volume ${id}`);
await db.runSql('INSERT INTO volumes (id, name, hostPath, mountType, mountOptionsJson) VALUES (?, ?, ?, ?, ?)', [ id, name, mountPoint, type, JSON.stringify({}) ]);
await db.runSql('UPDATE apps SET storageVolumeId=?, storageVolumePrefix=? WHERE id=?', [ id, prefix, app.id ]);
}
await db.runSql('ALTER TABLE apps DROP COLUMN dataDir');
};
exports.down = async function(/*db*/) {
};
@@ -0,0 +1,9 @@
'use strict';
exports.up = async function (db) {
await db.runSql('ALTER TABLE apps ADD COLUMN upstreamUri VARCHAR(256) DEFAULT ""');
};
exports.down = async function (db) {
await db.runSql('ALTER TABLE apps DROP COLUMN upstreamUri');
};
@@ -0,0 +1,15 @@
'use strict';
exports.up = async function(db) {
const result = await db.runSql('SELECT * FROM settings WHERE name=?', [ 'backup_config' ]);
if (!result.length) return;
const backupConfig = JSON.parse(result[0].value);
if (backupConfig.encryption && backupConfig.format === 'rsync') backupConfig.encryptedFilenames = true;
await db.runSql('UPDATE settings SET value=? WHERE name=?', [ JSON.stringify(backupConfig), 'backup_config', ]);
};
exports.down = async function(/* db */) {
};
@@ -0,0 +1,22 @@
'use strict';
exports.up = async function (db) {
var cmd = 'CREATE TABLE applinks(' +
'id VARCHAR(128) NOT NULL UNIQUE,' +
'accessRestrictionJson TEXT,' +
'creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
'updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,' +
'ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,' +
'label VARCHAR(128),' +
'tagsJson VARCHAR(2048),' +
'icon MEDIUMBLOB,' +
'upstreamUri VARCHAR(256) DEFAULT "",' +
'PRIMARY KEY (id)) CHARACTER SET utf8 COLLATE utf8_bin';
await db.runSql(cmd);
};
exports.down = async function (db) {
await db.runSql('DROP TABLE applinks');
};
@@ -0,0 +1,17 @@
'use strict';
const async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN storageQuota BIGINT DEFAULT 0'),
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN messagesQuota BIGINT DEFAULT 0'),
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP COLUMN storageQuota'),
db.runSql.bind(db, 'ALTER TABLE mailboxes DROP COLUMN messagesQuota')
], callback);
};
@@ -0,0 +1,18 @@
'use strict';
const safe = require('safetydance');
exports.up = async function (db) {
const mailDomains = await db.runSql('SELECT * FROM mail', []);
for (const mailDomain of mailDomains) {
let catchAll = safe.JSON.parse(mailDomain.catchAllJson) || [];
if (catchAll.length === 0) continue;
catchAll = catchAll.map(a => `${a}@${mailDomain.domain}`);
await db.runSql('UPDATE mail SET catchAllJson = ? WHERE domain = ?', [ JSON.stringify(catchAll), mailDomain.domain ]);
}
};
exports.down = async function( /* db */) {
};
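The migration above rewrites bare catch-all local parts into full addresses. A standalone sketch of that transformation (the domain and local parts are made-up examples):

```javascript
// Sketch of the catch-all rewrite performed by the migration above.
// The domain and local parts are hypothetical examples.
const mailDomain = { domain: 'example.com', catchAllJson: '["admin","info"]' };

let catchAll = JSON.parse(mailDomain.catchAllJson) || [];
catchAll = catchAll.map(a => `${a}@${mailDomain.domain}`);

console.log(JSON.stringify(catchAll)); // ["admin@example.com","info@example.com"]
```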
@@ -0,0 +1,13 @@
'use strict';
exports.up = async function (db) {
await db.runSql('ALTER TABLE tokens DROP COLUMN scope');
await db.runSql('ALTER TABLE tokens ADD COLUMN scopeJson TEXT');
await db.runSql('UPDATE tokens SET scopeJson = ?', [ JSON.stringify({'*':'rw'})]);
};
exports.down = async function (db) {
await db.runSql('ALTER TABLE tokens ADD COLUMN scope VARCHAR(512) NOT NULL DEFAULT ""');
await db.runSql('ALTER TABLE tokens DROP COLUMN scopeJson');
};
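The migration replaces the old flat `scope` string with a `scopeJson` object mapping resources to access strings, with `{"*":"rw"}` granting read/write everywhere. A hypothetical checker (not Cloudron's actual implementation) illustrating how such a value can be interpreted:

```javascript
// Hypothetical helper (not the box code) showing one way to read a scopeJson
// value: a resource-specific entry wins, otherwise the '*' wildcard applies.
function hasScope(scopeJson, resource, access) {
    const scopes = JSON.parse(scopeJson) || {};
    const granted = scopes[resource] ?? scopes['*'] ?? '';
    return granted.includes(access);
}

console.log(hasScope('{"*":"rw"}', 'apps', 'w'));   // true
console.log(hasScope('{"apps":"r"}', 'apps', 'w')); // false
```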
+25 -4
@@ -33,6 +33,7 @@ CREATE TABLE IF NOT EXISTS users(
resetTokenCreationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
active BOOLEAN DEFAULT 1,
avatar MEDIUMBLOB NOT NULL,
backgroundImage MEDIUMBLOB,
loginLocationsJson MEDIUMTEXT, // { locations: [{ ip, userAgent, city, country, ts }] }
INDEX creationTime_index (creationTime),
@@ -57,7 +58,7 @@ CREATE TABLE IF NOT EXISTS tokens(
accessToken VARCHAR(128) NOT NULL UNIQUE,
identifier VARCHAR(128) NOT NULL, // resourceId: app id or user id
clientId VARCHAR(128),
scope VARCHAR(512) NOT NULL,
scopeJson TEXT,
expires BIGINT NOT NULL, // FIXME: make this a timestamp
lastUsedTime TIMESTAMP NULL,
PRIMARY KEY(accessToken));
@@ -85,13 +86,15 @@ CREATE TABLE IF NOT EXISTS apps(
enableAutomaticUpdate BOOLEAN DEFAULT 1,
enableMailbox BOOLEAN DEFAULT 1, // whether sendmail addon is enabled
mailboxName VARCHAR(128), // mailbox of this app
mailboxDomain VARCHAR(128), // mailbox domain of this apps
mailboxDomain VARCHAR(128), // mailbox domain of this app
mailboxDisplayName VARCHAR(128), // mailbox display name
enableInbox BOOLEAN DEFAULT 0, // whether recvmail addon is enabled
inboxName VARCHAR(128), // mailbox of this app
inboxDomain VARCHAR(128), // mailbox domain of this apps
inboxDomain VARCHAR(128), // mailbox domain of this app
label VARCHAR(128), // display name
tagsJson VARCHAR(2048), // array of tags
dataDir VARCHAR(256) UNIQUE,
storageVolumeId VARCHAR(128),
storageVolumePrefix VARCHAR(128),
taskId INTEGER, // current task
errorJson TEXT,
servicesConfigJson TEXT, // app services configuration
@@ -99,9 +102,12 @@ CREATE TABLE IF NOT EXISTS apps(
appStoreIcon MEDIUMBLOB,
icon MEDIUMBLOB,
crontab TEXT,
upstreamUri VARCHAR(256) DEFAULT "",
FOREIGN KEY(mailboxDomain) REFERENCES domains(domain),
FOREIGN KEY(taskId) REFERENCES tasks(id),
FOREIGN KEY(storageVolumeId) REFERENCES volumes(id),
UNIQUE (storageVolumeId, storageVolumePrefix),
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS appPortBindings(
@@ -211,6 +217,8 @@ CREATE TABLE IF NOT EXISTS mailboxes(
domain VARCHAR(128),
active BOOLEAN DEFAULT 1,
enablePop3 BOOLEAN DEFAULT 0,
storageQuota BIGINT DEFAULT 0,
messagesQuota BIGINT DEFAULT 0,
FOREIGN KEY(domain) REFERENCES mail(domain),
FOREIGN KEY(aliasDomain) REFERENCES mail(domain),
@@ -292,4 +300,17 @@ CREATE TABLE IF NOT EXISTS blobs(
value MEDIUMBLOB,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS appLinks(
id VARCHAR(128) NOT NULL UNIQUE,
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the app was installed
updateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, // when the last app update was done
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, // when this db record was updated (useful for UI caching)
label VARCHAR(128), // display name
tagsJson VARCHAR(2048), // array of tags
icon MEDIUMBLOB,
upstreamUri VARCHAR(256) DEFAULT "",
PRIMARY KEY(id)) CHARACTER SET utf8 COLLATE utf8_bin;
+840 -147
File diff suppressed because it is too large
+2 -2
@@ -18,7 +18,7 @@
"aws-sdk": "^2.1115.0",
"basic-auth": "^2.0.1",
"body-parser": "^1.20.0",
"cloudron-manifestformat": "^5.15.2",
"cloudron-manifestformat": "^5.18.0",
"connect": "^3.7.0",
"connect-lastmile": "^2.1.1",
"connect-timeout": "^1.9.0",
@@ -34,6 +34,7 @@
"express": "^4.17.3",
"ipaddr.js": "^2.0.1",
"js-yaml": "^4.1.0",
"jsdom": "^20.0.0",
"json": "^11.0.0",
"jsonwebtoken": "^8.5.1",
"ldapjs": "^2.3.2",
@@ -44,7 +45,6 @@
"multiparty": "^4.2.3",
"mysql": "^2.18.1",
"nodemailer": "^6.7.3",
"nodemailer-smtp-transport": "^2.7.4",
"progress-stream": "^2.0.0",
"qrcode": "^1.5.0",
"readdirp": "^3.6.0",
+26 -11
@@ -11,7 +11,7 @@ trap exitHandler EXIT
# change this to a hash when we make an upgrade release
readonly LOG_FILE="/var/log/cloudron-setup.log"
readonly MINIMUM_DISK_SIZE_GB="18" # size of "/" required to fit the docker images; 18 is a safe bet given different size reporting on 20GB minimum disks
readonly MINIMUM_MEMORY="974" # this is mostly reported for 1GB main memory (DO 992, EC2 990, Linode 989, Serverdiscounter.com 974)
readonly MINIMUM_MEMORY="960" # this is mostly reported for 1GB main memory (DO 992, EC2 967, Linode 989, Serverdiscounter.com 974)
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400"
@@ -26,8 +26,8 @@ readonly GREEN='\033[32m'
readonly DONE='\033[m'
# verify the system has minimum requirements met
if [[ "${rootfs_type}" != "ext4" ]]; then
echo "Error: Cloudron requires '/' to be ext4" # see #364
if [[ "${rootfs_type}" != "ext4" && "${rootfs_type}" != "xfs" ]]; then
echo "Error: Cloudron requires '/' to be ext4 or xfs" # see #364
exit 1
fi
@@ -62,6 +62,7 @@ requestedVersion=""
installServerOrigin="https://api.cloudron.io"
apiServerOrigin="https://api.cloudron.io"
webServerOrigin="https://cloudron.io"
consoleServerOrigin="https://console.cloudron.io"
sourceTarballUrl=""
rebootServer="true"
setupToken="" # this is a OTP for securing an installation (https://forum.cloudron.io/topic/6389/add-password-for-initial-configuration)
@@ -80,10 +81,12 @@ while true; do
if [[ "$2" == "dev" ]]; then
apiServerOrigin="https://api.dev.cloudron.io"
webServerOrigin="https://dev.cloudron.io"
consoleServerOrigin="https://console.dev.cloudron.io"
installServerOrigin="https://api.dev.cloudron.io"
elif [[ "$2" == "staging" ]]; then
apiServerOrigin="https://api.staging.cloudron.io"
webServerOrigin="https://staging.cloudron.io"
consoleServerOrigin="https://console.staging.cloudron.io"
installServerOrigin="https://api.staging.cloudron.io"
elif [[ "$2" == "unstable" ]]; then
installServerOrigin="https://api.dev.cloudron.io"
@@ -209,9 +212,10 @@ fi
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('api_server_origin', '${apiServerOrigin}');" 2>/dev/null
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('web_server_origin', '${webServerOrigin}');" 2>/dev/null
mysql -uroot -ppassword -e "REPLACE INTO box.settings (name, value) VALUES ('console_server_origin', '${consoleServerOrigin}');" 2>/dev/null
if [[ -n "${appstoreSetupToken}" ]]; then
if ! setupResponse=$(curl -X POST -H "Content-type: application/json" --data "{\"setupToken\": \"${appstoreSetupToken}\"}" "${apiServerOrigin}/api/v1/cloudron_setup_done"); then
if ! setupResponse=$(curl -sX POST -H "Content-type: application/json" --data "{\"setupToken\": \"${appstoreSetupToken}\"}" "${apiServerOrigin}/api/v1/cloudron_setup_done"); then
echo "Could not complete setup. See ${LOG_FILE} for details"
exit 1
fi
@@ -232,20 +236,31 @@ while true; do
sleep 10
done
if ! ip=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p'); then
ip='<IP>'
fi
ip4=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
ip6=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv6.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
url4=""
url6=""
fallbackUrl=""
if [[ -z "${setupToken}" ]]; then
url="https://${ip}"
[[ -n "${ip4}" ]] && url4="https://${ip4}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>"
else
url="https://${ip}/?setupToken=${setupToken}"
[[ -n "${ip4}" ]] && url4="https://${ip4}/?setupToken=${setupToken}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]/?setupToken=${setupToken}"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>?setupToken=${setupToken}"
fi
echo -e "\n\n${GREEN}After reboot, visit ${url} and accept the self-signed certificate to finish setup.${DONE}\n"
echo -e "\n\n${GREEN}After reboot, visit one of the following URLs and accept the self-signed certificate to finish setup.${DONE}\n"
[[ -n "${url4}" ]] && echo -e " * ${GREEN}${url4}${DONE}"
[[ -n "${url6}" ]] && echo -e " * ${GREEN}${url6}${DONE}"
[[ -n "${fallbackUrl}" ]] && echo -e " * ${GREEN}${fallbackUrl}${DONE}"
if [[ "${rebootServer}" == "true" ]]; then
systemctl stop box mysql # sometimes mysql ends up having corrupt privilege tables
read -p "The server has to be rebooted to apply all the settings. Reboot now ? [Y/n] " yn
# https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#ANSI_002dC-Quoting
read -p $'\n'"The server has to be rebooted to apply all the settings. Reboot now ? [Y/n] " yn
yn=${yn:-y}
case $yn in
[Yy]* ) exitHandler; systemctl reboot;;
+1 -2
@@ -8,7 +8,7 @@ set -eu -o pipefail
PASTEBIN="https://paste.cloudron.io"
OUT="/tmp/cloudron-support.log"
LINE="\n========================================================\n"
CLOUDRON_SUPPORT_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQVilclYAIu+ioDp/sgzzFz6YU0hPcRYY7ze/LiF/lC7uQqK062O54BFXTvQ3ehtFZCx3bNckjlT2e6gB8Qq07OM66De4/S/g+HJW4TReY2ppSPMVNag0TNGxDzVH8pPHOysAm33LqT2b6L/wEXwC6zWFXhOhHjcMqXvi8Ejaj20H1HVVcf/j8qs5Thkp9nAaFTgQTPu8pgwD8wDeYX1hc9d0PYGesTADvo6HF4hLEoEnefLw7PaStEbzk2fD3j7/g5r5HcgQQXBe74xYZ/1gWOX2pFNuRYOBSEIrNfJEjFJsqk3NR1+ZoMGK7j+AZBR4k0xbrmncQLcQzl6MMDzkp support@cloudron.io"
CLOUDRON_SUPPORT_PUBLIC_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGWS+930b8QdzbchGljt3KSljH9wRhYvht8srrtQHdzg support@cloudron.io"
HELP_MESSAGE="
This script collects diagnostic information to help debug server related issues.
@@ -86,7 +86,6 @@ if [[ "${enableSSH}" == "true" ]]; then
echo -e $LINE"SSH"$LINE >> $OUT
echo "Username: ${ssh_user}" >> $OUT
echo "Port: ${ssh_port}" >> $OUT
echo "PermitRootLogin: ${permit_root_login}" >> $OUT
echo "Key file: ${keys_file}" >> $OUT
echo -n "Enabling ssh access for the Cloudron support team..."
+18 -19
@@ -1,6 +1,7 @@
#!/bin/bash
# This script is run on the base ubuntu. Put things here which are managed by ubuntu
# This script is also run after ubuntu upgrade
set -euv -o pipefail
@@ -18,14 +19,6 @@ export DEBIAN_FRONTEND=noninteractive
readonly ubuntu_codename=$(lsb_release -cs)
readonly ubuntu_version=$(lsb_release -rs)
# enable ubuntu proposed for collectd (https://launchpad.net/ubuntu/+source/collectd)
if [[ "${ubuntu_version}" == "22.04" ]]; then
cat <<EOF >/etc/apt/sources.list.d/ubuntu-$(lsb_release -cs)-proposed.list
# Enable Ubuntu proposed archive
deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -cs)-proposed restricted main multiverse universe
EOF
fi
# hold grub since updating it breaks on some VPS providers. also, dist-upgrade will trigger it
apt-mark hold grub* >/dev/null
apt-get -o Dpkg::Options::="--force-confdef" update -y
@@ -117,18 +110,24 @@ update-grub
echo "==> Install collectd"
# without this, libnotify4 will install gnome-shell
apt-get install -y libnotify4 --no-install-recommends
if ! apt-get install -y --no-install-recommends libcurl3-gnutls collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
fi
apt-get install -y libnotify4 libcurl3-gnutls --no-install-recommends
# https://bugs.launchpad.net/ubuntu/+source/collectd/+bug/1872281
if [[ "${ubuntu_version}" == "20.04" ]]; then
echo -e "\nLD_PRELOAD=/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8.so" >> /etc/default/collectd
elif [[ "${ubuntu_version}" == "22.04" ]]; then
if [[ "${ubuntu_version}" == "22.04" ]]; then
readonly launchpad="https://launchpad.net/ubuntu/+source/collectd/5.12.0-9/+build/23189375/+files"
cd /tmp && wget -q "${launchpad}/collectd_5.12.0-9_amd64.deb" "${launchpad}/collectd-utils_5.12.0-9_amd64.deb" "${launchpad}/collectd-core_5.12.0-9_amd64.deb" "${launchpad}/libcollectdclient1_5.12.0-9_amd64.deb"
cd /tmp && apt install -y --no-install-recommends ./libcollectdclient1_5.12.0-9_amd64.deb ./collectd-core_5.12.0-9_amd64.deb ./collectd_5.12.0-9_amd64.deb ./collectd-utils_5.12.0-9_amd64.deb && rm -f /tmp/collectd_*.deb
echo -e "\nLD_PRELOAD=/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/libpython3.10.so" >> /etc/default/collectd
else
if ! apt-get install -y --no-install-recommends collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd, continuing anyway. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
fi
if [[ "${ubuntu_version}" == "20.04" ]]; then
echo -e "\nLD_PRELOAD=/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8.so" >> /etc/default/collectd
fi
fi
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
# some hosts like atlantic install ntp which conflicts with timedatectl. https://serverfault.com/questions/1024770/ubuntu-20-04-time-sync-problems-and-possibly-incorrect-status-information
echo "==> Configuring host"
@@ -146,7 +145,7 @@ sed -e '/Port 22/ i # NOTE: Cloudron only supports moving SSH to port 202. See h
# https://bugs.launchpad.net/ubuntu/+source/base-files/+bug/1701068
echo "==> Disabling motd news"
if [ -f "/etc/default/motd-news" ]; then
if [[ -f "/etc/default/motd-news" ]]; then
sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/motd-news
fi
@@ -181,7 +180,7 @@ systemctl disable systemd-resolved || true
ufw disable || true
# we need unbound to work as this is required for installer.sh to do any DNS requests
echo -e "server:\n\tinterface: 127.0.0.1\n\tdo-ip6: no" > /etc/unbound/unbound.conf.d/cloudron-network.conf
echo -e "server:\n\tinterface: 127.0.0.1\n" > /etc/unbound/unbound.conf.d/cloudron-network.conf
systemctl restart unbound
# Ubuntu 22 has private home directories by default (https://discourse.ubuntu.com/t/private-home-directories-for-ubuntu-21-04-onwards/)
+11 -4
@@ -69,7 +69,7 @@ readonly ubuntu_codename=$(lsb_release -cs)
readonly is_update=$(systemctl is-active -q box && echo "yes" || echo "no")
log "Updating from $(cat $box_src_dir/VERSION) to $(cat $box_src_tmp_dir/VERSION)"
log "Updating from $(cat $box_src_dir/VERSION 2>/dev/null) to $(cat $box_src_tmp_dir/VERSION 2>/dev/null)"
# https://docs.docker.com/engine/installation/linux/ubuntulinux/
readonly docker_version=20.10.14
@@ -145,12 +145,18 @@ log "downloading new addon images"
images=$(node -e "let i = require('${box_src_tmp_dir}/src/infra_version.js'); console.log(i.baseImages.map(function (x) { return x.tag; }).join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")
log "\tPulling docker images: ${images}"
if ! curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip; then
docker_registry=registry.ipv6.docker.com
else
docker_registry=registry-1.docker.io
fi
for image in ${images}; do
while ! docker pull "${image}"; do # this pulls the image using the sha256
while ! docker pull "${docker_registry}/${image}"; do # this pulls the image using the sha256
log "Could not pull ${image}"
sleep 5
done
while ! docker pull "${image%@sha256:*}"; do # this will tag the image for readability
while ! docker pull "${docker_registry}/${image%@sha256:*}"; do # this will tag the image for readability
log "Could not pull ${image%@sha256:*}"
sleep 5
done
@@ -163,7 +169,8 @@ CLOUDRON_SYSLOG_VERSION="1.1.0"
while [[ ! -f "${CLOUDRON_SYSLOG}" || "$(${CLOUDRON_SYSLOG} --version)" != ${CLOUDRON_SYSLOG_VERSION} ]]; do
rm -rf "${CLOUDRON_SYSLOG_DIR}"
mkdir -p "${CLOUDRON_SYSLOG_DIR}"
if npm install --unsafe-perm -g --prefix "${CLOUDRON_SYSLOG_DIR}" cloudron-syslog@${CLOUDRON_SYSLOG_VERSION}; then break; fi
# verbatim is not needed in node 18 since that is the default there. in node 16, ipv4 is preferred and this breaks on ipv6 only servers
if NODE_OPTIONS="--dns-result-order=verbatim" npm install --unsafe-perm -g --prefix "${CLOUDRON_SYSLOG_DIR}" cloudron-syslog@${CLOUDRON_SYSLOG_VERSION}; then break; fi
log "Failed to install cloudron-syslog, trying again"
sleep 5
done
+32
@@ -0,0 +1,32 @@
#!/bin/bash
set -eu -o pipefail
readonly logfile="/home/yellowtent/platformdata/logs/box.log"
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
echo "This will re-create all the containers. Services will go down for a bit."
read -p "Do you want to proceed? (y/N) " -n 1 -r choice
echo
if [[ ! $choice =~ ^[Yy]$ ]]; then
exit 1
fi
echo -n "Re-creating addon containers (this takes a while) ."
line_count=$(wc -l < "${logfile}")
sed -e 's/"version": ".*",/"version":"48.0.0",/' -i /home/yellowtent/platformdata/INFRA_VERSION
systemctl restart box
while ! tail -n "+${line_count}" "${logfile}" | grep -q "platform is ready"; do
echo -n "."
sleep 2
done
echo -e "\nDone.\nThe Cloudron dashboard will say 'Configuring (Queued)' for each app. The apps will come up in a short while."
+1 -2
@@ -77,8 +77,6 @@ if [[ -f "${ldap_allowlist_json}" ]]; then
done < "${ldap_allowlist_json}"
# ldap server we expose 3004 and also redirect from standard ldaps port 636
$iptables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null || $iptables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
$iptables -t nat -I PREROUTING -p tcp --dport 636 -j REDIRECT --to-ports 3004
$iptables -t filter -A CLOUDRON -m set --match-set cloudron_ldap_allowlist src -p tcp --dport 3004 -j ACCEPT
@@ -149,6 +147,7 @@ for port in 3306 5432 6379 27017; do
$iptables -A CLOUDRON_RATELIMIT -p tcp --syn -s 172.18.0.0/16 -d 172.18.0.0/16 --dport ${port} -m connlimit --connlimit-above 5000 -j CLOUDRON_RATELIMIT_LOG
done
# Add the rate limit chain to input chain
$iptables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null || $iptables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
$ip6tables -t filter -C INPUT -j CLOUDRON_RATELIMIT 2>/dev/null || $ip6tables -t filter -I INPUT 1 -j CLOUDRON_RATELIMIT
+33 -12
@@ -4,30 +4,51 @@
printf "**********************************************************************\n\n"
cache_file="/var/cache/cloudron-motd-cache"
readonly cache_file4="/var/cache/cloudron-motd-cache4"
readonly cache_file6="/var/cache/cloudron-motd-cache6"
if [[ -z "$(ls -A /home/yellowtent/platformdata/addons/mail/dkim)" ]]; then
if [[ ! -f "${cache_file}" ]]; then
curl --fail --connect-timeout 2 --max-time 2 -q https://ipv4.api.cloudron.io/api/v1/helper/public_ip --output "${cache_file}" || true
fi
if [[ -f "${cache_file}" ]]; then
ip=$(sed -n -e 's/.*"ip": "\(.*\)"/\1/p' /var/cache/cloudron-motd-cache)
url4=""
url6=""
fallbackUrl=""
function detectIp() {
if [[ ! -f "${cache_file4}" ]]; then
ip4=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv4.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
[[ -n "${ip4}" ]] && echo "${ip4}" > "${cache_file4}"
else
ip='<IP>'
ip4=$(cat "${cache_file4}")
fi
if [[ ! -f "${cache_file6}" ]]; then
ip6=$(curl -s --fail --connect-timeout 2 --max-time 2 https://ipv6.api.cloudron.io/api/v1/helper/public_ip | sed -n -e 's/.*"ip": "\(.*\)"/\1/p' || true)
[[ -n "${ip6}" ]] && echo "${ip6}" > "${cache_file6}"
else
ip6=$(cat "${cache_file6}")
fi
if [[ ! -f /etc/cloudron/SETUP_TOKEN ]]; then
url="https://${ip}"
[[ -n "${ip4}" ]] && url4="https://${ip4}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>"
else
setupToken="$(cat /etc/cloudron/SETUP_TOKEN)"
url="https://${ip}/?setupToken=${setupToken}"
[[ -n "${ip4}" ]] && url4="https://${ip4}/?setupToken=${setupToken}"
[[ -n "${ip6}" ]] && url6="https://[${ip6}]/?setupToken=${setupToken}"
[[ -z "${ip4}" && -z "${ip6}" ]] && fallbackUrl="https://<IP>?setupToken=${setupToken}"
fi
}
if [[ -z "$(ls -A /home/yellowtent/platformdata/addons/mail/dkim)" ]]; then
detectIp
printf "\t\t\tWELCOME TO CLOUDRON\n"
printf "\t\t\t-------------------\n"
printf '\n\e[1;32m%-6s\e[m\n\n' "Visit ${url} on your browser and accept the self-signed certificate to finish setup."
printf "Cloudron overview - https://docs.cloudron.io/ \n"
printf '\n\e[1;32m%-6s\e[m\n' "Visit one of the following URLs on your browser and accept the self-signed certificate to finish setup."
[[ -n "${url4}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${url4}"
[[ -n "${url6}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${url6}"
[[ -n "${fallbackUrl}" ]] && printf '\e[1;32m%-6s\e[m\n' " * ${fallbackUrl}"
printf "\nCloudron overview - https://docs.cloudron.io/ \n"
printf "Cloudron setup - https://docs.cloudron.io/installation/#setup \n"
else
printf "\t\t\tNOTE TO CLOUDRON ADMINS\n"
+2 -2
@@ -13,7 +13,7 @@
##############################################################################
Hostname "localhost"
#FQDNLookup true
FQDNLookup false
#BaseDir "/var/lib/collectd"
#PluginDir "/usr/lib/collectd"
#TypesDB "/usr/share/collectd/types.db" "/etc/collectd/my_types.db"
@@ -232,7 +232,7 @@ LoadPlugin swap
<Plugin write_graphite>
<Node "graphing">
Host "localhost"
Host "127.0.0.1"
Port "2003"
Protocol "tcp"
LogSendErrors true
+1 -1
@@ -14,7 +14,7 @@ def read():
for d in disks:
device = d[0]
if 'devicemapper' in d[1] or not device.startswith('/dev/'): continue
instance = device[len('/dev/'):].replace('/', '_') # see #348
instance = device[len('/dev/'):].replace('/', '_').replace('.', '_') # see #348
try:
st = os.statvfs(d[1]) # handle disk removal
+3
@@ -1,6 +1,9 @@
# sudo logging breaks journalctl output with very long urls (systemd bug)
Defaults !syslog
Defaults!/home/yellowtent/box/src/scripts/checkvolume.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/checkvolume.sh
Defaults!/home/yellowtent/box/src/scripts/clearvolume.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/clearvolume.sh
+1 -1
@@ -6,7 +6,7 @@ server:
interface: 127.0.0.1
interface: 172.18.0.1
ip-freebind: yes
do-ip6: no
do-ip6: yes
access-control: 127.0.0.1 allow
access-control: 172.18.0.1/16 allow
cache-max-negative-ttl: 30
-26
@@ -1,26 +0,0 @@
'use strict';
exports = module.exports = {
verifyToken
};
const assert = require('assert'),
BoxError = require('./boxerror.js'),
safe = require('safetydance'),
tokens = require('./tokens.js'),
users = require('./users.js');
async function verifyToken(accessToken) {
assert.strictEqual(typeof accessToken, 'string');
const token = await tokens.getByAccessToken(accessToken);
if (!token) throw new BoxError(BoxError.INVALID_CREDENTIALS, 'No such token');
const user = await users.get(token.identifier);
if (!user) throw new BoxError(BoxError.INVALID_CREDENTIALS, 'User not found');
if (!user.active) throw new BoxError(BoxError.INVALID_CREDENTIALS, 'User not active');
await safe(tokens.update(token.id, { lastUsedTime: new Date() })); // ignore any error
return user;
}
+11 -11
@@ -522,32 +522,32 @@ Acme2.prototype.loadDirectory = async function () {
});
};
Acme2.prototype.getCertificate = async function (vhost, domain, paths) {
assert.strictEqual(typeof vhost, 'string');
Acme2.prototype.getCertificate = async function (fqdn, domain, paths) {
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof paths, 'object');
debug(`getCertificate: start acme flow for ${vhost} from ${this.caDirectory}`);
debug(`getCertificate: start acme flow for ${fqdn} from ${this.caDirectory}`);
if (vhost !== domain && this.wildcard) { // bare domain is not part of wildcard SAN
vhost = dns.makeWildcard(vhost);
debug(`getCertificate: will get wildcard cert for ${vhost}`);
if (fqdn !== domain && this.wildcard) { // bare domain is not part of wildcard SAN
fqdn = dns.makeWildcard(fqdn);
debug(`getCertificate: will get wildcard cert for ${fqdn}`);
}
await this.loadDirectory();
await this.acmeFlow(vhost, domain, paths);
await this.acmeFlow(fqdn, domain, paths);
};
async function getCertificate(vhost, domain, paths, options) {
assert.strictEqual(typeof vhost, 'string'); // this can also be a wildcard domain (for alias domains)
async function getCertificate(fqdn, domain, paths, options) {
assert.strictEqual(typeof fqdn, 'string'); // this can also be a wildcard domain (for alias domains)
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof paths, 'object');
assert.strictEqual(typeof options, 'object');
await promiseRetry({ times: 3, interval: 0, debug }, async function () {
debug(`getCertificate: for vhost ${vhost} and domain ${domain}`);
debug(`getCertificate: for fqdn ${fqdn} and domain ${domain}`);
const acme = new Acme2(options || { });
return await acme.getCertificate(vhost, domain, paths);
return await acme.getCertificate(fqdn, domain, paths);
});
}
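The wildcard branch above calls `dns.makeWildcard` to fold a vhost into its wildcard form. A sketch of what such a helper presumably does (the real `dns.makeWildcard` implementation may differ): swap the left-most label for `*` so every sibling subdomain is covered by the same SAN.

```javascript
// Sketch of an assumed makeWildcard helper: replace the left-most DNS label
// of the fqdn with '*' (e.g. app.example.com -> *.example.com).
function makeWildcard(fqdn) {
    return '*.' + fqdn.split('.').slice(1).join('.');
}

console.log(makeWildcard('app.example.com')); // *.example.com
```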
+18 -10
@@ -70,25 +70,35 @@ async function checkAppHealth(app, options) {
const manifest = app.manifest;
const [error, data] = await safe(docker.inspect(app.containerId));
if (error || !data || !data.State) return await setHealth(app, apps.HEALTH_ERROR);
if (data.State.Running !== true) return await setHealth(app, apps.HEALTH_DEAD);
let healthCheckUrl, host;
if (app.manifest.id === constants.PROXY_APP_APPSTORE_ID) {
healthCheckUrl = app.upstreamUri;
host = '';
} else {
const [error, data] = await safe(docker.inspect(app.containerId));
if (error || !data || !data.State) return await setHealth(app, apps.HEALTH_ERROR);
if (data.State.Running !== true) return await setHealth(app, apps.HEALTH_DEAD);
// non-appstore apps may not have healthCheckPath
if (!manifest.healthCheckPath) return await setHealth(app, apps.HEALTH_HEALTHY);
// non-appstore apps may not have healthCheckPath
if (!manifest.healthCheckPath) return await setHealth(app, apps.HEALTH_HEALTHY);
healthCheckUrl = `http://${app.containerIp}:${manifest.httpPort}${manifest.healthCheckPath}`;
host = app.fqdn;
}
const healthCheckUrl = `http://${app.containerIp}:${manifest.httpPort}${manifest.healthCheckPath}`;
const [healthCheckError, response] = await safe(superagent
.get(healthCheckUrl)
.set('Host', app.fqdn) // required for some apache configs with rewrite rules
.set('Host', host) // required for some apache configs with rewrite rules
.set('User-Agent', 'Mozilla (CloudronHealth)') // required for some apps (e.g. minio)
.redirects(0)
.ok(() => true)
.timeout(options.timeout * 1000));
if (healthCheckError) {
await apps.appendLogLine(app, `=> Healthcheck error: ${healthCheckError}`);
await setHealth(app, apps.HEALTH_UNHEALTHY);
} else if (response.status > 403) { // 2xx and 3xx are ok. even 401 and 403 are ok for now (for WP sites)
await apps.appendLogLine(app, `=> Healthcheck error: got response status ${response.status}`);
await setHealth(app, apps.HEALTH_UNHEALTHY);
} else {
await setHealth(app, apps.HEALTH_HEALTHY);
@@ -129,9 +139,7 @@ async function processDockerEvents(options) {
const [error, info] = await safe(getContainerInfo(containerId));
const program = error ? containerId : (info.addonName || info.app.fqdn);
const now = Date.now();
// do not send mails for dev apps
const notifyUser = !(info.app && info.app.debugMode) && ((now - gLastOomMailTime) > OOM_EVENT_LIMIT);
const notifyUser = !info?.app?.debugMode && ((now - gLastOomMailTime) > OOM_EVENT_LIMIT);
debug(`OOM ${program} notifyUser: ${notifyUser}. lastOomTime: ${gLastOomMailTime} (now: ${now})`);
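The `notifyUser` change above swaps the explicit `&&` chain for optional chaining; for any defined `info` the two guards agree (and `?.` additionally tolerates a null `info`, which the old form did not). A quick check with made-up event objects:

```javascript
// The optional-chaining guard matches the old explicit form for every
// defined `info` value (test objects below are hypothetical).
const cases = [ {}, { app: {} }, { app: { debugMode: false } }, { app: { debugMode: true } } ];
for (const info of cases) {
    const oldForm = !(info.app && info.app.debugMode);
    const newForm = !info?.app?.debugMode;
    console.assert(oldForm === newForm);
}
```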
+178
@@ -0,0 +1,178 @@
'use strict';
exports = module.exports = {
list,
listByUser,
add,
get,
update,
remove,
getIcon
};
const assert = require('assert'),
apps = require('./apps.js'),
database = require('./database.js'),
BoxError = require('./boxerror.js'),
uuid = require('uuid'),
safe = require('safetydance'),
superagent = require('superagent'),
validator = require('validator'),
jsdom = require('jsdom'),
debug = require('debug')('box:applinks');
const APPLINKS_FIELDS = [ 'id', 'accessRestrictionJson', 'creationTime', 'updateTime', 'ts', 'label', 'tagsJson', 'icon', 'upstreamUri' ].join(',');
function postProcess(result) {
assert.strictEqual(typeof result, 'object');
assert(result.tagsJson === null || typeof result.tagsJson === 'string');
result.tags = safe.JSON.parse(result.tagsJson) || [];
delete result.tagsJson;
assert(result.accessRestrictionJson === null || typeof result.accessRestrictionJson === 'string');
result.accessRestriction = safe.JSON.parse(result.accessRestrictionJson);
if (result.accessRestriction && !result.accessRestriction.users) result.accessRestriction.users = [];
delete result.accessRestrictionJson;
result.ts = new Date(result.ts).getTime();
result.icon = result.icon ? result.icon : null;
}
async function list() {
const results = await database.query(`SELECT ${APPLINKS_FIELDS} FROM applinks ORDER BY upstreamUri`);
results.forEach(postProcess);
return results;
}
async function listByUser(user) {
assert.strictEqual(typeof user, 'object');
const result = await list();
return result.filter((app) => apps.canAccess(app, user));
}
async function detectMetaInfo(applink) {
assert.strictEqual(typeof applink, 'object');
const [error, response] = await safe(superagent.get(applink.upstreamUri));
if (error || !response.text) throw new BoxError(BoxError.BAD_FIELD, 'cannot fetch upstream uri for favicon and label');
// fixup upstreamUri to match the redirect
if (response.redirects && response.redirects.length) {
debug(`detectMetaInfo: found redirect from ${applink.upstreamUri} to ${response.redirects[0]}`);
applink.upstreamUri = response.redirects[0];
}
if (applink.icon && applink.label) return; // both already provided, nothing to detect
const dom = new jsdom.JSDOM(response.text);
if (!applink.icon) {
let favicon = '';
if (dom.window.document.querySelector('link[rel="apple-touch-icon"]')) favicon = dom.window.document.querySelector('link[rel="apple-touch-icon"]').href ;
if (!favicon.endsWith('.png') && dom.window.document.querySelector('meta[name="msapplication-TileImage"]')) favicon = dom.window.document.querySelector('meta[name="msapplication-TileImage"]').content ;
if (!favicon.endsWith('.png') && dom.window.document.querySelector('link[rel="shortcut icon"]')) favicon = dom.window.document.querySelector('link[rel="shortcut icon"]').href ;
if (!favicon.endsWith('.png') && dom.window.document.querySelector('link[rel="icon"]')) favicon = dom.window.document.querySelector('link[rel="icon"]').href ;
if (!favicon.endsWith('.png') && dom.window.document.querySelector('meta[itemprop="image"]')) favicon = dom.window.document.querySelector('meta[itemprop="image"]').content;
if (favicon) {
if (favicon.startsWith('/')) favicon = applink.upstreamUri + favicon;
const [error, response] = await safe(superagent.get(favicon));
if (error) console.error(`Failed to fetch icon ${favicon}: `, error);
else if (response.ok && response.headers['content-type'] === 'image/png') applink.icon = response.body;
else console.error(`Failed to fetch icon ${favicon}: statusCode=${response.status}`);
} else {
console.error(`Unable to find a suitable icon for ${applink.upstreamUri}`);
}
}
if (!applink.label) {
if (dom.window.document.querySelector('meta[property="og:title"]')) applink.label = dom.window.document.querySelector('meta[property="og:title"]').content;
else if (dom.window.document.querySelector('meta[property="og:site_name"]')) applink.label = dom.window.document.querySelector('meta[property="og:site_name"]').content;
else if (dom.window.document.title) applink.label = dom.window.document.title;
}
}
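For illustration, the favicon fallback in `detectMetaInfo` above can be reduced to an ordered selector table. This is a dependency-free sketch: the `lookup` callback and the sample `page` object are hypothetical stand-ins for jsdom's `document.querySelector` and a parsed page.

```javascript
// Sketch of detectMetaInfo's favicon fallback: the first candidate wins, but a
// non-.png candidate can still be replaced by a later selector in the chain.
const FAVICON_SELECTORS = [
    [ 'link[rel="apple-touch-icon"]', 'href' ],
    [ 'meta[name="msapplication-TileImage"]', 'content' ],
    [ 'link[rel="shortcut icon"]', 'href' ],
    [ 'link[rel="icon"]', 'href' ],
    [ 'meta[itemprop="image"]', 'content' ]
];

function pickFavicon(lookup) { // lookup(selector) -> element-like object or null
    let favicon = '';
    for (const [ selector, attribute ] of FAVICON_SELECTORS) {
        if (favicon.endsWith('.png')) break; // good enough, stop searching
        const element = lookup(selector);
        if (element && element[attribute]) favicon = element[attribute];
    }
    return favicon;
}

// hypothetical page advertising a JPEG touch icon and a PNG shortcut icon
const page = {
    'link[rel="apple-touch-icon"]': { href: '/touch.jpg' },
    'link[rel="shortcut icon"]': { href: '/favicon.png' }
};
console.log(pickFavicon((selector) => page[selector] || null)); // /favicon.png
```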
async function add(applink) {
assert.strictEqual(typeof applink, 'object');
assert.strictEqual(typeof applink.upstreamUri, 'string');
debug(`add: ${applink.upstreamUri}`, applink);
if (applink.icon) {
if (!validator.isBase64(applink.icon)) throw new BoxError(BoxError.BAD_FIELD, 'icon is not base64');
applink.icon = Buffer.from(applink.icon, 'base64');
}
await detectMetaInfo(applink);
const data = {
id: uuid.v4(),
accessRestrictionJson: applink.accessRestriction ? JSON.stringify(applink.accessRestriction) : null,
label: applink.label || '',
tagsJson: applink.tags ? JSON.stringify(applink.tags) : null,
icon: applink.icon || null,
upstreamUri: applink.upstreamUri
};
const query = 'INSERT INTO applinks (id, accessRestrictionJson, label, tagsJson, icon, upstreamUri) VALUES (?, ?, ?, ?, ?, ?)';
const args = [ data.id, data.accessRestrictionJson, data.label, data.tagsJson, data.icon, data.upstreamUri ];
const [error] = await safe(database.query(query, args));
if (error) throw error;
return data.id;
}
async function get(applinkId) {
assert.strictEqual(typeof applinkId, 'string');
const result = await database.query(`SELECT ${APPLINKS_FIELDS} FROM applinks WHERE id = ?`, [ applinkId ]);
if (result.length === 0) throw new BoxError(BoxError.NOT_FOUND, 'Applink not found');
postProcess(result[0]);
return result[0];
}
async function update(applinkId, applink) {
assert.strictEqual(typeof applinkId, 'string');
assert.strictEqual(typeof applink, 'object');
assert.strictEqual(typeof applink.upstreamUri, 'string');
debug(`update: ${applinkId} ${applink.upstreamUri}`, applink);
if (applink.icon) {
if (!validator.isBase64(applink.icon)) throw new BoxError(BoxError.BAD_FIELD, 'icon is not base64');
applink.icon = Buffer.from(applink.icon, 'base64');
}
await detectMetaInfo(applink);
const query = 'UPDATE applinks SET label=?, icon=?, upstreamUri=?, tagsJson=?, accessRestrictionJson=? WHERE id = ?';
const args = [ applink.label, applink.icon || null, applink.upstreamUri, applink.tags ? JSON.stringify(applink.tags) : null, applink.accessRestriction ? JSON.stringify(applink.accessRestriction) : null, applinkId ];
const result = await database.query(query, args);
if (result.affectedRows !== 1) throw new BoxError(BoxError.NOT_FOUND, 'Applink not found');
}
async function remove(applinkId) {
assert.strictEqual(typeof applinkId, 'string');
debug(`remove: ${applinkId}`);
const result = await database.query('DELETE FROM applinks WHERE id = ?', [ applinkId ]);
if (result.affectedRows !== 1) throw new BoxError(BoxError.NOT_FOUND, 'Applink not found');
}
async function getIcon(applinkId) {
assert.strictEqual(typeof applinkId, 'string');
const applink = await get(applinkId);
return applink.icon;
}
@@ -27,6 +27,7 @@ exports = module.exports = {
setAccessRestriction,
setOperators,
setCrontab,
setUpstreamUri,
setLabel,
setIcon,
setTags,
@@ -42,7 +43,7 @@ exports = module.exports = {
setMailbox,
setInbox,
setLocation,
setDataDir,
setStorage,
repair,
restore,
@@ -60,13 +61,17 @@ exports = module.exports = {
getLogPaths,
getLogs,
appendLogLine,
getCertificate,
start,
stop,
restart,
exec,
createExec,
startExec,
getExec,
checkManifestConstraints,
downloadManifest,
@@ -79,7 +84,7 @@ exports = module.exports = {
schedulePendingTasks,
restartAppsUsingAddons,
getDataDir,
getStorageDir,
getIcon,
getMemoryLimit,
getLimits,
@@ -157,6 +162,7 @@ const appstore = require('./appstore.js'),
mail = require('./mail.js'),
manifestFormat = require('cloudron-manifestformat'),
mounts = require('./mounts.js'),
notifications = require('./notifications.js'),
once = require('./once.js'),
os = require('os'),
path = require('path'),
@@ -166,6 +172,7 @@ const appstore = require('./appstore.js'),
semver = require('semver'),
services = require('./services.js'),
settings = require('./settings.js'),
shell = require('./shell.js'),
spawn = require('child_process').spawn,
split = require('split'),
superagent = require('superagent'),
@@ -176,18 +183,21 @@ const appstore = require('./appstore.js'),
util = require('util'),
uuid = require('uuid'),
validator = require('validator'),
volumes = require('./volumes.js'),
_ = require('underscore');
const APPS_FIELDS_PREFIXED = [ 'apps.id', 'apps.appStoreId', 'apps.installationState', 'apps.errorJson', 'apps.runState',
'apps.health', 'apps.containerId', 'apps.manifestJson', 'apps.accessRestrictionJson', 'apps.memoryLimit', 'apps.cpuShares',
'apps.label', 'apps.tagsJson', 'apps.taskId', 'apps.reverseProxyConfigJson', 'apps.servicesConfigJson', 'apps.operatorsJson',
'apps.sso', 'apps.debugModeJson', 'apps.enableBackup', 'apps.proxyAuth', 'apps.containerIp', 'apps.crontab',
'apps.creationTime', 'apps.updateTime', 'apps.enableAutomaticUpdate',
'apps.enableMailbox', 'apps.mailboxName', 'apps.mailboxDomain', 'apps.enableInbox', 'apps.inboxName', 'apps.inboxDomain',
'apps.dataDir', 'apps.ts', 'apps.healthTime', '(apps.icon IS NOT NULL) AS hasIcon', '(apps.appStoreIcon IS NOT NULL) AS hasAppStoreIcon' ].join(',');
'apps.creationTime', 'apps.updateTime', 'apps.enableAutomaticUpdate', 'apps.upstreamUri',
'apps.enableMailbox', 'apps.mailboxDisplayName', 'apps.mailboxName', 'apps.mailboxDomain', 'apps.enableInbox', 'apps.inboxName', 'apps.inboxDomain',
'apps.storageVolumeId', 'apps.storageVolumePrefix', 'apps.ts', 'apps.healthTime', '(apps.icon IS NOT NULL) AS hasIcon', '(apps.appStoreIcon IS NOT NULL) AS hasAppStoreIcon' ].join(',');
// const PORT_BINDINGS_FIELDS = [ 'hostPort', 'type', 'environmentVariable', 'appId' ].join(',');
const CHECKVOLUME_CMD = path.join(__dirname, 'scripts/checkvolume.sh');
function validatePortBindings(portBindings, manifest) {
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof manifest, 'object');
@@ -233,7 +243,10 @@ function validatePortBindings(portBindings, manifest) {
if (!portBindings) return null;
for (let portName in portBindings) {
const tcpPorts = manifest.tcpPorts || { };
const udpPorts = manifest.udpPorts || { };
for (const portName in portBindings) {
if (!/^[a-zA-Z0-9_]+$/.test(portName)) return new BoxError(BoxError.BAD_FIELD, `${portName} is not a valid environment variable in portBindings`);
const hostPort = portBindings[portName];
@@ -241,14 +254,11 @@ function validatePortBindings(portBindings, manifest) {
if (RESERVED_PORTS.indexOf(hostPort) !== -1) return new BoxError(BoxError.BAD_FIELD, `Port ${hostPort} for ${portName} is reserved in portBindings`);
if (RESERVED_PORT_RANGES.find(range => (hostPort >= range[0] && hostPort <= range[1]))) return new BoxError(BoxError.BAD_FIELD, `Port ${hostPort} for ${portName} is reserved in portBindings`);
if (ALLOWED_PORTS.indexOf(hostPort) === -1 && (hostPort <= 1023 || hostPort > 65535)) return new BoxError(BoxError.BAD_FIELD, `${hostPort} for ${portName} is not in permitted range in portBindings`);
}
// it is OK if there is no 1-1 mapping between values in manifest.tcpPorts and portBindings. missing values imply
// that the user wants the service disabled
const tcpPorts = manifest.tcpPorts || { };
const udpPorts = manifest.udpPorts || { };
for (let portName in portBindings) {
if (!(portName in tcpPorts) && !(portName in udpPorts)) return new BoxError(BoxError.BAD_FIELD, `Invalid portBindings ${portName}`);
// it is OK if there is no 1-1 mapping between values in manifest.tcpPorts and portBindings. missing values imply the service is disabled
const portSpec = tcpPorts[portName] || udpPorts[portName];
if (!portSpec) return new BoxError(BoxError.BAD_FIELD, `Invalid portBinding ${portName}`);
if (portSpec.readOnly && portSpec.defaultValue !== hostPort) return new BoxError(BoxError.BAD_FIELD, `portBinding ${portName} is readOnly and cannot have a different value than the default`);
}
return null;
@@ -300,6 +310,18 @@ function translateSecondaryDomains(secondaryDomains) {
function parseCrontab(crontab) {
assert(crontab === null || typeof crontab === 'string');
// https://www.man7.org/linux/man-pages/man5/crontab.5.html#EXTENSIONS
const KNOWN_EXTENSIONS = {
'@service' : '@service', // runs once
'@reboot' : '@service',
'@yearly' : '0 0 1 1 *',
'@annually' : '0 0 1 1 *',
'@monthly' : '0 0 1 * *',
'@weekly' : '0 0 * * 0',
'@daily' : '0 0 * * *',
'@hourly' : '0 * * * *',
};
const result = [];
if (!crontab) return result;
@@ -307,20 +329,28 @@ function parseCrontab(crontab) {
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
if (!line || line.startsWith('#')) continue;
const parts = /^(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(.+)$/.exec(line);
if (!parts) throw new BoxError(BoxError.BAD_FIELD, `Invalid cron configuration at line ${i+1}`);
const schedule = parts.slice(1, 6).join(' ');
const command = parts[6];
if (line.startsWith('@')) {
const parts = /^(@\S+)\s+(.+)$/.exec(line);
if (!parts) throw new BoxError(BoxError.BAD_FIELD, `Invalid cron configuration at line ${i+1}`);
const [, extension, command] = parts;
if (!KNOWN_EXTENSIONS[extension]) throw new BoxError(BoxError.BAD_FIELD, `Unknown extension pattern at line ${i+1}`);
result.push({ schedule: KNOWN_EXTENSIONS[extension], command });
} else {
const parts = /^(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(.+)$/.exec(line);
if (!parts) throw new BoxError(BoxError.BAD_FIELD, `Invalid cron configuration at line ${i+1}`);
const schedule = parts.slice(1, 6).join(' ');
const command = parts[6];
try {
new CronJob('00 ' + schedule, function() {}); // second is disallowed
} catch (ex) {
throw new BoxError(BoxError.BAD_FIELD, `Invalid cron pattern at line ${i+1}`);
try {
new CronJob('00 ' + schedule, function() {}); // second is disallowed
} catch (ex) {
throw new BoxError(BoxError.BAD_FIELD, `Invalid cron pattern at line ${i+1}`);
}
if (command.length === 0) throw new BoxError(BoxError.BAD_FIELD, `Invalid cron pattern. Command must not be empty at line ${i+1}`); // not possible with the regexp we have
result.push({ schedule, command });
}
if (command.length === 0) throw new BoxError(BoxError.BAD_FIELD, `Invalid cron pattern. Command must not be empty at line ${i+1}`); // not possible with the regexp we have
result.push({ schedule, command });
}
return result;
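Isolated from `CronJob` and `BoxError`, the @-extension expansion above behaves like this sketch (the `KNOWN_EXTENSIONS` map and the line regexp are copied from the code; error handling is simplified and the five-field schedule validation is omitted):

```javascript
// Standalone expansion of crontab @-extensions, mirroring KNOWN_EXTENSIONS above.
const KNOWN_EXTENSIONS = {
    '@service' : '@service',
    '@reboot'  : '@service',
    '@yearly'  : '0 0 1 1 *',
    '@annually': '0 0 1 1 *',
    '@monthly' : '0 0 1 * *',
    '@weekly'  : '0 0 * * 0',
    '@daily'   : '0 0 * * *',
    '@hourly'  : '0 * * * *'
};

function expandExtensionLine(line) {
    const parts = /^(@\S+)\s+(.+)$/.exec(line.trim());
    if (!parts) throw new Error(`Invalid cron configuration: ${line}`);
    const [ , extension, command ] = parts;
    if (!KNOWN_EXTENSIONS[extension]) throw new Error(`Unknown extension pattern: ${extension}`);
    return { schedule: KNOWN_EXTENSIONS[extension], command };
}

console.log(expandExtensionLine('@daily /app/code/cleanup.sh'));
// { schedule: '0 0 * * *', command: '/app/code/cleanup.sh' }
```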
@@ -428,6 +458,23 @@ function validateBackupFormat(format) {
return new BoxError(BoxError.BAD_FIELD, 'Invalid backup format');
}
function validateUpstreamUri(upstreamUri) {
assert.strictEqual(typeof upstreamUri, 'string');
if (!upstreamUri) return null;
const uri = safe(() => new URL(upstreamUri));
if (!uri) return new BoxError(BoxError.BAD_FIELD, `upstreamUri is invalid: ${safe.error.message}`);
if (uri.protocol !== 'http:' && uri.protocol !== 'https:') return new BoxError(BoxError.BAD_FIELD, 'upstreamUri has an unsupported scheme');
if (uri.search || uri.hash) return new BoxError(BoxError.BAD_FIELD, 'upstreamUri cannot have search or hash');
if (uri.pathname !== '/') return new BoxError(BoxError.BAD_FIELD, 'upstreamUri cannot have a path');
// we use the uri in a named location @wellknown-upstream. nginx does not support having paths in it
if (upstreamUri.endsWith('/')) return new BoxError(BoxError.BAD_FIELD, 'upstreamUri cannot have a path');
return null;
}
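A note on why `validateUpstreamUri` checks both the parsed pathname and the raw string: the WHATWG URL parser normalizes a bare origin to pathname `'/'`, so a trailing slash in the input survives only in the raw string. For example:

```javascript
// Node's URL class normalizes the pathname, hiding a trailing slash in the input.
console.log(new URL('https://example.com').pathname);  // /
console.log(new URL('https://example.com/').pathname); // / (same!)
// ...so the raw-string check is what rejects the second form:
console.log('https://example.com'.endsWith('/'));  // false
console.log('https://example.com/'.endsWith('/')); // true
```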
function validateLabel(label) {
if (label === null) return null;
@@ -455,28 +502,29 @@ function validateEnv(env) {
return null;
}
function validateDataDir(dataDir) {
if (dataDir === null) return null;
async function checkStorage(app, volumeId, prefix) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof volumeId, 'string');
assert.strictEqual(typeof prefix, 'string');
if (!path.isAbsolute(dataDir)) return new BoxError(BoxError.BAD_FIELD, `${dataDir} is not an absolute path`);
if (dataDir.endsWith('/')) return new BoxError(BoxError.BAD_FIELD, `${dataDir} contains trailing slash`);
if (path.normalize(dataDir) !== dataDir) return new BoxError(BoxError.BAD_FIELD, `${dataDir} is not a normalized path`);
const volume = await volumes.get(volumeId);
if (volume === null) throw new BoxError(BoxError.BAD_FIELD, 'Storage volume not found');
// nfs shares will have the directory mounted already
let stat = safe.fs.lstatSync(dataDir);
if (stat) {
if (!stat.isDirectory()) return new BoxError(BoxError.BAD_FIELD, `${dataDir} is not a directory`);
let entries = safe.fs.readdirSync(dataDir);
if (!entries) return new BoxError(BoxError.BAD_FIELD, `${dataDir} could not be listed`);
if (entries.length !== 0) return new BoxError(BoxError.BAD_FIELD, `${dataDir} is not empty. If this is the root of a mounted volume, provide a subdirectory.`);
}
const status = await volumes.getStatus(volume);
if (status.state !== 'active') throw new BoxError(BoxError.BAD_FIELD, 'Volume is not active');
// backup logic relies on paths not overlapping (because it recurses)
if (dataDir.startsWith(paths.APPS_DATA_DIR)) return new BoxError(BoxError.BAD_FIELD, `${dataDir} cannot be inside apps data`);
if (path.isAbsolute(prefix)) throw new BoxError(BoxError.BAD_FIELD, `prefix "${prefix}" must be a relative path`);
if (prefix.endsWith('/')) throw new BoxError(BoxError.BAD_FIELD, `prefix "${prefix}" contains trailing slash`);
if (prefix !== '' && path.normalize(prefix) !== prefix) throw new BoxError(BoxError.BAD_FIELD, `prefix "${prefix}" is not a normalized path`);
// if we made it this far, it cannot start with any of these realistically
const fhs = [ '/bin', '/boot', '/etc', '/lib', '/lib32', '/lib64', '/proc', '/run', '/sbin', '/tmp', '/usr' ];
if (fhs.some((p) => dataDir.startsWith(p))) return new BoxError(BoxError.BAD_FIELD, `${dataDir} cannot be placed inside this location`);
const sourceDir = await getStorageDir(app);
const targetDir = path.join(volume.hostPath, prefix);
const rel = path.relative(sourceDir, targetDir);
if (!rel.startsWith('../') && rel.split('/').length > 1) throw new BoxError(BoxError.BAD_FIELD, 'Only one-level subdirectory moves are supported');
const [error] = await safe(shell.promises.sudo('checkStorage', [ CHECKVOLUME_CMD, targetDir, sourceDir ], {}));
if (error && error.code === 2) throw new BoxError(BoxError.BAD_FIELD, `Target directory ${targetDir} is not empty`);
if (error && error.code === 3) throw new BoxError(BoxError.BAD_FIELD, `Target directory ${targetDir} does not support chown`);
return null;
}
@@ -508,34 +556,52 @@ function getDuplicateErrorDetails(errorMessage, locations, domainObjectMap, port
if (portBindings[portName] === parseInt(match[1])) return new BoxError(BoxError.ALREADY_EXISTS, `Port ${match[1]} is in use`);
}
if (match[2] === 'dataDir') {
return new BoxError(BoxError.BAD_FIELD, `Data directory ${match[1]} is in use`);
if (match[2] === 'apps_storageVolume') {
return new BoxError(BoxError.BAD_FIELD, `Storage directory ${match[1]} is in use`);
}
return new BoxError(BoxError.ALREADY_EXISTS, `${match[2]} '${match[1]}' is in use`);
}
function getDataDir(app, dataDir) {
assert(dataDir === null || typeof dataDir === 'string');
async function getStorageDir(app) {
assert.strictEqual(typeof app, 'object');
return dataDir || path.join(paths.APPS_DATA_DIR, app.id, 'data');
if (!app.storageVolumeId) return path.join(paths.APPS_DATA_DIR, app.id, 'data');
const volume = await volumes.get(app.storageVolumeId);
if (!volume) throw new BoxError(BoxError.NOT_FOUND, 'Volume not found'); // not possible
return path.join(volume.hostPath, app.storageVolumePrefix);
}
function removeCertificateKeys(app) {
if (app.certificate) delete app.certificate.key;
app.secondaryDomains.forEach(sd => { if (sd.certificate) delete sd.certificate.key; });
app.aliasDomains.forEach(ad => { if (ad.certificate) delete ad.certificate.key; });
app.redirectDomains.forEach(rd => { if (rd.certificate) delete rd.certificate.key; });
}
function removeInternalFields(app) {
return _.pick(app,
const result = _.pick(app,
'id', 'appStoreId', 'installationState', 'error', 'runState', 'health', 'taskId',
'subdomain', 'domain', 'fqdn', 'crontab',
'subdomain', 'domain', 'fqdn', 'certificate', 'crontab', 'upstreamUri',
'accessRestriction', 'manifest', 'portBindings', 'iconUrl', 'memoryLimit', 'cpuShares', 'operators',
'sso', 'debugMode', 'reverseProxyConfig', 'enableBackup', 'creationTime', 'updateTime', 'ts', 'tags',
'label', 'secondaryDomains', 'redirectDomains', 'aliasDomains', 'env', 'enableAutomaticUpdate', 'dataDir', 'mounts',
'enableMailbox', 'mailboxName', 'mailboxDomain', 'enableInbox', 'inboxName', 'inboxDomain');
'label', 'secondaryDomains', 'redirectDomains', 'aliasDomains', 'env', 'enableAutomaticUpdate',
'storageVolumeId', 'storageVolumePrefix', 'mounts',
'enableMailbox', 'mailboxDisplayName', 'mailboxName', 'mailboxDomain', 'enableInbox', 'inboxName', 'inboxDomain');
removeCertificateKeys(result);
return result;
}
// non-admins can only see these
function removeRestrictedFields(app) {
return _.pick(app,
'id', 'appStoreId', 'installationState', 'error', 'runState', 'health', 'taskId', 'accessRestriction', 'secondaryDomains', 'redirectDomains', 'aliasDomains', 'sso',
'subdomain', 'domain', 'fqdn', 'manifest', 'portBindings', 'iconUrl', 'creationTime', 'ts', 'tags', 'label', 'enableBackup');
const result = _.pick(app,
'id', 'appStoreId', 'installationState', 'error', 'runState', 'health', 'taskId', 'accessRestriction',
'secondaryDomains', 'redirectDomains', 'aliasDomains', 'sso', 'subdomain', 'domain', 'fqdn', 'certificate',
'manifest', 'portBindings', 'iconUrl', 'creationTime', 'ts', 'tags', 'label', 'enableBackup', 'upstreamUri');
removeCertificateKeys(result);
return result;
}
async function getIcon(app, options) {
@@ -633,30 +699,35 @@ function postProcess(result) {
const subdomains = JSON.parse(result.subdomains),
domains = JSON.parse(result.domains),
subdomainTypes = JSON.parse(result.subdomainTypes),
subdomainEnvironmentVariables = JSON.parse(result.subdomainEnvironmentVariables);
subdomainEnvironmentVariables = JSON.parse(result.subdomainEnvironmentVariables),
subdomainCertificateJsons = JSON.parse(result.subdomainCertificateJsons);
delete result.subdomains;
delete result.domains;
delete result.subdomainTypes;
delete result.subdomainEnvironmentVariables;
delete result.subdomainCertificateJsons;
result.secondaryDomains = [];
result.redirectDomains = [];
result.aliasDomains = [];
for (let i = 0; i < subdomainTypes.length; i++) {
const subdomain = subdomains[i], domain = domains[i], certificate = safe.JSON.parse(subdomainCertificateJsons[i]);
if (subdomainTypes[i] === exports.LOCATION_TYPE_PRIMARY) {
result.subdomain = subdomains[i];
result.domain = domains[i];
result.subdomain = subdomain;
result.domain = domain;
result.certificate = certificate;
} else if (subdomainTypes[i] === exports.LOCATION_TYPE_SECONDARY) {
result.secondaryDomains.push({ domain: domains[i], subdomain: subdomains[i], environmentVariable: subdomainEnvironmentVariables[i] });
result.secondaryDomains.push({ domain, subdomain, certificate, environmentVariable: subdomainEnvironmentVariables[i] });
} else if (subdomainTypes[i] === exports.LOCATION_TYPE_REDIRECT) {
result.redirectDomains.push({ domain: domains[i], subdomain: subdomains[i] });
result.redirectDomains.push({ domain, subdomain, certificate });
} else if (subdomainTypes[i] === exports.LOCATION_TYPE_ALIAS) {
result.aliasDomains.push({ domain: domains[i], subdomain: subdomains[i] });
result.aliasDomains.push({ domain, subdomain, certificate });
}
}
let envNames = JSON.parse(result.envNames), envValues = JSON.parse(result.envValues);
const envNames = JSON.parse(result.envNames), envValues = JSON.parse(result.envValues);
delete result.envNames;
delete result.envValues;
result.env = {};
@@ -664,7 +735,7 @@ function postProcess(result) {
if (envNames[i]) result.env[envNames[i]] = envValues[i];
}
let volumeIds = JSON.parse(result.volumeIds);
const volumeIds = JSON.parse(result.volumeIds);
delete result.volumeIds;
let volumeReadOnlys = JSON.parse(result.volumeReadOnlys);
delete result.volumeReadOnlys;
@@ -757,9 +828,11 @@ async function add(id, appStoreId, manifest, subdomain, domain, portBindings, da
tagsJson = data.tags ? JSON.stringify(data.tags) : null,
mailboxName = data.mailboxName || null,
mailboxDomain = data.mailboxDomain || null,
mailboxDisplayName = data.mailboxDisplayName || '',
reverseProxyConfigJson = data.reverseProxyConfig ? JSON.stringify(data.reverseProxyConfig) : null,
servicesConfigJson = data.servicesConfig ? JSON.stringify(data.servicesConfig) : null,
enableMailbox = data.enableMailbox || false,
upstreamUri = data.upstreamUri || '',
icon = data.icon || null;
const queries = [];
@@ -767,10 +840,11 @@ async function add(id, appStoreId, manifest, subdomain, domain, portBindings, da
queries.push({
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, runState, accessRestrictionJson, memoryLimit, cpuShares, '
+ 'sso, debugModeJson, mailboxName, mailboxDomain, label, tagsJson, reverseProxyConfigJson, servicesConfigJson, icon, '
+ 'enableMailbox) '
+ ' VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
+ 'enableMailbox, mailboxDisplayName, upstreamUri) '
+ ' VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, installationState, runState, accessRestrictionJson, memoryLimit, cpuShares,
sso, debugModeJson, mailboxName, mailboxDomain, label, tagsJson, reverseProxyConfigJson, servicesConfigJson, icon, enableMailbox ]
sso, debugModeJson, mailboxName, mailboxDomain, label, tagsJson, reverseProxyConfigJson, servicesConfigJson, icon,
enableMailbox, mailboxDisplayName, upstreamUri ]
});
queries.push({
@@ -993,9 +1067,9 @@ async function getDomainObjectMap() {
// each query simply joins the apps table with another table by id. we then join the full results together
const PB_QUERY = 'SELECT id, GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables, GROUP_CONCAT(appPortBindings.type) AS portTypes FROM apps LEFT JOIN appPortBindings ON apps.id = appPortBindings.appId GROUP BY apps.id';
const ENV_QUERY = 'SELECT id, JSON_ARRAYAGG(appEnvVars.name) AS envNames, JSON_ARRAYAGG(appEnvVars.value) AS envValues FROM apps LEFT JOIN appEnvVars ON apps.id = appEnvVars.appId GROUP BY apps.id';
const SUBDOMAIN_QUERY = 'SELECT id, JSON_ARRAYAGG(locations.subdomain) AS subdomains, JSON_ARRAYAGG(locations.domain) AS domains, JSON_ARRAYAGG(locations.type) AS subdomainTypes, JSON_ARRAYAGG(locations.environmentVariable) AS subdomainEnvironmentVariables FROM apps LEFT JOIN locations ON apps.id = locations.appId GROUP BY apps.id';
const SUBDOMAIN_QUERY = 'SELECT id, JSON_ARRAYAGG(locations.subdomain) AS subdomains, JSON_ARRAYAGG(locations.domain) AS domains, JSON_ARRAYAGG(locations.type) AS subdomainTypes, JSON_ARRAYAGG(locations.environmentVariable) AS subdomainEnvironmentVariables, JSON_ARRAYAGG(locations.certificateJson) AS subdomainCertificateJsons FROM apps LEFT JOIN locations ON apps.id = locations.appId GROUP BY apps.id';
const MOUNTS_QUERY = 'SELECT id, JSON_ARRAYAGG(appMounts.volumeId) AS volumeIds, JSON_ARRAYAGG(appMounts.readOnly) AS volumeReadOnlys FROM apps LEFT JOIN appMounts ON apps.id = appMounts.appId GROUP BY apps.id';
const APPS_QUERY = `SELECT ${APPS_FIELDS_PREFIXED}, hostPorts, environmentVariables, portTypes, envNames, envValues, subdomains, domains, subdomainTypes, subdomainEnvironmentVariables, volumeIds, volumeReadOnlys FROM apps`
const APPS_QUERY = `SELECT ${APPS_FIELDS_PREFIXED}, hostPorts, environmentVariables, portTypes, envNames, envValues, subdomains, domains, subdomainTypes, subdomainEnvironmentVariables, subdomainCertificateJsons, volumeIds, volumeReadOnlys FROM apps`
+ ` LEFT JOIN (${PB_QUERY}) AS q1 on q1.id = apps.id`
+ ` LEFT JOIN (${ENV_QUERY}) AS q2 on q2.id = apps.id`
+ ` LEFT JOIN (${SUBDOMAIN_QUERY}) AS q3 on q3.id = apps.id`
@@ -1251,6 +1325,7 @@ async function install(data, auditSource) {
overwriteDns = 'overwriteDns' in data ? data.overwriteDns : false,
skipDnsSetup = 'skipDnsSetup' in data ? data.skipDnsSetup : false,
appStoreId = data.appStoreId,
upstreamUri = data.upstreamUri || '',
manifest = data.manifest;
let error = manifestFormat.parse(manifest);
@@ -1274,6 +1349,9 @@ async function install(data, auditSource) {
error = validateLabel(label);
if (error) throw error;
error = validateUpstreamUri(upstreamUri);
if (error) throw error;
error = validateTags(tags);
if (error) throw error;
@@ -1331,6 +1409,7 @@ async function install(data, auditSource) {
tags,
icon,
enableMailbox,
upstreamUri,
runState: exports.RSTATE_RUNNING,
installationState: exports.ISTATE_PENDING_INSTALL
};
@@ -1398,6 +1477,22 @@ async function setCrontab(app, crontab, auditSource) {
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, crontab });
}
async function setUpstreamUri(app, upstreamUri, auditSource) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof upstreamUri, 'string');
assert.strictEqual(typeof auditSource, 'object');
const appId = app.id;
const error = validateUpstreamUri(upstreamUri);
if (error) throw error;
await reverseProxy.writeAppConfigs(_.extend({}, app, { upstreamUri }));
await update(appId, { upstreamUri });
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, upstreamUri });
}
async function setLabel(app, label, auditSource) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof label, 'string');
@@ -1571,6 +1666,7 @@ async function setMailbox(app, data, auditSource) {
const optional = 'optional' in app.manifest.addons.sendmail ? app.manifest.addons.sendmail.optional : false;
if (!optional && !enableMailbox) throw new BoxError(BoxError.BAD_FIELD, 'App requires sendmail to be enabled');
const mailboxDisplayName = data.mailboxDisplayName || '';
let mailboxName = data.mailboxName || null;
const mailboxDomain = data.mailboxDomain || null;
@@ -1583,15 +1679,20 @@ async function setMailbox(app, data, auditSource) {
} else {
mailboxName = mailboxNameForSubdomain(app.subdomain, app.domain, app.manifest);
}
if (mailboxDisplayName) {
error = mail.validateDisplayName(mailboxDisplayName);
if (error) throw new BoxError(BoxError.BAD_FIELD, error.message);
}
}
const task = {
args: {},
values: { enableMailbox, mailboxName, mailboxDomain }
values: { enableMailbox, mailboxName, mailboxDomain, mailboxDisplayName }
};
const taskId = await addTask(appId, exports.ISTATE_PENDING_RECREATE_CONTAINER, task, auditSource);
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, mailboxName, mailboxDomain, taskId });
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, mailboxName, mailboxDomain, mailboxDisplayName, taskId });
return { taskId };
}
@@ -1665,7 +1766,7 @@ async function setReverseProxyConfig(app, reverseProxyConfig, auditSource) {
error = validateRobotsTxt(reverseProxyConfig.robotsTxt);
if (error) throw error;
await reverseProxy.writeAppConfig(_.extend({}, app, { reverseProxyConfig }));
await reverseProxy.writeAppConfigs(_.extend({}, app, { reverseProxyConfig }));
await update(appId, { reverseProxyConfig });
@@ -1677,18 +1778,23 @@ async function setCertificate(app, data, auditSource) {
assert(data && typeof data === 'object');
assert.strictEqual(typeof auditSource, 'object');
const appId = app.id;
const { location, domain, cert, key } = data;
const { subdomain, domain, cert, key } = data;
const domainObject = await domains.get(domain);
if (domainObject === null) throw new BoxError(BoxError.NOT_FOUND, 'Domain not found');
if (cert && key) {
const error = reverseProxy.validateCertificate(location, domainObject, { cert, key });
const error = reverseProxy.validateCertificate(subdomain, domainObject, { cert, key });
if (error) throw error;
}
await reverseProxy.setAppCertificate(location, domainObject, { cert, key });
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, cert, key });
const certificate = cert && key ? { cert, key } : null;
const result = await database.query('UPDATE locations SET certificateJson=? WHERE location=? AND domain=?', [ certificate ? JSON.stringify(certificate) : null, subdomain, domain ]);
if (result.affectedRows === 0) throw new BoxError(BoxError.NOT_FOUND, 'Location not found');
app = await get(app.id); // refresh app object
await reverseProxy.setUserCertificate(app, dns.fqdn(subdomain, domainObject), certificate);
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId: app.id, app, subdomain, domain, cert });
}
async function setLocation(app, data, auditSource) {
@@ -1764,25 +1870,29 @@ async function setLocation(app, data, auditSource) {
return { taskId };
}
async function setDataDir(app, dataDir, auditSource) {
async function setStorage(app, volumeId, volumePrefix, auditSource) {
assert.strictEqual(typeof app, 'object');
assert(dataDir === null || typeof dataDir === 'string');
assert(volumeId === null || typeof volumeId === 'string');
assert(volumePrefix === null || typeof volumePrefix === 'string');
assert.strictEqual(typeof auditSource, 'object');
const appId = app.id;
let error = checkAppState(app, exports.ISTATE_PENDING_DATA_DIR_MIGRATION);
if (error) throw error;
error = validateDataDir(dataDir);
if (error) throw error;
if (volumeId) {
await checkStorage(app, volumeId, volumePrefix);
} else {
volumeId = volumePrefix = null;
}
const task = {
args: { newDataDir: dataDir },
args: { newStorageVolumeId: volumeId, newStorageVolumePrefix: volumePrefix },
values: {}
};
const taskId = await addTask(appId, exports.ISTATE_PENDING_DATA_DIR_MIGRATION, task, auditSource);
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, dataDir, taskId });
await eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId, app, volumeId, volumePrefix, taskId });
return { taskId };
}
@@ -1940,13 +2050,23 @@ async function getLogs(app, options) {
return transformStream;
}
// never fails, just prints the error
async function appendLogLine(app, line) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof line, 'string');
const logFilePath = path.join(paths.LOG_DIR, app.id, 'app.log');
if (!safe.fs.appendFileSync(logFilePath, line)) console.error(`Could not append log line for app ${app.id} at ${logFilePath}: ${safe.error.message}`);
}
async function getCertificate(subdomain, domain) {
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof domain, 'string');
const result = await database.query('SELECT certificateJson FROM locations WHERE subdomain=? AND domain=?', [ subdomain, domain ]);
if (result.length === 0) return null;
return JSON.parse(result[0].certificateJson);
return safe.JSON.parse(result[0].certificateJson);
}
// does a re-configure when called from most states. for install/clone errors, it re-installs with an optional manifest
@@ -2213,7 +2333,8 @@ async function clone(app, data, user, auditSource) {
tags: app.tags,
enableAutomaticUpdate: app.enableAutomaticUpdate,
icon: icons.icon,
enableMailbox: app.enableMailbox
enableMailbox: app.enableMailbox,
mailboxDisplayName: app.mailboxDisplayName
};
const [addError] = await safe(add(newAppId, appStoreId, manifest, subdomain, domain, translatePortBindings(portBindings, manifest), obj));
@@ -2335,7 +2456,7 @@ function checkManifestConstraints(manifest) {
return null;
}
async function exec(app, options) {
async function createExec(app, options) {
assert.strictEqual(typeof app, 'object');
assert(options && typeof options === 'object');
@@ -2346,7 +2467,7 @@ async function exec(app, options) {
throw new BoxError(BoxError.BAD_STATE, 'App not installed or running');
}
const execOptions = {
const createOptions = {
AttachStdin: true,
AttachStdout: true,
AttachStderr: true,
@@ -2358,6 +2479,18 @@ async function exec(app, options) {
Cmd: cmd
};
return await docker.createExec(app.containerId, createOptions);
}
async function startExec(app, execId, options) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof execId, 'string');
assert(options && typeof options === 'object');
if (app.installationState !== exports.ISTATE_INSTALLED || app.runState !== exports.RSTATE_RUNNING) {
throw new BoxError(BoxError.BAD_STATE, 'App not installed or running');
}
const startOptions = {
Detach: false,
Tty: options.tty,
@@ -2373,10 +2506,26 @@ async function exec(app, options) {
stderr: true
};
const stream = await docker.execContainer(app.containerId, { execOptions, startOptions, rows: options.rows, columns: options.columns });
const stream = await docker.startExec(execId, startOptions);
if (options.rows && options.columns) {
// there is a race where resizing too early results in a 404 "no such exec"
// https://git.cloudron.io/cloudron/box/issues/549
setTimeout(async function () {
await safe(docker.resizeExec(execId, { h: options.rows, w: options.columns }, { debug }));
}, 2000);
}
return stream;
}
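The deferred resize above works around a Docker race: resizing an exec immediately after starting it can fail with 404 "no such exec". A minimal standalone sketch of that workaround, using a hypothetical `docker` client object (not Cloudron's actual `docker.js`):

```javascript
'use strict';

// Sketch of the deferred-resize workaround: the exec is resized only after a
// short delay, because resizing too early can race with exec startup and fail
// with 404 "no such exec". The `docker` argument is a hypothetical client
// exposing resizeExec(execId, { h, w }); errors are swallowed because a failed
// resize only leaves the tty at its default dimensions.
function scheduleExecResize(docker, execId, rows, columns, delayMs = 2000) {
    if (!rows || !columns) return null; // no dimensions, nothing to schedule

    return setTimeout(async () => {
        try {
            await docker.resizeExec(execId, { h: rows, w: columns });
        } catch (error) {
            console.error(`resizeExec ${execId} failed: ${error.message}`);
        }
    }, delayMs);
}
```

With a 0ms delay the helper degenerates to a plain asynchronous resize, which is convenient for tests.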
async function getExec(app, execId) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof execId, 'string');
return await docker.getExec(execId);
}
function canAutoupdateApp(app, updateInfo) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof updateInfo, 'object');
@@ -2419,6 +2568,7 @@ async function autoupdateApps(updateInfo, auditSource) { // updateInfo is { appI
if (!canAutoupdateApp(app, updateInfo[appId])) {
debug(`app ${app.fqdn} requires manual update`);
notifications.alert(notifications.ALERT_MANUAL_APP_UPDATE, `${app.manifest.title} at ${app.fqdn} requires manual update to version ${updateInfo[appId].manifest.version}`, `Changelog:\n${updateInfo[appId].manifest.changelog}\n`);
continue;
}
@@ -2601,7 +2751,8 @@ async function downloadFile(app, filePath) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof filePath, 'string');
const statStream = await exec(app, { cmd: [ 'stat', '--printf=%F-%s', filePath ], tty: true });
const statExecId = await createExec(app, { cmd: [ 'stat', '--printf=%F-%s', filePath ], tty: true });
const statStream = await startExec(app, statExecId, { tty: true });
const data = await drainStream(statStream);
const parts = data.split('-');
@@ -2622,7 +2773,8 @@ async function downloadFile(app, filePath) {
throw new BoxError(BoxError.NOT_FOUND, 'only files or dirs can be downloaded');
}
const inputStream = await exec(app, { cmd, tty: false });
const execId = await createExec(app, { cmd, tty: false });
const inputStream = await startExec(app, execId, { tty: false });
// transforms the docker stream into a normal stream
const stdoutStream = new TransformStream({
@@ -2663,7 +2815,8 @@ async function uploadFile(app, sourceFilePath, destFilePath) {
const escapedDestFilePath = safe.child_process.execSync(`printf %q '${destFilePath.replace(/'/g, '\'\\\'\'')}'`, { shell: '/bin/bash', encoding: 'utf8' });
debug(`uploadFile: ${sourceFilePath} -> ${escapedDestFilePath}`);
const destStream = await exec(app, { cmd: [ 'bash', '-c', `cat - > ${escapedDestFilePath}` ], tty: false });
const execId = await createExec(app, { cmd: [ 'bash', '-c', `cat - > ${escapedDestFilePath}` ], tty: false });
const destStream = await startExec(app, execId, { tty: false });
return new Promise((resolve, reject) => {
const done = once(error => reject(new BoxError(BoxError.FS_ERROR, error.message)));
@@ -89,8 +89,8 @@ async function login(email, password, totpToken) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `login status code: ${response.status}`);
if (!response.body.accessToken) throw new BoxError(BoxError.EXTERNAL_ERROR, `login invalid response: ${response.text}`);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Login error. status code: ${response.status}`);
if (!response.body.accessToken) throw new BoxError(BoxError.EXTERNAL_ERROR, `Login error. invalid response: ${response.text}`);
return response.body; // { userId, accessToken }
}
@@ -100,13 +100,13 @@ async function registerUser(email, password) {
assert.strictEqual(typeof password, 'string');
const [error, response] = await safe(superagent.post(`${settings.apiServerOrigin()}/api/v1/register_user`)
.send({ email, password })
.send({ email, password, utmSource: 'cloudron-dashboard' })
.timeout(30 * 1000)
.ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 409) throw new BoxError(BoxError.ALREADY_EXISTS, 'account already exists');
if (response.status !== 201) throw new BoxError(BoxError.EXTERNAL_ERROR, `register status code: ${response.status}`);
if (response.status === 409) throw new BoxError(BoxError.ALREADY_EXISTS, 'Registration error: account already exists');
if (response.status !== 201) throw new BoxError(BoxError.EXTERNAL_ERROR, `Registration error. invalid response: ${response.status}`);
}
async function getWebToken() {
@@ -129,7 +129,6 @@ async function getSubscription() {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR);
if (response.status === 502) throw new BoxError(BoxError.EXTERNAL_ERROR, `Stripe error: ${error.message}`);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${error.message}`);
@@ -185,8 +184,7 @@ async function unpurchaseApp(appId, data) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 404) return; // was never purchased
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status !== 201 && response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('App unpurchase failed. %s %j', response.status, response.body));
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `App unpurchase failed to get app. status:${response.status}`);
[error, response] = await safe(superagent.del(url)
.send(data)
@@ -196,7 +194,7 @@ async function unpurchaseApp(appId, data) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status !== 204) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('App unpurchase failed. %s %j', response.status, response.body));
if (response.status !== 204) throw new BoxError(BoxError.EXTERNAL_ERROR, `App unpurchase failed. status:${response.status}`);
}
async function getBoxUpdate(options) {
@@ -218,7 +216,6 @@ async function getBoxUpdate(options) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status === 204) return; // no update
if (response.status !== 200 || !response.body) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('Bad response: %s %s', response.status, response.text));
@@ -261,7 +258,6 @@ async function getAppUpdate(app, options) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status === 204) return; // no update
if (response.status !== 200 || !response.body) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('Bad response: %s %s', response.status, response.text));
@@ -285,10 +281,10 @@ async function getAppUpdate(app, options) {
async function registerCloudron(data) {
assert.strictEqual(typeof data, 'object');
const { domain, accessToken, version } = data;
const { domain, accessToken, version, existingApps } = data;
const [error, response] = await safe(superagent.post(`${settings.apiServerOrigin()}/api/v1/register_cloudron`)
.send({ domain, accessToken, version })
.send({ domain, accessToken, version, existingApps })
.timeout(30 * 1000)
.ok(() => true));
@@ -326,7 +322,6 @@ async function updateCloudron(data) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('Bad response: %s %s', response.status, response.text));
debug(`updateCloudron: Cloudron updated with data ${JSON.stringify(data)}`);
@@ -335,13 +330,14 @@ async function updateCloudron(data) {
async function registerWithLoginCredentials(options) {
assert.strictEqual(typeof options, 'object');
const token = await settings.getAppstoreApiToken();
if (token) throw new BoxError(BoxError.CONFLICT, 'Cloudron is already registered');
if (options.signup) await registerUser(options.email, options.password);
const result = await login(options.email, options.password, options.totpToken || '');
await registerCloudron({ domain: settings.dashboardDomain(), accessToken: result.accessToken, version: constants.VERSION });
for (const app of await apps.list()) {
await purchaseApp({ appId: app.id, appstoreId: app.appStoreId, manifestId: app.manifest.id || 'customapp' });
}
}
async function unregister() {
@@ -391,7 +387,6 @@ async function createTicket(info, auditSource) {
const [error, response] = await safe(request);
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status !== 201) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('Bad response: %s %s', response.status, response.text));
await eventlog.add(eventlog.ACTION_SUPPORT_TICKET, auditSource, info);
@@ -412,7 +407,6 @@ async function getApps() {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 403 || response.status === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('App listing failed. %s %j', response.status, response.body));
if (!response.body.apps) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('Bad response: %s %s', response.status, response.text));
@@ -443,7 +437,6 @@ async function getAppVersion(appId, version) {
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status === 403 || response.statusCode === 401) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (response.status === 404) throw new BoxError(BoxError.NOT_FOUND);
if (response.status === 402) throw new BoxError(BoxError.LICENSE_ERROR, response.body.message);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, util.format('App fetch failed. %s %j', response.status, response.body));
return response.body;
@@ -50,7 +50,7 @@ const CGROUP_VERSION = fs.existsSync('/sys/fs/cgroup/cgroup.controllers') ? '2'
const COLLECTD_CONFIG_EJS = fs.readFileSync(`${__dirname}/collectd/app_cgroup_v${CGROUP_VERSION}.ejs`, { encoding: 'utf8' });
function makeTaskError(error, app) {
assert.strictEqual(typeof error, 'object');
assert(error instanceof BoxError);
assert.strictEqual(typeof app, 'object');
// track a few variables which help 'repair' restart the task (see also scheduleTask in apps.js)
@@ -74,6 +74,8 @@ async function updateApp(app, values) {
async function allocateContainerIp(app) {
assert.strictEqual(typeof app, 'object');
if (app.manifest.id === constants.PROXY_APP_APPSTORE_ID) return;
await promiseRetry({ times: 10, interval: 0, debug }, async function () {
const iprange = iputils.intFromIp('172.18.20.255') - iputils.intFromIp('172.18.16.1');
let rnd = Math.floor(Math.random() * iprange);
@@ -86,6 +88,8 @@ async function createContainer(app) {
assert.strictEqual(typeof app, 'object');
assert(!app.containerId); // otherwise, it will trigger volumeFrom
if (app.manifest.id === constants.PROXY_APP_APPSTORE_ID) return;
debug('createContainer: creating container');
const container = await docker.createContainer(app);
@@ -160,7 +164,8 @@ async function deleteAppDir(app, options) {
async function addCollectdProfile(app) {
assert.strictEqual(typeof app, 'object');
const collectdConf = ejs.render(COLLECTD_CONFIG_EJS, { appId: app.id, containerId: app.containerId, appDataDir: apps.getDataDir(app, app.dataDir) });
const appDataDir = await apps.getStorageDir(app);
const collectdConf = ejs.render(COLLECTD_CONFIG_EJS, { appId: app.id, containerId: app.containerId, appDataDir });
await collectd.addProfile(app.id, collectdConf);
}
@@ -268,24 +273,30 @@ async function waitForDnsPropagation(app) {
}
}
async function moveDataDir(app, targetDir) {
async function moveDataDir(app, targetVolumeId, targetVolumePrefix) {
assert.strictEqual(typeof app, 'object');
assert(targetDir === null || typeof targetDir === 'string');
assert(targetVolumeId === null || typeof targetVolumeId === 'string');
assert(targetVolumePrefix === null || typeof targetVolumePrefix === 'string');
const resolvedSourceDir = apps.getDataDir(app, app.dataDir);
const resolvedTargetDir = apps.getDataDir(app, targetDir);
const resolvedSourceDir = await apps.getStorageDir(app);
const resolvedTargetDir = await apps.getStorageDir(_.extend({}, app, { storageVolumeId: targetVolumeId, storageVolumePrefix: targetVolumePrefix }));
debug(`moveDataDir: migrating data from ${resolvedSourceDir} to ${resolvedTargetDir}`);
if (resolvedSourceDir === resolvedTargetDir) return;
if (resolvedSourceDir !== resolvedTargetDir) {
const [error] = await safe(shell.promises.sudo('moveDataDir', [ MV_VOLUME_CMD, resolvedSourceDir, resolvedTargetDir ], {}));
if (error) throw new BoxError(BoxError.EXTERNAL_ERROR, `Error migrating data directory: ${error.message}`);
}
const [error] = await safe(shell.promises.sudo('moveDataDir', [ MV_VOLUME_CMD, resolvedSourceDir, resolvedTargetDir ], {}));
if (error) throw new BoxError(BoxError.EXTERNAL_ERROR, `Error migrating data directory: ${error.message}`);
await updateApp(app, { storageVolumeId: targetVolumeId, storageVolumePrefix: targetVolumePrefix });
}
async function downloadImage(manifest) {
assert.strictEqual(typeof manifest, 'object');
// skip for relay app
if (manifest.id === constants.PROXY_APP_APPSTORE_ID) return;
const info = await docker.info();
const [dfError, diskUsage] = await safe(df.file(info.DockerRootDir));
if (dfError) throw new BoxError(BoxError.FS_ERROR, `Error getting file system info: ${dfError.message}`);
@@ -300,6 +311,9 @@ async function startApp(app) {
if (app.runState === apps.RSTATE_STOPPED) return;
// skip for relay app
if (app.manifest.id === constants.PROXY_APP_APPSTORE_ID) return;
await docker.startContainer(app.id);
}
@@ -520,8 +534,9 @@ async function migrateDataDir(app, args, progressCallback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof progressCallback, 'function');
const newDataDir = args.newDataDir;
assert(newDataDir === null || typeof newDataDir === 'string');
const { newStorageVolumeId, newStorageVolumePrefix } = args;
assert(newStorageVolumeId === null || typeof newStorageVolumeId === 'string');
assert(newStorageVolumePrefix === null || typeof newStorageVolumePrefix === 'string');
await progressCallback({ percent: 10, message: 'Cleaning up old install' });
await deleteContainers(app, { managedOnly: true });
@@ -529,12 +544,12 @@ async function migrateDataDir(app, args, progressCallback) {
await progressCallback({ percent: 45, message: 'Ensuring app data directory' });
await createAppDir(app);
// re-setup addons since this creates the localStorage volume
// re-setup addons since this creates the localStorage destination
await progressCallback({ percent: 50, message: 'Setting up addons' });
await services.setupAddons(_.extend({}, app, { dataDir: newDataDir }), app.manifest.addons);
await services.setupAddons(_.extend({}, app, { storageVolumeId: newStorageVolumeId, storageVolumePrefix: newStorageVolumePrefix }), app.manifest.addons);
await progressCallback({ percent: 60, message: 'Moving data dir' });
await moveDataDir(app, newDataDir);
await moveDataDir(app, newStorageVolumeId, newStorageVolumePrefix);
await progressCallback({ percent: 90, message: 'Creating container' });
await createContainer(app);
@@ -542,7 +557,7 @@ async function migrateDataDir(app, args, progressCallback) {
await startApp(app);
await progressCallback({ percent: 100, message: 'Done' });
await updateApp(app, { installationState: apps.ISTATE_INSTALLED, error: null, health: null, dataDir: newDataDir });
await updateApp(app, { installationState: apps.ISTATE_INSTALLED, error: null, health: null });
}
// configure is called for an infra update and repair to re-create container, reverseproxy config. it's all "local"
@@ -670,8 +685,10 @@ async function start(app, args, progressCallback) {
await progressCallback({ percent: 10, message: 'Starting app services' });
await services.startAppServices(app);
await progressCallback({ percent: 35, message: 'Starting container' });
await docker.startContainer(app.id);
if (app.manifest.id !== constants.PROXY_APP_APPSTORE_ID) {
await progressCallback({ percent: 35, message: 'Starting container' });
await docker.startContainer(app.id);
}
await progressCallback({ percent: 60, message: 'Adding collectd profile' });
await addCollectdProfile(app);
@@ -708,8 +725,20 @@ async function restart(app, args, progressCallback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof progressCallback, 'function');
await progressCallback({ percent: 20, message: 'Restarting container' });
await docker.restartContainer(app.id);
if (app.manifest.id !== constants.PROXY_APP_APPSTORE_ID) {
await progressCallback({ percent: 10, message: 'Starting app services' });
await services.startAppServices(app);
await progressCallback({ percent: 20, message: 'Restarting container' });
await docker.restartContainer(app.id);
}
await progressCallback({ percent: 60, message: 'Adding collectd profile' });
await addCollectdProfile(app);
// stopped apps do not renew certs. currently, we skip DNS updates so as not to overwrite existing user settings
await progressCallback({ percent: 80, message: 'Configuring reverse proxy' });
await reverseProxy.configureApp(app, AuditSource.APPTASK);
await progressCallback({ percent: 100, message: 'Done' });
await updateApp(app, { installationState: apps.ISTATE_INSTALLED, error: null, health: null });
@@ -10,6 +10,7 @@ exports = module.exports = {
const apps = require('./apps.js'),
assert = require('assert'),
backupFormat = require('./backupformat.js'),
backups = require('./backups.js'),
constants = require('./constants.js'),
debug = require('debug')('box:backupcleaner'),
@@ -85,7 +86,7 @@ async function removeBackup(backupConfig, backup, progressCallback) {
assert.strictEqual(typeof backup, 'object');
assert.strictEqual(typeof progressCallback, 'function');
const backupFilePath = storage.getBackupFilePath(backupConfig, backup.remotePath, backup.format);
const backupFilePath = backupFormat.api(backup.format).getBackupFilePath(backupConfig, backup.remotePath);
let removeError;
if (backup.format === 'tgz') {
@@ -212,7 +213,7 @@ async function cleanupMissingBackups(backupConfig, progressCallback) {
result = await backups.list(page, perPage);
for (const backup of result) {
let backupFilePath = storage.getBackupFilePath(backupConfig, backup.remotePath, backup.format);
let backupFilePath = backupFormat.api(backup.format).getBackupFilePath(backupConfig, backup.remotePath);
if (backup.format === 'rsync') backupFilePath = backupFilePath + '/'; // add trailing slash to indicate directory
const [existsError, exists] = await safe(storage.api(backupConfig.provider).exists(backupConfig, backupFilePath));
@@ -251,9 +252,9 @@ async function cleanupSnapshots(backupConfig) {
if (app) continue; // app is still installed
if (info[appId].format === 'tgz') {
await safe(storage.api(backupConfig.provider).remove(backupConfig, storage.getBackupFilePath(backupConfig, `snapshot/app_${appId}`, info[appId].format)), { debug });
await safe(storage.api(backupConfig.provider).remove(backupConfig, backupFormat.api(info[appId].format).getBackupFilePath(backupConfig, `snapshot/app_${appId}`)), { debug });
} else {
await safe(storage.api(backupConfig.provider).removeDir(backupConfig, storage.getBackupFilePath(backupConfig, `snapshot/app_${appId}`, info[appId].format), progressCallback), { debug });
await safe(storage.api(backupConfig.provider).removeDir(backupConfig, backupFormat.api(info[appId].format).getBackupFilePath(backupConfig, `snapshot/app_${appId}`), progressCallback), { debug });
}
safe.fs.unlinkSync(path.join(paths.BACKUP_INFO_DIR, `${appId}.sync.cache`));
@@ -292,7 +293,7 @@ async function run(progressCallback) {
await progressCallback({ percent: 40, message: 'Cleaning app backups' });
const removedAppBackupPaths = await cleanupAppBackups(backupConfig, referencedBackupIds, progressCallback);
await progressCallback({ percent: 70, message: 'Cleaning missing backups' });
await progressCallback({ percent: 70, message: 'Checking storage backend and removing stale entries in database' });
const missingBackupPaths = await cleanupMissingBackups(backupConfig, progressCallback);
await progressCallback({ percent: 90, message: 'Cleaning snapshots' });
@@ -0,0 +1,12 @@
'use strict';
exports = module.exports = {
api
};
function api(format) {
switch (format) {
case 'tgz': return require('./backupformat/tgz.js');
case 'rsync': return require('./backupformat/rsync.js');
}
}
@@ -0,0 +1,245 @@
'use strict';
exports = module.exports = {
getBackupFilePath,
download,
upload,
_saveFsMetadata: saveFsMetadata,
_restoreFsMetadata: restoreFsMetadata
};
const assert = require('assert'),
async = require('async'),
BoxError = require('../boxerror.js'),
DataLayout = require('../datalayout.js'),
debug = require('debug')('box:backupformat/rsync'),
fs = require('fs'),
hush = require('../hush.js'),
once = require('../once.js'),
path = require('path'),
safe = require('safetydance'),
storage = require('../storage.js'),
syncer = require('../syncer.js'),
util = require('util');
function getBackupFilePath(backupConfig, remotePath) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
const rootPath = storage.api(backupConfig.provider).getRootPath(backupConfig);
return path.join(rootPath, remotePath);
}
function sync(backupConfig, remotePath, dataLayout, progressCallback, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
// this number has to take the s3.upload partSize (10MB) into account: a concurrency of 20 can buffer up to 200MB
const concurrency = backupConfig.syncConcurrency || (backupConfig.provider === 's3' ? 20 : 10);
const removeDir = util.callbackify(storage.api(backupConfig.provider).removeDir);
const remove = util.callbackify(storage.api(backupConfig.provider).remove);
syncer.sync(dataLayout, function processTask(task, iteratorCallback) {
debug('sync: processing task: %j', task);
// an empty task.path is special: it denotes the directory itself
const destPath = task.path && backupConfig.encryptedFilenames ? hush.encryptFilePath(task.path, backupConfig.encryption) : task.path;
const backupFilePath = path.join(getBackupFilePath(backupConfig, remotePath), destPath);
if (task.operation === 'removedir') {
debug(`Removing directory ${backupFilePath}`);
return removeDir(backupConfig, backupFilePath, progressCallback, iteratorCallback);
} else if (task.operation === 'remove') {
debug(`Removing ${backupFilePath}`);
return remove(backupConfig, backupFilePath, iteratorCallback);
}
let retryCount = 0;
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
retryCallback = once(retryCallback); // protect against upload() erroring much later, after a read stream error
++retryCount;
if (task.operation === 'add') {
progressCallback({ message: `Adding ${task.path}` + (retryCount > 1 ? ` (Try ${retryCount})` : '') });
debug(`Adding ${task.path} position ${task.position} try ${retryCount}`);
const stream = hush.createReadStream(dataLayout.toLocalPath('./' + task.path), backupConfig.encryption);
stream.on('error', (error) => retryCallback(error.message.includes('ENOENT') ? null : error)); // ignore error if file disappears
stream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: `Uploading ${task.path}` }); // 0M@0MBps looks wrong
progressCallback({ message: `Uploading ${task.path}: ${transferred}M@${speed}MBps` });
});
// only create the destination path once we have confirmation that the source is available. otherwise, we end up with
// files owned by 'root', and the later cp will fail
stream.on('open', function () {
storage.api(backupConfig.provider).upload(backupConfig, backupFilePath, stream, function (error) {
debug(error ? `Error uploading ${task.path} try ${retryCount}: ${error.message}` : `Uploaded ${task.path}`);
retryCallback(error);
});
});
}
}, iteratorCallback);
}, concurrency, function (error) {
if (error) return callback(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
callback();
});
}
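The concurrency comment in `sync()` is really a memory bound: each in-flight s3 upload can buffer up to one multipart part. A small sketch of that arithmetic, mirroring the default chosen above (the 10MB part size is the figure stated in the comment):

```javascript
'use strict';

// Memory math behind the syncConcurrency default: each concurrent s3 upload
// may buffer up to one multipart partSize (10MB per the comment), so the
// s3 default of 20 can hold roughly 200MB of upload buffers in memory.
const PART_SIZE_MB = 10;

function defaultSyncConcurrency(backupConfig) {
    return backupConfig.syncConcurrency || (backupConfig.provider === 's3' ? 20 : 10);
}

function worstCaseBufferMb(backupConfig) {
    return defaultSyncConcurrency(backupConfig) * PART_SIZE_MB;
}

console.log(worstCaseBufferMb({ provider: 's3' }));                     // 200
console.log(worstCaseBufferMb({ provider: 's3', syncConcurrency: 5 })); // 50
```

Operators who raise `syncConcurrency` for throughput are therefore also raising the worst-case upload buffer footprint in the same proportion.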
// this is not part of 'snapshotting' because we need root access to traverse
async function saveFsMetadata(dataLayout, metadataFile) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof metadataFile, 'string');
// contains paths prefixed with './'
const metadata = {
emptyDirs: [],
execFiles: [],
symlinks: []
};
// we assume a small number of files. execSync raises an ENOBUFS error once output exceeds maxBuffer
for (let lp of dataLayout.localPaths()) {
const emptyDirs = safe.child_process.execSync(`find ${lp} -type d -empty`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (emptyDirs === null) throw new BoxError(BoxError.FS_ERROR, `Error finding empty dirs: ${safe.error.message}`);
if (emptyDirs.length) metadata.emptyDirs = metadata.emptyDirs.concat(emptyDirs.trim().split('\n').map((ed) => dataLayout.toRemotePath(ed)));
const execFiles = safe.child_process.execSync(`find ${lp} -type f -executable`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (execFiles === null) throw new BoxError(BoxError.FS_ERROR, `Error finding executables: ${safe.error.message}`);
if (execFiles.length) metadata.execFiles = metadata.execFiles.concat(execFiles.trim().split('\n').map((ef) => dataLayout.toRemotePath(ef)));
const symlinks = safe.child_process.execSync(`find ${lp} -type l`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (symlinks === null) throw new BoxError(BoxError.FS_ERROR, `Error finding symlinks: ${safe.error.message}`);
if (symlinks.length) metadata.symlinks = metadata.symlinks.concat(symlinks.trim().split('\n').map((sl) => {
const target = safe.fs.readlinkSync(sl);
return { path: dataLayout.toRemotePath(sl), target };
}));
}
if (!safe.fs.writeFileSync(metadataFile, JSON.stringify(metadata, null, 4))) throw new BoxError(BoxError.FS_ERROR, `Error writing fs metadata: ${safe.error.message}`);
}
async function restoreFsMetadata(dataLayout, metadataFile) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof metadataFile, 'string');
debug(`Recreating empty directories in ${dataLayout.toString()}`);
const metadataJson = safe.fs.readFileSync(metadataFile, 'utf8');
if (metadataJson === null) throw new BoxError(BoxError.EXTERNAL_ERROR, 'Error loading fsmetadata.json:' + safe.error.message);
const metadata = safe.JSON.parse(metadataJson);
if (metadata === null) throw new BoxError(BoxError.EXTERNAL_ERROR, 'Error parsing fsmetadata.json:' + safe.error.message);
for (const emptyDir of metadata.emptyDirs) {
const [mkdirError] = await safe(fs.promises.mkdir(dataLayout.toLocalPath(emptyDir), { recursive: true }));
if (mkdirError) throw new BoxError(BoxError.FS_ERROR, `unable to create path: ${mkdirError.message}`);
}
for (const execFile of metadata.execFiles) {
const [chmodError] = await safe(fs.promises.chmod(dataLayout.toLocalPath(execFile), parseInt('0755', 8)));
if (chmodError) throw new BoxError(BoxError.FS_ERROR, `unable to chmod: ${chmodError.message}`);
}
for (const symlink of (metadata.symlinks || [])) {
if (!symlink.target) continue;
// the path may not exist if we had a directory full of symlinks
const [mkdirError] = await safe(fs.promises.mkdir(path.dirname(dataLayout.toLocalPath(symlink.path)), { recursive: true }));
if (mkdirError) throw new BoxError(BoxError.FS_ERROR, `unable to symlink (mkdir): ${mkdirError.message}`);
const [symlinkError] = await safe(fs.promises.symlink(symlink.target, dataLayout.toLocalPath(symlink.path), 'file'));
if (symlinkError) throw new BoxError(BoxError.FS_ERROR, `unable to symlink: ${symlinkError.message}`);
}
}
function downloadDir(backupConfig, backupFilePath, dataLayout, progressCallback, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof backupFilePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
debug(`downloadDir: ${backupFilePath} to ${dataLayout.toString()}`);
function downloadFile(entry, done) {
let relativePath = path.relative(backupFilePath, entry.fullPath);
if (backupConfig.encryptedFilenames) {
const { error, result } = hush.decryptFilePath(relativePath, backupConfig.encryption);
if (error) return done(new BoxError(BoxError.CRYPTO_ERROR, 'Unable to decrypt file'));
relativePath = result;
}
const destFilePath = dataLayout.toLocalPath('./' + relativePath);
fs.mkdir(path.dirname(destFilePath), { recursive: true }, function (error) {
if (error) return done(new BoxError(BoxError.FS_ERROR, error.message));
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
storage.api(backupConfig.provider).download(backupConfig, entry.fullPath, function (error, sourceStream) {
if (error) {
progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} errored: ${error.message}` });
return retryCallback(error);
}
let destStream = hush.createWriteStream(destFilePath, backupConfig.encryption);
// protect against multiple errors. must destroy the write stream so that a previous retry does not write
let closeAndRetry = once((error) => {
if (error) progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} errored: ${error.message}` });
else progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} finished` });
sourceStream.destroy();
destStream.destroy();
retryCallback(error);
});
destStream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: `Downloading ${entry.fullPath}` }); // 0M@0MBps looks wrong
progressCallback({ message: `Downloading ${entry.fullPath}: ${transferred}M@${speed}MBps` });
});
destStream.on('error', closeAndRetry);
sourceStream.on('error', closeAndRetry);
progressCallback({ message: `Downloading ${entry.fullPath} to ${destFilePath}` });
sourceStream.pipe(destStream, { end: true }).on('done', closeAndRetry);
});
}, done);
});
}
storage.api(backupConfig.provider).listDir(backupConfig, backupFilePath, 1000, function (entries, iteratorDone) {
// https://www.digitalocean.com/community/questions/rate-limiting-on-spaces?answer=40441
const concurrency = backupConfig.downloadConcurrency || (backupConfig.provider === 's3' ? 30 : 10);
async.eachLimit(entries, concurrency, downloadFile, iteratorDone);
}, callback);
}
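The download path above wraps every transfer in `async.retry` and guards the retry callback with `once` so a late stream error cannot fire it twice. A minimal dependency-free sketch of that same once-guarded retry pattern (function names here are hypothetical, not from this codebase):

```javascript
'use strict';

// once(): ensure a callback fires at most one time, mirroring the guard
// placed around retryCallback in the code above
function once(fn) {
    let called = false;
    return function (...args) {
        if (called) return;
        called = true;
        fn(...args);
    };
}

// retry(): invoke task up to `times` times, waiting `interval` ms between
// attempts -- similar in spirit to async.retry({ times, interval }, task, done)
function retry({ times, interval }, task, done) {
    let attempt = 0;
    function tryOnce() {
        attempt += 1;
        task(once(function (error, result) { // once() protects against double invocation
            if (!error) return done(null, result);
            if (attempt >= times) return done(error);
            setTimeout(tryOnce, interval);
        }));
    }
    tryOnce();
}

// usage: a task that fails twice before succeeding
let failures = 2;
retry({ times: 5, interval: 10 }, function (cb) {
    if (failures-- > 0) return cb(new Error('transient'));
    cb(null, 'ok');
}, function (error, result) {
    console.log(error ? 'failed' : `succeeded: ${result}`);
});
```

The `once` wrapper matters because a stream can emit `'error'` after the storage upload callback has already fired; without it a single failed attempt could trigger two overlapping retries.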
async function download(backupConfig, remotePath, dataLayout, progressCallback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
debug(`download: Downloading ${remotePath} to ${dataLayout.toString()}`);
const backupFilePath = getBackupFilePath(backupConfig, remotePath);
const downloadDirAsync = util.promisify(downloadDir);
await downloadDirAsync(backupConfig, backupFilePath, dataLayout, progressCallback);
await restoreFsMetadata(dataLayout, `${dataLayout.localRoot()}/fsmetadata.json`);
}
async function upload(backupConfig, remotePath, dataLayout, progressCallback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert.strictEqual(typeof dataLayout, 'object');
assert.strictEqual(typeof progressCallback, 'function');
const syncAsync = util.promisify(sync);
await saveFsMetadata(dataLayout, `${dataLayout.localRoot()}/fsmetadata.json`);
await syncAsync(backupConfig, remotePath, dataLayout, progressCallback);
}

@@ -0,0 +1,195 @@
'use strict';
exports = module.exports = {
getBackupFilePath,
download,
upload
};
const assert = require('assert'),
async = require('async'),
BoxError = require('../boxerror.js'),
DataLayout = require('../datalayout.js'),
debug = require('debug')('box:backupformat/tgz'),
{ DecryptStream, EncryptStream } = require('../hush.js'),
once = require('../once.js'),
path = require('path'),
progressStream = require('progress-stream'),
storage = require('../storage.js'),
tar = require('tar-fs'),
zlib = require('zlib');
function getBackupFilePath(backupConfig, remotePath) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
const rootPath = storage.api(backupConfig.provider).getRootPath(backupConfig);
const fileType = backupConfig.encryption ? '.tar.gz.enc' : '.tar.gz';
return path.join(rootPath, remotePath + fileType);
}
function tarPack(dataLayout, encryption) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof encryption, 'object');
const pack = tar.pack('/', {
dereference: false, // pack the symlink and not what it points to
entries: dataLayout.localPaths(),
ignoreStatError: (path, err) => {
debug(`tarPack: error stat'ing ${path} - ${err.code}`);
return err.code === 'ENOENT'; // ignore if file or dir got removed (probably some temporary file)
},
map: function(header) {
header.name = dataLayout.toRemotePath(header.name);
// the tar pax format allows us to encode filenames > 100 and size > 8GB (see #640)
// https://www.systutorials.com/docs/linux/man/5-star/
if (header.size > 8589934590 || header.name.length > 99) header.pax = { size: header.size };
return header;
},
strict: false // do not error for unknown types (skip fifo, char/block devices)
});
const gzip = zlib.createGzip({});
const ps = progressStream({ time: 10000 }); // emit 'progress' every 10 seconds
pack.on('error', function (error) {
debug('tarPack: tar stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
gzip.on('error', function (error) {
debug('tarPack: gzip stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
if (encryption) {
const encryptStream = new EncryptStream(encryption);
encryptStream.on('error', function (error) {
debug('tarPack: encrypt stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
pack.pipe(gzip).pipe(encryptStream).pipe(ps);
} else {
pack.pipe(gzip).pipe(ps);
}
return ps;
}
function tarExtract(inStream, dataLayout, encryption) {
assert.strictEqual(typeof inStream, 'object');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof encryption, 'object');
const gunzip = zlib.createGunzip({});
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
const extract = tar.extract('/', {
map: function (header) {
header.name = dataLayout.toLocalPath(header.name);
return header;
},
dmode: 500 // ensure directory is writable
});
const emitError = once((error) => {
inStream.destroy();
ps.emit('error', error);
});
inStream.on('error', function (error) {
debug('tarExtract: input stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
gunzip.on('error', function (error) {
debug('tarExtract: gunzip stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
extract.on('error', function (error) {
debug('tarExtract: extract stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
extract.on('finish', function () {
debug('tarExtract: done.');
// we use a separate event because ps is a through2 stream which emits 'finish' event indicating end of inStream and not extract
ps.emit('done');
});
if (encryption) {
const decrypt = new DecryptStream(encryption);
decrypt.on('error', function (error) {
debug('tarExtract: decrypt stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, `Failed to decrypt: ${error.message}`));
});
inStream.pipe(ps).pipe(decrypt).pipe(gunzip).pipe(extract);
} else {
inStream.pipe(ps).pipe(gunzip).pipe(extract);
}
return ps;
}
async function download(backupConfig, remotePath, dataLayout, progressCallback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
debug(`download: Downloading ${remotePath} to ${dataLayout.toString()}`);
const backupFilePath = getBackupFilePath(backupConfig, remotePath);
return new Promise((resolve, reject) => {
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
progressCallback({ message: `Downloading backup ${remotePath}` });
storage.api(backupConfig.provider).download(backupConfig, backupFilePath, function (error, sourceStream) {
if (error) return retryCallback(error);
const ps = tarExtract(sourceStream, dataLayout, backupConfig.encryption);
ps.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: 'Downloading backup' }); // 0M@0MBps looks wrong
progressCallback({ message: `Downloading ${transferred}M@${speed}MBps` });
});
ps.on('error', retryCallback);
ps.on('done', retryCallback);
});
}, (error) => {
if (error) return reject(error);
resolve();
});
});
}
async function upload(backupConfig, remotePath, dataLayout, progressCallback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert.strictEqual(typeof dataLayout, 'object');
assert.strictEqual(typeof progressCallback, 'function');
return new Promise((resolve, reject) => {
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
retryCallback = once(retryCallback); // protect against upload() erroring much later after tar stream error
const tarStream = tarPack(dataLayout, backupConfig.encryption);
tarStream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: 'Uploading backup' }); // 0M@0MBps looks wrong
progressCallback({ message: `Uploading backup ${transferred}M@${speed}MBps` });
});
tarStream.on('error', retryCallback); // already returns BoxError
storage.api(backupConfig.provider).upload(backupConfig, getBackupFilePath(backupConfig, remotePath), tarStream, retryCallback);
}, (error) => {
if (error) return reject(error);
resolve();
});
});
}
@@ -127,7 +127,8 @@ async function add(data) {
const creationTime = data.creationTime || new Date(); // allow tests to set the time
const manifestJson = JSON.stringify(data.manifest);
const id = `${data.type}_${data.identifier}_v${data.packageVersion}_${hat(256)}`; // id is used by the UI to derive dependent packages. making this a UUID will require a lot of db querying
const prefixId = data.type === exports.BACKUP_TYPE_APP ? `${data.type}_${data.identifier}` : data.type; // type and identifier are same for other types
const id = `${prefixId}_v${data.packageVersion}_${hat(256)}`; // id is used by the UI to derive dependent packages. making this a UUID will require a lot of db querying
const [error] = await safe(database.query('INSERT INTO backups (id, remotePath, identifier, encryptionVersion, packageVersion, type, creationTime, state, dependsOnJson, manifestJson, format, preserveSecs) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
[ id, data.remotePath, data.identifier, data.encryptionVersion, data.packageVersion, data.type, creationTime, data.state, JSON.stringify(data.dependsOn), manifestJson, data.format, data.preserveSecs ]));
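The id change above avoids doubled prefixes like `box_box_` and `mail_mail_`: for app backups the identifier carries extra information (the app id), but for box and mail backups type and identifier are the same string. A small sketch of the prefix rule (the `'app'` constant value is an assumption for illustration, not taken from this diff):

```javascript
'use strict';

// assumed value of exports.BACKUP_TYPE_APP for this sketch
const BACKUP_TYPE_APP = 'app';

// backupIdPrefix(): app backups keep the identifier in the id; other types
// (box, mail) use the type alone, since identifier === type for them
function backupIdPrefix(type, identifier) {
    return type === BACKUP_TYPE_APP ? `${type}_${identifier}` : type;
}

console.log(backupIdPrefix('app', 'io.gitea.cloudronapp')); // prints "app_io.gitea.cloudronapp"
console.log(backupIdPrefix('box', 'box')); // prints "box", not "box_box"
```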
@@ -176,7 +177,7 @@ function validateLabel(label) {
assert.strictEqual(typeof label, 'string');
if (label.length >= 200) return new BoxError(BoxError.BAD_FIELD, 'label too long');
if (/[^a-zA-Z0-9._()-]/.test(label)) return new BoxError(BoxError.BAD_FIELD, 'label can only contain alphanumerals, dot, hyphen, brackets or underscore');
if (/[^a-zA-Z0-9._() -]/.test(label)) return new BoxError(BoxError.BAD_FIELD, 'label can only contain alphanumerals, space, dot, hyphen, brackets or underscore');
return null;
}
@@ -241,7 +242,7 @@ async function startBackupTask(auditSource) {
const errorMessage = error ? error.message : '';
const timedOut = error ? error.code === tasks.ETIMEOUT : false;
const backup = await get(backupId);
const backup = backupId ? await get(backupId) : null;
await safe(eventlog.add(eventlog.ACTION_BACKUP_FINISH, auditSource, { taskId, errorMessage, timedOut, backupId, remotePath: backup?.remotePath }), { debug });
});
@@ -12,40 +12,26 @@ exports = module.exports = {
downloadMail,
upload,
_restoreFsMetadata: restoreFsMetadata,
_saveFsMetadata: saveFsMetadata,
};
const apps = require('./apps.js'),
assert = require('assert'),
async = require('async'),
backupFormat = require('./backupformat.js'),
backups = require('./backups.js'),
BoxError = require('./boxerror.js'),
constants = require('./constants.js'),
crypto = require('crypto'),
DataLayout = require('./datalayout.js'),
database = require('./database.js'),
debug = require('debug')('box:backuptask'),
fs = require('fs'),
once = require('./once.js'),
path = require('path'),
paths = require('./paths.js'),
progressStream = require('progress-stream'),
safe = require('safetydance'),
services = require('./services.js'),
settings = require('./settings.js'),
shell = require('./shell.js'),
storage = require('./storage.js'),
syncer = require('./syncer.js'),
tar = require('tar-fs'),
TransformStream = require('stream').Transform,
zlib = require('zlib'),
util = require('util');
storage = require('./storage.js');
const BACKUP_UPLOAD_CMD = path.join(__dirname, 'scripts/backupupload.js');
const getBackupConfig = util.callbackify(settings.getBackupConfig);
const runBackupUploadAsync = util.promisify(runBackupUpload);
function canBackupApp(app) {
// only backup apps that are installed or specific pending states
@@ -61,587 +47,32 @@ function canBackupApp(app) {
app.installationState === apps.ISTATE_PENDING_UPDATE; // called from apptask
}
function encryptFilePath(filePath, encryption) {
assert.strictEqual(typeof filePath, 'string');
assert.strictEqual(typeof encryption, 'object');
const encryptedParts = filePath.split('/').map(function (part) {
let hmac = crypto.createHmac('sha256', Buffer.from(encryption.filenameHmacKey, 'hex'));
const iv = hmac.update(part).digest().slice(0, 16); // iv has to be deterministic, for our sync (copy) logic to work
const cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(encryption.filenameKey, 'hex'), iv);
let crypt = cipher.update(part);
crypt = Buffer.concat([ iv, crypt, cipher.final() ]);
return crypt.toString('base64') // ensures path is valid
.replace(/\//g, '-') // replace '/' of base64 since it conflicts with path separator
.replace(/=/g,''); // strip trailing = padding. this is only needed if we concat base64 strings, which we don't
});
return encryptedParts.join('/');
}
function decryptFilePath(filePath, encryption) {
assert.strictEqual(typeof filePath, 'string');
assert.strictEqual(typeof encryption, 'object');
const decryptedParts = [];
for (let part of filePath.split('/')) {
part = part + Array(part.length % 4).join('='); // add back = padding
part = part.replace(/-/g, '/'); // replace with '/'
try {
const buffer = Buffer.from(part, 'base64');
const iv = buffer.slice(0, 16);
let decrypt = crypto.createDecipheriv('aes-256-cbc', Buffer.from(encryption.filenameKey, 'hex'), iv);
const plainText = decrypt.update(buffer.slice(16));
const plainTextString = Buffer.concat([ plainText, decrypt.final() ]).toString('utf8');
const hmac = crypto.createHmac('sha256', Buffer.from(encryption.filenameHmacKey, 'hex'));
if (!hmac.update(plainTextString).digest().slice(0, 16).equals(iv)) return { error: new BoxError(BoxError.CRYPTO_ERROR, `mac error decrypting part ${part} of path ${filePath}`) };
decryptedParts.push(plainTextString);
} catch (error) {
debug(`Error decrypting part ${part} of path ${filePath}:`, error);
return { error: new BoxError(BoxError.CRYPTO_ERROR, `Error decrypting part ${part} of path ${filePath}: ${error.message}`) };
}
}
return { result: decryptedParts.join('/') };
}
class EncryptStream extends TransformStream {
constructor(encryption) {
super();
this._headerPushed = false;
this._iv = crypto.randomBytes(16);
this._cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(encryption.dataKey, 'hex'), this._iv);
this._hmac = crypto.createHmac('sha256', Buffer.from(encryption.dataHmacKey, 'hex'));
}
pushHeaderIfNeeded() {
if (!this._headerPushed) {
const magic = Buffer.from('CBV2');
this.push(magic);
this._hmac.update(magic);
this.push(this._iv);
this._hmac.update(this._iv);
this._headerPushed = true;
}
}
_transform(chunk, ignoredEncoding, callback) {
this.pushHeaderIfNeeded();
try {
const crypt = this._cipher.update(chunk);
this._hmac.update(crypt);
callback(null, crypt);
} catch (error) {
callback(error);
}
}
_flush(callback) {
try {
this.pushHeaderIfNeeded(); // for 0-length files
const crypt = this._cipher.final();
this.push(crypt);
this._hmac.update(crypt);
callback(null, this._hmac.digest()); // +32 bytes
} catch (error) {
callback(error);
}
}
}
class DecryptStream extends TransformStream {
constructor(encryption) {
super();
this._key = Buffer.from(encryption.dataKey, 'hex');
this._header = Buffer.alloc(0);
this._decipher = null;
this._hmac = crypto.createHmac('sha256', Buffer.from(encryption.dataHmacKey, 'hex'));
this._buffer = Buffer.alloc(0);
}
_transform(chunk, ignoredEncoding, callback) {
const needed = 20 - this._header.length; // 4 for magic, 16 for iv
if (this._header.length !== 20) { // not gotten header yet
this._header = Buffer.concat([this._header, chunk.slice(0, needed)]);
if (this._header.length !== 20) return callback();
if (!this._header.slice(0, 4).equals(Buffer.from('CBV2'))) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid magic in header'));
const iv = this._header.slice(4);
this._decipher = crypto.createDecipheriv('aes-256-cbc', this._key, iv);
this._hmac.update(this._header);
}
this._buffer = Buffer.concat([ this._buffer, chunk.slice(needed) ]);
if (this._buffer.length < 32) return callback(); // hmac trailer length is 32
try {
const cipherText = this._buffer.slice(0, -32);
this._hmac.update(cipherText);
const plainText = this._decipher.update(cipherText);
this._buffer = this._buffer.slice(-32);
callback(null, plainText);
} catch (error) {
callback(error);
}
}
_flush (callback) {
if (this._buffer.length !== 32) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid password or tampered file (not enough data)'));
try {
if (!this._hmac.digest().equals(this._buffer)) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid password or tampered file (mac mismatch)'));
const plainText = this._decipher.final();
callback(null, plainText);
} catch (error) {
callback(error);
}
}
}
function createReadStream(sourceFile, encryption) {
assert.strictEqual(typeof sourceFile, 'string');
assert.strictEqual(typeof encryption, 'object');
const stream = fs.createReadStream(sourceFile);
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
stream.on('error', function (error) {
debug(`createReadStream: read stream error at ${sourceFile}`, error);
ps.emit('error', new BoxError(BoxError.FS_ERROR, `Error reading ${sourceFile}: ${error.message} ${error.code}`));
});
stream.on('open', () => ps.emit('open'));
if (encryption) {
let encryptStream = new EncryptStream(encryption);
encryptStream.on('error', function (error) {
debug(`createReadStream: encrypt stream error ${sourceFile}`, error);
ps.emit('error', new BoxError(BoxError.CRYPTO_ERROR, `Encryption error at ${sourceFile}: ${error.message}`));
});
return stream.pipe(encryptStream).pipe(ps);
} else {
return stream.pipe(ps);
}
}
function createWriteStream(destFile, encryption) {
assert.strictEqual(typeof destFile, 'string');
assert.strictEqual(typeof encryption, 'object');
const stream = fs.createWriteStream(destFile);
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
stream.on('error', function (error) {
debug(`createWriteStream: write stream error ${destFile}`, error);
ps.emit('error', new BoxError(BoxError.FS_ERROR, `Write error ${destFile}: ${error.message}`));
});
stream.on('finish', function () {
debug('createWriteStream: done.');
// we use a separate event because ps is a through2 stream which emits 'finish' event indicating end of inStream and not write
ps.emit('done');
});
if (encryption) {
let decrypt = new DecryptStream(encryption);
decrypt.on('error', function (error) {
debug(`createWriteStream: decrypt stream error ${destFile}`, error);
ps.emit('error', new BoxError(BoxError.CRYPTO_ERROR, `Decryption error at ${destFile}: ${error.message}`));
});
ps.pipe(decrypt).pipe(stream);
} else {
ps.pipe(stream);
}
return ps;
}
function tarPack(dataLayout, encryption, callback) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof encryption, 'object');
assert.strictEqual(typeof callback, 'function');
const pack = tar.pack('/', {
dereference: false, // pack the symlink and not what it points to
entries: dataLayout.localPaths(),
ignoreStatError: (path, err) => {
debug(`tarPack: error stat'ing ${path} - ${err.code}`);
return err.code === 'ENOENT'; // ignore if file or dir got removed (probably some temporary file)
},
map: function(header) {
header.name = dataLayout.toRemotePath(header.name);
// the tar pax format allows us to encode filenames > 100 and size > 8GB (see #640)
// https://www.systutorials.com/docs/linux/man/5-star/
if (header.size > 8589934590 || header.name.length > 99) header.pax = { size: header.size };
return header;
},
strict: false // do not error for unknown types (skip fifo, char/block devices)
});
const gzip = zlib.createGzip({});
const ps = progressStream({ time: 10000 }); // emit 'progress' every 10 seconds
pack.on('error', function (error) {
debug('tarPack: tar stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
gzip.on('error', function (error) {
debug('tarPack: gzip stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
if (encryption) {
const encryptStream = new EncryptStream(encryption);
encryptStream.on('error', function (error) {
debug('tarPack: encrypt stream error.', error);
ps.emit('error', new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
pack.pipe(gzip).pipe(encryptStream).pipe(ps);
} else {
pack.pipe(gzip).pipe(ps);
}
return callback(null, ps);
}
function sync(backupConfig, remotePath, dataLayout, progressCallback, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
// the number here has to take into account the s3.upload partSize (which is 10MB). So 20=200MB
const concurrency = backupConfig.syncConcurrency || (backupConfig.provider === 's3' ? 20 : 10);
const removeDir = util.callbackify(storage.api(backupConfig.provider).removeDir);
const remove = util.callbackify(storage.api(backupConfig.provider).remove);
syncer.sync(dataLayout, function processTask(task, iteratorCallback) {
debug('sync: processing task: %j', task);
// the empty task.path is special to signify the directory
const destPath = task.path && backupConfig.encryption ? encryptFilePath(task.path, backupConfig.encryption) : task.path;
const backupFilePath = path.join(storage.getBackupFilePath(backupConfig, remotePath, backupConfig.format), destPath);
if (task.operation === 'removedir') {
debug(`Removing directory ${backupFilePath}`);
return removeDir(backupConfig, backupFilePath, progressCallback, iteratorCallback);
} else if (task.operation === 'remove') {
debug(`Removing ${backupFilePath}`);
return remove(backupConfig, backupFilePath, iteratorCallback);
}
let retryCount = 0;
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
retryCallback = once(retryCallback); // protect against upload() erroring much later after read stream error
++retryCount;
if (task.operation === 'add') {
progressCallback({ message: `Adding ${task.path}` + (retryCount > 1 ? ` (Try ${retryCount})` : '') });
debug(`Adding ${task.path} position ${task.position} try ${retryCount}`);
const stream = createReadStream(dataLayout.toLocalPath('./' + task.path), backupConfig.encryption);
stream.on('error', (error) => retryCallback(error.message.includes('ENOENT') ? null : error)); // ignore error if file disappears
stream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: `Uploading ${task.path}` }); // 0M@0MBps looks wrong
progressCallback({ message: `Uploading ${task.path}: ${transferred}M@${speed}MBps` });
});
// only create the destination path when we have confirmation that the source is available. otherwise, we end up with
// files owned as 'root' and the cp later will fail
stream.on('open', function () {
storage.api(backupConfig.provider).upload(backupConfig, backupFilePath, stream, function (error) {
debug(error ? `Error uploading ${task.path} try ${retryCount}: ${error.message}` : `Uploaded ${task.path}`);
retryCallback(error);
});
});
}
}, iteratorCallback);
}, concurrency, function (error) {
if (error) return callback(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
callback();
});
}
// this is not part of 'snapshotting' because we need root access to traverse
async function saveFsMetadata(dataLayout, metadataFile) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof metadataFile, 'string');
// contains paths prefixed with './'
const metadata = {
emptyDirs: [],
execFiles: [],
symlinks: []
};
// we assume a small number of files. execSync will raise an ENOBUFS error once output exceeds maxBuffer
for (let lp of dataLayout.localPaths()) {
const emptyDirs = safe.child_process.execSync(`find ${lp} -type d -empty`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (emptyDirs === null) throw new BoxError(BoxError.FS_ERROR, `Error finding empty dirs: ${safe.error.message}`);
if (emptyDirs.length) metadata.emptyDirs = metadata.emptyDirs.concat(emptyDirs.trim().split('\n').map((ed) => dataLayout.toRemotePath(ed)));
const execFiles = safe.child_process.execSync(`find ${lp} -type f -executable`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (execFiles === null) throw new BoxError(BoxError.FS_ERROR, `Error finding executables: ${safe.error.message}`);
if (execFiles.length) metadata.execFiles = metadata.execFiles.concat(execFiles.trim().split('\n').map((ef) => dataLayout.toRemotePath(ef)));
const symlinks = safe.child_process.execSync(`find ${lp} -type l`, { encoding: 'utf8', maxBuffer: 1024 * 1024 * 30 });
if (symlinks === null) throw new BoxError(BoxError.FS_ERROR, `Error finding symlinks: ${safe.error.message}`);
if (symlinks.length) metadata.symlinks = metadata.symlinks.concat(symlinks.trim().split('\n').map((sl) => {
const target = safe.fs.readlinkSync(sl);
return { path: dataLayout.toRemotePath(sl), target };
}));
}
if (!safe.fs.writeFileSync(metadataFile, JSON.stringify(metadata, null, 4))) throw new BoxError(BoxError.FS_ERROR, `Error writing fs metadata: ${safe.error.message}`);
}
// this function is called via backupupload (since it needs root to traverse app's directory)
function upload(remotePath, format, dataLayoutString, progressCallback, callback) {
async function upload(remotePath, format, dataLayoutString, progressCallback) {
assert.strictEqual(typeof remotePath, 'string');
assert.strictEqual(typeof format, 'string');
assert.strictEqual(typeof dataLayoutString, 'string');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
debug(`upload: path ${remotePath} format ${format} dataLayout ${dataLayoutString}`);
const dataLayout = DataLayout.fromString(dataLayoutString);
const backupConfig = await settings.getBackupConfig();
await storage.api(backupConfig.provider).checkPreconditions(backupConfig, dataLayout);
getBackupConfig(async function (error, backupConfig) {
if (error) return callback(error);
const [preconditionError] = await safe(storage.api(backupConfig.provider).checkPreconditions(backupConfig, dataLayout));
if (preconditionError) return callback(preconditionError);
if (format === 'tgz') {
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
retryCallback = once(retryCallback); // protect against upload() erroring much later after tar stream error
tarPack(dataLayout, backupConfig.encryption, function (error, tarStream) {
if (error) return retryCallback(error);
tarStream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: 'Uploading backup' }); // 0M@0MBps looks wrong
progressCallback({ message: `Uploading backup ${transferred}M@${speed}MBps` });
});
tarStream.on('error', retryCallback); // already returns BoxError
storage.api(backupConfig.provider).upload(backupConfig, storage.getBackupFilePath(backupConfig, remotePath, format), tarStream, retryCallback);
});
}, callback);
} else {
async.series([
saveFsMetadata.bind(null, dataLayout, `${dataLayout.localRoot()}/fsmetadata.json`),
sync.bind(null, backupConfig, remotePath, dataLayout, progressCallback)
], callback);
}
});
await backupFormat.api(format).upload(backupConfig, remotePath, dataLayout, progressCallback);
}
function tarExtract(inStream, dataLayout, encryption, callback) {
assert.strictEqual(typeof inStream, 'object');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof encryption, 'object');
assert.strictEqual(typeof callback, 'function');
const gunzip = zlib.createGunzip({});
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
const extract = tar.extract('/', {
map: function (header) {
header.name = dataLayout.toLocalPath(header.name);
return header;
},
dmode: 500 // ensure directory is writable
});
const emitError = once((error) => {
inStream.destroy();
ps.emit('error', error);
});
inStream.on('error', function (error) {
debug('tarExtract: input stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
gunzip.on('error', function (error) {
debug('tarExtract: gunzip stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
extract.on('error', function (error) {
debug('tarExtract: extract stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, error.message));
});
extract.on('finish', function () {
debug('tarExtract: done.');
// we use a separate event because ps is a through2 stream which emits 'finish' event indicating end of inStream and not extract
ps.emit('done');
});
if (encryption) {
let decrypt = new DecryptStream(encryption);
decrypt.on('error', function (error) {
debug('tarExtract: decrypt stream error.', error);
emitError(new BoxError(BoxError.EXTERNAL_ERROR, `Failed to decrypt: ${error.message}`));
});
inStream.pipe(ps).pipe(decrypt).pipe(gunzip).pipe(extract);
} else {
inStream.pipe(ps).pipe(gunzip).pipe(extract);
}
callback(null, ps);
}
async function restoreFsMetadata(dataLayout, metadataFile) {
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof metadataFile, 'string');
debug(`Recreating empty directories in ${dataLayout.toString()}`);
const metadataJson = safe.fs.readFileSync(metadataFile, 'utf8');
if (metadataJson === null) throw new BoxError(BoxError.EXTERNAL_ERROR, 'Error loading fsmetadata.json:' + safe.error.message);
const metadata = safe.JSON.parse(metadataJson);
if (metadata === null) throw new BoxError(BoxError.EXTERNAL_ERROR, 'Error parsing fsmetadata.json:' + safe.error.message);
for (const emptyDir of metadata.emptyDirs) {
const [mkdirError] = await safe(fs.promises.mkdir(dataLayout.toLocalPath(emptyDir), { recursive: true }));
if (mkdirError) throw new BoxError(BoxError.FS_ERROR, `unable to create path: ${mkdirError.message}`);
}
for (const execFile of metadata.execFiles) {
const [chmodError] = await safe(fs.promises.chmod(dataLayout.toLocalPath(execFile), parseInt('0755', 8)));
if (chmodError) throw new BoxError(BoxError.FS_ERROR, `unable to chmod: ${chmodError.message}`);
}
for (const symlink of (metadata.symlinks || [])) {
if (!symlink.target) continue;
// the path may not exist if we had a directory full of symlinks
const [mkdirError] = await safe(fs.promises.mkdir(path.dirname(dataLayout.toLocalPath(symlink.path)), { recursive: true }));
if (mkdirError) throw new BoxError(BoxError.FS_ERROR, `unable to symlink (mkdir): ${mkdirError.message}`);
const [symlinkError] = await safe(fs.promises.symlink(symlink.target, dataLayout.toLocalPath(symlink.path), 'file'));
if (symlinkError) throw new BoxError(BoxError.FS_ERROR, `unable to symlink: ${symlinkError.message}`);
}
}
function downloadDir(backupConfig, backupFilePath, dataLayout, progressCallback, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof backupFilePath, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
debug(`downloadDir: ${backupFilePath} to ${dataLayout.toString()}`);
function downloadFile(entry, done) {
let relativePath = path.relative(backupFilePath, entry.fullPath);
if (backupConfig.encryption) {
const { error, result } = decryptFilePath(relativePath, backupConfig.encryption);
if (error) return done(new BoxError(BoxError.CRYPTO_ERROR, 'Unable to decrypt file'));
relativePath = result;
}
const destFilePath = dataLayout.toLocalPath('./' + relativePath);
fs.mkdir(path.dirname(destFilePath), { recursive: true }, function (error) {
if (error) return done(new BoxError(BoxError.FS_ERROR, error.message));
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
storage.api(backupConfig.provider).download(backupConfig, entry.fullPath, function (error, sourceStream) {
if (error) {
progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} errored: ${error.message}` });
return retryCallback(error);
}
let destStream = createWriteStream(destFilePath, backupConfig.encryption);
// protect against multiple errors. must destroy the write stream so that a previous retry does not write
let closeAndRetry = once((error) => {
if (error) progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} errored: ${error.message}` });
else progressCallback({ message: `Download ${entry.fullPath} to ${destFilePath} finished` });
sourceStream.destroy();
destStream.destroy();
retryCallback(error);
});
destStream.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: `Downloading ${entry.fullPath}` }); // 0M@0MBps looks wrong
progressCallback({ message: `Downloading ${entry.fullPath}: ${transferred}M@${speed}MBps` });
});
destStream.on('error', closeAndRetry);
sourceStream.on('error', closeAndRetry);
progressCallback({ message: `Downloading ${entry.fullPath} to ${destFilePath}` });
sourceStream.pipe(destStream, { end: true }).on('done', closeAndRetry);
});
}, done);
});
}
storage.api(backupConfig.provider).listDir(backupConfig, backupFilePath, 1000, function (entries, iteratorDone) {
// https://www.digitalocean.com/community/questions/rate-limiting-on-spaces?answer=40441
const concurrency = backupConfig.downloadConcurrency || (backupConfig.provider === 's3' ? 30 : 10);
async.eachLimit(entries, concurrency, downloadFile, iteratorDone);
}, callback);
}
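The progress handlers above round transferred bytes and speed down to whole megabytes and suppress the rate suffix while both are still zero. That formatting rule can be factored into a small helper (the function name is illustrative, not from the codebase):

```javascript
'use strict';

// Formats a progress event the way the download handlers above do:
// whole-MB values, and no "0M@0MBps" suffix at the start of a transfer.
function formatProgress(prefix, progress) {
    const transferred = Math.round(progress.transferred / 1024 / 1024);
    const speed = Math.round(progress.speed / 1024 / 1024);
    if (!transferred && !speed) return prefix; // 0M@0MBps looks wrong
    return `${prefix}: ${transferred}M@${speed}MBps`;
}
```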
function download(backupConfig, remotePath, format, dataLayout, progressCallback, callback) {
async function download(backupConfig, remotePath, format, dataLayout, progressCallback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof remotePath, 'string');
assert.strictEqual(typeof format, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
debug(`download: Downloading ${remotePath} of format ${format} to ${dataLayout.toString()}`);
const backupFilePath = storage.getBackupFilePath(backupConfig, remotePath, format);
if (format === 'tgz') {
async.retry({ times: 5, interval: 20000 }, function (retryCallback) {
progressCallback({ message: `Downloading backup ${remotePath}` });
storage.api(backupConfig.provider).download(backupConfig, backupFilePath, function (error, sourceStream) {
if (error) return retryCallback(error);
tarExtract(sourceStream, dataLayout, backupConfig.encryption, function (error, ps) {
if (error) return retryCallback(error);
ps.on('progress', function (progress) {
const transferred = Math.round(progress.transferred/1024/1024), speed = Math.round(progress.speed/1024/1024);
if (!transferred && !speed) return progressCallback({ message: 'Downloading backup' }); // 0M@0MBps looks wrong
progressCallback({ message: `Downloading ${transferred}M@${speed}MBps` });
});
ps.on('error', retryCallback);
ps.on('done', retryCallback);
});
});
}, callback);
} else {
downloadDir(backupConfig, backupFilePath, dataLayout, progressCallback, async function (error) {
if (error) return callback(error);
[error] = await safe(restoreFsMetadata(dataLayout, `${dataLayout.localRoot()}/fsmetadata.json`));
callback(error);
});
}
await backupFormat.api(format).download(backupConfig, remotePath, dataLayout, progressCallback);
}
async function restore(backupConfig, remotePath, progressCallback) {
@@ -653,7 +84,7 @@ async function restore(backupConfig, remotePath, progressCallback) {
if (!boxDataDir) throw new BoxError(BoxError.FS_ERROR, `Error resolving boxdata: ${safe.error.message}`);
const dataLayout = new DataLayout(boxDataDir, []);
await util.promisify(download)(backupConfig, remotePath, backupConfig.format, dataLayout, progressCallback);
await download(backupConfig, remotePath, backupConfig.format, dataLayout, progressCallback);
debug('restore: download completed, importing database');
@@ -671,20 +102,18 @@ async function downloadApp(app, restoreConfig, progressCallback) {
const appDataDir = safe.fs.realpathSync(path.join(paths.APPS_DATA_DIR, app.id));
if (!appDataDir) throw new BoxError(BoxError.FS_ERROR, safe.error.message);
const dataLayout = new DataLayout(appDataDir, app.dataDir ? [{ localDir: app.dataDir, remoteDir: 'data' }] : []);
const dataLayout = new DataLayout(appDataDir, app.storageVolumeId ? [{ localDir: await apps.getStorageDir(app), remoteDir: 'data' }] : []);
const startTime = new Date();
const backupConfig = restoreConfig.backupConfig || await settings.getBackupConfig();
const downloadAsync = util.promisify(download);
await downloadAsync(backupConfig, restoreConfig.remotePath, restoreConfig.backupFormat, dataLayout, progressCallback);
await download(backupConfig, restoreConfig.remotePath, restoreConfig.backupFormat, dataLayout, progressCallback);
debug('downloadApp: time: %s', (new Date() - startTime)/1000);
}
function runBackupUpload(uploadConfig, progressCallback, callback) {
async function runBackupUpload(uploadConfig, progressCallback) {
assert.strictEqual(typeof uploadConfig, 'object');
assert.strictEqual(typeof progressCallback, 'function');
assert.strictEqual(typeof callback, 'function');
const { remotePath, backupConfig, dataLayout, progressTag } = uploadConfig;
assert.strictEqual(typeof remotePath, 'string');
@@ -692,8 +121,6 @@ function runBackupUpload(uploadConfig, progressCallback, callback) {
assert.strictEqual(typeof progressTag, 'string');
assert(dataLayout instanceof DataLayout, 'dataLayout must be a DataLayout');
let result = ''; // the script communicates error result as a string
// https://stackoverflow.com/questions/48387040/node-js-recommended-max-old-space-size
const envCopy = Object.assign({}, process.env);
if (backupConfig.memoryLimit && backupConfig.memoryLimit >= 2*1024*1024*1024) {
@@ -702,19 +129,19 @@ function runBackupUpload(uploadConfig, progressCallback, callback) {
envCopy.NODE_OPTIONS = `--max-old-space-size=${heapSize}`;
}
shell.sudo(`backup-${remotePath}`, [ BACKUP_UPLOAD_CMD, remotePath, backupConfig.format, dataLayout.toString() ], { env: envCopy, preserveEnv: true, ipc: true }, function (error) {
if (error && (error.code === null /* signal */ || (error.code !== 0 && error.code !== 50))) { // backuptask crashed
return callback(new BoxError(BoxError.INTERNAL_ERROR, 'Backuptask crashed'));
} else if (error && error.code === 50) { // exited with error
return callback(new BoxError(BoxError.EXTERNAL_ERROR, result));
}
callback();
}).on('message', function (progress) { // this is { message } or { result }
let result = ''; // the script communicates error result as a string
function onMessage(progress) { // this is { message } or { result }
if ('message' in progress) return progressCallback({ message: `${progress.message} (${progressTag})` });
debug(`runBackupUpload: result - ${JSON.stringify(progress)}`);
result = progress.result;
});
}
const [error] = await safe(shell.promises.sudo(`backup-${remotePath}`, [ BACKUP_UPLOAD_CMD, remotePath, backupConfig.format, dataLayout.toString() ], { env: envCopy, preserveEnv: true, ipc: true, onMessage }));
if (error && (error.code === null /* signal */ || (error.code !== 0 && error.code !== 50))) { // backuptask crashed
throw new BoxError(BoxError.INTERNAL_ERROR, 'Backuptask crashed');
} else if (error && error.code === 50) { // exited with error
throw new BoxError(BoxError.EXTERNAL_ERROR, result);
}
}
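The upload task above raises V8's old-space limit via NODE_OPTIONS when a backup memory limit of at least 2 GiB is configured. The exact heap-size computation is elided in this hunk; the three-quarters factor below is purely an assumption for illustration, as is the helper name:

```javascript
'use strict';

// Builds the child-process environment for the backup upload task.
// Assumption: the heap gets three quarters of the configured memory limit.
function buildUploadEnv(baseEnv, memoryLimit) {
    const env = Object.assign({}, baseEnv);
    if (memoryLimit && memoryLimit >= 2 * 1024 * 1024 * 1024) {
        const heapSizeMb = Math.floor((memoryLimit * 3) / 4 / 1024 / 1024);
        env.NODE_OPTIONS = `--max-old-space-size=${heapSizeMb}`;
    }
    return env;
}
```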
async function snapshotBox(progressCallback) {
@@ -748,7 +175,7 @@ async function uploadBoxSnapshot(backupConfig, progressCallback) {
const startTime = new Date();
await runBackupUploadAsync(uploadConfig, progressCallback);
await runBackupUpload(uploadConfig, progressCallback);
debug(`uploadBoxSnapshot: took ${(new Date() - startTime)/1000} seconds`);
@@ -762,18 +189,12 @@ async function copy(backupConfig, srcRemotePath, destRemotePath, progressCallbac
assert.strictEqual(typeof progressCallback, 'function');
const { provider, format } = backupConfig;
const oldFilePath = backupFormat.api(format).getBackupFilePath(backupConfig, srcRemotePath);
const newFilePath = backupFormat.api(format).getBackupFilePath(backupConfig, destRemotePath);
return new Promise((resolve, reject) => {
const startTime = new Date();
const copyEvents = storage.api(provider).copy(backupConfig, storage.getBackupFilePath(backupConfig, srcRemotePath, format), storage.getBackupFilePath(backupConfig, destRemotePath, format));
copyEvents.on('progress', (message) => progressCallback({ message }));
copyEvents.on('done', function (error) {
if (error) return reject(error);
debug(`copy: copied successfully to ${destRemotePath}. Took ${(new Date() - startTime)/1000} seconds`);
resolve();
});
});
const startTime = new Date();
await safe(storage.api(provider).copy(backupConfig, oldFilePath, newFilePath, progressCallback));
debug(`copy: copied successfully to ${destRemotePath}. Took ${(new Date() - startTime)/1000} seconds`);
}
async function rotateBoxBackup(backupConfig, tag, options, dependsOn, progressCallback) {
@@ -897,7 +318,7 @@ async function uploadAppSnapshot(backupConfig, app, progressCallback) {
const appDataDir = safe.fs.realpathSync(path.join(paths.APPS_DATA_DIR, app.id));
if (!appDataDir) throw new BoxError(BoxError.FS_ERROR, `Error resolving appsdata: ${safe.error.message}`);
const dataLayout = new DataLayout(appDataDir, app.dataDir ? [{ localDir: app.dataDir, remoteDir: 'data' }] : []);
const dataLayout = new DataLayout(appDataDir, app.storageVolumeId ? [{ localDir: await apps.getStorageDir(app), remoteDir: 'data' }] : []);
progressCallback({ message: `Uploading app snapshot ${app.fqdn}` });
@@ -910,7 +331,7 @@ async function uploadAppSnapshot(backupConfig, app, progressCallback) {
const startTime = new Date();
await runBackupUploadAsync(uploadConfig, progressCallback);
await runBackupUpload(uploadConfig, progressCallback);
debug(`uploadAppSnapshot: ${app.fqdn} uploaded to ${remotePath}. ${(new Date() - startTime)/1000} seconds`);
@@ -954,7 +375,7 @@ async function uploadMailSnapshot(backupConfig, progressCallback) {
const startTime = new Date();
await runBackupUploadAsync(uploadConfig, progressCallback);
await runBackupUpload(uploadConfig, progressCallback);
debug(`uploadMailSnapshot: took ${(new Date() - startTime)/1000} seconds`);
@@ -1025,8 +446,7 @@ async function downloadMail(restoreConfig, progressCallback) {
const startTime = new Date();
const downloadAsync = util.promisify(download);
await downloadAsync(restoreConfig.backupConfig, restoreConfig.remotePath, restoreConfig.backupFormat, dataLayout, progressCallback);
await download(restoreConfig.backupConfig, restoreConfig.remotePath, restoreConfig.backupFormat, dataLayout, progressCallback);
debug('downloadMail: time: %s', (new Date() - startTime)/1000);
}
+5 -4
@@ -140,7 +140,7 @@ async function runStartupTasks() {
// we used to run tasks in parallel but simultaneous nginx reloads was causing issues
for (let i = 0; i < tasks.length; i++) {
const [error] = await safe(tasks[i]());
if (error) debug(`Startup task at index ${i} failed: ${error.message}`);
if (error) debug(`Startup task at index ${i} failed: ${error.message} ${error.stack}`);
}
}
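The loop above deliberately runs startup tasks one at a time (the old parallel version caused simultaneous nginx reloads) and only logs failures instead of aborting. A self-contained sketch of that pattern, with safe() reimplemented as a tiny [error, result] wrapper (the real project uses the safetydance package):

```javascript
'use strict';

// Minimal stand-in for safetydance's safe(): resolves to [error] or [null, result].
const safe = (promise) => promise.then((result) => [null, result], (error) => [error]);

// Runs tasks sequentially; failures are recorded but do not stop later tasks.
async function runStartupTasks(tasks, log) {
    for (let i = 0; i < tasks.length; i++) {
        const [error] = await safe(tasks[i]());
        if (error) log(`Startup task at index ${i} failed: ${error.message}`);
    }
}
```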
@@ -155,6 +155,7 @@ async function getConfig() {
return {
apiServerOrigin: settings.apiServerOrigin(),
webServerOrigin: settings.webServerOrigin(),
consoleServerOrigin: settings.consoleServerOrigin(),
adminDomain: settings.dashboardDomain(),
adminFqdn: settings.dashboardFqdn(),
mailFqdn: settings.mailFqdn(),
@@ -266,12 +267,12 @@ async function prepareDashboardDomain(domain, auditSource) {
const domainObject = await domains.get(domain);
if (!domain) throw new BoxError(BoxError.NOT_FOUND, 'No such domain');
const fqdn = dns.fqdn(constants.DASHBOARD_LOCATION, domainObject);
const fqdn = dns.fqdn(constants.DASHBOARD_SUBDOMAIN, domainObject);
const result = await apps.list();
if (result.some(app => app.fqdn === fqdn)) throw new BoxError(BoxError.BAD_STATE, 'Dashboard location conflicts with an existing app');
const taskId = await tasks.add(tasks.TASK_SETUP_DNS_AND_CERT, [ constants.DASHBOARD_LOCATION, domain, auditSource ]);
const taskId = await tasks.add(tasks.TASK_SETUP_DNS_AND_CERT, [ constants.DASHBOARD_SUBDOMAIN, domain, auditSource ]);
tasks.startTask(taskId, {});
@@ -289,7 +290,7 @@ async function setDashboardDomain(domain, auditSource) {
if (!domain) throw new BoxError(BoxError.NOT_FOUND, 'No such domain');
await reverseProxy.writeDashboardConfig(domainObject);
const fqdn = dns.fqdn(constants.DASHBOARD_LOCATION, domainObject);
const fqdn = dns.fqdn(constants.DASHBOARD_SUBDOMAIN, domainObject);
await settings.setDashboardLocation(domain, fqdn);
+2 -22
@@ -1,34 +1,14 @@
LoadPlugin "table"
<Plugin table>
<Table "/sys/fs/cgroup/memory/docker/<%= containerId %>/memory.stat">
Instance "<%= appId %>-memory"
Separator " \\n"
<Result>
Type gauge
InstancesFrom 0
ValuesFrom 1
</Result>
</Table>
<Table "/sys/fs/cgroup/memory/docker/<%= containerId %>/memory.max_usage_in_bytes">
<Table "/sys/fs/cgroup/memory/docker/<%= containerId %>/memory.memsw.usage_in_bytes">
Instance "<%= appId %>-memory"
Separator "\\n"
<Result>
Type gauge
InstancePrefix "max_usage_in_bytes"
InstancePrefix "memsw_usage_in_bytes"
ValuesFrom 0
</Result>
</Table>
<Table "/sys/fs/cgroup/cpuacct/docker/<%= containerId %>/cpuacct.stat">
Instance "<%= appId %>-cpu"
Separator " \\n"
<Result>
Type gauge
InstancesFrom 0
ValuesFrom 1
</Result>
</Table>
</Plugin>
<Plugin python>
+7 -17
@@ -1,32 +1,22 @@
LoadPlugin "table"
<Plugin table>
<Table "/sys/fs/cgroup/docker/<%= containerId %>/memory.stat">
<Table "/sys/fs/cgroup/docker/<%= containerId %>/memory.current">
Instance "<%= appId %>-memory"
Separator " \\n"
<Result>
Type gauge
InstancesFrom 0
ValuesFrom 1
</Result>
</Table>
<Table "/sys/fs/cgroup/docker/<%= containerId %>/memory.max">
Instance "<%= appId %>-memory"
Separator "\\n"
<Result>
Type gauge
InstancePrefix "max_usage_in_bytes"
InstancePrefix "memory_current"
ValuesFrom 0
</Result>
</Table>
<Table "/sys/fs/cgroup/docker/<%= containerId %>/cpu.stat">
Instance "<%= appId %>-cpu"
Separator " \\n"
<Table "/sys/fs/cgroup/docker/<%= containerId %>/memory.swap.current">
Instance "<%= appId %>-memory"
Separator "\\n"
<Result>
Type gauge
InstancesFrom 0
ValuesFrom 1
InstancePrefix "memory_swap_current"
ValuesFrom 0
</Result>
</Table>
</Plugin>
+6 -4
@@ -7,8 +7,8 @@ const CLOUDRON = process.env.BOX_ENV === 'cloudron',
TEST = process.env.BOX_ENV === 'test';
exports = module.exports = {
SMTP_LOCATION: 'smtp',
IMAP_LOCATION: 'imap',
SMTP_SUBDOMAIN: 'smtp',
IMAP_SUBDOMAIN: 'imap',
// These are combined into one array because users and groups become mailboxes
RESERVED_NAMES: [
@@ -22,7 +22,7 @@ exports = module.exports = {
'admins', 'users' // ldap code uses 'users' pseudo group
],
DASHBOARD_LOCATION: 'my',
DASHBOARD_SUBDOMAIN: 'my',
PORT: CLOUDRON ? 3000 : 5454,
INTERNAL_SMTP_PORT: 2525, // this value comes from the mail container
@@ -49,6 +49,8 @@ exports = module.exports = {
],
DEMO_APP_LIMIT: 20,
PROXY_APP_APPSTORE_ID: 'io.cloudron.builtin.appproxy',
AUTOUPDATE_PATTERN_NEVER: 'never',
// the db field is a blob so we make this explicit
@@ -72,6 +74,6 @@ exports = module.exports = {
FOOTER: '&copy; %YEAR% &nbsp; [Cloudron](https://cloudron.io) &nbsp; &nbsp; &nbsp; [Forum <i class="fa fa-comments"></i>](https://forum.cloudron.io)',
VERSION: process.env.BOX_ENV === 'cloudron' ? fs.readFileSync(path.join(__dirname, '../VERSION'), 'utf8').trim() : '7.0.0-test'
VERSION: process.env.BOX_ENV === 'cloudron' ? fs.readFileSync(path.join(__dirname, '../VERSION'), 'utf8').trim() : '7.2.0-test'
};
+32 -11
@@ -28,13 +28,13 @@ const appHealthMonitor = require('./apphealthmonitor.js'),
dyndns = require('./dyndns.js'),
eventlog = require('./eventlog.js'),
janitor = require('./janitor.js'),
paths = require('./paths.js'),
safe = require('safetydance'),
scheduler = require('./scheduler.js'),
settings = require('./settings.js'),
system = require('./system.js'),
updater = require('./updater.js'),
updateChecker = require('./updatechecker.js'),
userdirectory = require('./userdirectory.js'),
_ = require('underscore');
const gJobs = {
@@ -61,12 +61,36 @@ const gJobs = {
// Months: 0-11
// Day of Week: 0-6
async function startJobs() {
debug('startJobs: starting cron jobs');
function getCronSeed() {
let hour = null;
let minute = null;
const seedData = safe.fs.readFileSync(paths.CRON_SEED_FILE, 'utf8') || '';
const parts = seedData.split(':');
if (parts.length === 2) {
hour = parseInt(parts[0]) || null;
minute = parseInt(parts[1]) || null;
}
if ((hour == null || hour < 0 || hour > 23) || (minute == null || minute < 0 || minute > 59)) {
hour = Math.floor(24 * Math.random());
minute = Math.floor(60 * Math.random());
debug(`getCronSeed: writing new cron seed file with ${hour}:${minute} to ${paths.CRON_SEED_FILE}`);
safe.fs.writeFileSync(paths.CRON_SEED_FILE, `${hour}:${minute}`);
}
return { hour, minute };
}
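getCronSeed above persists one random hour:minute pair per Cloudron so that daily jobs don't fire at the same wall-clock time on every installation, re-rolling the seed whenever the stored value is missing or out of range. The validation half can be exercised in isolation (the helper name is hypothetical; note valid minutes are 0-59):

```javascript
'use strict';

// Parses an "HH:MM" seed string; returns null when it is absent or invalid,
// signalling that a fresh random seed should be generated and persisted.
function parseCronSeed(seedData) {
    const parts = (seedData || '').split(':');
    if (parts.length !== 2) return null;
    const hour = parseInt(parts[0], 10);
    const minute = parseInt(parts[1], 10);
    if (!Number.isInteger(hour) || hour < 0 || hour > 23) return null;
    if (!Number.isInteger(minute) || minute < 0 || minute > 59) return null;
    return { hour, minute };
}
```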
async function startJobs() {
const { hour, minute } = getCronSeed();
debug(`startJobs: starting cron jobs with hour ${hour} and minute ${minute}`);
const randomTick = Math.floor(60*Math.random());
gJobs.systemChecks = new CronJob({
cronTime: `${randomTick} ${randomTick} 2 * * *`, // once a day. if you change this interval, change the notification messages with correct duration
cronTime: `00 ${minute} 2 * * *`, // once a day. if you change this interval, change the notification messages with correct duration
onTick: async () => await safe(cloudron.runSystemChecks(), { debug }),
start: true
});
@@ -79,7 +103,7 @@ async function startJobs() {
// this is run separately from the update itself so that the user can disable automatic updates but can still get a notification
gJobs.updateCheckerJob = new CronJob({
cronTime: `${randomTick} ${randomTick} 1,5,9,13,17,21,23 * * *`,
cronTime: `00 ${minute} 1,5,9,13,17,21,23 * * *`,
onTick: async () => await safe(updateChecker.checkForUpdates({ automatic: true }), { debug }),
start: true
});
@@ -114,8 +138,9 @@ async function startJobs() {
start: true
});
// randomized per Cloudron based on the cron seed hour
gJobs.certificateRenew = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
cronTime: `00 10 ${hour} * * *`,
onTick: async () => await safe(cloudron.renewCerts({}, AuditSource.CRON), { debug }),
start: true
});
@@ -148,10 +173,6 @@ async function handleSettingsChanged(key, value) {
await stopJobs();
await startJobs();
break;
case settings.USER_DIRECTORY_KEY:
if (value.enabled) await userdirectory.start();
else await userdirectory.stop();
break;
default:
break;
}
+2 -1
@@ -61,8 +61,9 @@ async function initialize() {
// note the pool also has an 'acquire' event but that is called whenever we do a getConnection()
connection.on('error', (error) => debug(`Connection ${connection.threadId} error: ${error.message} ${error.code}`));
connection.query('USE ' + gDatabase.name);
connection.query(`USE ${gDatabase.name}`);
connection.query('SET SESSION sql_mode = \'strict_all_tables\'');
connection.query('SET SESSION group_concat_max_len = 65536'); // GROUP_CONCAT defaults to only 1024
});
}
+41 -59
@@ -11,7 +11,7 @@ exports = module.exports = {
const assert = require('assert'),
BoxError = require('./boxerror.js'),
constants = require('./constants.js'),
debug = require('debug')('box:userdirectory'),
debug = require('debug')('box:directoryserver'),
dns = require('./dns.js'),
domains = require('./domains.js'),
eventlog = require('./eventlog.js'),
@@ -23,6 +23,7 @@ const assert = require('assert'),
reverseproxy = require('./reverseproxy.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
speakeasy = require('speakeasy'),
shell = require('./shell.js'),
users = require('./users.js'),
util = require('util'),
@@ -32,8 +33,6 @@ let gServer = null;
const NOOP = function () {};
const GROUP_USERS_DN = 'cn=users,ou=groups,dc=cloudron';
const GROUP_ADMINS_DN = 'cn=admins,ou=groups,dc=cloudron';
const SET_LDAP_ALLOWLIST_CMD = path.join(__dirname, 'scripts/setldapallowlist.sh');
async function validateConfig(config) {
@@ -68,6 +67,8 @@ async function applyConfig(config) {
const [error] = await safe(shell.promises.sudo('setLdapAllowlist', [ SET_LDAP_ALLOWLIST_CMD ], {}));
if (error) throw new BoxError(BoxError.IPTABLES_ERROR, `Error setting ldap allowlist: ${error.message}`);
if (config.enabled) await start(); else await stop();
}
// helper function to deal with pagination
@@ -145,20 +146,20 @@ async function authorize(req, res, next) {
async function userSearch(req, res, next) {
debug('user search: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
const [error, result] = await safe(users.list());
const [error, allUsers] = await safe(users.list());
if (error) return next(new ldap.OperationsError(error.toString()));
const [groupsError, allGroups] = await safe(groups.listWithMembers());
if (groupsError) return next(new ldap.OperationsError(groupsError.toString()));
let results = [];
// send user objects
result.forEach(function (user) {
for (const user of allUsers) {
// skip entries with empty username. Some apps like owncloud can't deal with this
if (!user.username) return;
if (!user.username) continue;
const dn = ldap.parseDN('cn=' + user.id + ',ou=users,dc=cloudron');
const memberof = [ GROUP_USERS_DN ];
if (users.compareRoles(user.role, users.ROLE_ADMIN) >= 0) memberof.push(GROUP_ADMINS_DN);
const dn = ldap.parseDN(`cn=${user.id},ou=users,dc=cloudron`);
const displayName = user.displayName || user.username || ''; // displayName can be empty and username can be null
const nameParts = displayName.split(' ');
@@ -179,10 +180,12 @@ async function userSearch(req, res, next) {
givenName: firstName,
username: user.username,
samaccountname: user.username, // to support ActiveDirectory clients
memberof: memberof
memberof: allGroups.filter(function (g) { return g.userIds.indexOf(user.id) !== -1; }).map(function (g) { return g.name; })
}
};
if (user.twoFactorAuthenticationEnabled) obj.attributes.twoFactorAuthenticationEnabled = true;
// http://www.zytrax.com/books/ldap/ape/core-schema.html#sn has 'name' as SUP which is a DirectoryString
// which is required to have at least one character if present
if (lastName.length !== 0) obj.attributes.sn = lastName;
@@ -194,7 +197,7 @@ async function userSearch(req, res, next) {
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
results.push(obj);
}
});
}
finalSend(results, req, res, next);
}
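The reworked userSearch derives memberof from real group membership instead of the two fixed virtual groups. The core lookup is just a filter over the groups list; as a plain-object sketch:

```javascript
'use strict';

// Given all groups (each with a userIds array), returns the names of the
// groups a user belongs to - the value exposed as the LDAP memberof attribute.
function memberOf(allGroups, userId) {
    return allGroups
        .filter((g) => g.userIds.indexOf(userId) !== -1)
        .map((g) => g.name);
}
```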
@@ -202,54 +205,24 @@ async function userSearch(req, res, next) {
async function groupSearch(req, res, next) {
debug('group search: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
const [error, result] = await safe(users.list());
const [error, allUsers] = await safe(users.list());
if (error) return next(new ldap.OperationsError(error.toString()));
const results = [];
// those are the old virtual groups for backwards compat
const virtualGroups = [{
name: 'users',
admin: false
}, {
name: 'admins',
admin: true
}];
virtualGroups.forEach(function (group) {
const dn = ldap.parseDN('cn=' + group.name + ',ou=groups,dc=cloudron');
const members = group.admin ? result.filter(function (user) { return users.compareRoles(user.role, users.ROLE_ADMIN) >= 0; }) : result;
const obj = {
dn: dn.toString(),
attributes: {
objectclass: ['group'],
cn: group.name,
memberuid: members.map(function(entry) { return entry.id; }).sort()
}
};
// ensure all filter values are also lowercase
const lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
results.push(obj);
}
});
let [errorGroups, resultGroups] = await safe(groups.listWithMembers());
let [errorGroups, allGroups] = await safe(groups.listWithMembers());
if (errorGroups) return next(new ldap.OperationsError(errorGroups.toString()));
resultGroups.forEach(function (group) {
const dn = ldap.parseDN('cn=' + group.name + ',ou=groups,dc=cloudron');
const members = group.userIds.filter(function (uid) { return result.map(function (u) { return u.id; }).indexOf(uid) !== -1; });
for (const group of allGroups) {
const dn = ldap.parseDN(`cn=${group.name},ou=groups,dc=cloudron`);
const members = group.userIds.filter(function (uid) { return allUsers.map(function (u) { return u.id; }).indexOf(uid) !== -1; });
const obj = {
dn: dn.toString(),
attributes: {
objectclass: ['group'],
cn: group.name,
gidnumber: group.id,
memberuid: members
}
};
@@ -261,7 +234,7 @@ async function groupSearch(req, res, next) {
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
results.push(obj);
}
});
}
finalSend(results, req, res, next);
}
@@ -269,12 +242,15 @@ async function groupSearch(req, res, next) {
// Will attach req.user if successful
async function userAuth(req, res, next) {
// extract the common name which might have different attribute names
const attributeName = Object.keys(req.dn.rdns[0].attrs)[0];
const commonName = req.dn.rdns[0].attrs[attributeName].value;
const cnAttributeName = Object.keys(req.dn.rdns[0].attrs)[0];
const commonName = req.dn.rdns[0].attrs[cnAttributeName].value;
if (!commonName) return next(new ldap.NoSuchObjectError(req.dn.toString()));
const TOTPTOKEN_ATTRIBUTE_NAME = 'totptoken'; // This has to be in-sync with externalldap.js
const totpToken = req.dn.rdns[0].attrs[TOTPTOKEN_ATTRIBUTE_NAME] ? req.dn.rdns[0].attrs[TOTPTOKEN_ATTRIBUTE_NAME].value : null;
let verifyFunc;
if (attributeName === 'mail') {
if (cnAttributeName === 'mail') {
verifyFunc = users.verifyWithEmail;
} else if (commonName.indexOf('@') !== -1) { // if mail is specified, enforce mail check
verifyFunc = users.verifyWithEmail;
@@ -289,6 +265,12 @@ async function userAuth(req, res, next) {
if (error && error.reason === BoxError.INVALID_CREDENTIALS) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
if (error) return next(new ldap.OperationsError(error.message));
// currently this is only optional if totpToken is provided and user has 2fa enabled
if (totpToken && user.twoFactorAuthenticationEnabled) {
const verified = speakeasy.totp.verify({ secret: user.twoFactorAuthenticationSecret, encoding: 'base32', token: totpToken, window: 2 });
if (!verified) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
}
req.user = user;
next();
@@ -308,12 +290,12 @@ async function start() {
};
const domainObject = await domains.get(settings.dashboardDomain());
const dashboardFqdn = dns.fqdn(constants.DASHBOARD_LOCATION, domainObject);
const bundle = await reverseproxy.getCertificatePath(dashboardFqdn, domainObject.domain);
const dashboardFqdn = dns.fqdn(constants.DASHBOARD_SUBDOMAIN, domainObject);
const certificatePath = await reverseproxy.getCertificatePath(dashboardFqdn, domainObject.domain);
gServer = ldap.createServer({
certificate: fs.readFileSync(bundle.certFilePath, 'utf8'),
key: fs.readFileSync(bundle.keyFilePath, 'utf8'),
certificate: fs.readFileSync(certificatePath.certFilePath, 'utf8'),
key: fs.readFileSync(certificatePath.keyFilePath, 'utf8'),
log: logger
});
@@ -324,12 +306,12 @@ async function start() {
gServer.bind('ou=system,dc=cloudron', async function(req, res, next) {
debug('system bind: %s (from %s)', req.dn.toString(), req.connection.ldap.id);
const tmp = await settings.getUserDirectoryConfig();
const tmp = await settings.getDirectoryServerConfig();
if (!req.dn.equals(constants.USER_DIRECTORY_LDAP_DN)) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
if (req.credentials !== tmp.secret) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
req.user = { user: 'userDirectoryAdmin' };
req.user = { user: 'directoryServerAdmin' };
res.end();
@@ -342,7 +324,7 @@ async function start() {
gServer.bind('ou=users,dc=cloudron', userAuth, async function (req, res) {
assert.strictEqual(typeof req.user, 'object');
await eventlog.upsertLoginEvent(eventlog.ACTION_USER_LOGIN, { authType: 'userdirectory', id: req.connection.ldap.id }, { userId: req.user.id, user: users.removePrivateFields(req.user) });
await eventlog.upsertLoginEvent(req.user.ghost ? eventlog.ACTION_USER_LOGIN_GHOST : eventlog.ACTION_USER_LOGIN, { authType: 'directoryserver', id: req.connection.ldap.id }, { userId: req.user.id, user: users.removePrivateFields(req.user) });
res.end();
});
+10 -9
@@ -51,6 +51,7 @@ function api(provider) {
case 'namecom': return require('./dns/namecom.js');
case 'namecheap': return require('./dns/namecheap.js');
case 'netcup': return require('./dns/netcup.js');
case 'hetzner': return require('./dns/hetzner.js');
case 'noop': return require('./dns/noop.js');
case 'manual': return require('./dns/manual.js');
case 'wildcard': return require('./dns/wildcard.js');
@@ -75,11 +76,11 @@ function validateHostname(subdomain, domainObject) {
const hostname = fqdn(subdomain, domainObject);
const RESERVED_LOCATIONS = [
constants.SMTP_LOCATION,
constants.IMAP_LOCATION
const RESERVED_SUBDOMAINS = [
constants.SMTP_SUBDOMAIN,
constants.IMAP_SUBDOMAIN
];
if (RESERVED_LOCATIONS.indexOf(subdomain) !== -1) return new BoxError(BoxError.BAD_FIELD, `subdomain '${subdomain}' is reserved`);
if (RESERVED_SUBDOMAINS.indexOf(subdomain) !== -1) return new BoxError(BoxError.BAD_FIELD, `subdomain '${subdomain}' is reserved`);
if (hostname === settings.dashboardFqdn()) return new BoxError(BoxError.BAD_FIELD, `subdomain '${subdomain}' is reserved`);
@@ -183,11 +184,11 @@ async function waitForDnsRecord(subdomain, domain, type, value, options) {
await api(domainObject.provider).wait(domainObject, subdomain, type, value, options);
}
function makeWildcard(vhost) {
assert.strictEqual(typeof vhost, 'string');
function makeWildcard(fqdn) {
assert.strictEqual(typeof fqdn, 'string');
// if the vhost is like *.example.com, this function will do nothing
let parts = vhost.split('.');
// if the fqdn is like *.example.com, this function will do nothing
const parts = fqdn.split('.');
parts[0] = '*';
return parts.join('.');
}
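makeWildcard simply swaps the leftmost label for '*', which also makes it idempotent on inputs that are already wildcards, as the comment notes. Copying the function from the diff above, its behaviour is:

```javascript
'use strict';

// Replaces the leftmost DNS label with '*'; already-wildcarded names pass through unchanged.
function makeWildcard(fqdn) {
    const parts = fqdn.split('.');
    parts[0] = '*';
    return parts.join('.');
}
```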
@@ -294,7 +295,7 @@ async function syncDnsRecords(options, progressCallback) {
progress += Math.round(100/(1+allDomains.length));
let locations = [];
if (domain.domain === settings.dashboardDomain()) locations.push({ subdomain: constants.DASHBOARD_LOCATION, domain: settings.dashboardDomain() });
if (domain.domain === settings.dashboardDomain()) locations.push({ subdomain: constants.DASHBOARD_SUBDOMAIN, domain: settings.dashboardDomain() });
if (domain.domain === settings.mailDomain() && settings.mailFqdn() !== settings.dashboardFqdn()) locations.push({ subdomain: mailSubdomain, domain: settings.mailDomain() });
for (const app of allApps) {
+1 -2
@@ -18,13 +18,12 @@ const assert = require('assert'),
dns = require('../dns.js'),
safe = require('safetydance'),
superagent = require('superagent'),
util = require('util'),
waitForDns = require('./waitfordns.js');
const DIGITALOCEAN_ENDPOINT = 'https://api.digitalocean.com';
function formatError(response) {
return util.format('DigitalOcean DNS error [%s] %j', response.statusCode, response.body);
return `DigitalOcean DNS error ${response.statusCode} ${JSON.stringify(response.body)}`;
}
function removePrivateFields(domainObject) {
+8 -1
@@ -126,6 +126,13 @@ async function del(domainObject, location, type, values) {
debug(`del: ${name} in zone ${zoneName} of type ${type} with values ${JSON.stringify(values)}`);
const result = await get(domainObject, location, type);
if (result.length === 0) return;
const tmp = result.filter(r => !values.includes(r));
if (tmp.length) return await upsert(domainObject, location, type, tmp); // only remove 'values'
const [error, response] = await safe(superagent.del(`${GODADDY_API}/${zoneName}/records/${type}/${name}`)
.set('Authorization', `sso-key ${domainConfig.apiKey}:${domainConfig.apiSecret}`)
.timeout(30 * 1000)
@@ -171,7 +178,7 @@ async function verifyDomainConfig(domainObject) {
if (error && error.code === 'ENOTFOUND') throw new BoxError(BoxError.BAD_FIELD, 'Unable to resolve nameservers for this domain');
if (error || !nameservers) throw new BoxError(BoxError.BAD_FIELD, error ? error.message : 'Unable to get nameservers');
if (!nameservers.every(function (n) { return n.toLowerCase().indexOf('.domaincontrol.com') !== -1; })) {
if (!nameservers.every(function (n) { return n.toLowerCase().indexOf('.domaincontrol.com') !== -1 || n.toLowerCase().indexOf('.secureserver.net') !== -1; })) {
debug('verifyDomainConfig: %j does not contain GoDaddy NS', nameservers);
throw new BoxError(BoxError.BAD_FIELD, 'Domain nameservers are not set to GoDaddy');
}
+259
@@ -0,0 +1,259 @@
'use strict';
exports = module.exports = {
removePrivateFields,
injectPrivateFields,
upsert,
get,
del,
wait,
verifyDomainConfig
};
const assert = require('assert'),
BoxError = require('../boxerror.js'),
constants = require('../constants.js'),
debug = require('debug')('box:dns/hetzner'),
dig = require('../dig.js'),
dns = require('../dns.js'),
safe = require('safetydance'),
superagent = require('superagent'),
waitForDns = require('./waitfordns.js');
const ENDPOINT = 'https://dns.hetzner.com/api/v1';
function formatError(response) {
return `Hetzner DNS error ${response.statusCode} ${JSON.stringify(response.body)}`;
}
function removePrivateFields(domainObject) {
domainObject.config.token = constants.SECRET_PLACEHOLDER;
return domainObject;
}
function injectPrivateFields(newConfig, currentConfig) {
if (newConfig.token === constants.SECRET_PLACEHOLDER) newConfig.token = currentConfig.token;
}
async function getZone(domainConfig, zoneName) {
assert.strictEqual(typeof domainConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
const [error, response] = await safe(superagent.get(`${ENDPOINT}/zones`)
.set('Auth-API-Token', domainConfig.token)
.query({ search_name: zoneName })
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.statusCode === 401 || response.statusCode === 403) throw new BoxError(BoxError.ACCESS_DENIED, formatError(response));
if (response.statusCode !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
if (!Array.isArray(response.body.zones)) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
const zone = response.body.zones.filter(z => z.name === zoneName);
if (zone.length === 0) throw new BoxError(BoxError.NOT_FOUND, formatError(response));
return zone[0];
}
async function getZoneRecords(domainConfig, zone, name, type) {
assert.strictEqual(typeof domainConfig, 'object');
assert.strictEqual(typeof zone, 'object');
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof type, 'string');
let page = 1, matchingRecords = [];
debug(`getZoneRecords: getting dns records of ${zone.name} with ${name} and type ${type}`);
const perPage = 50;
// eslint-disable-next-line no-constant-condition
while (true) {
const [error, response] = await safe(superagent.get(`${ENDPOINT}/records`)
.set('Auth-API-Token', domainConfig.token)
.query({ zone_id: zone.id, page, per_page: perPage })
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.statusCode === 404) throw new BoxError(BoxError.NOT_FOUND, formatError(response));
if (response.statusCode === 401 || response.statusCode === 403) throw new BoxError(BoxError.ACCESS_DENIED, formatError(response));
if (response.statusCode !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
matchingRecords = matchingRecords.concat(response.body.records.filter(function (record) {
return (record.type === type && record.name === name);
}));
if (response.body.records.length < perPage) break;
++page;
}
return matchingRecords;
}
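The paging loop above (fetch `per_page` records at a time, stop when a short page comes back) is a general pattern. A minimal standalone sketch, with a stubbed `fetchPage` standing in for the superagent call to `/records`:

```javascript
// Generic "page until a short page" accumulator, mirroring the loop in
// getZoneRecords above. fetchPage(page, perPage) is a stand-in for the
// actual paginated API call.
async function fetchAllPages(fetchPage, perPage) {
    let page = 1;
    const all = [];
    for (;;) {
        const records = await fetchPage(page, perPage);
        all.push(...records);
        if (records.length < perPage) break; // a short page means we are done
        ++page;
    }
    return all;
}

// Example against an in-memory stub of 120 records with perPage=50:
// pages of 50, 50 and 20 records are fetched, then the loop stops.
const data = Array.from({ length: 120 }, (_, i) => i);
fetchAllPages((page, perPage) => data.slice((page - 1) * perPage, page * perPage), 50)
    .then(all => console.log(all.length)); // 120
```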
async function upsert(domainObject, location, type, values) {
assert.strictEqual(typeof domainObject, 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof type, 'string');
assert(Array.isArray(values));
const domainConfig = domainObject.config,
zoneName = domainObject.zoneName,
name = dns.getName(domainObject, location, type) || '@';
debug('upsert: %s for zone %s of type %s with values %j', name, zoneName, type, values);
const zone = await getZone(domainConfig, zoneName);
const records = await getZoneRecords(domainConfig, zone, name, type);
// used to track available records to update instead of create
let i = 0;
for (let value of values) {
const data = {
type,
name,
value,
ttl: 60,
zone_id: zone.id
};
if (i >= records.length) {
const [error, response] = await safe(superagent.post(`${ENDPOINT}/records`)
.set('Auth-API-Token', domainConfig.token)
.send(data)
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.statusCode === 403 || response.statusCode === 401) throw new BoxError(BoxError.ACCESS_DENIED, formatError(response));
if (response.statusCode === 422) throw new BoxError(BoxError.BAD_FIELD, response.body.message);
if (response.statusCode !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
} else {
const [error, response] = await safe(superagent.put(`${ENDPOINT}/records/${records[i].id}`)
.set('Auth-API-Token', domainConfig.token)
.send(data)
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
++i;
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.statusCode === 403 || response.statusCode === 401) throw new BoxError(BoxError.ACCESS_DENIED, formatError(response));
if (response.statusCode === 422) throw new BoxError(BoxError.BAD_FIELD, response.body.message);
if (response.statusCode !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
}
}
for (let j = values.length; j < records.length; j++) { // leftover records past the desired values
const [error] = await safe(superagent.del(`${ENDPOINT}/records/${records[j].id}`)
.set('Auth-API-Token', domainConfig.token)
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
if (error) debug(`upsert: error removing record ${records[j].id}: ${error.message}`);
}
debug('upsert: completed');
}
async function get(domainObject, location, type) {
assert.strictEqual(typeof domainObject, 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof type, 'string');
const domainConfig = domainObject.config,
zoneName = domainObject.zoneName,
name = dns.getName(domainObject, location, type) || '@';
const zone = await getZone(domainConfig, zoneName);
const result = await getZoneRecords(domainConfig, zone, name, type);
return result.map(function (record) { return record.value; });
}
async function del(domainObject, location, type, values) {
assert.strictEqual(typeof domainObject, 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof type, 'string');
assert(Array.isArray(values));
const domainConfig = domainObject.config,
zoneName = domainObject.zoneName,
name = dns.getName(domainObject, location, type) || '@';
const zone = await getZone(domainConfig, zoneName);
const records = await getZoneRecords(domainConfig, zone, name, type);
if (records.length === 0) return;
const matchingRecords = records.filter(function (record) { return values.some(function (value) { return value === record.value; }); });
if (matchingRecords.length === 0) return;
for (const r of matchingRecords) {
const [error, response] = await safe(superagent.del(`${ENDPOINT}/records/${r.id}`)
.set('Auth-API-Token', domainConfig.token)
.timeout(30 * 1000)
.retry(5)
.ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.statusCode === 404) return;
if (response.statusCode === 403 || response.statusCode === 401) throw new BoxError(BoxError.ACCESS_DENIED, formatError(response));
if (response.statusCode !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, formatError(response));
}
}
async function wait(domainObject, subdomain, type, value, options) {
assert.strictEqual(typeof domainObject, 'object');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert(options && typeof options === 'object'); // { interval: 5000, times: 50000 }
const fqdn = dns.fqdn(subdomain, domainObject);
await waitForDns(fqdn, domainObject.zoneName, type, value, options);
}
async function verifyDomainConfig(domainObject) {
assert.strictEqual(typeof domainObject, 'object');
const domainConfig = domainObject.config,
zoneName = domainObject.zoneName;
if (!domainConfig.token || typeof domainConfig.token !== 'string') throw new BoxError(BoxError.BAD_FIELD, 'token must be a non-empty string');
const ip = '127.0.0.1';
const credentials = {
token: domainConfig.token
};
if (process.env.BOX_ENV === 'test') return credentials; // this shouldn't be here
const [error, nameservers] = await safe(dig.resolve(zoneName, 'NS', { timeout: 5000 }));
if (error && error.code === 'ENOTFOUND') throw new BoxError(BoxError.BAD_FIELD, 'Unable to resolve nameservers for this domain');
if (error || !nameservers) throw new BoxError(BoxError.BAD_FIELD, error ? error.message : 'Unable to get nameservers');
// https://docs.hetzner.com/dns-console/dns/general/dns-overview#the-hetzner-online-name-servers-are
if (nameservers.map(function (n) { return n.toLowerCase(); }).indexOf('oxygen.ns.hetzner.com') === -1) {
debug('verifyDomainConfig: %j does not contain Hetzner NS', nameservers);
throw new BoxError(BoxError.BAD_FIELD, 'Domain nameservers are not set to Hetzner');
}
const location = 'cloudrontestdns';
await upsert(domainObject, location, 'A', [ ip ]);
debug('verifyDomainConfig: Test A record added');
await del(domainObject, location, 'A', [ ip ]);
debug('verifyDomainConfig: Test A record removed again');
return credentials;
}
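The upsert above keeps its reconciliation state in an index counter: update record slot `i` if one exists, otherwise create, then delete any leftover records past `values.length`. The same decision can be written as a pure plan, which makes the three buckets explicit (hypothetical helper, not part of the module):

```javascript
// Pure reconciliation plan: given the ids of existing records and the
// desired values, decide which records to update in place, which values
// need new records, and which leftover records to delete.
function planUpsert(existingIds, values) {
    const updates = [], creates = [];
    values.forEach((value, i) => {
        if (i < existingIds.length) updates.push({ id: existingIds[i], value });
        else creates.push({ value });
    });
    const deletions = existingIds.slice(values.length); // leftovers start at index values.length
    return { updates, creates, deletions };
}

console.log(planUpsert([ 'r1', 'r2', 'r3' ], [ '1.2.3.4' ]));
// { updates: [ { id: 'r1', value: '1.2.3.4' } ], creates: [], deletions: [ 'r2', 'r3' ] }
```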
+1 -1
@@ -99,7 +99,7 @@ async function upsert(domainObject, location, type, values) {
for (const value of values) {
const data = {
type,
ttl: 300 // lowest
ttl: 120 // lowest
};
if (type === 'MX') {
+48 -73
@@ -20,13 +20,15 @@ exports = module.exports = {
createSubcontainer,
inspect,
getContainerIp,
execContainer,
getEvents,
memoryUsage,
createVolume,
removeVolume,
clearVolume,
update,
createExec,
startExec,
getExec,
resizeExec
};
const apps = require('./apps.js'),
@@ -36,7 +38,6 @@ const apps = require('./apps.js'),
debug = require('debug')('box:docker'),
delay = require('./delay.js'),
Docker = require('dockerode'),
path = require('path'),
reverseProxy = require('./reverseproxy.js'),
services = require('./services.js'),
settings = require('./settings.js'),
@@ -46,9 +47,6 @@ const apps = require('./apps.js'),
volumes = require('./volumes.js'),
_ = require('underscore');
const CLEARVOLUME_CMD = path.join(__dirname, 'scripts/clearvolume.sh'),
MKDIRVOLUME_CMD = path.join(__dirname, 'scripts/mkdirvolume.sh');
const DOCKER_SOCKET_PATH = '/var/run/docker.sock';
const gConnection = new Docker({ socketPath: DOCKER_SOCKET_PATH });
@@ -194,28 +192,30 @@ async function getAddonMounts(app) {
for (const addon of Object.keys(addons)) {
switch (addon) {
case 'localstorage':
case 'localstorage': {
const storageDir = await apps.getStorageDir(app);
mounts.push({
Target: '/app/data',
Source: `${app.id}-localstorage`,
Type: 'volume',
Source: storageDir,
Type: 'bind',
ReadOnly: false
});
break;
}
case 'tls': {
const bundle = await reverseProxy.getCertificatePath(app.fqdn, app.domain);
const certificatePath = await reverseProxy.getCertificatePath(app.fqdn, app.domain);
mounts.push({
Target: '/etc/certs/tls_cert.pem',
Source: bundle.certFilePath,
Source: certificatePath.certFilePath,
Type: 'bind',
ReadOnly: true
});
mounts.push({
Target: '/etc/certs/tls_key.pem',
Source: bundle.keyFilePath,
Source: certificatePath.keyFilePath,
Type: 'bind',
ReadOnly: true
});
@@ -394,6 +394,7 @@ async function createSubcontainer(app, name, cmd, options) {
// ipv6 for new interfaces is disabled in the container. this prevents the openvpn tun device having ipv6
// See https://github.com/moby/moby/issues/20569 and https://github.com/moby/moby/issues/33099
containerOptions.HostConfig.Sysctls['net.ipv6.conf.all.disable_ipv6'] = '0';
containerOptions.HostConfig.Sysctls['net.ipv6.conf.all.forwarding'] = '1';
}
if (capabilities.includes('mlock')) containerOptions.HostConfig.CapAdd.push('IPC_LOCK'); // mlock prevents swapping
if (!capabilities.includes('ping')) containerOptions.HostConfig.CapDrop.push('NET_RAW'); // NET_RAW is included by default by Docker
@@ -558,30 +559,51 @@ async function getContainerIp(containerId) {
return ip;
}
async function execContainer(containerId, options) {
async function createExec(containerId, options) {
assert.strictEqual(typeof containerId, 'string');
assert.strictEqual(typeof options, 'object');
const container = gConnection.getContainer(containerId);
const [error, exec] = await safe(container.exec(options.execOptions));
const [error, exec] = await safe(container.exec(options));
if (error && error.statusCode === 404) throw new BoxError(BoxError.NOT_FOUND);
if (error && error.statusCode === 409) throw new BoxError(BoxError.BAD_STATE, error.message); // container restarting/not running
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
const [startError, stream] = await safe(exec.start(options.startOptions)); /* in hijacked mode, stream is a net.socket */
if (startError) throw new BoxError(BoxError.DOCKER_ERROR, startError);
return exec.id;
}
if (options.rows && options.columns) {
// there is a race where resizing too early results in a 404 "no such exec"
// https://git.cloudron.io/cloudron/box/issues/549
setTimeout(function () {
exec.resize({ h: options.rows, w: options.columns }, function (error) { if (error) debug('Error resizing console', error); });
}, 2000);
}
async function startExec(execId, options) {
assert.strictEqual(typeof execId, 'string');
assert.strictEqual(typeof options, 'object');
const exec = gConnection.getExec(execId);
const [error, stream] = await safe(exec.start(options)); /* in hijacked mode, stream is a net.socket */
if (error && error.statusCode === 404) throw new BoxError(BoxError.NOT_FOUND);
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
return stream;
}
async function getExec(execId) {
assert.strictEqual(typeof execId, 'string');
const exec = gConnection.getExec(execId);
const [error, result] = await safe(exec.inspect());
if (error && error.statusCode === 404) throw new BoxError(BoxError.NOT_FOUND, `Unable to find exec container ${execId}`);
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
return { exitCode: result.ExitCode, running: result.Running };
}
async function resizeExec(execId, options) {
assert.strictEqual(typeof execId, 'string');
assert.strictEqual(typeof options, 'object');
const exec = gConnection.getExec(execId);
const [error] = await safe(exec.resize(options)); // { h, w }
if (error && error.statusCode === 404) throw new BoxError(BoxError.NOT_FOUND);
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
}
async function getEvents(options) {
assert.strictEqual(typeof options, 'object');
@@ -602,53 +624,6 @@ async function memoryUsage(containerId) {
return result;
}
async function createVolume(name, volumeDataDir, labels) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof volumeDataDir, 'string');
assert.strictEqual(typeof labels, 'object');
const volumeOptions = {
Name: name,
Driver: 'local',
DriverOpts: { // https://github.com/moby/moby/issues/19990#issuecomment-248955005
type: 'none',
device: volumeDataDir,
o: 'bind'
},
Labels: labels
};
// requires sudo because the path can be outside appsdata
let [error] = await safe(shell.promises.sudo('createVolume', [ MKDIRVOLUME_CMD, volumeDataDir ], {}));
if (error) throw new BoxError(BoxError.FS_ERROR, `Error creating app data dir: ${error.message}`);
[error] = await safe(gConnection.createVolume(volumeOptions));
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
}
async function clearVolume(name, options) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof options, 'object');
let volume = gConnection.getVolume(name);
let [error, v] = await safe(volume.inspect());
if (error && error.statusCode === 404) return;
if (error) throw new BoxError(BoxError.DOCKER_ERROR, error);
const volumeDataDir = v.Options.device;
[error] = await shell.promises.sudo('clearVolume', [ CLEARVOLUME_CMD, options.removeDirectory ? 'rmdir' : 'clear', volumeDataDir ], {});
if (error) throw new BoxError(BoxError.FS_ERROR, error);
}
// this only removes the volume and not the data
async function removeVolume(name) {
assert.strictEqual(typeof name, 'string');
let volume = gConnection.getVolume(name);
const [error] = await safe(volume.remove());
if (error && error.statusCode !== 404) throw new BoxError(BoxError.DOCKER_ERROR, `removeVolume: Error removing volume: ${error.message}`);
}
async function info() {
const [error, result] = await safe(gConnection.info());
if (error) throw new BoxError(BoxError.DOCKER_ERROR, 'Error connecting to docker');
+21 -25
@@ -54,6 +54,7 @@ function api(provider) {
case 'digitalocean': return require('./dns/digitalocean.js');
case 'gandi': return require('./dns/gandi.js');
case 'godaddy': return require('./dns/godaddy.js');
case 'hetzner': return require('./dns/hetzner.js');
case 'linode': return require('./dns/linode.js');
case 'vultr': return require('./dns/vultr.js');
case 'namecom': return require('./dns/namecom.js');
@@ -76,13 +77,13 @@ async function verifyDomainConfig(domainConfig, domain, zoneName, provider) {
if (!backend) throw new BoxError(BoxError.BAD_FIELD, 'Invalid provider');
const domainObject = { config: domainConfig, domain: domain, zoneName: zoneName };
const [error, result] = await safe(api(provider).verifyDomainConfig(domainObject));
if (error && error.reason === BoxError.ACCESS_DENIED) return { error: new BoxError(BoxError.BAD_FIELD, `Access denied: ${error.message}`) };
if (error && error.reason === BoxError.NOT_FOUND) return { error: new BoxError(BoxError.BAD_FIELD, `Zone not found: ${error.message}`) };
if (error && error.reason === BoxError.EXTERNAL_ERROR) return { error: new BoxError(BoxError.BAD_FIELD, `Configuration error: ${error.message}`) };
if (error) return { error };
const [error, sanitizedConfig] = await safe(api(provider).verifyDomainConfig(domainObject));
if (error && error.reason === BoxError.ACCESS_DENIED) throw new BoxError(BoxError.BAD_FIELD, `Access denied: ${error.message}`);
if (error && error.reason === BoxError.NOT_FOUND) throw new BoxError(BoxError.BAD_FIELD, `Zone not found: ${error.message}`);
if (error && error.reason === BoxError.EXTERNAL_ERROR) throw new BoxError(BoxError.BAD_FIELD, `Configuration error: ${error.message}`);
if (error) throw error;
return { error: null, sanitizedConfig: result };
return sanitizedConfig;
}
function validateTlsConfig(tlsConfig, dnsProvider) {
@@ -150,12 +151,11 @@ async function add(domain, data, auditSource) {
dkimSelector = `cloudron-${suffix}`;
}
const result = await verifyDomainConfig(config, domain, zoneName, provider);
if (result.error) throw result.error;
const sanitizedConfig = await verifyDomainConfig(config, domain, zoneName, provider);
let queries = [
const queries = [
{ query: 'INSERT INTO domains (domain, zoneName, provider, configJson, tlsConfigJson, fallbackCertificateJson) VALUES (?, ?, ?, ?, ?, ?)',
args: [ domain, zoneName, provider, JSON.stringify(result.sanitizedConfig), JSON.stringify(tlsConfig), JSON.stringify(fallbackCertificate) ] },
args: [ domain, zoneName, provider, JSON.stringify(sanitizedConfig), JSON.stringify(tlsConfig), JSON.stringify(fallbackCertificate) ] },
{ query: 'INSERT INTO mail (domain, dkimKeyJson, dkimSelector) VALUES (?, ?, ?)', args: [ domain, JSON.stringify(dkimKey), dkimSelector || 'cloudron' ] },
];
@@ -194,7 +194,6 @@ async function setConfig(domain, data, auditSource) {
assert.strictEqual(typeof auditSource, 'object');
let { zoneName, provider, config, fallbackCertificate, tlsConfig } = data;
let error;
if (settings.isDemo() && (domain === settings.dashboardDomain())) throw new BoxError(BoxError.CONFLICT, 'Not allowed in demo mode');
@@ -210,16 +209,15 @@ async function setConfig(domain, data, auditSource) {
if (error) throw error;
}
error = validateTlsConfig(tlsConfig, provider);
if (error) throw error;
const tlsConfigError = validateTlsConfig(tlsConfig, provider);
if (tlsConfigError) throw tlsConfigError;
if (provider === domainObject.provider) api(provider).injectPrivateFields(config, domainObject.config);
const result = await verifyDomainConfig(config, domain, zoneName, provider);
if (result.error) throw result.error;
const sanitizedConfig = await verifyDomainConfig(config, domain, zoneName, provider);
const newData = {
config: result.sanitizedConfig,
config: sanitizedConfig,
zoneName,
provider,
tlsConfig,
@@ -227,7 +225,7 @@ async function setConfig(domain, data, auditSource) {
if (fallbackCertificate) newData.fallbackCertificate = fallbackCertificate;
let args = [ ], fields = [ ];
const args = [], fields = [];
for (const k in newData) {
if (k === 'config' || k === 'tlsConfig' || k === 'fallbackCertificate') { // json fields
fields.push(`${k}Json = ?`);
@@ -239,9 +237,8 @@ async function setConfig(domain, data, auditSource) {
}
args.push(domain);
[error] = await safe(database.query('UPDATE domains SET ' + fields.join(', ') + ' WHERE domain=?', args));
if (error && error.reason === BoxError.NOT_FOUND) throw new BoxError(BoxError.NOT_FOUND, 'Domain not found');
if (error) throw new BoxError(BoxError.DATABASE_ERROR, error);
const result = await database.query('UPDATE domains SET ' + fields.join(', ') + ' WHERE domain=?', args);
if (result.affectedRows === 0) throw new BoxError(BoxError.NOT_FOUND, 'Domain not found');
if (!fallbackCertificate) return;
@@ -255,12 +252,11 @@ async function setWellKnown(domain, wellKnown, auditSource) {
assert.strictEqual(typeof wellKnown, 'object');
assert.strictEqual(typeof auditSource, 'object');
let error = validateWellKnown(wellKnown);
if (error) throw error;
const wellKnownError = validateWellKnown(wellKnown);
if (wellKnownError) throw wellKnownError;
[error] = await safe(database.query('UPDATE domains SET wellKnownJson = ? WHERE domain=?', [ JSON.stringify(wellKnown), domain ]));
if (error && error.reason === BoxError.NOT_FOUND) throw new BoxError(BoxError.NOT_FOUND, 'Domain not found');
if (error) throw new BoxError(BoxError.DATABASE_ERROR, error);
const result = await database.query('UPDATE domains SET wellKnownJson = ? WHERE domain=?', [ JSON.stringify(wellKnown), domain ]);
if (result.affectedRows === 0) throw new BoxError(BoxError.NOT_FOUND, 'Domain not found');
await eventlog.add(eventlog.ACTION_DOMAIN_UPDATE, auditSource, { domain, wellKnown });
}
+2 -2
@@ -36,8 +36,8 @@ async function sync(auditSource) {
}
debug(`refreshDNS: updating IP from ${info.ipv4} to ipv4: ${ipv4} (changed: ${ipv4Changed}) ipv6: ${ipv6} (changed: ${ipv6Changed})`);
if (ipv4Changed) await dns.upsertDnsRecords(constants.DASHBOARD_LOCATION, settings.dashboardDomain(), 'A', [ ipv4 ]);
if (ipv6Changed) await dns.upsertDnsRecords(constants.DASHBOARD_LOCATION, settings.dashboardDomain(), 'AAAA', [ ipv6 ]);
if (ipv4Changed) await dns.upsertDnsRecords(constants.DASHBOARD_SUBDOMAIN, settings.dashboardDomain(), 'A', [ ipv4 ]);
if (ipv6Changed) await dns.upsertDnsRecords(constants.DASHBOARD_SUBDOMAIN, settings.dashboardDomain(), 'AAAA', [ ipv6 ]);
const result = await apps.list();
for (const app of result) {
+1
@@ -69,6 +69,7 @@ exports = module.exports = {
ACTION_USER_ADD: 'user.add',
ACTION_USER_LOGIN: 'user.login',
ACTION_USER_LOGIN_GHOST: 'user.login.ghost',
ACTION_USER_LOGOUT: 'user.logout',
ACTION_USER_REMOVE: 'user.remove',
ACTION_USER_UPDATE: 'user.update',
+33 -2
@@ -2,6 +2,7 @@
exports = module.exports = {
verifyPassword,
verifyPasswordAndTotpToken,
maybeCreateUser,
testConfig,
@@ -44,6 +45,7 @@ function translateUser(ldapConfig, ldapUser) {
return {
username: ldapUser[ldapConfig.usernameField].toLowerCase(),
email: ldapUser.mail || ldapUser.mailPrimaryAddress,
twoFactorAuthenticationEnabled: !!ldapUser.twoFactorAuthenticationEnabled,
displayName: ldapUser.displayName || ldapUser.cn // user.givenName + ' ' + user.sn
};
}
@@ -254,8 +256,11 @@ async function maybeCreateUser(identifier) {
throw error;
}
// fetch the full record
return await users.get(userId);
// fetch the full record and amend potential twoFA settings
const newUser = await users.get(userId);
if (user.twoFactorAuthenticationEnabled) newUser.twoFactorAuthenticationEnabled = true;
return newUser;
}
async function verifyPassword(user, password) {
@@ -279,6 +284,32 @@ async function verifyPassword(user, password) {
return translateUser(externalLdapConfig, ldapUsers[0]);
}
async function verifyPasswordAndTotpToken(user, password, totpToken) {
assert.strictEqual(typeof user, 'object');
assert.strictEqual(typeof password, 'string');
assert.strictEqual(typeof totpToken, 'string');
const externalLdapConfig = await settings.getExternalLdapConfig();
if (externalLdapConfig.provider === 'noop') throw new BoxError(BoxError.BAD_STATE, 'not enabled');
const ldapUsers = await ldapUserSearch(externalLdapConfig, { filter: `${externalLdapConfig.usernameField}=${user.username}` });
if (ldapUsers.length === 0) throw new BoxError(BoxError.NOT_FOUND);
if (ldapUsers.length > 1) throw new BoxError(BoxError.CONFLICT);
const client = await getClient(externalLdapConfig, { bind: false });
// inject totptoken into first attribute
const rdns = ldapUsers[0].dn.split(',');
const totpTokenDn = `${rdns[0]}+totptoken=${totpToken},` + rdns.slice(1).join(',');
const [error] = await safe(util.promisify(client.bind.bind(client))(totpTokenDn, password));
client.unbind();
if (error instanceof ldap.InvalidCredentialsError) throw new BoxError(BoxError.INVALID_CREDENTIALS);
if (error) throw new BoxError(BoxError.EXTERNAL_ERROR, error);
return translateUser(externalLdapConfig, ldapUsers[0]);
}
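The totptoken injection above builds a multi-valued RDN (LDAP DNs join multiple attribute values within one RDN with `+`, per RFC 4514). A standalone sketch of just that DN rewrite (the helper name is illustrative; like the code above, it splits naively on `,` and assumes no escaped commas in the DN):

```javascript
// Append totptoken as an extra attribute of the DN's first RDN, e.g.
// 'uid=joe,ou=users,dc=example,dc=com' ->
// 'uid=joe+totptoken=123456,ou=users,dc=example,dc=com'
function injectTotpToken(dn, totpToken) {
    const rdns = dn.split(',');
    return `${rdns[0]}+totptoken=${totpToken},` + rdns.slice(1).join(',');
}

console.log(injectTotpToken('uid=joe,ou=users,dc=example,dc=com', '123456'));
// uid=joe+totptoken=123456,ou=users,dc=example,dc=com
```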
async function startSyncer() {
const externalLdapConfig = await settings.getExternalLdapConfig();
if (externalLdapConfig.provider === 'noop') throw new BoxError(BoxError.BAD_STATE, 'not enabled');
+184
@@ -0,0 +1,184 @@
'use strict';
exports = module.exports = {
getSystem,
getByApp
};
const apps = require('./apps.js'),
assert = require('assert'),
BoxError = require('./boxerror.js'),
fs = require('fs'),
safe = require('safetydance'),
superagent = require('superagent'),
system = require('./system.js');
// for testing locally: curl 'http://127.0.0.1:8417/graphite-web/render?format=json&from=-1min&target=absolute(collectd.localhost.du-docker.capacity-usage)'
// the datapoint is (value, timestamp) https://buildmedia.readthedocs.org/media/pdf/graphite/0.9.16/graphite.pdf
const GRAPHITE_RENDER_URL = 'http://127.0.0.1:8417/graphite-web/render';
// https://rootlesscontaine.rs/getting-started/common/cgroup2/#checking-whether-cgroup-v2-is-already-enabled
const CGROUP_VERSION = fs.existsSync('/sys/fs/cgroup/cgroup.controllers') ? '2' : '1';
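As noted in the comment above, graphite-web's `render?format=json` endpoint returns per target an array of `[value, timestamp]` datapoints, oldest first, where `value` may be `null` for empty buckets. A small sketch of pulling the most recent non-null value out of such a response (the helper is illustrative, not part of this module):

```javascript
// Pick the latest non-null value from a graphite datapoints array.
// Datapoints are [value, timestamp] pairs, oldest first.
function latestValue(datapoints) {
    for (let i = datapoints.length - 1; i >= 0; i--) {
        if (datapoints[i][0] !== null) return datapoints[i][0];
    }
    return null; // no usable datapoint in the window
}

// Abridged shape of a graphite-web render?format=json response body.
const body = [ {
    target: 'absolute(collectd.localhost.du-docker.capacity-usage)',
    datapoints: [ [ 1024, 1664500000 ], [ 2048, 1664500060 ], [ null, 1664500120 ] ]
} ];
console.log(latestValue(body[0].datapoints)); // 2048
```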
async function getByApp(app, fromMinutes, noNullPoints) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof fromMinutes, 'number');
assert.strictEqual(typeof noNullPoints, 'boolean');
const timeBucketSize = fromMinutes > (24 * 60) ? (6*60) : 5;
const memoryQuery = {
target: null, // filled below
format: 'json',
from: `-${fromMinutes}min`,
until: 'now'
};
if (CGROUP_VERSION === '1') {
memoryQuery.target = `summarize(collectd.localhost.table-${app.id}-memory.gauge-memsw_usage_in_bytes, "${timeBucketSize}min", "avg")`;
} else {
memoryQuery.target = `summarize(sum(collectd.localhost.table-${app.id}-memory.gauge-memory_current, collectd.localhost.table-${app.id}-memory.gauge-memory_swap_current), "${timeBucketSize}min", "avg")`;
}
if (noNullPoints) memoryQuery.noNullPoints = true;
const [memoryError, memoryResponse] = await safe(superagent.get(GRAPHITE_RENDER_URL)
.query(memoryQuery)
.timeout(30 * 1000)
.ok(() => true));
if (memoryError) throw new BoxError(BoxError.NETWORK_ERROR, memoryError.message);
if (memoryResponse.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${memoryResponse.status} ${memoryResponse.text}`);
let diskDataPoints;
if (app.manifest.addons.localstorage) {
const diskQuery = {
target: `summarize(collectd.localhost.du-${app.id}.capacity-usage, "${timeBucketSize}min", "avg")`,
format: 'json',
from: `-${fromMinutes}min`,
until: 'now'
};
if (noNullPoints) diskQuery.noNullPoints = true;
const [diskError, diskResponse] = await safe(superagent.get(GRAPHITE_RENDER_URL)
.query(diskQuery)
.timeout(30 * 1000)
.ok(() => true));
if (diskError) throw new BoxError(BoxError.NETWORK_ERROR, diskError.message);
if (diskResponse.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${diskResponse.status} ${diskResponse.text}`);
// we may not have any datapoints
if (diskResponse.body.length === 0) diskDataPoints = [];
else diskDataPoints = diskResponse.body[0].datapoints;
} else {
diskDataPoints = [];
}
// app proxy instances have no container and thus no datapoints
return { memory: memoryResponse.body[0] || { datapoints: [] }, disk: { datapoints: diskDataPoints } };
}
async function getSystem(fromMinutes, noNullPoints) {
assert.strictEqual(typeof fromMinutes, 'number');
assert.strictEqual(typeof noNullPoints, 'boolean');
const timeBucketSize = fromMinutes > (24 * 60) ? (6*60) : 5;
const cpuQuery = `summarize(sum(collectd.localhost.aggregation-cpu-average.cpu-system, collectd.localhost.aggregation-cpu-average.cpu-user), "${timeBucketSize}min", "avg")`;
const memoryQuery = `summarize(sum(collectd.localhost.memory.memory-used, collectd.localhost.swap.swap-used), "${timeBucketSize}min", "avg")`;
const query = {
target: [ cpuQuery, memoryQuery ],
format: 'json',
from: `-${fromMinutes}min`,
until: 'now'
};
const [memCpuError, memCpuResponse] = await safe(superagent.get(GRAPHITE_RENDER_URL)
.query(query)
.timeout(30 * 1000)
.ok(() => true));
if (memCpuError) throw new BoxError(BoxError.NETWORK_ERROR, memCpuError.message);
if (memCpuResponse.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${memCpuResponse.status} ${memCpuResponse.text}`);
const allApps = await apps.list();
const appResponses = {};
for (const app of allApps) {
appResponses[app.id] = await getByApp(app, fromMinutes, noNullPoints);
}
const diskInfo = await system.getDisks();
// segregate locations into the correct disks based on 'filesystem'
diskInfo.disks.forEach(function (disk, index) {
disk.id = index;
disk.contains = [];
if (disk.filesystem === diskInfo.platformDataDisk) disk.contains.push({ type: 'standard', label: 'Platform data', id: 'platformdata', usage: 0 });
if (disk.filesystem === diskInfo.boxDataDisk) disk.contains.push({ type: 'standard', label: 'Box data', id: 'boxdata', usage: 0 });
if (disk.filesystem === diskInfo.dockerDataDisk) disk.contains.push({ type: 'standard', label: 'Docker images', id: 'docker', usage: 0 });
if (disk.filesystem === diskInfo.mailDataDisk) disk.contains.push({ type: 'standard', label: 'Email data', id: 'maildata', usage: 0 });
if (disk.filesystem === diskInfo.backupsDisk) disk.contains.push({ type: 'standard', label: 'Backup data', id: 'cloudron-backup', usage: 0 });
// attach appIds which reside on this disk
const apps = Object.keys(diskInfo.apps).filter(function (appId) { return diskInfo.apps[appId] === disk.filesystem; });
apps.forEach(function (appId) {
disk.contains.push({ type: 'app', id: appId, label: '', usage: 0 });
});
// attach volumeIds which reside on this disk
const volumes = Object.keys(diskInfo.volumes).filter(function (volumeId) { return diskInfo.volumes[volumeId] === disk.filesystem; });
volumes.forEach(function (volumeId) {
disk.contains.push({ type: 'volume', id: volumeId, label: '', usage: 0 });
});
});
for (const disk of diskInfo.disks) {
// /dev/sda1 -> sda1
// /dev/mapper/foo.com -> mapper_foo_com (see #348)
let diskName = disk.filesystem.slice(disk.filesystem.indexOf('/', 1) + 1);
diskName = diskName.replace(/\/|\./g, '_');
const target = [
`absolute(collectd.localhost.df-${diskName}.df_complex-free)`,
`absolute(collectd.localhost.df-${diskName}.df_complex-reserved)`, // reserved for root (default: 5%) tune2fs -l/m
`absolute(collectd.localhost.df-${diskName}.df_complex-used)`
];
const diskQuery = {
target: target,
format: 'json',
from: '-1day',
until: 'now'
};
const [diskError, diskResponse] = await safe(superagent.get(GRAPHITE_RENDER_URL).query(diskQuery).timeout(30 * 1000).ok(() => true));
if (diskError) throw new BoxError(BoxError.NETWORK_ERROR, diskError.message);
if (diskResponse.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${diskResponse.status} ${diskResponse.text}`);
disk.size = diskResponse.body[2].datapoints[0][0] + diskResponse.body[1].datapoints[0][0] + diskResponse.body[0].datapoints[0][0];
disk.free = diskResponse.body[0].datapoints[0][0];
disk.occupied = diskResponse.body[2].datapoints[0][0];
for (const content of disk.contains) {
const query = {
target: `absolute(collectd.localhost.du-${content.id}.capacity-usage)`,
format: 'json',
from: '-1day',
until: 'now'
};
const [error, response] = await safe(superagent.get(GRAPHITE_RENDER_URL).query(query).timeout(30 * 1000).ok(() => true));
if (error) throw new BoxError(BoxError.NETWORK_ERROR, error.message);
if (response.status !== 200) throw new BoxError(BoxError.EXTERNAL_ERROR, `Unknown error: ${response.status} ${response.text}`);
// we may not have any datapoints
if (response.body.length === 0) content.usage = null;
else content.usage = response.body[0].datapoints[0][0];
console.log(content);
}
}
return { cpu: memCpuResponse.body[0], memory: memCpuResponse.body[1], apps: appResponses, disks: diskInfo.disks };
}
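Two small pieces of logic above are easy to get wrong and worth isolating: the summarize-bucket rule (5-minute buckets up to one day, 6-hour buckets beyond) and the collectd disk-name mangling. A standalone sketch follows; the metric names mirror the code above, while `GRAPHITE_RENDER_URL` and the actual HTTP call are left out:

```javascript
'use strict';

// Bucket rule used by the CPU/memory queries above: ranges longer than a
// day are summarized into 6-hour buckets, shorter ranges into 5-minute ones.
function timeBucketSize(fromMinutes) {
    return fromMinutes > (24 * 60) ? (6 * 60) : 5;
}

// Build the Graphite render query object the way the code above does.
function buildCpuQuery(fromMinutes) {
    const bucket = timeBucketSize(fromMinutes);
    return {
        target: `summarize(sum(collectd.localhost.aggregation-cpu-average.cpu-system, collectd.localhost.aggregation-cpu-average.cpu-user), "${bucket}min", "avg")`,
        format: 'json',
        from: `-${fromMinutes}min`,
        until: 'now'
    };
}

// Disk name mangling for collectd metric paths:
// /dev/sda1 -> sda1, /dev/mapper/foo.com -> mapper_foo_com (see #348)
function collectdDiskName(filesystem) {
    const name = filesystem.slice(filesystem.indexOf('/', 1) + 1);
    return name.replace(/\/|\./g, '_');
}
```

The 6-hour bucket keeps week-long queries to a bounded number of datapoints, while the 5-minute bucket preserves resolution for the default day-long view.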
@@ -0,0 +1,225 @@
'use strict';
const assert = require('assert'),
BoxError = require('./boxerror.js'),
crypto = require('crypto'),
debug = require('debug')('box:hush'),
fs = require('fs'),
progressStream = require('progress-stream'),
TransformStream = require('stream').Transform;
class EncryptStream extends TransformStream {
constructor(encryption) {
super();
this._headerPushed = false;
this._iv = crypto.randomBytes(16);
this._cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(encryption.dataKey, 'hex'), this._iv);
this._hmac = crypto.createHmac('sha256', Buffer.from(encryption.dataHmacKey, 'hex'));
}
pushHeaderIfNeeded() {
if (!this._headerPushed) {
const magic = Buffer.from('CBV2');
this.push(magic);
this._hmac.update(magic);
this.push(this._iv);
this._hmac.update(this._iv);
this._headerPushed = true;
}
}
_transform(chunk, ignoredEncoding, callback) {
this.pushHeaderIfNeeded();
try {
const crypt = this._cipher.update(chunk);
this._hmac.update(crypt);
callback(null, crypt);
} catch (error) {
callback(new BoxError(BoxError.CRYPTO_ERROR, `Encryption error when updating: ${error.message}`));
}
}
_flush(callback) {
try {
this.pushHeaderIfNeeded(); // for 0-length files
const crypt = this._cipher.final();
this.push(crypt);
this._hmac.update(crypt);
callback(null, this._hmac.digest()); // +32 bytes
} catch (error) {
callback(new BoxError(BoxError.CRYPTO_ERROR, `Encryption error when flushing: ${error.message}`));
}
}
}
class DecryptStream extends TransformStream {
constructor(encryption) {
super();
this._key = Buffer.from(encryption.dataKey, 'hex');
this._header = Buffer.alloc(0);
this._decipher = null;
this._hmac = crypto.createHmac('sha256', Buffer.from(encryption.dataHmacKey, 'hex'));
this._buffer = Buffer.alloc(0);
}
_transform(chunk, ignoredEncoding, callback) {
const needed = 20 - this._header.length; // 4 for magic, 16 for iv
if (this._header.length !== 20) { // not gotten header yet
this._header = Buffer.concat([this._header, chunk.slice(0, needed)]);
if (this._header.length !== 20) return callback();
if (!this._header.slice(0, 4).equals(Buffer.from('CBV2'))) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid magic in header'));
const iv = this._header.slice(4);
this._decipher = crypto.createDecipheriv('aes-256-cbc', this._key, iv);
this._hmac.update(this._header);
}
this._buffer = Buffer.concat([ this._buffer, chunk.slice(needed) ]);
if (this._buffer.length < 32) return callback(); // hmac trailer length is 32
try {
const cipherText = this._buffer.slice(0, -32);
this._hmac.update(cipherText);
const plainText = this._decipher.update(cipherText);
this._buffer = this._buffer.slice(-32);
callback(null, plainText);
} catch (error) {
callback(new BoxError(BoxError.CRYPTO_ERROR, `Decryption error: ${error.message}`));
}
}
_flush (callback) {
if (this._buffer.length !== 32) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid password or tampered file (not enough data)'));
try {
if (!this._hmac.digest().equals(this._buffer)) return callback(new BoxError(BoxError.CRYPTO_ERROR, 'Invalid password or tampered file (mac mismatch)'));
const plainText = this._decipher.final();
callback(null, plainText);
} catch (error) {
callback(new BoxError(BoxError.CRYPTO_ERROR, `Invalid password or tampered file: ${error.message}`));
}
}
}
function encryptFilePath(filePath, encryption) {
assert.strictEqual(typeof filePath, 'string');
assert.strictEqual(typeof encryption, 'object');
const encryptedParts = filePath.split('/').map(function (part) {
let hmac = crypto.createHmac('sha256', Buffer.from(encryption.filenameHmacKey, 'hex'));
const iv = hmac.update(part).digest().slice(0, 16); // iv has to be deterministic, for our sync (copy) logic to work
const cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(encryption.filenameKey, 'hex'), iv);
let crypt = cipher.update(part);
crypt = Buffer.concat([ iv, crypt, cipher.final() ]);
return crypt.toString('base64') // ensures path is valid
.replace(/\//g, '-') // replace '/' of base64 since it conflicts with path separator
.replace(/=/g,''); // strip trailing = padding. this is only needed if we concat base64 strings, which we don't
});
return encryptedParts.join('/');
}
function decryptFilePath(filePath, encryption) {
assert.strictEqual(typeof filePath, 'string');
assert.strictEqual(typeof encryption, 'object');
const decryptedParts = [];
for (let part of filePath.split('/')) {
part = part + Array(part.length % 4).join('='); // add back = padding
part = part.replace(/-/g, '/'); // replace with '/'
try {
const buffer = Buffer.from(part, 'base64');
const iv = buffer.slice(0, 16);
let decrypt = crypto.createDecipheriv('aes-256-cbc', Buffer.from(encryption.filenameKey, 'hex'), iv);
const plainText = decrypt.update(buffer.slice(16));
const plainTextString = Buffer.concat([ plainText, decrypt.final() ]).toString('utf8');
const hmac = crypto.createHmac('sha256', Buffer.from(encryption.filenameHmacKey, 'hex'));
if (!hmac.update(plainTextString).digest().slice(0, 16).equals(iv)) return { error: new BoxError(BoxError.CRYPTO_ERROR, `mac error decrypting part ${part} of path ${filePath}`) };
decryptedParts.push(plainTextString);
} catch (error) {
debug(`Error decrypting part ${part} of path ${filePath}:`, error);
return { error: new BoxError(BoxError.CRYPTO_ERROR, `Error decrypting part ${part} of path ${filePath}: ${error.message}`) };
}
}
return { result: decryptedParts.join('/') };
}
function createReadStream(sourceFile, encryption) {
assert.strictEqual(typeof sourceFile, 'string');
assert.strictEqual(typeof encryption, 'object');
const stream = fs.createReadStream(sourceFile);
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
stream.on('error', function (error) {
debug(`createReadStream: read stream error at ${sourceFile}`, error);
ps.emit('error', new BoxError(BoxError.FS_ERROR, `Error reading ${sourceFile}: ${error.message} ${error.code}`));
});
stream.on('open', () => ps.emit('open'));
if (encryption) {
let encryptStream = new EncryptStream(encryption);
encryptStream.on('error', function (error) {
debug(`createReadStream: encrypt stream error ${sourceFile}`, error);
ps.emit('error', new BoxError(BoxError.CRYPTO_ERROR, `Encryption error at ${sourceFile}: ${error.message}`));
});
return stream.pipe(encryptStream).pipe(ps);
} else {
return stream.pipe(ps);
}
}
function createWriteStream(destFile, encryption) {
assert.strictEqual(typeof destFile, 'string');
assert.strictEqual(typeof encryption, 'object');
const stream = fs.createWriteStream(destFile);
const ps = progressStream({ time: 10000 }); // display a progress every 10 seconds
stream.on('error', function (error) {
debug(`createWriteStream: write stream error ${destFile}`, error);
ps.emit('error', new BoxError(BoxError.FS_ERROR, `Write error ${destFile}: ${error.message}`));
});
stream.on('finish', function () {
debug('createWriteStream: done.');
// we use a separate event because ps is a through2 stream which emits 'finish' event indicating end of inStream and not write
ps.emit('done');
});
if (encryption) {
let decrypt = new DecryptStream(encryption);
decrypt.on('error', function (error) {
debug(`createWriteStream: decrypt stream error ${destFile}`, error);
ps.emit('error', new BoxError(BoxError.CRYPTO_ERROR, `Decryption error at ${destFile}: ${error.message}`));
});
ps.pipe(decrypt).pipe(stream);
} else {
ps.pipe(stream);
}
return ps;
}
exports = module.exports = {
EncryptStream,
DecryptStream,
encryptFilePath,
decryptFilePath,
createReadStream,
createWriteStream
};
@@ -6,7 +6,7 @@
exports = module.exports = {
// a version change recreates all containers with latest docker config
- 'version': '49.0.0',
+ 'version': '49.1.0',
'baseImages': [
{ repo: 'cloudron/base', tag: 'cloudron/base:3.2.0@sha256:ba1d566164a67c266782545ea9809dc611c4152e27686fd14060332dd88263ea' }
@@ -18,9 +18,9 @@ exports = module.exports = {
'turn': { repo: 'cloudron/turn', tag: 'cloudron/turn:1.4.0@sha256:45817f1631992391d585f171498d257487d872480fd5646723a2b956cc4ef15d' },
'mysql': { repo: 'cloudron/mysql', tag: 'cloudron/mysql:3.2.1@sha256:75cef64ba4917ba9ec68bc0c9d9ba3a9eeae00a70173cd6d81cc6118038737d9' },
'postgresql': { repo: 'cloudron/postgresql', tag: 'cloudron/postgresql:4.3.1@sha256:b0c564d097b765d4a639330843e2e813d2c87fc8ed34b7df7550bf2c6df0012c' },
- 'mongodb': { repo: 'cloudron/mongodb', tag: 'cloudron/mongodb:4.2.0@sha256:c8ebdbe2663b26fcd58b1e6b97906b62565adbe4a06256ba0f86114f78b37e6b' },
+ 'mongodb': { repo: 'cloudron/mongodb', tag: 'cloudron/mongodb:4.2.1@sha256:f7f689beea07b1c6a9503a48f6fb38ef66e5b22f59fc585a92842a6578b33d46' },
'redis': { repo: 'cloudron/redis', tag: 'cloudron/redis:3.3.0@sha256:89c4e8083631b6d16b5d630d9b27f8ecf301c62f81219d77bd5948a1f4a4375c' },
- 'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:3.6.1@sha256:b8b93f007105080d4812a05648e6bc5e15c95c63f511c829cbc14a163d9ea029' },
+ 'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:3.7.0@sha256:a41a52ba45bea0b2f14be82f8480d5f4583d806dc1f9c99c3bce858d2c9f27d7' },
'graphite': { repo: 'cloudron/graphite', tag: 'cloudron/graphite:3.1.0@sha256:30ec3a01964a1e01396acf265183997c3e17fb07eac1a82b979292cc7719ff4b' },
'sftp': { repo: 'cloudron/sftp', tag: 'cloudron/sftp:3.6.1@sha256:ba4b9a1fe274c0ef0a900e5d0deeb8f3da08e118798d1d90fbf995cc0cf6e3a3' }
}
@@ -25,9 +25,6 @@ let gServer = null;
const NOOP = function () {};
const GROUP_USERS_DN = 'cn=users,ou=groups,dc=cloudron';
const GROUP_ADMINS_DN = 'cn=admins,ou=groups,dc=cloudron';
// Will attach req.app if successful
async function authenticateApp(req, res, next) {
const sourceIp = req.connection.ldap.id.split(':')[0];
@@ -150,6 +147,9 @@ async function userSearch(req, res, next) {
const [error, result] = await safe(getUsersWithAccessToApp(req));
if (error) return next(new ldap.OperationsError(error.toString()));
const [groupsError, allGroups] = await safe(groups.listWithMembers());
if (groupsError) return next(new ldap.OperationsError(groupsError.toString()));
let results = [];
// send user objects
@@ -159,9 +159,6 @@ async function userSearch(req, res, next) {
const dn = ldap.parseDN('cn=' + user.id + ',ou=users,dc=cloudron');
- const memberof = [ GROUP_USERS_DN ];
- if (users.compareRoles(user.role, users.ROLE_ADMIN) >= 0) memberof.push(GROUP_ADMINS_DN);
const displayName = user.displayName || user.username || ''; // displayName can be empty and username can be null
const nameParts = displayName.split(' ');
const firstName = nameParts[0];
@@ -181,7 +178,7 @@ async function userSearch(req, res, next) {
givenName: firstName,
username: user.username,
samaccountname: user.username, // to support ActiveDirectory clients
- memberof: memberof
+ memberof: allGroups.filter(function (g) { return g.userIds.indexOf(user.id) !== -1; }).map(function (g) { return g.name; })
}
};
@@ -204,42 +201,8 @@ async function userSearch(req, res, next) {
async function groupSearch(req, res, next) {
debug('group search: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
const [error, usersWithAccess] = await safe(getUsersWithAccessToApp(req));
if (error) return next(new ldap.OperationsError(error.toString()));
const results = [];
// those are the old virtual groups for backwards compat
const virtualGroups = [{
name: 'users',
admin: false
}, {
name: 'admins',
admin: true
}];
virtualGroups.forEach(function (group) {
const dn = ldap.parseDN('cn=' + group.name + ',ou=groups,dc=cloudron');
const members = group.admin ? usersWithAccess.filter(function (user) { return users.compareRoles(user.role, users.ROLE_ADMIN) >= 0; }) : usersWithAccess;
const obj = {
dn: dn.toString(),
attributes: {
objectclass: ['group'],
cn: group.name,
memberuid: members.map(function(entry) { return entry.id; }).sort()
}
};
// ensure all filter values are also lowercase
const lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
results.push(obj);
}
});
let [errorGroups, resultGroups] = await safe(groups.listWithMembers());
if (errorGroups) return next(new ldap.OperationsError(errorGroups.toString()));
@@ -248,15 +211,15 @@ async function groupSearch(req, res, next) {
}
resultGroups.forEach(function (group) {
- const dn = ldap.parseDN('cn=' + group.name + ',ou=groups,dc=cloudron');
- const members = group.userIds.filter(function (uid) { return usersWithAccess.map(function (u) { return u.id; }).indexOf(uid) !== -1; });
+ const dn = ldap.parseDN(`cn=${group.name},ou=groups,dc=cloudron`);
const obj = {
dn: dn.toString(),
attributes: {
objectclass: ['group'],
cn: group.name,
- memberuid: members
+ gidnumber: group.id,
+ memberuid: group.userIds
}
};
@@ -305,25 +268,35 @@ async function groupAdminsCompare(req, res, next) {
async function mailboxSearch(req, res, next) {
debug('mailbox search: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
- // if cn is set we only search for one mailbox specifically
+ // if cn is set OR filter is mail= we only search for one mailbox specifically
let email, dn;
if (req.dn.rdns[0].attrs.cn) {
- const email = req.dn.rdns[0].attrs.cn.value.toLowerCase();
+ email = req.dn.rdns[0].attrs.cn.value.toLowerCase();
dn = req.dn.toString();
} else if (req.filter instanceof ldap.EqualityFilter && req.filter.attribute === 'mail') {
email = req.filter.value.toLowerCase();
dn = `cn=${email},${req.dn.toString()}`;
}
if (email) {
const parts = email.split('@');
- if (parts.length !== 2) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+ if (parts.length !== 2) return next(new ldap.NoSuchObjectError(dn.toString()));
const [error, mailbox] = await safe(mail.getMailbox(parts[0], parts[1]));
if (error) return next(new ldap.OperationsError(error.toString()));
- if (!mailbox) return next(new ldap.NoSuchObjectError(req.dn.toString()));
- if (!mailbox.active) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+ if (!mailbox) return next(new ldap.NoSuchObjectError(dn.toString()));
+ if (!mailbox.active) return next(new ldap.NoSuchObjectError(dn.toString()));
const obj = {
- dn: req.dn.toString(),
+ dn: dn.toString(),
attributes: {
objectclass: ['mailbox'],
objectcategory: 'mailbox',
cn: `${mailbox.name}@${mailbox.domain}`,
uid: `${mailbox.name}@${mailbox.domain}`,
- mail: `${mailbox.name}@${mailbox.domain}`
+ mail: `${mailbox.name}@${mailbox.domain}`,
+ storagequota: mailbox.storageQuota,
+ messagesquota: mailbox.messagesQuota,
}
};
@@ -336,7 +309,7 @@ async function mailboxSearch(req, res, next) {
} else {
res.end();
}
- } else { // new sogo
+ } else { // new sogo and dovecot listing (doveadm -A)
// TODO figure out how proper pagination here could work
let [error, mailboxes] = await safe(mail.listAllMailboxes(1, 100000));
if (error) return next(new ldap.OperationsError(error.toString()));
@@ -360,7 +333,9 @@ async function mailboxSearch(req, res, next) {
displayname: mailbox.ownerType === mail.OWNERTYPE_USER ? ownerObject.displayName : ownerObject.name,
cn: `${mailbox.name}@${mailbox.domain}`,
uid: `${mailbox.name}@${mailbox.domain}`,
- mail: `${mailbox.name}@${mailbox.domain}`
+ mail: `${mailbox.name}@${mailbox.domain}`,
+ storagequota: mailbox.storageQuota,
+ messagesquota: mailbox.messagesQuota,
}
};
@@ -390,7 +365,7 @@ async function mailAliasSearch(req, res, next) {
const parts = email.split('@');
if (parts.length !== 2) return next(new ldap.NoSuchObjectError(req.dn.toString()));
- const [error, alias] = await safe(mail.getAlias(parts[0], parts[1]));
+ const [error, alias] = await safe(mail.searchAlias(parts[0], parts[1]));
if (error) return next(new ldap.OperationsError(error.toString()));
if (!alias) return next(new ldap.NoSuchObjectError(req.dn.toString()));
@@ -404,7 +379,7 @@ async function mailAliasSearch(req, res, next) {
attributes: {
objectclass: ['nisMailAlias'],
objectcategory: 'nisMailAlias',
- cn: `${alias.name}@${alias.domain}`,
+ cn: `${parts[0]}@${alias.domain}`, // alias.name can contain wildcard character
rfc822MailMember: `${alias.aliasName}@${alias.aliasDomain}`
}
};
@@ -480,7 +455,7 @@ async function authorizeUserForApp(req, res, next) {
// we return no such object, to avoid leakage of a users existence
if (!canAccess) return next(new ldap.NoSuchObjectError(req.dn.toString()));
- await eventlog.upsertLoginEvent(eventlog.ACTION_USER_LOGIN, { authType: 'ldap', appId: req.app.id }, { userId: req.user.id, user: users.removePrivateFields(req.user) });
+ await eventlog.upsertLoginEvent(req.user.ghost ? eventlog.ACTION_USER_LOGIN_GHOST : eventlog.ACTION_USER_LOGIN, { authType: 'ldap', appId: req.app.id }, { userId: req.user.id, user: users.removePrivateFields(req.user) });
res.end();
}
@@ -558,7 +533,7 @@ async function userSearchSftp(req, res, next) {
const obj = {
dn: ldap.parseDN(`cn=${username}@${appFqdn},ou=sftp,dc=cloudron`).toString(),
attributes: {
- homeDirectory: app.dataDir ? `/mnt/app-${app.id}` : `/mnt/appsdata/${app.id}/data`, // see also sftp.js
+ homeDirectory: app.storageVolumeId ? `/mnt/app-${app.id}` : `/mnt/appsdata/${app.id}/data`, // see also sftp.js
objectclass: ['user'],
objectcategory: 'person',
cn: user.id,
@@ -625,7 +600,7 @@ async function authenticateService(serviceId, dn, req, res, next) {
if (verifyError && verifyError.reason === BoxError.INVALID_CREDENTIALS) return next(new ldap.InvalidCredentialsError(dn.toString()));
if (verifyError) return next(new ldap.OperationsError(verifyError.message));
- eventlog.upsertLoginEvent(eventlog.ACTION_USER_LOGIN, { authType: 'ldap', mailboxId: email }, { userId: result.id, user: users.removePrivateFields(result) });
+ eventlog.upsertLoginEvent(result.ghost ? eventlog.ACTION_USER_LOGIN_GHOST : eventlog.ACTION_USER_LOGIN, { authType: 'ldap', mailboxId: email }, { userId: result.id, user: users.removePrivateFields(result) });
res.end();
}
@@ -22,6 +22,7 @@ exports = module.exports = {
setDnsRecords,
validateName,
+ validateDisplayName,
setMailFromValidation,
setCatchAllAddress,
@@ -47,6 +48,7 @@ exports = module.exports = {
getAlias,
getAliases,
setAliases,
+ searchAlias,
getLists,
getList,
@@ -65,7 +67,6 @@ exports = module.exports = {
TYPE_LIST: 'list',
TYPE_ALIAS: 'alias',
- _validateName: validateName,
_delByDomain: delByDomain,
_updateDomain: updateDomain
};
@@ -96,13 +97,11 @@ const assert = require('assert'),
services = require('./services.js'),
settings = require('./settings.js'),
shell = require('./shell.js'),
- smtpTransport = require('nodemailer-smtp-transport'),
superagent = require('superagent'),
sysinfo = require('./sysinfo.js'),
system = require('./system.js'),
tasks = require('./tasks.js'),
users = require('./users.js'),
- util = require('util'),
validator = require('validator'),
_ = require('underscore');
@@ -112,7 +111,7 @@ const REMOVE_MAILBOX_CMD = path.join(__dirname, 'scripts/rmmailbox.sh');
const OWNERTYPES = [ exports.OWNERTYPE_USER, exports.OWNERTYPE_GROUP, exports.OWNERTYPE_APP ];
// if you add a field here, listMailboxes has to be updated
- const MAILBOX_FIELDS = [ 'name', 'type', 'ownerId', 'ownerType', 'aliasName', 'aliasDomain', 'creationTime', 'membersJson', 'membersOnly', 'domain', 'active', 'enablePop3' ].join(',');
+ const MAILBOX_FIELDS = [ 'name', 'type', 'ownerId', 'ownerType', 'aliasName', 'aliasDomain', 'creationTime', 'membersJson', 'membersOnly', 'domain', 'active', 'enablePop3', 'storageQuota', 'messagesQuota' ].join(',');
const MAILDB_FIELDS = [ 'domain', 'enabled', 'mailFromValidation', 'catchAllJson', 'relayJson', 'dkimKeyJson', 'dkimSelector', 'bannerJson' ].join(',');
function postProcessMailbox(data) {
@@ -169,6 +168,28 @@ function validateName(name) {
return null;
}
function validateAlias(name) {
assert.strictEqual(typeof name, 'string');
if (name.length < 1) return new BoxError(BoxError.BAD_FIELD, 'mailbox name must be at least 1 char');
if (name.length >= 200) return new BoxError(BoxError.BAD_FIELD, 'mailbox name too long');
// also need to consider valid LDAP characters here (e.g '+' is reserved). keep hyphen at the end so it doesn't become a range.
if (/[^a-zA-Z0-9._*-]/.test(name)) return new BoxError(BoxError.BAD_FIELD, 'mailbox name can only contain alphanumerals, dot, hyphen, asterisk or underscore');
return null;
}
function validateDisplayName(name) {
assert.strictEqual(typeof name, 'string');
if (name.length < 1) return new BoxError(BoxError.BAD_FIELD, 'mailbox display name must be at least 1 char');
if (name.length >= 100) return new BoxError(BoxError.BAD_FIELD, 'mailbox display name too long');
if (/["<>@]/.test(name)) return new BoxError(BoxError.BAD_FIELD, 'mailbox display name is not valid');
return null;
}
async function checkOutboundPort25() {
const relay = {
value: 'OK',
@@ -214,7 +235,8 @@ async function checkSmtpRelay(relay) {
connectionTimeout: 5000,
greetingTimeout: 5000,
host: relay.host,
- port: relay.port
+ port: relay.port,
+ secure: false // haraka relay only supports STARTTLS
};
// only set auth if either username or password is provided, some relays auth based on IP (range)
@@ -227,9 +249,9 @@ async function checkSmtpRelay(relay) {
if (relay.acceptSelfSignedCerts) options.tls = { rejectUnauthorized: false };
- const transporter = nodemailer.createTransport(smtpTransport(options));
+ const transporter = nodemailer.createTransport(options);
- const [error] = await safe(util.promisify(transporter.verify)());
+ const [error] = await safe(transporter.verify());
result.status = !error;
if (error) {
result.value = result.errorMessage = error.message;
@@ -632,7 +654,7 @@ async function createMailConfig(mailFqdn) {
// create sections for per-domain configuration
for (const domain of mailDomains) {
- const catchAll = domain.catchAll.map(function (c) { return `${c}@${domain.domain}`; }).join(',');
+ const catchAll = domain.catchAll.join(',');
const mailFromValidation = domain.mailFromValidation;
if (!safe.fs.appendFileSync(`${paths.MAIL_CONFIG_DIR}/mail.ini`,
@@ -688,15 +710,15 @@ async function configureMail(mailFqdn, mailDomain, serviceConfig) {
const memory = system.getMemoryAllocation(memoryLimit);
const cloudronToken = hat(8 * 128), relayToken = hat(8 * 128);
- const bundle = await reverseProxy.getCertificatePath(mailFqdn, mailDomain);
+ const certificatePath = await reverseProxy.getCertificatePath(mailFqdn, mailDomain);
const dhparamsFilePath = `${paths.MAIL_CONFIG_DIR}/dhparams.pem`;
const mailCertFilePath = `${paths.MAIL_CONFIG_DIR}/tls_cert.pem`;
const mailKeyFilePath = `${paths.MAIL_CONFIG_DIR}/tls_key.pem`;
if (!safe.child_process.execSync(`cp ${paths.DHPARAMS_FILE} ${dhparamsFilePath}`)) throw new BoxError(BoxError.FS_ERROR, 'Could not copy dhparams:' + safe.error.message);
- if (!safe.child_process.execSync(`cp ${bundle.certFilePath} ${mailCertFilePath}`)) throw new BoxError(BoxError.FS_ERROR, 'Could not create cert file:' + safe.error.message);
- if (!safe.child_process.execSync(`cp ${bundle.keyFilePath} ${mailKeyFilePath}`)) throw new BoxError(BoxError.FS_ERROR, 'Could not create key file:' + safe.error.message);
+ if (!safe.child_process.execSync(`cp ${certificatePath.certFilePath} ${mailCertFilePath}`)) throw new BoxError(BoxError.FS_ERROR, 'Could not create cert file:' + safe.error.message);
+ if (!safe.child_process.execSync(`cp ${certificatePath.keyFilePath} ${mailKeyFilePath}`)) throw new BoxError(BoxError.FS_ERROR, 'Could not create key file:' + safe.error.message);
// if the 'yellowtent' user of OS and the 'cloudron' user of mail container don't match, the keys become inaccessible by mail code
if (!safe.fs.chmodSync(mailKeyFilePath, 0o644)) throw new BoxError(BoxError.FS_ERROR, `Could not chmod key file: ${safe.error.message}`);
@@ -1019,6 +1041,10 @@ async function setCatchAllAddress(domain, addresses) {
assert.strictEqual(typeof domain, 'string');
assert(Array.isArray(addresses));
for (const address of addresses) {
if (!validator.isEmail(address)) throw new BoxError(BoxError.BAD_FIELD, `Invalid catch all address: ${address}`);
}
await updateDomain(domain, { catchAll: addresses });
safe(restartMail(), { debug }); // have to restart mail container since haraka cannot watch symlinked config files (mail.ini)
@@ -1079,7 +1105,7 @@ async function listMailboxes(domain, search, page, perPage) {
const escapedSearch = mysql.escape('%' + search + '%'); // this also quotes the string
const searchQuery = search ? ` HAVING (name LIKE ${escapedSearch} OR aliasNames LIKE ${escapedSearch} OR aliasDomains LIKE ${escapedSearch})` : ''; // having instead of where because of aggregated columns use
- const query = 'SELECT m1.name AS name, m1.domain AS domain, m1.ownerId AS ownerId, m1.ownerType as ownerType, m1.active as active, JSON_ARRAYAGG(m2.name) AS aliasNames, JSON_ARRAYAGG(m2.domain) AS aliasDomains, m1.enablePop3 AS enablePop3 '
+ const query = 'SELECT m1.name AS name, m1.domain AS domain, m1.ownerId AS ownerId, m1.ownerType as ownerType, m1.active as active, JSON_ARRAYAGG(m2.name) AS aliasNames, JSON_ARRAYAGG(m2.domain) AS aliasDomains, m1.enablePop3 AS enablePop3, m1.storageQuota AS storageQuota, m1.messagesQuota AS messagesQuota '
+ ` FROM (SELECT * FROM mailboxes WHERE type='${exports.TYPE_MAILBOX}') AS m1`
+ ` LEFT JOIN (SELECT * FROM mailboxes WHERE type='${exports.TYPE_ALIAS}') AS m2`
+ ' ON m1.name=m2.aliasName AND m1.domain=m2.aliasDomain AND m1.ownerId=m2.ownerId'
@@ -1100,7 +1126,7 @@ async function listAllMailboxes(page, perPage) {
assert.strictEqual(typeof page, 'number');
assert.strictEqual(typeof perPage, 'number');
- const query = 'SELECT m1.name AS name, m1.domain AS domain, m1.ownerId AS ownerId, m1.ownerType as ownerType, m1.active as active, JSON_ARRAYAGG(m2.name) AS aliasNames, JSON_ARRAYAGG(m2.domain) AS aliasDomains, m1.enablePop3 AS enablePop3 '
+ const query = 'SELECT m1.name AS name, m1.domain AS domain, m1.ownerId AS ownerId, m1.ownerType as ownerType, m1.active as active, JSON_ARRAYAGG(m2.name) AS aliasNames, JSON_ARRAYAGG(m2.domain) AS aliasDomains, m1.enablePop3 AS enablePop3, m1.storageQuota AS storageQuota, m1.messagesQuota AS messagesQuota '
+ ` FROM (SELECT * FROM mailboxes WHERE type='${exports.TYPE_MAILBOX}') AS m1`
+ ` LEFT JOIN (SELECT * FROM mailboxes WHERE type='${exports.TYPE_ALIAS}') AS m2`
+ ' ON m1.name=m2.aliasName AND m1.domain=m2.aliasDomain AND m1.ownerId=m2.ownerId'
@@ -1154,10 +1180,12 @@ async function addMailbox(name, domain, data, auditSource) {
assert.strictEqual(typeof data, 'object');
assert.strictEqual(typeof auditSource, 'object');
- const { ownerId, ownerType, active } = data;
+ const { ownerId, ownerType, active, storageQuota, messagesQuota } = data;
assert.strictEqual(typeof ownerId, 'string');
assert.strictEqual(typeof ownerType, 'string');
assert.strictEqual(typeof active, 'boolean');
assert(Number.isInteger(storageQuota) && storageQuota >= 0);
assert(Number.isInteger(messagesQuota) && messagesQuota >= 0);
name = name.toLowerCase();
@@ -1166,12 +1194,13 @@ async function addMailbox(name, domain, data, auditSource) {
if (!OWNERTYPES.includes(ownerType)) throw new BoxError(BoxError.BAD_FIELD, 'bad owner type');
- [error] = await safe(database.query('INSERT INTO mailboxes (name, type, domain, ownerId, ownerType, active) VALUES (?, ?, ?, ?, ?, ?)', [ name, exports.TYPE_MAILBOX, domain, ownerId, ownerType, active ]));
+ [error] = await safe(database.query('INSERT INTO mailboxes (name, type, domain, ownerId, ownerType, active, storageQuota, messagesQuota) VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
+     [ name, exports.TYPE_MAILBOX, domain, ownerId, ownerType, active, storageQuota, messagesQuota ]));
if (error && error.code === 'ER_DUP_ENTRY') throw new BoxError(BoxError.ALREADY_EXISTS, 'mailbox already exists');
if (error && error.code === 'ER_NO_REFERENCED_ROW_2' && error.sqlMessage.includes('mailboxes_domain_constraint')) throw new BoxError(BoxError.NOT_FOUND, `no such domain '${domain}'`);
if (error) throw error;
- await eventlog.add(eventlog.ACTION_MAIL_MAILBOX_ADD, auditSource, { name, domain, ownerId, ownerType, active });
+ await eventlog.add(eventlog.ACTION_MAIL_MAILBOX_ADD, auditSource, { name, domain, ownerId, ownerType, active, storageQuota, messageQuota: messagesQuota });
}
async function updateMailbox(name, domain, data, auditSource) {
@@ -1180,23 +1209,30 @@ async function updateMailbox(name, domain, data, auditSource) {
assert.strictEqual(typeof data, 'object');
assert.strictEqual(typeof auditSource, 'object');
const { ownerId, ownerType, active, enablePop3 } = data;
assert.strictEqual(typeof ownerId, 'string');
assert.strictEqual(typeof ownerType, 'string');
assert.strictEqual(typeof active, 'boolean');
assert.strictEqual(typeof enablePop3, 'boolean');
const args = [];
const fields = [];
for (const k in data) {
if (k === 'enablePop3' || k === 'active') {
fields.push(k + ' = ?');
args.push(data[k] ? 1 : 0);
continue;
}
name = name.toLowerCase();
if (k === 'ownerType' && !OWNERTYPES.includes(data[k])) throw new BoxError(BoxError.BAD_FIELD, 'bad owner type');
if (!OWNERTYPES.includes(ownerType)) throw new BoxError(BoxError.BAD_FIELD, 'bad owner type');
fields.push(k + ' = ?');
args.push(data[k]);
}
args.push(name.toLowerCase());
args.push(domain);
const mailbox = await getMailbox(name, domain);
if (!mailbox) throw new BoxError(BoxError.NOT_FOUND, 'No such mailbox');
- const result = await database.query('UPDATE mailboxes SET ownerId = ?, ownerType = ?, active = ?, enablePop3 = ? WHERE name = ? AND domain = ?', [ ownerId, ownerType, active, enablePop3, name, domain ]);
+ const [updateError, result] = await safe(database.query('UPDATE mailboxes SET ' + fields.join(', ') + ' WHERE name = ? AND domain = ?', args));
+ if (updateError) throw updateError;
if (result.affectedRows === 0) throw new BoxError(BoxError.NOT_FOUND, 'Mailbox not found');
await eventlog.add(eventlog.ACTION_MAIL_MAILBOX_UPDATE, auditSource, { name, domain, oldUserId: mailbox.userId, ownerId, ownerType, active });
await eventlog.add(eventlog.ACTION_MAIL_MAILBOX_UPDATE, auditSource, Object.assign(data, { name, domain, oldUserId: mailbox.userId }) );
}
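The hunk above replaces a fixed-column UPDATE with one assembled from whichever fields the caller submitted. A minimal sketch of that fields/args accumulation pattern (helper name and tables hypothetical, not part of the box code):

```javascript
'use strict';

// Build a parameterized UPDATE clause from a partial data object,
// mirroring the fields/args accumulation in updateMailbox above.
function buildUpdate(table, data, where) {
    const fields = [], args = [];
    for (const k in data) {
        fields.push(`${k} = ?`);
        // booleans are stored as 0/1, as in the enablePop3/active handling
        args.push(typeof data[k] === 'boolean' ? (data[k] ? 1 : 0) : data[k]);
    }
    for (const k in where) args.push(where[k]);
    const sql = `UPDATE ${table} SET ${fields.join(', ')} WHERE `
        + Object.keys(where).map(k => `${k} = ?`).join(' AND ');
    return { sql, args };
}
```

For example, `buildUpdate('mailboxes', { active: true, ownerId: 'uid-1' }, { name: 'sales', domain: 'example.com' })` yields the same shape of statement the new code sends to `database.query`.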
async function removeSolrIndex(mailbox) {
@@ -1248,6 +1284,18 @@ async function getAlias(name, domain) {
return results[0];
}
async function searchAlias(name, domain) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof domain, 'string');
const results = await database.query(`SELECT ${MAILBOX_FIELDS} FROM mailboxes WHERE ? LIKE REPLACE(REPLACE(name, '*', '%'), '_', '\\_') AND type = ? AND domain = ?`, [ name, exports.TYPE_ALIAS, domain ]);
if (results.length === 0) return null;
results.forEach(function (result) { postProcessMailbox(result); });
return results[0];
}
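The new searchAlias maps the alias wildcard syntax onto SQL LIKE: `*` becomes `%`, and literal `_` characters are escaped so LIKE does not treat them as single-character wildcards. The same matching semantics can be sketched client-side (hypothetical helper, not part of the box code):

```javascript
'use strict';

// Mirror of the SQL `? LIKE REPLACE(REPLACE(name, '*', '%'), '_', '\\_')`
// expression: '*' matches any run of characters, '_' stays literal.
function aliasMatches(aliasName, candidate) {
    const re = aliasName
        .split('*')
        .map(part => part.replace(/[\^$\\.*+?()[\]{}|]/g, '\\$&')) // escape regex metachars
        .join('.*');
    return new RegExp(`^${re}$`, 'i').test(candidate);
}
```

So `aliasMatches('git*', 'gitlab')` holds, while `aliasMatches('a_b', 'axb')` does not, because the underscore is literal.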
async function getAliases(name, domain) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof domain, 'string');
@@ -1267,7 +1315,7 @@ async function setAliases(name, domain, aliases, auditSource) {
const name = aliases[i].name.toLowerCase();
const domain = aliases[i].domain.toLowerCase();
const error = validateName(name);
const error = validateAlias(name);
if (error) throw error;
const mailDomain = await getDomain(domain);
@@ -6,10 +6,11 @@
<p>{{ passwordResetEmail.description }}</p>
<p>
<a href="<%= resetLink %>">{{ passwordResetEmail.resetAction }}</a>
</p>
<br/>
<a style="border-radius: 2px; background-color: #2196f3; color: white; padding: 6px 12px; text-decoration: none;" href="<%= resetLink %>">{{ passwordResetEmail.resetAction }}</a>
<br/>
<br/>
{{ passwordResetEmail.expireNote }}
@@ -5,9 +5,9 @@
<h3>{{ welcomeEmail.salutation }}</h3>
<h2>{{ welcomeEmail.welcomeTo }}</h2>
<p>
<a href="<%= inviteLink %>">{{ welcomeEmail.inviteLinkAction }}</a>
</p>
<br/>
<a style="border-radius: 2px; background-color: #2196f3; color: white; padding: 6px 12px; text-decoration: none;" href="<%= inviteLink %>">{{ welcomeEmail.inviteLinkAction }}</a>
<br/>
<br/>
@@ -25,7 +25,6 @@ const assert = require('assert'),
safe = require('safetydance'),
settings = require('./settings.js'),
translation = require('./translation.js'),
smtpTransport = require('nodemailer-smtp-transport'),
util = require('util');
const MAIL_TEMPLATES_DIR = path.join(__dirname, 'mail_templates');
@@ -52,14 +51,14 @@ async function sendMail(mailOptions) {
const data = await mail.getMailAuth();
const transport = nodemailer.createTransport(smtpTransport({
const transport = nodemailer.createTransport({
host: data.ip,
port: data.port,
auth: {
user: mailOptions.authUser || `no-reply@${settings.dashboardDomain()}`,
pass: data.relayToken
}
}));
});
const transportSendMail = util.promisify(transport.sendMail.bind(transport));
const [error] = await safe(transportSendMail(mailOptions));
@@ -54,6 +54,7 @@ function validateMountOptions(type, options) {
if (typeof options.remoteDir !== 'string') return new BoxError(BoxError.BAD_FIELD, 'remoteDir is not a string');
return null;
case 'ext4':
case 'xfs':
if (typeof options.diskPath !== 'string') return new BoxError(BoxError.BAD_FIELD, 'diskPath is not a string');
return null;
default:
@@ -62,7 +63,7 @@ function validateMountOptions(type, options) {
}
function isManagedProvider(provider) {
return provider === 'sshfs' || provider === 'cifs' || provider === 'nfs' || provider === 'ext4';
return provider === 'sshfs' || provider === 'cifs' || provider === 'nfs' || provider === 'ext4' || provider === 'xfs';
}
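The xfs case added in this file is rendered identically to ext4 apart from the filesystem type. A sketch of that mapping (helper name and field names hypothetical):

```javascript
'use strict';

// Map a disk-backed mount provider to mount unit fields,
// as in renderMountFile's ext4/xfs cases in the hunk below.
function diskMountFields(provider, diskPath) {
    if (provider !== 'ext4' && provider !== 'xfs') throw new Error(`unsupported provider: ${provider}`);
    return {
        what: diskPath, // like /dev/disk/by-uuid/<uuid> or /dev/disk/by-id/scsi-<id>
        type: provider,
        options: 'discard,defaults,noatime'
    };
}
```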
function mountObjectFromBackupConfig(backupConfig) {
@@ -108,6 +109,11 @@ function renderMountFile(mount) {
what = mountOptions.diskPath; // like /dev/disk/by-uuid/uuid or /dev/disk/by-id/scsi-id
options = 'discard,defaults,noatime';
break;
case 'xfs':
type = 'xfs';
what = mountOptions.diskPath; // like /dev/disk/by-uuid/uuid or /dev/disk/by-id/scsi-id
options = 'discard,defaults,noatime';
break;
case 'sshfs': {
const keyFilePath = path.join(paths.SSHFS_KEYS_DIR, `id_rsa_${mountOptions.host}`);
if (!safe.fs.writeFileSync(keyFilePath, `${mount.mountOptions.privateKey}\n`, { mode: 0o600 })) throw new BoxError(BoxError.FS_ERROR, `Could not write private key: ${safe.error.message}`);
@@ -45,7 +45,7 @@ server {
location / {
<% if ( endpoint === 'dashboard' || endpoint === 'setup' ) { %>
return 301 https://$host$request_uri;
<% } else if ( endpoint === 'app' ) { %>
<% } else if ( endpoint === 'app' || endpoint === 'external' ) { %>
return 301 https://$host$request_uri;
<% } else if ( endpoint === 'redirect' ) { %>
return 301 https://<%= redirectTo %>$request_uri;
@@ -175,6 +175,12 @@ server {
proxy_pass http://127.0.0.1:3000;
<% } else if ( endpoint === 'app' ) { %>
proxy_pass http://<%= ip %>:<%= port %>;
<% } else if ( endpoint === 'external' ) { %>
# without a variable, nginx will not start if upstream is down or unavailable
resolver 127.0.0.1 valid=30s;
set $upstream <%= upstreamUri %>;
proxy_ssl_verify off;
proxy_pass $upstream;
<% } else if ( endpoint === 'redirect' ) { %>
return 302 https://<%= redirectTo %>$request_uri;
<% } %>
@@ -216,7 +222,7 @@ server {
}
# the read timeout is between successive reads and not the whole connection
location ~ ^/api/v1/apps/.*/exec$ {
location ~ ^/api/v1/apps/.*/exec/.*/start$ {
proxy_pass http://127.0.0.1:3000;
proxy_read_timeout 30m;
}
@@ -236,6 +242,11 @@ server {
client_max_body_size 0;
}
location ~ ^/api/v1/profile/backgroundImage {
proxy_pass http://127.0.0.1:3000;
client_max_body_size 0;
}
# graphite paths (uncomment block below and visit /graphite-web/dashboard)
# remember to comment out the CSP policy as well to access the graphite dashboard
# location ~ ^/graphite-web/ {
@@ -321,6 +332,14 @@ server {
# to clear a permanent redirect on the browser
return 302 https://<%= redirectTo %>$request_uri;
}
<% } else if ( endpoint === 'external' ) { %>
location / {
# without a variable, nginx will not start if upstream is down or unavailable
resolver 127.0.0.1 valid=30s;
set $upstream <%= upstreamUri %>;
proxy_ssl_verify off;
proxy_pass $upstream;
}
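Both `external` blocks rely on the trick from the linked blog post: when `proxy_pass` gets a literal hostname, nginx resolves it at startup and refuses to start if resolution fails; routing it through a variable defers DNS resolution to request time, which in turn requires an explicit `resolver`. A sketch of rendering such a location block (plain template function standing in for the ejs template above):

```javascript
'use strict';

// Render the 'external' endpoint location block: a variable upstream
// so nginx starts even if the upstream is down or unresolvable.
function renderExternalLocation(upstreamUri) {
    if (!upstreamUri) throw new Error('upstreamUri cannot be empty'); // avoid invalid nginx config
    return [
        'location / {',
        '    # without a variable, nginx will not start if upstream is down or unavailable',
        '    resolver 127.0.0.1 valid=30s;',
        `    set $upstream ${upstreamUri};`,
        '    proxy_ssl_verify off;',
        '    proxy_pass $upstream;',
        '}'
    ].join('\n');
}
```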
<% } else if ( endpoint === 'ip' ) { %>
location /notfound.html {
root <%= sourceDir %>/dashboard/dist;
@@ -14,6 +14,7 @@ exports = module.exports = {
ALERT_REBOOT: 'reboot',
ALERT_BOX_UPDATE: 'boxUpdate',
ALERT_UPDATE_UBUNTU: 'ubuntuUpdate',
ALERT_MANUAL_APP_UPDATE: 'manualAppUpdate',
alert,
@@ -189,16 +190,16 @@ async function boxUpdateError(eventId, errorMessage) {
await add(eventId, 'Cloudron update failed', `Failed to update Cloudron: ${errorMessage}.`);
}
async function certificateRenewalError(eventId, vhost, errorMessage) {
async function certificateRenewalError(eventId, fqdn, errorMessage) {
assert.strictEqual(typeof eventId, 'string');
assert.strictEqual(typeof vhost, 'string');
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof errorMessage, 'string');
await add(eventId, `Certificate renewal of ${vhost} failed`, `Failed to renew certs of ${vhost}: ${errorMessage}. Renewal will be retried in 12 hours.`);
await add(eventId, `Certificate renewal of ${fqdn} failed`, `Failed to renew certs of ${fqdn}: ${errorMessage}. Renewal will be retried in 12 hours.`);
const admins = await users.getAdmins();
for (const admin of admins) {
await mailer.certificateRenewalError(admin.email, vhost, errorMessage);
await mailer.certificateRenewalError(admin.email, fqdn, errorMessage);
}
}
@@ -16,6 +16,7 @@ exports = module.exports = {
CLOUDRON_DEFAULT_AVATAR_FILE: path.join(__dirname + '/../assets/avatar.png'),
INFRA_VERSION_FILE: path.join(baseDir(), 'platformdata/INFRA_VERSION'),
CRON_SEED_FILE: path.join(baseDir(), 'platformdata/CRON_SEED'),
DASHBOARD_DIR: constants.TEST ? path.join(__dirname, '../../dashboard/src') : path.join(baseDir(), 'box/dashboard/dist'),
PROVIDER_FILE: '/etc/cloudron/PROVIDER',
@@ -70,7 +70,7 @@ async function setupTask(domain, auditSource) {
assert.strictEqual(typeof auditSource, 'object');
try {
await cloudron.setupDnsAndCert(constants.DASHBOARD_LOCATION, domain, auditSource, (progress) => setProgress('setup', progress.message));
await cloudron.setupDnsAndCert(constants.DASHBOARD_SUBDOMAIN, domain, auditSource, (progress) => setProgress('setup', progress.message));
await ensureDhparams();
await cloudron.setDashboardDomain(domain, auditSource);
setProgress('setup', 'Done'),
@@ -111,7 +111,7 @@ async function setup(domainConfig, sysinfoConfig, auditSource) {
dkimSelector: 'cloudron'
};
await settings.setMailLocation(domain, `${constants.DASHBOARD_LOCATION}.${domain}`); // default mail location. do this before we add the domain for upserting mail DNS
await settings.setMailLocation(domain, `${constants.DASHBOARD_SUBDOMAIN}.${domain}`); // default mail location. do this before we add the domain for upserting mail DNS
await domains.add(domain, data, auditSource);
await settings.setSysinfoConfig(sysinfoConfig);
@@ -174,7 +174,7 @@ async function restoreTask(backupConfig, remotePath, sysinfoConfig, options, aud
await reverseProxy.restoreFallbackCertificates();
const dashboardDomain = settings.dashboardDomain(); // load this fresh from after the backup.restore
if (!options.skipDnsSetup) await cloudron.setupDnsAndCert(constants.DASHBOARD_LOCATION, dashboardDomain, auditSource, (progress) => setProgress('restore', progress.message));
if (!options.skipDnsSetup) await cloudron.setupDnsAndCert(constants.DASHBOARD_SUBDOMAIN, dashboardDomain, auditSource, (progress) => setProgress('restore', progress.message));
await cloudron.setDashboardDomain(dashboardDomain, auditSource);
await settings.setBackupCredentials(backupConfig); // update just the credentials and not the policy and flags
await eventlog.add(eventlog.ACTION_RESTORE, auditSource, { remotePath });
@@ -50,15 +50,23 @@ function jwtVerify(req, res, next) {
});
}
async function basicAuthVerify(req, res, next) {
async function authorizationHeader(req, res, next) {
const appId = req.headers['x-app-id'] || '';
const credentials = basicAuth(req);
if (!appId || !credentials) return next();
if (!appId) return next();
if (!req.headers.authorization) return next();
const [error, app] = await safe(apps.get(appId));
if (error) return next(new HttpError(503, error.message));
if (!app) return next(new HttpError(503, 'Error getting app'));
if (!app.manifest.addons.proxyAuth.basicAuth) return next();
// only if the app supports bearer auth, pass it through to the app. without this flag, anyone can access the app with Bearer auth!
if (req.headers.authorization.startsWith('Bearer ') && app.manifest.addons.proxyAuth.supportsBearerAuth) return next(new HttpSuccess(200, {}));
const credentials = basicAuth(req);
if (!credentials) return next();
if (!app.manifest.addons.proxyAuth.basicAuth) return next(); // this is a flag because this allows auth to bypass 2FA
const verifyFunc = credentials.name.indexOf('@') !== -1 ? users.verifyWithEmail : users.verifyWithUsername;
const [verifyError, user] = await safe(verifyFunc(credentials.name, credentials.pass, appId));
@@ -139,7 +147,7 @@ function auth(req, res, next) {
res.set('x-remote-email', req.user.email);
res.set('x-remote-name', req.user.displayName);
return next(new HttpSuccess(200, {}));
next(new HttpSuccess(200, {}));
}
// endpoint called by login page, username and password posted as JSON body
@@ -204,11 +212,6 @@ async function logoutPage(req, res, next) {
res.redirect(302, app.manifest.addons.proxyAuth.path ? '/' : '/login');
}
function logout(req, res, next) {
res.clearCookie('authToken');
next(new HttpSuccess(200, {}));
}
// provides webhooks for the auth wall
function initializeAuthwallExpressSync() {
const app = express();
@@ -248,10 +251,10 @@ function initializeAuthwallExpressSync() {
.use(middleware.lastMile());
router.get ('/login', loginPage);
router.get ('/auth', jwtVerify, basicAuthVerify, auth); // called by nginx before accessing protected page
router.get ('/auth', jwtVerify, authorizationHeader, auth); // called by nginx before accessing protected page
router.post('/login', json, passwordAuth, authorize);
router.get ('/logout', logoutPage);
router.post('/logout', json, logout);
router.post('/logout', json, logoutPage);
return httpServer;
}
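The new authorizationHeader handler distinguishes Bearer tokens (passed through only when the app opts in via `supportsBearerAuth`) from Basic credentials (verified against Cloudron users only when `basicAuth` is set, since that path bypasses 2FA). The parsing side of that split can be sketched as (hypothetical helper, standing in for the `basic-auth` package used by the route):

```javascript
'use strict';

// Classify an Authorization header the way the /auth route does:
// Bearer tokens are opaque pass-throughs, Basic carries name/pass.
function parseAuthorization(header) {
    if (!header) return null;
    if (header.startsWith('Bearer ')) return { scheme: 'bearer', token: header.slice(7) };
    if (header.startsWith('Basic ')) {
        const decoded = Buffer.from(header.slice(6), 'base64').toString('utf8');
        const sep = decoded.indexOf(':');
        if (sep === -1) return null;
        return { scheme: 'basic', name: decoded.slice(0, sep), pass: decoded.slice(sep + 1) };
    }
    return null;
}
```

A name containing `@` would then be routed to verifyWithEmail rather than verifyWithUsername, as in the hunk above.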
@@ -1,26 +1,26 @@
'use strict';
exports = module.exports = {
setAppCertificate,
setFallbackCertificate,
setUserCertificate, // per location certificate
setFallbackCertificate, // per domain certificate
generateFallbackCertificate,
validateCertificate,
getCertificatePath,
getCertificatePath, // resolved cert path
ensureCertificate,
checkCerts,
// the 'configure' ensure a certificate and generate nginx config
// the 'configure' functions ensure a certificate and generate nginx config
configureApp,
unconfigureApp,
// these only generate nginx config
writeDefaultConfig,
writeDashboardConfig,
writeAppConfig,
writeAppConfigs,
removeAppConfigs,
restoreFallbackCertificates,
@@ -59,17 +59,14 @@ const RESTART_SERVICE_CMD = path.join(__dirname, 'scripts/restartservice.sh');
function nginxLocation(s) {
if (!s.startsWith('!')) return s;
let re = s.replace(/[\^$\\.*+?()[\]{}|]/g, '\\$&'); // https://github.com/es-shims/regexp.escape/blob/master/implementation.js
const re = s.replace(/[\^$\\.*+?()[\]{}|]/g, '\\$&'); // https://github.com/es-shims/regexp.escape/blob/master/implementation.js
return `~ ^(?!(${re.slice(1)}))`; // negative regex assertion - https://stackoverflow.com/questions/16302897/nginx-location-not-equal-to-regex
}
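nginxLocation turns a `!`-prefixed path into a negated nginx location by escaping regex metacharacters and wrapping the rest in a negative lookahead. Worked through directly (same logic as the function above, reproduced standalone):

```javascript
'use strict';

// Same logic as nginxLocation above: plain paths pass through,
// '!'-prefixed paths become a negated nginx regex location.
function nginxLocation(s) {
    if (!s.startsWith('!')) return s;
    const re = s.replace(/[\^$\\.*+?()[\]{}|]/g, '\\$&'); // escape regex metacharacters
    return `~ ^(?!(${re.slice(1)}))`; // negative lookahead: match anything except this prefix
}
```

So `nginxLocation('/app')` is returned unchanged, while `nginxLocation('!/admin')` yields `~ ^(?!(/admin))`.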
async function getAcmeApi(domainObject) {
assert.strictEqual(typeof domainObject, 'object');
const acmeApi = acme2;
let apiOptions = { prod: false, performHttpAuthorization: false, wildcard: false, email: '' };
const apiOptions = { prod: false, performHttpAuthorization: false, wildcard: false, email: '' };
apiOptions.prod = domainObject.tlsConfig.provider.match(/.*-prod/) !== null; // matches 'le-prod' or 'letsencrypt-prod'
apiOptions.performHttpAuthorization = domainObject.provider.match(/noop|manual|wildcard/) !== null;
apiOptions.wildcard = !!domainObject.tlsConfig.wildcard;
@@ -81,7 +78,7 @@ async function getAcmeApi(domainObject) {
const [error, owner] = await safe(users.getOwner());
apiOptions.email = (error || !owner) ? 'webmaster@cloudron.io' : owner.email; // can error if not activated yet
return { acmeApi, apiOptions };
return { acme2, apiOptions };
}
function getExpiryDate(certFilePath) {
@@ -144,19 +141,19 @@ function providerMatchesSync(domainObject, certFilePath, apiOptions) {
// note: https://tools.ietf.org/html/rfc4346#section-7.4.2 (certificate_list) requires that the
// servers certificate appears first (and not the intermediate cert)
function validateCertificate(location, domainObject, certificate) {
assert.strictEqual(typeof location, 'string');
function validateCertificate(subdomain, domainObject, certificate) {
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof domainObject, 'object');
assert(certificate && typeof certificate, 'object');
const cert = certificate.cert, key = certificate.key;
const { cert, key } = certificate;
// check for empty cert and key strings
if (!cert && key) return new BoxError(BoxError.BAD_FIELD, 'missing cert');
if (cert && !key) return new BoxError(BoxError.BAD_FIELD, 'missing key');
// -checkhost checks for SAN or CN exclusively. SAN takes precedence and if present, ignores the CN.
const fqdn = dns.fqdn(location, domainObject);
const fqdn = dns.fqdn(subdomain, domainObject);
let result = safe.child_process.execSync(`openssl x509 -noout -checkhost "${fqdn}"`, { encoding: 'utf8', input: cert });
if (result === null) return new BoxError(BoxError.BAD_FIELD, 'Unable to get certificate subject:' + safe.error.message);
@@ -198,7 +195,7 @@ async function generateFallbackCertificate(domain) {
let opensslConfWithSan;
const cn = domain;
debug(`generateFallbackCertificateSync: domain=${domain} cn=${cn}`);
debug(`generateFallbackCertificate: domain=${domain} cn=${cn}`);
opensslConfWithSan = `${opensslConf}\n[SAN]\nsubjectAltName=DNS:${domain},DNS:*.${cn}\n`;
const configFile = path.join(os.tmpdir(), 'openssl-' + crypto.randomBytes(4).readUInt32LE(0) + '.conf');
@@ -219,14 +216,13 @@ async function generateFallbackCertificate(domain) {
return { cert, key };
}
async function setFallbackCertificate(domain, fallback) {
async function setFallbackCertificate(domain, certificate) {
assert.strictEqual(typeof domain, 'string');
assert(fallback && typeof fallback === 'object');
assert.strictEqual(typeof fallback, 'object');
assert(certificate && typeof certificate === 'object');
debug(`setFallbackCertificate: setting certs for domain ${domain}`);
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, `${domain}.host.cert`), fallback.cert)) throw new BoxError(BoxError.FS_ERROR, safe.error.message);
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, `${domain}.host.key`), fallback.key)) throw new BoxError(BoxError.FS_ERROR, safe.error.message);
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, `${domain}.host.cert`), certificate.cert)) throw new BoxError(BoxError.FS_ERROR, safe.error.message);
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, `${domain}.host.key`), certificate.key)) throw new BoxError(BoxError.FS_ERROR, safe.error.message);
// TODO: maybe the cert is being used by the mail container
await reload();
@@ -250,55 +246,36 @@ function getFallbackCertificatePathSync(domain) {
return { certFilePath, keyFilePath };
}
function getAppCertificatePathSync(vhost) {
assert.strictEqual(typeof vhost, 'string');
function getUserCertificatePathSync(fqdn) {
assert.strictEqual(typeof fqdn, 'string');
const certFilePath = path.join(paths.NGINX_CERT_DIR, `${vhost}.user.cert`);
const keyFilePath = path.join(paths.NGINX_CERT_DIR, `${vhost}.user.key`);
const certFilePath = path.join(paths.NGINX_CERT_DIR, `${fqdn}.user.cert`);
const keyFilePath = path.join(paths.NGINX_CERT_DIR, `${fqdn}.user.key`);
return { certFilePath, keyFilePath };
}
function getAcmeCertificatePathSync(vhost, domainObject) {
assert.strictEqual(typeof vhost, 'string'); // this can contain wildcard domain (for alias domains)
function getAcmeCertificatePathSync(fqdn, domainObject) {
assert.strictEqual(typeof fqdn, 'string'); // this can contain wildcard domain (for alias domains)
assert.strictEqual(typeof domainObject, 'object');
let certName, certFilePath, keyFilePath, csrFilePath, acmeChallengesDir = paths.ACME_CHALLENGES_DIR;
if (vhost !== domainObject.domain && domainObject.tlsConfig.wildcard) { // bare domain is not part of wildcard SAN
certName = dns.makeWildcard(vhost).replace('*.', '_.');
if (fqdn !== domainObject.domain && domainObject.tlsConfig.wildcard) { // bare domain is not part of wildcard SAN
certName = dns.makeWildcard(fqdn).replace('*.', '_.');
certFilePath = path.join(paths.NGINX_CERT_DIR, `${certName}.cert`);
keyFilePath = path.join(paths.NGINX_CERT_DIR, `${certName}.key`);
csrFilePath = path.join(paths.NGINX_CERT_DIR, `${certName}.csr`);
} else {
certName = vhost;
certFilePath = path.join(paths.NGINX_CERT_DIR, `${vhost}.cert`);
keyFilePath = path.join(paths.NGINX_CERT_DIR, `${vhost}.key`);
csrFilePath = path.join(paths.NGINX_CERT_DIR, `${vhost}.csr`);
certName = fqdn;
certFilePath = path.join(paths.NGINX_CERT_DIR, `${fqdn}.cert`);
keyFilePath = path.join(paths.NGINX_CERT_DIR, `${fqdn}.key`);
csrFilePath = path.join(paths.NGINX_CERT_DIR, `${fqdn}.csr`);
}
return { certName, certFilePath, keyFilePath, csrFilePath, acmeChallengesDir };
}
async function setAppCertificate(location, domainObject, certificate) {
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof domainObject, 'object');
assert.strictEqual(typeof certificate, 'object');
const fqdn = dns.fqdn(location, domainObject);
const { certFilePath, keyFilePath } = getAppCertificatePathSync(fqdn);
if (certificate.cert && certificate.key) {
if (!safe.fs.writeFileSync(certFilePath, certificate.cert)) throw safe.error;
if (!safe.fs.writeFileSync(keyFilePath, certificate.key)) throw safe.error;
} else { // remove existing cert/key
if (!safe.fs.unlinkSync(certFilePath)) debug(`Error removing cert: ${safe.error.message}`);
if (!safe.fs.unlinkSync(keyFilePath)) debug(`Error removing key: ${safe.error.message}`);
}
await reload();
}
async function getCertificatePath(fqdn, domain) {
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof domain, 'string');
@@ -309,38 +286,38 @@ async function getCertificatePath(fqdn, domain) {
const domainObject = await domains.get(domain);
const appCertPath = getAppCertificatePathSync(fqdn); // user cert always wins
if (fs.existsSync(appCertPath.certFilePath) && fs.existsSync(appCertPath.keyFilePath)) return appCertPath;
const userPath = getUserCertificatePathSync(fqdn); // user cert always wins
if (fs.existsSync(userPath.certFilePath) && fs.existsSync(userPath.keyFilePath)) return userPath;
if (domainObject.tlsConfig.provider === 'fallback') return getFallbackCertificatePathSync(domain);
const acmeCertPath = getAcmeCertificatePathSync(fqdn, domainObject);
if (fs.existsSync(acmeCertPath.certFilePath) && fs.existsSync(acmeCertPath.keyFilePath)) return acmeCertPath;
const acmePath = getAcmeCertificatePathSync(fqdn, domainObject);
if (fs.existsSync(acmePath.certFilePath) && fs.existsSync(acmePath.keyFilePath)) return acmePath;
return getFallbackCertificatePathSync(domain);
}
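getCertificatePath resolves certificates in a fixed order: a user-supplied cert always wins, then the domain-wide fallback cert when the domain's tlsConfig provider is `fallback`, then an existing ACME cert, and finally the fallback again. That precedence can be sketched as a pure function (filesystem existence checks abstracted into booleans; names hypothetical):

```javascript
'use strict';

// Precedence from getCertificatePath above:
// user cert > forced fallback (provider === 'fallback') > existing acme cert > fallback.
function resolveCertificateSource({ hasUserCert, provider, hasAcmeCert }) {
    if (hasUserCert) return 'user';
    if (provider === 'fallback') return 'fallback';
    if (hasAcmeCert) return 'acme';
    return 'fallback';
}
```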
async function checkAppCertificate(vhost, domainObject) {
assert.strictEqual(typeof vhost, 'string'); // this can contain wildcard domain (for alias domains)
async function syncUserCertificate(fqdn, domainObject) {
assert.strictEqual(typeof fqdn, 'string'); // this can contain wildcard domain (for alias domains)
assert.strictEqual(typeof domainObject, 'object');
const subdomain = vhost.substr(0, vhost.length - domainObject.domain.length - 1);
const certificate = await apps.getCertificate(subdomain, domainObject.domain);
if (!certificate) return null;
const subdomain = fqdn.substr(0, fqdn.length - domainObject.domain.length - 1);
const userCertificate = await apps.getCertificate(subdomain, domainObject.domain);
if (!userCertificate) return null;
const { certFilePath, keyFilePath } = getAppCertificatePathSync(vhost);
const { certFilePath, keyFilePath } = getUserCertificatePathSync(fqdn);
if (!safe.fs.writeFileSync(certFilePath, certificate.cert)) throw new BoxError(BoxError.FS_ERROR, `Failed to write certificate: ${safe.error.message}`);
if (!safe.fs.writeFileSync(keyFilePath, certificate.key)) throw new BoxError(BoxError.FS_ERROR, `Failed to write key: ${safe.error.message}`);
if (!safe.fs.writeFileSync(certFilePath, userCertificate.cert)) throw new BoxError(BoxError.FS_ERROR, `Failed to write certificate: ${safe.error.message}`);
if (!safe.fs.writeFileSync(keyFilePath, userCertificate.key)) throw new BoxError(BoxError.FS_ERROR, `Failed to write key: ${safe.error.message}`);
return { certFilePath, keyFilePath };
}
async function checkAcmeCertificate(vhost, domainObject) {
assert.strictEqual(typeof vhost, 'string'); // this can contain wildcard domain (for alias domains)
async function syncAcmeCertificate(fqdn, domainObject) {
assert.strictEqual(typeof fqdn, 'string'); // this can contain wildcard domain (for alias domains)
assert.strictEqual(typeof domainObject, 'object');
const { certName, certFilePath, keyFilePath, csrFilePath } = getAcmeCertificatePathSync(vhost, domainObject);
const { certName, certFilePath, keyFilePath, csrFilePath } = getAcmeCertificatePathSync(fqdn, domainObject);
const privateKey = await blobs.get(`${blobs.CERT_PREFIX}-${certName}.key`);
const cert = await blobs.get(`${blobs.CERT_PREFIX}-${certName}.cert`);
@@ -356,11 +333,11 @@ async function checkAcmeCertificate(vhost, domainObject) {
return { certFilePath, keyFilePath };
}
async function updateCertBlobs(vhost, domainObject) {
assert.strictEqual(typeof vhost, 'string'); // this can contain wildcard domain (for alias domains)
async function updateCertBlobs(fqdn, domainObject) {
assert.strictEqual(typeof fqdn, 'string'); // this can contain wildcard domain (for alias domains)
assert.strictEqual(typeof domainObject, 'object');
const { certName, certFilePath, keyFilePath, csrFilePath } = getAcmeCertificatePathSync(vhost, domainObject);
const { certName, certFilePath, keyFilePath, csrFilePath } = getAcmeCertificatePathSync(fqdn, domainObject);
const privateKey = safe.fs.readFileSync(keyFilePath);
if (!privateKey) throw new BoxError(BoxError.FS_ERROR, `Failed to read private key: ${safe.error.message}`);
@@ -376,76 +353,76 @@ async function updateCertBlobs(vhost, domainObject) {
await blobs.set(`${blobs.CERT_PREFIX}-${certName}.csr`, csr);
}
async function ensureCertificate(vhost, domain, auditSource) {
assert.strictEqual(typeof vhost, 'string');
async function ensureCertificate(subdomain, domain, auditSource) {
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof auditSource, 'object');
const domainObject = await domains.get(domain);
let bundle = await checkAppCertificate(vhost, domainObject);
if (bundle) return { bundle, renewed: false };
const userCertificatePath = await syncUserCertificate(subdomain, domainObject);
if (userCertificatePath) return { certificatePath: userCertificatePath, renewed: false };
if (domainObject.tlsConfig.provider === 'fallback') {
debug(`ensureCertificate: ${vhost} will use fallback certs`);
debug(`ensureCertificate: ${subdomain} will use fallback certs`);
return { bundle: getFallbackCertificatePathSync(domain), renewed: false };
return { certificatePath: getFallbackCertificatePathSync(domain), renewed: false };
}
const { acmeApi, apiOptions } = await getAcmeApi(domainObject);
const { acme2, apiOptions } = await getAcmeApi(domainObject);
let notAfter = null;
const [, currentBundle] = await safe(checkAcmeCertificate(vhost, domainObject));
if (currentBundle) {
debug(`ensureCertificate: ${vhost} certificate already exists at ${currentBundle.keyFilePath}`);
notAfter = getExpiryDate(currentBundle.certFilePath);
const [, acmeCertificatePath] = await safe(syncAcmeCertificate(subdomain, domainObject));
if (acmeCertificatePath) {
debug(`ensureCertificate: ${subdomain} certificate already exists at ${acmeCertificatePath.keyFilePath}`);
notAfter = getExpiryDate(acmeCertificatePath.certFilePath);
const isExpiring = (notAfter - new Date()) <= (30 * 24 * 60 * 60 * 1000); // expiring in a month
if (!isExpiring && providerMatchesSync(domainObject, currentBundle.certFilePath, apiOptions)) return { bundle: currentBundle, renewed: false };
debug(`ensureCertificate: ${vhost} cert requires renewal`);
if (!isExpiring && providerMatchesSync(domainObject, acmeCertificatePath.certFilePath, apiOptions)) return { certificatePath: acmeCertificatePath, renewed: false };
debug(`ensureCertificate: ${subdomain} cert requires renewal`);
} else {
debug(`ensureCertificate: ${vhost} cert does not exist`);
debug(`ensureCertificate: ${subdomain} cert does not exist`);
}
debug('ensureCertificate: getting certificate for %s with options %j', vhost, apiOptions);
debug(`ensureCertificate: getting certificate for ${subdomain} with options ${JSON.stringify(apiOptions)}`);
const acmePaths = getAcmeCertificatePathSync(vhost, domainObject);
let [error] = await safe(acmeApi.getCertificate(vhost, domain, acmePaths, apiOptions));
const acmePaths = getAcmeCertificatePathSync(subdomain, domainObject);
const [error] = await safe(acme2.getCertificate(subdomain, domain, acmePaths, apiOptions));
debug(`ensureCertificate: error: ${error ? error.message : 'null'} cert: ${acmePaths.certFilePath || 'null'}`);
await safe(eventlog.add(currentBundle ? eventlog.ACTION_CERTIFICATE_RENEWAL : eventlog.ACTION_CERTIFICATE_NEW, auditSource, { domain: vhost, errorMessage: error ? error.message : '', notAfter }));
await safe(eventlog.add(acmeCertificatePath ? eventlog.ACTION_CERTIFICATE_RENEWAL : eventlog.ACTION_CERTIFICATE_NEW, auditSource, { domain: subdomain, errorMessage: error ? error.message : '', notAfter }));
if (error && currentBundle && (notAfter - new Date() > 0)) { // still some life left in this certificate
debug('ensureCertificate: continue using existing bundle since renewal failed');
return { bundle: currentBundle, renewed: false };
if (error && acmeCertificatePath && (notAfter - new Date() > 0)) { // still some life left in this certificate
debug('ensureCertificate: continue using existing certificate since renewal failed');
return { certificatePath: acmeCertificatePath, renewed: false };
}
if (!error) {
[error] = await safe(updateCertBlobs(vhost, domainObject));
if (!error) return { bundle: { certFilePath: acmePaths.certFilePath, keyFilePath: acmePaths.keyFilePath }, renewed: true };
const [updateCertError] = await safe(updateCertBlobs(subdomain, domainObject));
if (!updateCertError) return { certificatePath: { certFilePath: acmePaths.certFilePath, keyFilePath: acmePaths.keyFilePath }, renewed: true };
}
debug(`ensureCertificate: renewal of ${vhost} failed. using fallback certificates for ${domain}`);
debug(`ensureCertificate: renewal of ${subdomain} failed. using fallback certificates for ${domain}`);
return { bundle: getFallbackCertificatePathSync(domain), renewed: false };
return { certificatePath: getFallbackCertificatePathSync(domain), renewed: false };
}
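ensureCertificate reuses an existing ACME certificate only when it neither expires within 30 days nor was issued under a different provider configuration. The expiry arithmetic from the function above, isolated:

```javascript
'use strict';

// Same 30-day renewal window as ensureCertificate above:
// Date subtraction yields milliseconds.
function isExpiring(notAfter, now) {
    return (notAfter - now) <= (30 * 24 * 60 * 60 * 1000);
}
```

A cert 45 days from expiry is kept; one 19 days out is renewed.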
async function writeDashboardNginxConfig(vhost, bundle) {
assert.strictEqual(typeof vhost, 'string');
assert.strictEqual(typeof bundle, 'object');
async function writeDashboardNginxConfig(fqdn, certificatePath) {
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof certificatePath, 'object');
const data = {
sourceDir: path.resolve(__dirname, '..'),
vhost: vhost,
vhost: fqdn,
hasIPv6: sysinfo.hasIPv6(),
endpoint: 'dashboard',
certFilePath: bundle.certFilePath,
keyFilePath: bundle.keyFilePath,
certFilePath: certificatePath.certFilePath,
keyFilePath: certificatePath.keyFilePath,
robotsTxtQuoted: JSON.stringify('User-agent: *\nDisallow: /\n'),
proxyAuth: { enabled: false, id: null, location: nginxLocation('/') },
ocsp: await isOcspEnabled(bundle.certFilePath)
ocsp: await isOcspEnabled(certificatePath.certFilePath)
};
const nginxConf = ejs.render(NGINX_APPCONFIG_EJS, data);
const nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, `${vhost}.conf`);
const nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, `${fqdn}.conf`);
if (!safe.fs.writeFileSync(nginxConfigFilename, nginxConf)) throw new BoxError(BoxError.FS_ERROR, safe.error);
@@ -457,10 +434,10 @@ async function writeDashboardConfig(domainObject) {
debug(`writeDashboardConfig: writing admin config for ${domainObject.domain}`);
const dashboardFqdn = dns.fqdn(constants.DASHBOARD_LOCATION, domainObject);
const bundle = await getCertificatePath(dashboardFqdn, domainObject.domain);
const dashboardFqdn = dns.fqdn(constants.DASHBOARD_SUBDOMAIN, domainObject);
const certificatePath = await getCertificatePath(dashboardFqdn, domainObject.domain);
await writeDashboardNginxConfig(dashboardFqdn, bundle);
await writeDashboardNginxConfig(dashboardFqdn, certificatePath);
}
function getNginxConfigFilename(app, fqdn, type) {
@@ -481,11 +458,11 @@ function getNginxConfigFilename(app, fqdn, type) {
return path.join(paths.NGINX_APPCONFIG_DIR, `${app.id}${nginxConfigFilenameSuffix}.conf`);
}
async function writeAppNginxConfig(app, fqdn, type, bundle) {
async function writeAppNginxConfig(app, fqdn, type, certificatePath) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof bundle, 'object');
assert.strictEqual(typeof certificatePath, 'object');
const data = {
sourceDir: path.resolve(__dirname, '..'),
@@ -495,17 +472,28 @@ async function writeAppNginxConfig(app, fqdn, type, bundle) {
port: null,
endpoint: null,
redirectTo: null,
certFilePath: bundle.certFilePath,
keyFilePath: bundle.keyFilePath,
certFilePath: certificatePath.certFilePath,
keyFilePath: certificatePath.keyFilePath,
robotsTxtQuoted: null,
cspQuoted: null,
hideHeaders: [],
proxyAuth: { enabled: false },
ocsp: await isOcspEnabled(bundle.certFilePath)
upstreamUri: '', // only for endpoint === external
ocsp: await isOcspEnabled(certificatePath.certFilePath)
};
if (type === apps.LOCATION_TYPE_PRIMARY || type === apps.LOCATION_TYPE_ALIAS || type === apps.LOCATION_TYPE_SECONDARY) {
data.endpoint = 'app';
if (app.manifest.id === constants.PROXY_APP_APPSTORE_ID) {
data.endpoint = 'external';
// prevent generating invalid nginx configs
if (!app.upstreamUri) throw new BoxError(BoxError.BAD_FIELD, 'upstreamUri cannot be empty');
data.upstreamUri = app.upstreamUri;
}
// maybe these should become per domain at some point
const reverseProxyConfig = app.reverseProxyConfig || {}; // some of our code uses fake app objects
if (reverseProxyConfig.robotsTxt) data.robotsTxtQuoted = JSON.stringify(app.reverseProxyConfig.robotsTxt);
@@ -544,20 +532,34 @@ async function writeAppNginxConfig(app, fqdn, type, bundle) {
await reload();
}
async function writeAppConfig(app) {
async function writeAppConfigs(app) {
assert.strictEqual(typeof app, 'object');
const appDomains = [{ domain: app.domain, fqdn: app.fqdn, type: apps.LOCATION_TYPE_PRIMARY }]
.concat(app.secondaryDomains.map(sd => { return { domain: sd.domain, fqdn: sd.fqdn, type: apps.LOCATION_TYPE_SECONDARY }; }))
.concat(app.redirectDomains.map(rd => { return { domain: rd.domain, fqdn: rd.fqdn, type: apps.LOCATION_TYPE_REDIRECT }; }))
.concat(app.aliasDomains.map(ad => { return { domain: ad.domain, fqdn: ad.fqdn, type: apps.LOCATION_TYPE_ALIAS }; }));
const appDomains = [{ domain: app.domain, fqdn: app.fqdn, certificate: app.certificate, type: apps.LOCATION_TYPE_PRIMARY }]
.concat(app.secondaryDomains.map(sd => { return { domain: sd.domain, certificate: sd.certificate, fqdn: sd.fqdn, type: apps.LOCATION_TYPE_SECONDARY }; }))
.concat(app.redirectDomains.map(rd => { return { domain: rd.domain, certificate: rd.certificate, fqdn: rd.fqdn, type: apps.LOCATION_TYPE_REDIRECT }; }))
.concat(app.aliasDomains.map(ad => { return { domain: ad.domain, certificate: ad.certificate, fqdn: ad.fqdn, type: apps.LOCATION_TYPE_ALIAS }; }));
for (const appDomain of appDomains) {
const bundle = await getCertificatePath(appDomain.fqdn, appDomain.domain);
await writeAppNginxConfig(app, appDomain.fqdn, appDomain.type, bundle);
const certificatePath = await getCertificatePath(appDomain.fqdn, appDomain.domain);
await writeAppNginxConfig(app, appDomain.fqdn, appDomain.type, certificatePath);
}
}
async function setUserCertificate(app, fqdn, certificate) {
const { certFilePath, keyFilePath } = getUserCertificatePathSync(fqdn);
if (certificate !== null) {
if (!safe.fs.writeFileSync(certFilePath, certificate.cert)) throw safe.error;
if (!safe.fs.writeFileSync(keyFilePath, certificate.key)) throw safe.error;
} else { // remove existing cert/key
if (!safe.fs.unlinkSync(certFilePath)) debug(`Error removing cert: ${safe.error.message}`);
if (!safe.fs.unlinkSync(keyFilePath)) debug(`Error removing key: ${safe.error.message}`);
}
await writeAppConfigs(app);
}
async function configureApp(app, auditSource) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof auditSource, 'object');
@@ -571,7 +573,7 @@ async function configureApp(app, auditSource) {
await ensureCertificate(appDomain.fqdn, appDomain.domain, auditSource);
}
await writeAppConfig(app);
await writeAppConfigs(app);
}
async function unconfigureApp(app) {
@@ -623,7 +625,7 @@ async function renewCerts(options, auditSource, progressCallback) {
progressCallback({ percent: progress, message: `Ensuring certs of ${appDomain.fqdn}` });
progress += Math.round(100/appDomains.length);
const { bundle, renewed } = await ensureCertificate(appDomain.fqdn, appDomain.domain, auditSource);
const { certificatePath, renewed } = await ensureCertificate(appDomain.fqdn, appDomain.domain, auditSource);
if (renewed) renewedCerts.push(appDomain.fqdn);
@@ -631,15 +633,15 @@ async function renewCerts(options, auditSource, progressCallback) {
// hack to check if the app's cert changed or not. this doesn't handle prod/staging le change since they use same file name
let currentNginxConfig = safe.fs.readFileSync(appDomain.nginxConfigFilename, 'utf8') || '';
if (currentNginxConfig.includes(bundle.certFilePath)) continue;
if (currentNginxConfig.includes(certificatePath.certFilePath)) continue;
debug(`renewCerts: creating new nginx config since ${appDomain.nginxConfigFilename} does not have ${bundle.certFilePath}`);
debug(`renewCerts: creating new nginx config since ${appDomain.nginxConfigFilename} does not have ${certificatePath.certFilePath}`);
// reconfigure since the cert changed
if (appDomain.type === 'webadmin' || appDomain.type === 'webadmin+mail') {
await writeDashboardNginxConfig(settings.dashboardFqdn(), bundle);
await writeDashboardNginxConfig(settings.dashboardFqdn(), certificatePath);
} else {
await writeAppNginxConfig(appDomain.app, appDomain.fqdn, appDomain.type, bundle);
await writeAppNginxConfig(appDomain.app, appDomain.fqdn, appDomain.type, certificatePath);
}
}
@@ -657,14 +659,15 @@ async function renewCerts(options, auditSource, progressCallback) {
}
}
async function cleanupCerts(auditSource) {
async function cleanupCerts(auditSource, progressCallback) {
assert.strictEqual(typeof auditSource, 'object');
assert.strictEqual(typeof progressCallback, 'function');
const filenames = await fs.promises.readdir(paths.NGINX_CERT_DIR);
const certFilenames = filenames.filter(f => f.endsWith('.cert'));
const now = new Date();
debug('cleanupCerts: start');
progressCallback({ message: 'Checking expired certs for removal' });
const fqdns = [];
@@ -675,7 +678,7 @@ async function cleanupCerts(auditSource) {
if (now - notAfter >= (60 * 60 * 24 * 30 * 6 * 1000)) { // expired 6 months ago
const fqdn = certFilename.replace(/\.cert$/, '');
debug(`cleanupCerts: deleting certs of ${fqdn}`);
progressCallback({ message: `deleting certs of ${fqdn}` });
// it is safe to delete the certs of stopped apps because their nginx configs are removed
safe.fs.unlinkSync(certFilePath);
@@ -701,7 +704,7 @@ async function checkCerts(options, auditSource, progressCallback) {
assert.strictEqual(typeof progressCallback, 'function');
await renewCerts(options, auditSource, progressCallback);
await cleanupCerts(auditSource);
await cleanupCerts(auditSource, progressCallback);
}
function removeAppConfigs() {
@@ -710,7 +713,7 @@ function removeAppConfigs() {
debug('removeAppConfigs: removing nginx configs of apps');
// remove all configs which are not the default or current dashboard
for (let appConfigFile of fs.readdirSync(paths.NGINX_APPCONFIG_DIR)) {
for (const appConfigFile of fs.readdirSync(paths.NGINX_APPCONFIG_DIR)) {
if (appConfigFile !== constants.NGINX_DEFAULT_CONFIG_FILE_NAME && appConfigFile !== dashboardConfigFilename) {
fs.unlinkSync(path.join(paths.NGINX_APPCONFIG_DIR, appConfigFile));
}
+27 -13
@@ -8,8 +8,8 @@ exports = module.exports = {
authorizeOperator,
};
const accesscontrol = require('../accesscontrol.js'),
apps = require('../apps.js'),
const apps = require('../apps.js'),
tokens = require('../tokens.js'),
assert = require('assert'),
BoxError = require('../boxerror.js'),
externalLdap = require('../externalldap.js'),
@@ -43,8 +43,13 @@ async function passwordAuth(req, res, next) {
if (!user.ghost && !user.appPassword && user.twoFactorAuthenticationEnabled) {
if (!totpToken) return next(new HttpError(401, 'A totpToken must be provided'));
const verified = speakeasy.totp.verify({ secret: user.twoFactorAuthenticationSecret, encoding: 'base32', token: totpToken, window: 2 });
if (!verified) return next(new HttpError(401, 'Invalid totpToken'));
if (user.source === 'ldap') {
const [error] = await safe(externalLdap.verifyPasswordAndTotpToken(user, password, totpToken));
if (error) return next(new HttpError(401, 'Invalid totpToken'));
} else {
const verified = speakeasy.totp.verify({ secret: user.twoFactorAuthenticationSecret, encoding: 'base32', token: totpToken, window: 2 });
if (!verified) return next(new HttpError(401, 'Invalid totpToken'));
}
}
req.user = user;
@@ -53,27 +58,32 @@ async function passwordAuth(req, res, next) {
}
async function tokenAuth(req, res, next) {
let token;
let accessToken;
// this determines the priority
if (req.body && req.body.access_token) token = req.body.access_token;
if (req.query && req.query.access_token) token = req.query.access_token;
if (req.body && req.body.access_token) accessToken = req.body.access_token;
if (req.query && req.query.access_token) accessToken = req.query.access_token;
if (req.headers && req.headers.authorization) {
const parts = req.headers.authorization.split(' ');
if (parts.length == 2) {
const [scheme, credentials] = parts;
if (/^Bearer$/i.test(scheme)) token = credentials;
if (/^Bearer$/i.test(scheme)) accessToken = credentials;
}
}
if (!token) return next(new HttpError(401, 'Token required'));
if (!accessToken) return next(new HttpError(401, 'Token required'));
const [error, user] = await safe(accesscontrol.verifyToken(token));
if (error && error.reason === BoxError.INVALID_CREDENTIALS) return next(new HttpError(401, error.message));
if (error) return next(new HttpError(500, error.message));
const token = await tokens.getByAccessToken(accessToken);
if (!token) return next(new HttpError(401, 'No such token'));
req.access_token = token; // used in logout route
const user = await users.get(token.identifier);
if (!user) return next(new HttpError(401, 'User not found'));
if (!user.active) return next(new HttpError(401, 'User not active'));
await safe(tokens.update(token.id, { lastUsedTime: new Date() })); // ignore any error
req.token = token;
req.user = user;
next();
@@ -84,8 +94,10 @@ function authorize(requiredRole) {
return function (req, res, next) {
assert.strictEqual(typeof req.user, 'object');
assert.strictEqual(typeof req.token, 'object');
if (users.compareRoles(req.user.role, requiredRole) < 0) return next(new HttpError(403, `role '${requiredRole}' is required but user has only '${req.user.role}'`));
if (!tokens.hasScope(req.token, req.method, req.path)) return next(new HttpError(403, 'access token does not have this scope'));
next();
};
@@ -95,7 +107,9 @@ async function authorizeOperator(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
assert.strictEqual(typeof req.user, 'object');
assert.strictEqual(typeof req.app, 'object');
assert.strictEqual(typeof req.token, 'object');
if (!tokens.hasScope(req.token, req.method, req.path)) return next(new HttpError(403, 'access token does not have this scope'));
if (apps.isOperator(req.app, req.user)) return next();
return next(new HttpError(403, 'user is not an operator'));
+90
@@ -0,0 +1,90 @@
'use strict';
exports = module.exports = {
listByUser,
add,
get,
update,
remove,
getIcon
};
const assert = require('assert'),
applinks = require('../applinks.js'),
BoxError = require('../boxerror.js'),
safe = require('safetydance'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess;
async function listByUser(req, res, next) {
assert.strictEqual(typeof req.user, 'object');
const [error, result] = await safe(applinks.listByUser(req.user));
if (error) return next(BoxError.toHttpError(error));
// we have a separate route for this
result.forEach(function (a) { delete a.icon; });
next(new HttpSuccess(200, { applinks: result }));
}
async function add(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.upstreamUri || typeof req.body.upstreamUri !== 'string') return next(new HttpError(400, 'upstreamUri must be a non-empty string'));
if ('label' in req.body && typeof req.body.label !== 'string') return next(new HttpError(400, 'label must be a string'));
if ('tags' in req.body && !Array.isArray(req.body.tags)) return next(new HttpError(400, 'tags must be an array with strings'));
if ('accessRestriction' in req.body && typeof req.body.accessRestriction !== 'object') return next(new HttpError(400, 'accessRestriction must be an object'));
const [error] = await safe(applinks.add(req.body));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(201, {}));
}
async function get(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
const [error, result] = await safe(applinks.get(req.params.id));
if (error) return next(BoxError.toHttpError(error));
// we have a separate route for this
delete result.icon;
next(new HttpSuccess(200, result));
}
async function update(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
assert.strictEqual(typeof req.body, 'object');
if (!req.body.upstreamUri || typeof req.body.upstreamUri !== 'string') return next(new HttpError(400, 'upstreamUri must be a non-empty string'));
if ('label' in req.body && typeof req.body.label !== 'string') return next(new HttpError(400, 'label must be a string'));
if ('tags' in req.body && !Array.isArray(req.body.tags)) return next(new HttpError(400, 'tags must be an array with strings'));
if ('accessRestriction' in req.body && typeof req.body.accessRestriction !== 'object') return next(new HttpError(400, 'accessRestriction must be an object'));
if ('icon' in req.body && typeof req.body.icon !== 'string') return next(new HttpError(400, 'icon must be a string'));
const [error] = await safe(applinks.update(req.params.id, req.body));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(202, {}));
}
async function remove(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
const [error] = await safe(applinks.remove(req.params.id));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(204));
}
async function getIcon(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
const [error, icon] = await safe(applinks.getIcon(req.params.id, { original: req.query.original }));
if (error) return next(BoxError.toHttpError(error));
if (!icon) return next(new HttpError(404, 'no such icon'));
res.send(icon);
}
+68 -21
@@ -35,14 +35,19 @@ exports = module.exports = {
setMailbox,
setInbox,
setLocation,
setDataDir,
setStorage,
setMounts,
setUpstreamUri,
stop,
start,
restart,
exec,
execWebSocket,
createExec,
startExec,
startExecWebSocket,
getExec,
checkForUpdates,
clone,
@@ -61,6 +66,7 @@ const apps = require('../apps.js'),
assert = require('assert'),
AuditSource = require('../auditsource.js'),
BoxError = require('../boxerror.js'),
constants = require('../constants.js'),
debug = require('debug')('box:routes/apps'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess,
@@ -167,9 +173,13 @@ async function install(req, res, next) {
if ('skipDnsSetup' in data && typeof data.skipDnsSetup !== 'boolean') return next(new HttpError(400, 'skipDnsSetup must be boolean'));
if ('enableMailbox' in data && typeof data.enableMailbox !== 'boolean') return next(new HttpError(400, 'enableMailbox must be boolean'));
if ('upstreamUri' in data && (typeof data.upstreamUri !== 'string' || !data.upstreamUri)) return next(new HttpError(400, 'upstreamUri must be a non emptry string'));
let [error, result] = await safe(apps.downloadManifest(data.appStoreId, data.manifest));
if (error) return next(BoxError.toHttpError(error));
if (result.manifest.appStoreId === constants.PROXY_APP_APPSTORE_ID && (typeof data.upstreamUri !== 'string' || !data.upstreamUri)) return next(new HttpError(400, 'upstreamUri must be a non empty string'));
if (safe.query(result.manifest, 'addons.docker') && req.user.role !== users.ROLE_OWNER) return next(new HttpError(403, '"owner" role is required to install app with docker addon'));
data.appStoreId = result.appStoreId;
@@ -369,6 +379,7 @@ async function setMailbox(req, res, next) {
if (req.body.enable) {
if (req.body.mailboxName !== null && typeof req.body.mailboxName !== 'string') return next(new HttpError(400, 'mailboxName must be a string'));
if (typeof req.body.mailboxDomain !== 'string') return next(new HttpError(400, 'mailboxDomain must be a string'));
if ('mailboxDisplayName' in req.body && typeof req.body.mailboxDisplayName !== 'string') return next(new HttpError(400, 'mailboxDisplayName must be a string'));
}
const [error, result] = await safe(apps.setMailbox(req.app, req.body, AuditSource.fromRequest(req)));
@@ -398,7 +409,6 @@ async function setLocation(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
if (typeof req.body.subdomain !== 'string') return next(new HttpError(400, 'subdomain must be string')); // subdomain may be an empty string
if (!req.body.domain) return next(new HttpError(400, 'domain is required'));
if (typeof req.body.domain !== 'string') return next(new HttpError(400, 'domain must be string'));
if ('portBindings' in req.body && typeof req.body.portBindings !== 'object') return next(new HttpError(400, 'portBindings must be an object'));
@@ -427,13 +437,18 @@ async function setLocation(req, res, next) {
next(new HttpSuccess(202, { taskId: result.taskId }));
}
async function setDataDir(req, res, next) {
async function setStorage(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
assert.strictEqual(typeof req.app, 'object');
if (req.body.dataDir !== null && typeof req.body.dataDir !== 'string') return next(new HttpError(400, 'dataDir must be a string'));
const { storageVolumeId, storageVolumePrefix } = req.body;
const [error, result] = await safe(apps.setDataDir(req.app, req.body.dataDir, AuditSource.fromRequest(req)));
if (storageVolumeId !== null) {
if (typeof storageVolumeId !== 'string') return next(new HttpError(400, 'storageVolumeId must be a string'));
if (typeof storageVolumePrefix !== 'string') return next(new HttpError(400, 'storageVolumePrefix must be a string'));
}
const [error, result] = await safe(apps.setStorage(req.app, storageVolumeId, storageVolumePrefix, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(202, { taskId: result.taskId }));
@@ -492,6 +507,7 @@ async function importApp(req, res, next) {
if (req.body.backupConfig) {
if (typeof backupConfig.provider !== 'string') return next(new HttpError(400, 'provider is required'));
if ('password' in backupConfig && typeof backupConfig.password !== 'string') return next(new HttpError(400, 'password must be a string'));
if ('encryptedFilenames' in backupConfig && typeof backupConfig.encryptedFilenames !== 'boolean') return next(new HttpError(400, 'encryptedFilenames must be a boolean'));
if ('acceptSelfSignedCerts' in backupConfig && typeof backupConfig.acceptSelfSignedCerts !== 'boolean') return next(new HttpError(400, 'acceptSelfSignedCerts must be a boolean'));
// testing backup config can take some time
@@ -697,14 +713,29 @@ function demuxStream(stream, stdin) {
});
}
async function exec(req, res, next) {
async function createExec(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
assert.strictEqual(typeof req.body, 'object');
let cmd = null;
if (req.query.cmd) {
cmd = safe.JSON.parse(req.query.cmd);
if (!Array.isArray(cmd) || cmd.length < 1) return next(new HttpError(400, 'cmd must be array with at least size 1'));
if ('cmd' in req.body) {
if (!Array.isArray(req.body.cmd) || req.body.cmd.length < 1) return next(new HttpError(400, 'cmd must be array with at least size 1'));
}
const cmd = req.body.cmd || null;
if ('tty' in req.body && typeof req.body.tty !== 'boolean') return next(new HttpError(400, 'tty must be boolean'));
const tty = !!req.body.tty;
if (safe.query(req.app, 'manifest.addons.docker') && req.user.role !== users.ROLE_OWNER) return next(new HttpError(403, '"owner" role is required to exec app with docker addon'));
const [error, id] = await safe(apps.createExec(req.app, { cmd, tty }));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, { id }));
}
async function startExec(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
assert.strictEqual(typeof req.params.execId, 'string');
const columns = req.query.columns ? parseInt(req.query.columns, 10) : null;
if (isNaN(columns)) return next(new HttpError(400, 'columns must be a number'));
@@ -719,7 +750,7 @@ async function exec(req, res, next) {
// in a badly configured reverse proxy, we might be here without an upgrade
if (req.headers['upgrade'] !== 'tcp') return next(new HttpError(404, 'exec requires TCP upgrade'));
const [error, duplexStream] = await safe(apps.exec(req.app, { cmd: cmd, rows: rows, columns: columns, tty: tty }));
const [error, duplexStream] = await safe(apps.startExec(req.app, req.params.execId, { rows, columns, tty }));
if (error) return next(BoxError.toHttpError(error));
req.clearTimeout();
@@ -737,14 +768,9 @@ async function exec(req, res, next) {
}
}
async function execWebSocket(req, res, next) {
async function startExecWebSocket(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
let cmd = null;
if (req.query.cmd) {
cmd = safe.JSON.parse(req.query.cmd);
if (!Array.isArray(cmd) || cmd.length < 1) return next(new HttpError(400, 'cmd must be array with at least size 1'));
}
assert.strictEqual(typeof req.params.execId, 'string');
const columns = req.query.columns ? parseInt(req.query.columns, 10) : null;
if (isNaN(columns)) return next(new HttpError(400, 'columns must be a number'));
@@ -757,7 +783,7 @@ async function execWebSocket(req, res, next) {
// in a badly configured reverse proxy, we might be here without an upgrade
if (req.headers['upgrade'] !== 'websocket') return next(new HttpError(404, 'exec requires websocket'));
const [error, duplexStream] = await safe(apps.exec(req.app, { cmd: cmd, rows: rows, columns: columns, tty: tty }));
const [error, duplexStream] = await safe(apps.startExec(req.app, req.params.execId, { rows, columns, tty }));
if (error) return next(BoxError.toHttpError(error));
req.clearTimeout();
@@ -785,6 +811,15 @@ async function execWebSocket(req, res, next) {
});
}
async function getExec(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
assert.strictEqual(typeof req.params.execId, 'string');
const [error, result] = await safe(apps.getExec(req.app, req.params.execId));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, result)); // { exitCode, running }
}
async function listBackups(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
@@ -869,6 +904,18 @@ async function setMounts(req, res, next) {
next(new HttpSuccess(202, { taskId: result.taskId }));
}
async function setUpstreamUri(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
assert.strictEqual(typeof req.app, 'object');
if (typeof req.body.upstreamUri !== 'string') return next(new HttpError(400, 'upstreamUri must be a string'));
const [error] = await safe(apps.setUpstreamUri(req.app, req.body.upstreamUri, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, {}));
}
async function listEventlog(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
+3 -3
@@ -63,7 +63,7 @@ async function login(req, res, next) {
[error, token] = await safe(tokens.add({ clientId: type, identifier: req.user.id, expires: Date.now() + constants.DEFAULT_TOKEN_EXPIRATION_MSECS }));
if (error) return next(new HttpError(500, error));
await eventlog.add(eventlog.ACTION_USER_LOGIN, auditSource, { userId: req.user.id, user: users.removePrivateFields(req.user) });
await eventlog.add(req.user.ghost ? eventlog.ACTION_USER_LOGIN_GHOST : eventlog.ACTION_USER_LOGIN, auditSource, { userId: req.user.id, user: users.removePrivateFields(req.user) });
if (!req.user.ghost) safe(users.notifyLoginLocation(req.user, ip, userAgent, auditSource), { debug });
@@ -71,11 +71,11 @@ async function login(req, res, next) {
}
async function logout(req, res) {
assert.strictEqual(typeof req.access_token, 'string');
assert.strictEqual(typeof req.token, 'object');
await eventlog.add(eventlog.ACTION_USER_LOGOUT, AuditSource.fromRequest(req), { userId: req.user.id, user: users.removePrivateFields(req.user) });
await safe(tokens.delByAccessToken(req.access_token));
await safe(tokens.delByAccessToken(req.token.accessToken));
res.redirect('/login.html');
}
+25 -27
@@ -1,38 +1,36 @@
'use strict';
exports = module.exports = {
getGraphs
getSystemGraphs,
getAppGraphs
};
const middleware = require('../middleware/index.js'),
const assert = require('assert'),
graphs = require('../graphs.js'),
HttpError = require('connect-lastmile').HttpError,
url = require('url');
HttpSuccess = require('connect-lastmile').HttpSuccess,
safe = require('safetydance');
// for testing locally: curl 'http://127.0.0.1:8417/graphite-web/render?format=json&from=-1min&target=absolute(collectd.localhost.du-docker.capacity-usage)'
// the datapoint is (value, timestamp) https://buildmedia.readthedocs.org/media/pdf/graphite/0.9.16/graphite.pdf
const graphiteProxy = middleware.proxy(url.parse('http://127.0.0.1:8417'));
async function getSystemGraphs(req, res, next) {
if (!req.query.fromMinutes || !parseInt(req.query.fromMinutes)) return next(new HttpError(400, 'fromMinutes must be a number'));
function getGraphs(req, res, next) {
const parsedUrl = url.parse(req.url, true /* parseQueryString */);
delete parsedUrl.query['access_token'];
delete req.headers['authorization'];
delete req.headers['cookies'];
const fromMinutes = parseInt(req.query.fromMinutes);
const noNullPoints = !!req.query.noNullPoints;
const [error, result] = await safe(graphs.getSystem(fromMinutes, noNullPoints));
if (error) return next(new HttpError(500, error));
// 'graphite-web' is the URL_PREFIX in docker-graphite
req.url = url.format({ pathname: 'graphite-web/render', query: parsedUrl.query });
// graphs may take very long to respond so we run into headers already sent issues quite often
// nginx still has a request timeout which can deal with this then.
req.clearTimeout();
graphiteProxy(req, res, function (error) {
if (!error) return next();
if (error.code === 'ECONNREFUSED') return next(new HttpError(424, 'Unable to connect to graphite'));
// ECONNRESET here is most likely because of a bug in the query or the uwsgi buffer size is too small
if (error.code === 'ECONNRESET') return next(new HttpError(424, 'Unable to query graphite'));
next(new HttpError(500, error));
});
next(new HttpSuccess(200, result));
}
async function getAppGraphs(req, res, next) {
assert.strictEqual(typeof req.app, 'object');
if (!req.query.fromMinutes || !parseInt(req.query.fromMinutes)) return next(new HttpError(400, 'fromMinutes must be a number'));
const fromMinutes = parseInt(req.query.fromMinutes);
const noNullPoints = !!req.query.noNullPoints;
const [error, result] = await safe(graphs.getByApp(req.app, fromMinutes, noNullPoints));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200, result));
}
+2 -2
@@ -6,7 +6,7 @@ exports = module.exports = {
add,
update,
remove,
updateMembers
setMembers
};
const assert = require('assert'),
@@ -49,7 +49,7 @@ async function update(req, res, next) {
next(new HttpSuccess(200, { }));
}
async function updateMembers(req, res, next) {
async function setMembers(req, res, next) {
assert.strictEqual(typeof req.params.groupId, 'string');
if (!req.body.userIds) return next(new HttpError(404, 'missing or invalid userIds fields'));
+1
@@ -4,6 +4,7 @@ exports = module.exports = {
accesscontrol: require('./accesscontrol.js'),
appPasswords: require('./apppasswords.js'),
apps: require('./apps.js'),
applinks: require('./applinks.js'),
appstore: require('./appstore.js'),
backups: require('./backups.js'),
branding: require('./branding.js'),
+10
@@ -177,6 +177,11 @@ async function addMailbox(req, res, next) {
if (typeof req.body.ownerType !== 'string') return next(new HttpError(400, 'ownerType must be a string'));
if (typeof req.body.active !== 'boolean') return next(new HttpError(400, 'active must be a boolean'));
if (!Number.isInteger(req.body.storageQuota)) return next(new HttpError(400, 'storageQuota must be an integer'));
if (req.body.storageQuota < 0) return next(new HttpError(400, 'storageQuota must be a positive integer or zero'));
if (!Number.isInteger(req.body.messagesQuota)) return next(new HttpError(400, 'messagesQuota must be an integer'));
if (req.body.messagesQuota < 0) return next(new HttpError(400, 'messagesQuota must be a positive integer or zero'));
const [error] = await safe(mail.addMailbox(req.body.name, req.params.domain, req.body, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
@@ -192,6 +197,11 @@ async function updateMailbox(req, res, next) {
if (typeof req.body.active !== 'boolean') return next(new HttpError(400, 'active must be a boolean'));
if (typeof req.body.enablePop3 !== 'boolean') return next(new HttpError(400, 'enablePop3 must be a boolean'));
if (!Number.isInteger(req.body.storageQuota)) return next(new HttpError(400, 'storageQuota must be an integer'));
if (req.body.storageQuota < 0) return next(new HttpError(400, 'storageQuota must be a positive integer or zero'));
if (!Number.isInteger(req.body.messagesQuota)) return next(new HttpError(400, 'messagesQuota must be an integer'));
if (req.body.messagesQuota < 0) return next(new HttpError(400, 'messagesQuota must be a positive integer or zero'));
const [error] = await safe(mail.updateMailbox(req.params.name, req.params.domain, req.body, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
+15 -5
@@ -3,6 +3,7 @@
exports = module.exports = {
proxy,
restart,
queueProxy,
setLocation,
getLocation
@@ -26,9 +27,8 @@ async function restart(req, res, next) {
next();
}
async function proxy(req, res, next) {
let parsedUrl = url.parse(req.url, true /* parseQueryString */);
const pathname = req.path.split('/').pop();
async function proxyToMailContainer(port, pathname, req, res, next) {
const parsedUrl = url.parse(req.url, true /* parseQueryString */);
// do not proxy protected values
delete parsedUrl.query['access_token'];
@@ -39,9 +39,9 @@ async function proxy(req, res, next) {
if (error) return next(BoxError.toHttpError(error));
parsedUrl.query['access_token'] = addonDetails.token;
req.url = url.format({ pathname: pathname, query: parsedUrl.query });
req.url = url.format({ pathname, query: parsedUrl.query });
const proxyOptions = url.parse(`http://${addonDetails.ip}:3000`);
const proxyOptions = url.parse(`http://${addonDetails.ip}:${port}`);
const mailserverProxy = middleware.proxy(proxyOptions);
req.clearTimeout(); // TODO: add timeout to mail server proxy logic instead of this
@@ -55,6 +55,16 @@ async function proxy(req, res, next) {
});
}
async function proxy(req, res, next) {
const pathname = req.path.split('/').pop();
proxyToMailContainer(3000, pathname, req, res, next);
}
async function queueProxy(req, res, next) {
proxyToMailContainer(6000, req.path.replace('/', '/queue/'), req, res, next);
}
async function getLocation(req, res, next) {
const [error, result] = await safe(mail.getLocation());
if (error) return next(BoxError.toHttpError(error));
+1 -1
@@ -52,7 +52,7 @@ async function update(req, res, next) {
assert.strictEqual(typeof req.notification, 'object');
assert.strictEqual(typeof req.body, 'object');
if (typeof req.body.acknowledged !== 'boolean') return next(new HttpError(400, 'acknowledged must be a booliean'));
if (typeof req.body.acknowledged !== 'boolean') return next(new HttpError(400, 'acknowledged must be a boolean'));
const [error] = await safe(notifications.update(req.notification, { acknowledged: req.body.acknowledged }));
if (error) return next(BoxError.toHttpError(error));
+33 -1
@@ -6,6 +6,8 @@ exports = module.exports = {
update,
getAvatar,
setAvatar,
getBackgroundImage,
setBackgroundImage,
setPassword,
setTwoFactorAuthenticationSecret,
enableTwoFactorAuthentication,
@@ -37,10 +39,14 @@ async function authorize(req, res, next) {
async function get(req, res, next) {
assert.strictEqual(typeof req.user, 'object');
const [error, avatarUrl] = await safe(users.getAvatarUrl(req.user));
let [error, avatarUrl] = await safe(users.getAvatarUrl(req.user));
if (error) return next(BoxError.toHttpError(error));
if (!avatarUrl) return next(new HttpError(404, 'User not found'));
let backgroundImage;
[error, backgroundImage] = await safe(users.getBackgroundImage(req.user.id));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, {
id: req.user.id,
username: req.user.username,
@@ -50,6 +56,7 @@ async function get(req, res, next) {
twoFactorAuthenticationEnabled: req.user.twoFactorAuthenticationEnabled,
role: req.user.role,
source: req.user.source,
hasBackgroundImage: !!backgroundImage,
avatarUrl
}));
}
@@ -107,6 +114,31 @@ async function getAvatar(req, res, next) {
res.send(avatar);
}
async function setBackgroundImage(req, res, next) {
assert.strictEqual(typeof req.user, 'object');
let backgroundImage = null;
if (req.files && req.files.backgroundImage) {
backgroundImage = safe.fs.readFileSync(req.files.backgroundImage.path);
if (!backgroundImage) return next(BoxError.toHttpError(new BoxError(BoxError.FS_ERROR, safe.error.message)));
}
const [error] = await safe(users.setBackgroundImage(req.user.id, backgroundImage));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(202, {}));
}
async function getBackgroundImage(req, res, next) {
assert.strictEqual(typeof req.user, 'object');
const [error, backgroundImage] = await safe(users.getBackgroundImage(req.user.id));
if (error) return next(BoxError.toHttpError(error));
res.send(backgroundImage);
}
async function setPassword(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
assert.strictEqual(typeof req.user, 'object');
+2
@@ -103,6 +103,8 @@ async function restore(req, res, next) {
const backupConfig = req.body.backupConfig;
if (typeof backupConfig.provider !== 'string') return next(new HttpError(400, 'provider is required'));
if ('password' in backupConfig && typeof backupConfig.password !== 'string') return next(new HttpError(400, 'password must be a string'));
if ('encryptedFilenames' in req.body && typeof req.body.encryptedFilenames !== 'boolean') return next(new HttpError(400, 'encryptedFilenames must be a boolean'));
if (typeof backupConfig.format !== 'string') return next(new HttpError(400, 'format must be a string'));
if ('acceptSelfSignedCerts' in backupConfig && typeof backupConfig.acceptSelfSignedCerts !== 'boolean') return next(new HttpError(400, 'acceptSelfSignedCerts must be a boolean'));
+8 -6
@@ -74,6 +74,8 @@ async function setBackupConfig(req, res, next) {
if (typeof req.body.provider !== 'string') return next(new HttpError(400, 'provider is required'));
if (typeof req.body.schedulePattern !== 'string') return next(new HttpError(400, 'schedulePattern is required'));
if ('password' in req.body && typeof req.body.password !== 'string') return next(new HttpError(400, 'password must be a string'));
if ('encryptedFilenames' in req.body && typeof req.body.encryptedFilenames !== 'boolean') return next(new HttpError(400, 'encryptedFilenames must be a boolean'));
if ('syncConcurrency' in req.body) {
if (typeof req.body.syncConcurrency !== 'number') return next(new HttpError(400, 'syncConcurrency must be a positive integer'));
if (req.body.syncConcurrency < 1) return next(new HttpError(400, 'syncConcurrency must be a positive integer'));
@@ -137,21 +139,21 @@ async function setExternalLdapConfig(req, res, next) {
next(new HttpSuccess(200, {}));
}
async function getUserDirectoryConfig(req, res, next) {
const [error, config] = await safe(settings.getUserDirectoryConfig());
async function getDirectoryServerConfig(req, res, next) {
const [error, config] = await safe(settings.getDirectoryServerConfig());
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, config));
}
async function setUserDirectoryConfig(req, res, next) {
async function setDirectoryServerConfig(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (typeof req.body.enabled !== 'boolean') return next(new HttpError(400, 'enabled must be a boolean'));
if (typeof req.body.secret !== 'string') return next(new HttpError(400, 'secret must be a string'));
if ('allowlist' in req.body && typeof req.body.allowlist !== 'string') return next(new HttpError(400, 'allowlist must be a string'));
const [error] = await safe(settings.setUserDirectoryConfig(req.body));
const [error] = await safe(settings.setDirectoryServerConfig(req.body));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(200, {}));
@@ -298,7 +300,7 @@ function get(req, res, next) {
case settings.IPV6_CONFIG_KEY: return getIPv6Config(req, res, next);
case settings.BACKUP_CONFIG_KEY: return getBackupConfig(req, res, next);
case settings.EXTERNAL_LDAP_KEY: return getExternalLdapConfig(req, res, next);
case settings.USER_DIRECTORY_KEY: return getUserDirectoryConfig(req, res, next);
case settings.DIRECTORY_SERVER_KEY: return getDirectoryServerConfig(req, res, next);
case settings.UNSTABLE_APPS_KEY: return getUnstableAppsConfig(req, res, next);
case settings.REGISTRY_CONFIG_KEY: return getRegistryConfig(req, res, next);
case settings.SYSINFO_CONFIG_KEY: return getSysinfoConfig(req, res, next);
@@ -321,7 +323,7 @@ function set(req, res, next) {
case settings.DYNAMIC_DNS_KEY: return setDynamicDnsConfig(req, res, next);
case settings.IPV6_CONFIG_KEY: return setIPv6Config(req, res, next);
case settings.EXTERNAL_LDAP_KEY: return setExternalLdapConfig(req, res, next);
case settings.USER_DIRECTORY_KEY: return setUserDirectoryConfig(req, res, next);
case settings.DIRECTORY_SERVER_KEY: return setDirectoryServerConfig(req, res, next);
case settings.UNSTABLE_APPS_KEY: return setUnstableAppsConfig(req, res, next);
case settings.REGISTRY_CONFIG_KEY: return setRegistryConfig(req, res, next);
case settings.SYSINFO_CONFIG_KEY: return setSysinfoConfig(req, res, next);
-3
@@ -63,9 +63,6 @@ async function canEnableRemoteSupport(req, res, next) {
const sshdConfig = safe.fs.readFileSync(SSHD_CONFIG_FILE, 'utf8');
if (!sshdConfig) return next(new HttpError(412, `Failed to read file ${SSHD_CONFIG_FILE}`));
// only check for PermitRootLogin if we want to enable remote support
if (req.body.enable && !sshdConfig.split('\n').find(function (line) { return line.search(/^PermitRootLogin.*yes/) !== -1; })) return next(new HttpError(417, `Set "PermitRootLogin yes" in ${SSHD_CONFIG_FILE}`));
next();
}
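The precondition above scans `sshd_config` line by line for an uncommented `PermitRootLogin ... yes`. The check can be sketched in isolation (`permitsRootLogin` is a hypothetical name; the regex is the one from the route):

```javascript
// a commented-out "#PermitRootLogin yes" fails the ^ anchor, so only an
// active directive satisfies the check
function permitsRootLogin(sshdConfig) {
    return sshdConfig.split('\n').some(function (line) {
        return line.search(/^PermitRootLogin.*yes/) !== -1;
    });
}

console.log(permitsRootLogin('Port 22\nPermitRootLogin yes\n'));
console.log(permitsRootLogin('Port 22\n#PermitRootLogin yes\n'));
```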
+2 -2
@@ -333,10 +333,10 @@ xdescribe('App API', function () {
it('app install fails - reserved smtp subdomain', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, subdomain: constants.SMTP_LOCATION, accessRestriction: null, domain: DOMAIN_0.domain })
.send({ manifest: APP_MANIFEST, subdomain: constants.SMTP_SUBDOMAIN, accessRestriction: null, domain: DOMAIN_0.domain })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.contain(constants.SMTP_LOCATION + ' is reserved');
expect(res.body.message).to.contain(constants.SMTP_SUBDOMAIN + ' is reserved');
done();
});
});
+4 -22
@@ -29,13 +29,13 @@ describe('Appstore Apps API', function () {
it('cannot get app with bad token', async function () {
const scope1 = nock(settings.apiServerOrigin())
.get(`/api/v1/apps/org.wordpress.cloudronapp?accessToken=${appstoreToken}`)
.reply(402, {});
.reply(403, {});
const response = await superagent.get(`${serverUrl}/api/v1/appstore/apps/org.wordpress.cloudronapp`)
.query({ access_token: owner.token })
.ok(() => true);
expect(response.statusCode).to.be(402);
expect(response.statusCode).to.be(412);
expect(scope1.isDone()).to.be.ok();
});
@@ -85,7 +85,7 @@ describe('Appstore Cloudron Registration API - existing user', function () {
it('can setup subscription', async function () {
const scope1 = nock(settings.apiServerOrigin())
.post('/api/v1/register_user', (body) => body.email && body.password)
.post('/api/v1/register_user', (body) => body.email && body.password && body.utmSource)
.reply(201, {});
const scope2 = nock(settings.apiServerOrigin())
@@ -109,15 +109,6 @@ describe('Appstore Cloudron Registration API - existing user', function () {
nock.cleanAll();
});
it('cannot re-register - already registered', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/appstore/register_cloudron`)
.send({ email: 'test@cloudron.io', password: 'secret', signup: false })
.query({ access_token: owner.token })
.ok(() => true);
expect(response.statusCode).to.equal(409);
});
it('can get subscription', async function () {
const scope1 = nock(settings.apiServerOrigin())
.get('/api/v1/subscription?accessToken=CLOUDRON_TOKEN', () => true)
@@ -142,7 +133,7 @@ describe('Appstore Cloudron Registration API - new user signup', function () {
it('can setup subscription', async function () {
const scope1 = nock(settings.apiServerOrigin())
.post('/api/v1/register_user', (body) => body.email && body.password)
.post('/api/v1/register_user', (body) => body.email && body.password && body.utmSource)
.reply(201, {});
const scope2 = nock(settings.apiServerOrigin())
@@ -165,15 +156,6 @@ describe('Appstore Cloudron Registration API - new user signup', function () {
expect(await settings.getAppstoreWebToken()).to.be('SECRET_TOKEN');
});
it('cannot re-register - already registered', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/appstore/register_cloudron`)
.send({ email: 'test@cloudron.io', password: 'secret', signup: false })
.query({ access_token: owner.token })
.ok(() => true);
expect(response.statusCode).to.equal(409);
});
it('can get subscription', async function () {
const scope1 = nock(settings.apiServerOrigin())
.get('/api/v1/subscription?accessToken=CLOUDRON_TOKEN', () => true)
+4 -1
@@ -107,7 +107,10 @@ async function waitForTask(taskId) {
for (let i = 0; i < 10; i++) {
const result = await tasks.get(taskId);
expect(result).to.not.be(null);
if (!result.active) return;
if (!result.active) {
if (result.success) return result;
throw new Error(`Task ${taskId} failed: ${result.error.message} - ${result.error.stack}`);
}
await delay(2000);
console.log(`Waiting for task ${taskId} to finish`);
}
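The test helper above now distinguishes a task that finished successfully from one that failed, instead of returning on any inactive state. The same poll loop as a generic sketch (`pollUntilDone` and its parameters are illustrative, not box APIs):

```javascript
function delay(ms) { return new Promise(resolve => setTimeout(resolve, ms)); }

// poll getStatus() up to maxAttempts times; resolve with the result once the
// task goes inactive and succeeded, reject if it failed or never finished
async function pollUntilDone(getStatus, maxAttempts, intervalMs) {
    for (let i = 0; i < maxAttempts; i++) {
        const result = await getStatus();
        if (!result.active) {
            if (result.success) return result;
            throw new Error(`task failed: ${result.error.message}`);
        }
        await delay(intervalMs);
    }
    throw new Error('timed out waiting for task');
}
```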
+20 -7
@@ -352,10 +352,19 @@ describe('Mail API', function () {
expect(response.statusCode).to.equal(400);
});
it('cannot set with bad addresses field', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/catch_all`)
.query({ access_token: owner.token })
.send({ addresses: [ 'user1' ] })
.ok(() => true);
expect(response.statusCode).to.equal(400);
});
it('set succeeds', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/catch_all`)
.query({ access_token: owner.token })
.send({ addresses: [ 'user1' ] });
.send({ addresses: [ `user1@${dashboardDomain}` ] });
expect(response.statusCode).to.equal(202);
});
@@ -365,7 +374,7 @@ describe('Mail API', function () {
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(200);
expect(response.body.catchAll).to.eql([ 'user1' ]);
expect(response.body.catchAll).to.eql([ `user1@${dashboardDomain}` ]);
});
});
@@ -422,7 +431,7 @@ describe('Mail API', function () {
it('add succeeds', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/mailboxes`)
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true })
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true, storageQuota: 10, messagesQuota: 20 })
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(201);
@@ -430,7 +439,7 @@ describe('Mail API', function () {
it('cannot add again', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/mailboxes`)
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true })
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true, storageQuota: 10, messagesQuota: 20 })
.query({ access_token: owner.token })
.ok(() => true);
@@ -457,6 +466,8 @@ describe('Mail API', function () {
expect(response.body.mailbox.aliasName).to.equal(null);
expect(response.body.mailbox.aliasDomain).to.equal(null);
expect(response.body.mailbox.domain).to.equal(dashboardDomain);
expect(response.body.mailbox.storageQuota).to.equal(10);
expect(response.body.mailbox.messagesQuota).to.equal(20);
});
it('listing succeeds', async function () {
@@ -471,6 +482,8 @@ describe('Mail API', function () {
expect(response.body.mailboxes[0].ownerType).to.equal('user');
expect(response.body.mailboxes[0].aliases).to.eql([]);
expect(response.body.mailboxes[0].domain).to.equal(dashboardDomain);
expect(response.body.mailboxes[0].storageQuota).to.equal(10);
expect(response.body.mailboxes[0].messagesQuota).to.equal(20);
});
it('disable fails even if not exist', async function () {
@@ -505,7 +518,7 @@ describe('Mail API', function () {
it('add the mailbox', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/mailboxes`)
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true })
.send({ name: MAILBOX_NAME, ownerId: owner.id, ownerType: 'user', active: true, storageQuota: 10, messagesQuota: 20 })
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(201);
@@ -539,7 +552,7 @@ describe('Mail API', function () {
it('set succeeds', async function () {
const response = await superagent.put(`${serverUrl}/api/v1/mail/${dashboardDomain}/mailboxes/${MAILBOX_NAME}/aliases`)
.send({ aliases: [{ name: 'hello', domain: dashboardDomain}, {name: 'there', domain: dashboardDomain}] })
.send({ aliases: [{ name: 'hello*', domain: dashboardDomain}, {name: 'there', domain: dashboardDomain}] })
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(202);
@@ -550,7 +563,7 @@ describe('Mail API', function () {
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(200);
expect(response.body.aliases).to.eql([{ name: 'hello', domain: dashboardDomain}, {name: 'there', domain: dashboardDomain}]);
expect(response.body.aliases).to.eql([{ name: 'hello*', domain: dashboardDomain}, {name: 'there', domain: dashboardDomain}]);
});
it('get fails if mailbox does not exist', async function () {
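The fixture changes above suggest catch-all entries must now be full addresses rather than bare local parts (`[ 'user1' ]` gets a 400, `user1@${dashboardDomain}` a 202). A sketch of such a validation, inferred from the test expectations rather than taken from the server code:

```javascript
// bare names like 'user1' are rejected; a single non-empty local part and
// a non-empty domain are required
function isValidCatchAllAddress(address) {
    if (typeof address !== 'string') return false;
    const parts = address.split('@');
    return parts.length === 2 && parts[0].length > 0 && parts[1].length > 0;
}

console.log(['user1', 'user1@example.com'].map(isValidCatchAllAddress));
```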
+31 -2
@@ -15,7 +15,7 @@ describe('Tokens API', function () {
before(setup);
after(cleanup);
let token;
let token, readOnlyToken;
it('cannot create token with bad name', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/tokens`)
@@ -35,13 +35,42 @@ describe('Tokens API', function () {
token = response.body;
});
it('can create read-only token', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/tokens`)
.query({ access_token: owner.token })
.send({ name: 'mytoken1', scope: { '*': 'r' }});
expect(response.status).to.equal(201);
expect(response.body).to.be.a('object');
readOnlyToken = response.body;
});
it('cannot create read-only token with invalid scope', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/tokens`)
.query({ access_token: owner.token })
.send({ name: 'mytoken1', scope: { 'foobar': 'rw' }})
.ok(() => true);
expect(response.status).to.equal(400);
});
it('can list tokens', async function () {
const response = await superagent.get(`${serverUrl}/api/v1/tokens`)
.query({ access_token: owner.token });
expect(response.statusCode).to.equal(200);
expect(response.body.tokens.length).to.be(2); // one is owner token on activation
expect(response.body.tokens.length).to.be(3); // one is owner token on activation
const tokenIds = response.body.tokens.map(t => t.id);
expect(tokenIds).to.contain(token.id);
expect(tokenIds).to.contain(readOnlyToken.id);
});
it('cannot create token with read only token', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/tokens`)
.query({ access_token: readOnlyToken.accessToken })
.send({ name: 'somename' })
.ok(() => true);
expect(response.status).to.equal(403);
});
it('cannot get non-existent token', async function () {
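The token tests create a read-only token with scope `{ '*': 'r' }` and reject `{ 'foobar': 'rw' }`. A sketch of a scope validator consistent with those two cases, under the assumption that keys must be `'*'` or a known scope name and values `'r'` or `'rw'` (the scope list here is illustrative, not the real one):

```javascript
const KNOWN_SCOPES = new Set(['*', 'apps', 'mail', 'profile']); // hypothetical list

// return null when the scope object is valid, or a message describing
// the first problem found
function validateScope(scope) {
    for (const [name, access] of Object.entries(scope)) {
        if (!KNOWN_SCOPES.has(name)) return `unknown scope: ${name}`;
        if (access !== 'r' && access !== 'rw') return `invalid access for ${name}: ${access}`;
    }
    return null;
}

console.log(validateScope({ '*': 'r' }));
console.log(validateScope({ 'foobar': 'rw' }));
```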
+1 -11
@@ -591,7 +591,7 @@ describe('Users API', function () {
it('add mailbox succeeds as mail manager', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/mail/${dashboardDomain}/mailboxes`)
.send({ name: 'support', ownerId: owner.id, ownerType: 'user', active: true })
.send({ name: 'support', ownerId: owner.id, ownerType: 'user', active: true, storageQuota: 0, messagesQuota: 0 })
.query({ access_token: user.token });
expect(response.statusCode).to.equal(201);
@@ -640,15 +640,5 @@ describe('Users API', function () {
expect(response.statusCode).to.equal(409);
});
});
describe('transfer ownership', function () {
it('succeeds', async function () {
const response = await superagent.post(`${serverUrl}/api/v1/users/${user.id}/make_owner`)
.query({ access_token: owner.token })
.send({});
expect(response.statusCode).to.equal(204);
});
});
});
+3 -1
@@ -50,10 +50,12 @@ async function add(req, res, next) {
if (typeof req.body.name !== 'string') return next(new HttpError(400, 'name must be string'));
if ('expiresAt' in req.body && typeof req.body.expiresAt !== 'number') return next(new HttpError(400, 'expiresAt must be number'));
if ('scope' in req.body && typeof req.body.scope !== 'object') return next(new HttpError(400, 'scope must be an object'));
const expiresAt = req.body.expiresAt || (Date.now() + (100 * 365 * 24 * 60 * 60 * 1000)); // 'forever', i.e. 100 years. TODO: maybe we should allow 0 or -1 to make that explicit
const scope = req.body.scope || null;
const [error, result] = await safe(tokens.add({ clientId: tokens.ID_SDK, identifier: req.user.id, expires: expiresAt, name: req.body.name }));
const [error, result] = await safe(tokens.add({ clientId: tokens.ID_SDK, identifier: req.user.id, expires: expiresAt, name: req.body.name, scope }));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(201, result));
-15
@@ -10,7 +10,6 @@ exports = module.exports = {
verifyPassword,
setGroups,
setGhost,
makeOwner,
makeLocal,
getPasswordResetLink,
@@ -202,20 +201,6 @@ async function setPassword(req, res, next) {
next(new HttpSuccess(204));
}
// This route transfers ownership from token user to user specified in path param
async function makeOwner(req, res, next) {
assert.strictEqual(typeof req.resource, 'object');
// first make new one owner, then demote current one
let [error] = await safe(users.update(req.resource, { role: users.ROLE_OWNER }, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
[error] = await safe(users.update(req.user, { role: users.ROLE_USER }, AuditSource.fromRequest(req)));
if (error) return next(BoxError.toHttpError(error));
next(new HttpSuccess(204));
}
async function makeLocal(req, res, next) {
assert.strictEqual(typeof req.resource, 'object');
+16 -14
@@ -16,9 +16,8 @@ const apps = require('./apps.js'),
safe = require('safetydance'),
_ = require('underscore');
// appId -> { containerId, schedulerConfig (manifest), cronjobs }
let gState = { };
let gSuspendedAppIds = new Set(); // suspended because some apptask is running
const gState = {}; // appId -> { containerId, schedulerConfig (manifest+crontab), cronjobs }
const gSuspendedAppIds = new Set(); // suspended because some apptask is running
// TODO: this should probably also stop existing jobs to completely prevent race but the code is not re-entrant
function suspendJobs(appId) {
@@ -59,26 +58,30 @@ async function createJobs(app, schedulerConfig) {
assert(schedulerConfig && typeof schedulerConfig === 'object');
const appId = app.id;
const jobs = { };
const jobs = {};
for (const taskName of Object.keys(schedulerConfig)) {
const task = schedulerConfig[taskName];
const randomSecond = Math.floor(60*Math.random()); // don't start all crons to decrease memory pressure
const cronTime = (constants.TEST ? '*/5 ' : `${randomSecond} `) + task.schedule; // time ticks faster in tests
const { schedule, command } = schedulerConfig[taskName];
const containerName = `${app.id}-${taskName}`;
const cmd = schedulerConfig[taskName].command;
// stopJobs only deletes jobs since previous run. This means that when box code restarts, none of the containers
// stopJobs only deletes jobs since previous sync. This means that when box code restarts, none of the containers
// are removed. The deleteContainer here ensures we re-create the cron containers with the latest config
await safe(docker.deleteContainer(containerName)); // ignore error
const [error] = await safe(docker.createSubcontainer(app, containerName, [ '/bin/sh', '-c', cmd ], { } /* options */));
const [error] = await safe(docker.createSubcontainer(app, containerName, [ '/bin/sh', '-c', command ], {} /* options */), { debug });
if (error && error.reason !== BoxError.ALREADY_EXISTS) continue;
debug(`createJobs: ${taskName} (${app.fqdn}) will run in container ${containerName}`);
let cronTime;
if (schedule === '@service') {
cronTime = new Date(Date.now() + 2*1000); // 2 seconds from now
} else {
// random offset is so that all crons don't start at once, to decrease memory pressure
cronTime = (constants.TEST ? '*/5 ' : `${Math.floor(60*Math.random())} `) + schedule; // time ticks faster in tests
}
const cronJob = new CronJob({
cronTime: cronTime, // at this point, the pattern has been validated
cronTime,
onTick: async () => {
const [error] = await safe(runTask(appId, taskName)); // put the app id in closure, so we don't use the outdated app object by mistake
if (error) debug(`could not run task ${taskName} : ${error.message}`);
@@ -120,10 +123,9 @@ async function sync() {
debug(`sync: removing jobs of ${appId}`);
const [error] = await safe(stopJobs(appId, gState[appId]));
if (error) debug(`sync: error stopping jobs of removed app ${appId}: ${error.message}`);
delete gState[appId];
}
gState = _.omit(gState, removedAppIds);
for (const app of allApps) {
const appState = gState[app.id] || null;
const schedulerConfig = apps.getSchedulerConfig(app);
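`createJobs` above special-cases `@service` schedules (run once, two seconds after sync) and otherwise prefixes the manifest's five-field cron pattern with a random seconds column so the jobs don't all fire together. That cronTime computation as a standalone sketch (`computeCronTime` is a hypothetical name; `isTest` stands in for `constants.TEST`):

```javascript
// '@service' -> a Date two seconds out; anything else -> a six-field cron
// pattern with a seconds column prepended to the manifest schedule
function computeCronTime(schedule, isTest) {
    if (schedule === '@service') return new Date(Date.now() + 2 * 1000);
    const randomSecond = Math.floor(60 * Math.random()); // spread start times to decrease memory pressure
    return (isTest ? '*/5 ' : `${randomSecond} `) + schedule;
}

console.log(computeCronTime('@service', false));
console.log(computeCronTime('0 3 * * *', false));
```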
+13 -23
@@ -7,23 +7,13 @@ if (process.argv[2] === '--check') {
process.exit(0);
}
const assert = require('assert'),
async = require('async'),
backuptask = require('../backuptask.js'),
const backuptask = require('../backuptask.js'),
database = require('../database.js'),
debug = require('debug')('box:backupupload'),
safe = require('safetydance'),
settings = require('../settings.js'),
v8 = require('v8');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
async.series([
database.initialize,
settings.initCache
], callback);
}
// Main process starts here
const remotePath = process.argv[2];
const format = process.argv[3];
@@ -70,20 +60,20 @@ function dumpMemoryInfo() {
debug(`v8 heap: used ${h(hs.used_heap_size)} total: ${h(hs.total_heap_size)} max: ${h(hs.heap_size_limit)}`);
}
initialize(function (error) {
if (error) throw error;
(async function main() {
await database.initialize();
await settings.initCache();
dumpMemoryInfo();
const timerId = setInterval(dumpMemoryInfo, 30000);
backuptask.upload(remotePath, format, dataLayoutString, throttledProgressCallback(5000), function resultHandler(error) {
debug('upload completed. error: ', error);
const [uploadError] = await safe(backuptask.upload(remotePath, format, dataLayoutString, throttledProgressCallback(5000)));
debug('upload completed. error: ', uploadError);
process.send({ result: error ? error.message : '' });
clearInterval(timerId);
process.send({ result: uploadError ? uploadError.message : '' });
clearInterval(timerId);
// https://nodejs.org/api/process.html are exit codes used by node. apps.js uses the value below
// to check apptask crashes
process.exit(error ? 50 : 0);
});
});
// https://nodejs.org/api/process.html are exit codes used by node. apps.js uses the value below
// to check apptask crashes
process.exit(uploadError ? 50 : 0);
})();
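The rewrite above moves backupupload from a callback chain to an async main that funnels rejections through `safe()` and maps failure to exit code 50. A sketch of that `[error, result]` convention (this local `safe()` only approximates the safetydance helper used in the diff):

```javascript
// resolve to [null, result] on success and [error] on rejection, so callers
// can branch on error without try/catch at every await
async function safe(promise) {
    try {
        return [null, await promise];
    } catch (error) {
        return [error];
    }
}

// apps.js checks for exit code 50 to detect task crashes
function exitCodeFor(error) {
    return error ? 50 : 0;
}
```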
+49
@@ -0,0 +1,49 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $# -eq 0 ]]; then
echo "No arguments supplied"
exit 1
fi
if [[ "$1" == "--check" ]]; then
echo "OK"
exit 0
fi
target_dir="$1"
source_dir="$2"
source_stat=$(stat --format='%d,%i' "${source_dir}")
target_stat=$(stat --format='%d,%i' "${target_dir}")
# test sameness across bind mounts. if it's same, we can skip the emptiness check
if [[ "${source_stat}" == "${target_stat}" ]]; then
echo "Source dir and target dir are the same"
exit 0
fi
readonly test_file="${target_dir}/.chown-test"
mkdir -p "${target_dir}"
rm -f "${test_file}" # clean up any from previous run
if [[ -n $(ls -A "${target_dir}") ]]; then
echo "volume dir is not empty"
exit 2
fi
touch "${test_file}"
if ! chown yellowtent:yellowtent "${test_file}"; then
echo "chown does not work"
exit 3
fi
rm -f "${test_file}"
rm -r "${target_dir}" # will get recreated by the local storage addon
-1
@@ -20,4 +20,3 @@ fi
volume_dir="$1"
mkdir -p "${volume_dir}"
+10 -1
@@ -26,8 +26,17 @@ if [[ "${BOX_ENV}" == "test" ]]; then
[[ "${target_dir}" != *"/.cloudron_test/"* ]] && exit 1
fi
source_stat=$(stat --format='%d,%i' "${source_dir}")
target_stat=$(stat --format='%d,%i' "${target_dir}")
# test sameness across bind mounts
if [[ "${source_stat}" == "${target_stat}" ]]; then
echo "Source dir and target dir are the same"
exit 0
fi
# copy and remove - this way if the copy fails, the original is intact
# the find logic is so that move to a subdir works (and we also move hidden files)
# the find logic is so that move to a one level subdir works (and we also move hidden files)
find "${source_dir}" -maxdepth 1 -mindepth 1 -not -wholename "${target_dir}" -exec cp -ar '{}' "${target_dir}" \;
find "${source_dir}" -maxdepth 1 -mindepth 1 -not -wholename "${target_dir}" -exec rm -rf '{}' \;
# this will fail if target is a subdir or if source is a mountpoint
+1 -1
@@ -17,7 +17,7 @@ if [[ "$1" == "--check" ]]; then
exit 0
fi
CLOUDRON_SUPPORT_PUBLIC_KEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQVilclYAIu+ioDp/sgzzFz6YU0hPcRYY7ze/LiF/lC7uQqK062O54BFXTvQ3ehtFZCx3bNckjlT2e6gB8Qq07OM66De4/S/g+HJW4TReY2ppSPMVNag0TNGxDzVH8pPHOysAm33LqT2b6L/wEXwC6zWFXhOhHjcMqXvi8Ejaj20H1HVVcf/j8qs5Thkp9nAaFTgQTPu8pgwD8wDeYX1hc9d0PYGesTADvo6HF4hLEoEnefLw7PaStEbzk2fD3j7/g5r5HcgQQXBe74xYZ/1gWOX2pFNuRYOBSEIrNfJEjFJsqk3NR1+ZoMGK7j+AZBR4k0xbrmncQLcQzl6MMDzkp support@cloudron.io'
CLOUDRON_SUPPORT_PUBLIC_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGWS+930b8QdzbchGljt3KSljH9wRhYvht8srrtQHdzg support@cloudron.io"
cmd="$1"
keys_file="$2"
+59 -42
@@ -79,14 +79,17 @@ function initializeExpressSync() {
const multipart = middleware.multipart({ maxFieldsSize: FIELD_LIMIT, limit: FILE_SIZE_LIMIT, timeout: FILE_TIMEOUT });
// to keep routes code short
// authentication
const password = routes.accesscontrol.passwordAuth;
const token = routes.accesscontrol.tokenAuth;
// authorization
const authorizeOwner = routes.accesscontrol.authorize(users.ROLE_OWNER);
const authorizeAdmin = routes.accesscontrol.authorize(users.ROLE_ADMIN);
const authorizeOperator = routes.accesscontrol.authorizeOperator;
const authorizeUserManager = routes.accesscontrol.authorize(users.ROLE_USER_MANAGER);
const authorizeMailManager = routes.accesscontrol.authorize(users.ROLE_MAIL_MANAGER);
const authorizeUser = routes.accesscontrol.authorize(users.ROLE_USER);
// public routes
router.post('/api/v1/cloudron/setup', json, routes.provision.setupTokenAuth, routes.provision.providerTokenAuth, routes.provision.setup); // only available until no-domain
@@ -113,7 +116,7 @@ function initializeExpressSync() {
router.post('/api/v1/cloudron/check_for_updates', json, token, authorizeAdmin, routes.cloudron.checkForUpdates);
router.get ('/api/v1/cloudron/reboot', token, authorizeAdmin, routes.cloudron.isRebootRequired);
router.post('/api/v1/cloudron/reboot', json, token, authorizeAdmin, routes.cloudron.reboot);
router.get ('/api/v1/cloudron/graphs', token, authorizeAdmin, routes.graphs.getGraphs);
router.get ('/api/v1/cloudron/graphs', token, authorizeAdmin, routes.graphs.getSystemGraphs);
router.get ('/api/v1/cloudron/disks', token, authorizeAdmin, routes.cloudron.getDisks);
router.get ('/api/v1/cloudron/memory', token, authorizeAdmin, routes.cloudron.getMemory);
router.get ('/api/v1/cloudron/logs/:unit', token, authorizeAdmin, routes.cloudron.getLogs);
@@ -121,8 +124,8 @@ function initializeExpressSync() {
router.get ('/api/v1/cloudron/eventlog', token, authorizeAdmin, routes.eventlog.list);
router.get ('/api/v1/cloudron/eventlog/:eventId', token, authorizeAdmin, routes.eventlog.get);
router.post('/api/v1/cloudron/sync_external_ldap', json, token, authorizeAdmin, routes.cloudron.syncExternalLdap);
router.get ('/api/v1/cloudron/server_ipv4', token, authorizeAdmin, routes.cloudron.getServerIpv4);
router.get ('/api/v1/cloudron/server_ipv6', token, authorizeAdmin, routes.cloudron.getServerIpv6);
router.get ('/api/v1/cloudron/server_ipv4', token, authorizeAdmin, routes.cloudron.getServerIpv4);
router.get ('/api/v1/cloudron/server_ipv6', token, authorizeAdmin, routes.cloudron.getServerIpv6);
// task routes
router.get ('/api/v1/tasks', token, authorizeAdmin, routes.tasks.list);
@@ -144,29 +147,31 @@ function initializeExpressSync() {
router.post('/api/v1/backups/:backupId', json, token, authorizeAdmin, routes.backups.update);
// config route (for dashboard). can return some private configuration unlike status
router.get ('/api/v1/config', token, routes.cloudron.getConfig);
router.get ('/api/v1/config', token, authorizeUser, routes.cloudron.getConfig);
// working off the user behind the provided token
router.get ('/api/v1/profile', token, routes.profile.get);
router.post('/api/v1/profile', json, token, routes.profile.authorize, routes.profile.update);
router.get ('/api/v1/profile/avatar/:identifier', routes.profile.getAvatar); // this is not scoped so it can be used directly in img tag
router.post('/api/v1/profile/avatar', json, token, (req, res, next) => { return typeof req.body.avatar === 'string' ? next() : multipart(req, res, next); }, routes.profile.setAvatar); // avatar is not exposed in LDAP. so it's personal and not locked
router.post('/api/v1/profile/password', json, token, routes.users.verifyPassword, routes.profile.setPassword);
router.post('/api/v1/profile/twofactorauthentication_secret', json, token, routes.profile.setTwoFactorAuthenticationSecret);
router.post('/api/v1/profile/twofactorauthentication_enable', json, token, routes.profile.enableTwoFactorAuthentication);
router.post('/api/v1/profile/twofactorauthentication_disable', json, token, routes.users.verifyPassword, routes.profile.disableTwoFactorAuthentication);
router.get ('/api/v1/profile', token, authorizeUser, routes.profile.get);
router.post('/api/v1/profile', json, token, authorizeUser, routes.profile.authorize, routes.profile.update);
router.get ('/api/v1/profile/avatar/:identifier', routes.profile.getAvatar); // this is not scoped so it can be used directly in img tag
router.post('/api/v1/profile/avatar', json, token, authorizeUser, (req, res, next) => { return typeof req.body.avatar === 'string' ? next() : multipart(req, res, next); }, routes.profile.setAvatar); // avatar is not exposed in LDAP. so it's personal and not locked
router.get ('/api/v1/profile/backgroundImage', token, authorizeUser, routes.profile.getBackgroundImage);
router.post('/api/v1/profile/backgroundImage', token, authorizeUser, multipart, routes.profile.setBackgroundImage); // backgroundImage is not exposed in LDAP. so it's personal and not locked
router.post('/api/v1/profile/password', json, token, authorizeUser, routes.users.verifyPassword, routes.profile.setPassword);
router.post('/api/v1/profile/twofactorauthentication_secret', json, token, authorizeUser, routes.profile.setTwoFactorAuthenticationSecret);
router.post('/api/v1/profile/twofactorauthentication_enable', json, token, authorizeUser, routes.profile.enableTwoFactorAuthentication);
router.post('/api/v1/profile/twofactorauthentication_disable', json, token, authorizeUser, routes.users.verifyPassword, routes.profile.disableTwoFactorAuthentication);
// app password routes
-router.get ('/api/v1/app_passwords', token, routes.appPasswords.list);
-router.post('/api/v1/app_passwords', json, token, routes.appPasswords.add);
-router.get ('/api/v1/app_passwords/:id', token, routes.appPasswords.get);
-router.del ('/api/v1/app_passwords/:id', token, routes.appPasswords.del);
+router.get ('/api/v1/app_passwords', token, authorizeUser, routes.appPasswords.list);
+router.post('/api/v1/app_passwords', json, token, authorizeUser, routes.appPasswords.add);
+router.get ('/api/v1/app_passwords/:id', token, authorizeUser, routes.appPasswords.get);
+router.del ('/api/v1/app_passwords/:id', token, authorizeUser, routes.appPasswords.del);
// access tokens
-router.get ('/api/v1/tokens', token, routes.tokens.list);
-router.post('/api/v1/tokens', json, token, routes.tokens.add);
-router.get ('/api/v1/tokens/:id', token, routes.tokens.verifyOwnership, routes.tokens.get);
-router.del ('/api/v1/tokens/:id', token, routes.tokens.verifyOwnership, routes.tokens.del);
+router.get ('/api/v1/tokens', token, authorizeUser, routes.tokens.list);
+router.post('/api/v1/tokens', json, token, authorizeUser, routes.tokens.add);
+router.get ('/api/v1/tokens/:id', token, authorizeUser, routes.tokens.verifyOwnership, routes.tokens.get);
+router.del ('/api/v1/tokens/:id', token, authorizeUser, routes.tokens.verifyOwnership, routes.tokens.del);
// user routes
router.get ('/api/v1/users', token, authorizeUserManager, routes.users.list);
@@ -177,38 +182,37 @@ function initializeExpressSync() {
router.post('/api/v1/users/:userId/password', json, token, authorizeUserManager, routes.users.load, routes.users.setPassword);
router.post('/api/v1/users/:userId/ghost', json, token, authorizeAdmin, routes.users.load, routes.users.setGhost);
router.put ('/api/v1/users/:userId/groups', json, token, authorizeUserManager, routes.users.load, routes.users.setGroups);
router.post('/api/v1/users/:userId/make_owner', json, token, authorizeOwner, routes.users.load, routes.users.makeOwner);
-router.post('/api/v1/users/:userId/twofactorauthentication_disable', json, token, authorizeUserManager, routes.users.load, routes.users.disableTwoFactorAuthentication);
-router.get ('/api/v1/users/:userId/password_reset_link', json, token, authorizeUserManager, routes.users.load, routes.users.getPasswordResetLink);
-router.post('/api/v1/users/:userId/send_password_reset_email', json, token, authorizeUserManager, routes.users.load, routes.users.sendPasswordResetEmail);
-router.get ('/api/v1/users/:userId/invite_link', json, token, authorizeUserManager, routes.users.load, routes.users.getInviteLink);
-router.post('/api/v1/users/:userId/send_invite_email', json, token, authorizeUserManager, routes.users.load, routes.users.sendInviteEmail);
+router.post('/api/v1/users/:userId/make_local', json, token, authorizeUserManager, routes.users.load, routes.users.makeLocal);
+router.get ('/api/v1/users/:userId/password_reset_link', json, token, authorizeUserManager, routes.users.load, routes.users.getPasswordResetLink);
+router.post('/api/v1/users/:userId/send_password_reset_email', json, token, authorizeUserManager, routes.users.load, routes.users.sendPasswordResetEmail);
+router.get ('/api/v1/users/:userId/invite_link', json, token, authorizeUserManager, routes.users.load, routes.users.getInviteLink);
+router.post('/api/v1/users/:userId/send_invite_email', json, token, authorizeUserManager, routes.users.load, routes.users.sendInviteEmail);
+router.post('/api/v1/users/:userId/twofactorauthentication_disable', json, token, authorizeUserManager, routes.users.load, routes.users.disableTwoFactorAuthentication);
// Group management
router.get ('/api/v1/groups', token, authorizeUserManager, routes.groups.list);
router.post('/api/v1/groups', json, token, authorizeUserManager, routes.groups.add);
router.get ('/api/v1/groups/:groupId', token, authorizeUserManager, routes.groups.get);
-router.put ('/api/v1/groups/:groupId/members', json, token, authorizeUserManager, routes.groups.updateMembers);
+router.put ('/api/v1/groups/:groupId/members', json, token, authorizeUserManager, routes.groups.setMembers);
router.post('/api/v1/groups/:groupId', json, token, authorizeUserManager, routes.groups.update);
router.del ('/api/v1/groups/:groupId', token, authorizeUserManager, routes.groups.remove);
// appstore and subscription routes
router.post('/api/v1/appstore/register_cloudron', json, token, authorizeOwner, routes.appstore.registerCloudron);
router.get ('/api/v1/appstore/web_token', json, token, authorizeOwner, routes.appstore.getWebToken);
-router.get ('/api/v1/appstore/subscription', token, routes.appstore.getSubscription); // for all users
+router.get ('/api/v1/appstore/subscription', token, authorizeUser, routes.appstore.getSubscription); // for all users
router.get ('/api/v1/appstore/apps', token, authorizeAdmin, routes.appstore.getApps);
router.get ('/api/v1/appstore/apps/:appstoreId', token, authorizeAdmin, routes.appstore.getApp);
router.get ('/api/v1/appstore/apps/:appstoreId/versions/:versionId', token, authorizeAdmin, routes.appstore.getAppVersion);
// app routes
-router.post('/api/v1/apps/install', json, token, authorizeAdmin, routes.apps.install);
-router.get ('/api/v1/apps', token, routes.apps.listByUser);
+router.post('/api/v1/apps/install', json, token, authorizeAdmin, routes.apps.install);
+router.get ('/api/v1/apps', token, authorizeUser, routes.apps.listByUser);
router.get ('/api/v1/apps/:id', token, routes.apps.load, authorizeOperator, routes.apps.getApp);
-router.get ('/api/v1/apps/:id/icon', token, routes.apps.load, routes.apps.getAppIcon);
-router.post('/api/v1/apps/:id/uninstall', json, token, authorizeAdmin, routes.apps.load, routes.apps.uninstall);
-router.post('/api/v1/apps/:id/configure/access_restriction', json, token, authorizeAdmin, routes.apps.load, routes.apps.setAccessRestriction);
-router.post('/api/v1/apps/:id/configure/operators', json, token, authorizeAdmin, routes.apps.load, routes.apps.setOperators);
+router.get ('/api/v1/apps/:id/icon', token, routes.apps.load, authorizeUser, routes.apps.getAppIcon);
+router.post('/api/v1/apps/:id/uninstall', json, token, routes.apps.load, authorizeAdmin, routes.apps.uninstall);
+router.post('/api/v1/apps/:id/configure/access_restriction', json, token, routes.apps.load, authorizeAdmin, routes.apps.setAccessRestriction);
+router.post('/api/v1/apps/:id/configure/operators', json, token, routes.apps.load, authorizeAdmin, routes.apps.setOperators);
router.post('/api/v1/apps/:id/configure/label', json, token, routes.apps.load, authorizeOperator, routes.apps.setLabel);
router.post('/api/v1/apps/:id/configure/tags', json, token, routes.apps.load, authorizeOperator, routes.apps.setTags);
router.post('/api/v1/apps/:id/configure/icon', json, token, routes.apps.load, authorizeOperator, routes.apps.setIcon);
@@ -222,10 +226,11 @@ function initializeExpressSync() {
router.post('/api/v1/apps/:id/configure/mailbox', json, token, routes.apps.load, authorizeAdmin, routes.apps.setMailbox);
router.post('/api/v1/apps/:id/configure/inbox', json, token, routes.apps.load, authorizeAdmin, routes.apps.setInbox);
router.post('/api/v1/apps/:id/configure/env', json, token, routes.apps.load, authorizeOperator, routes.apps.setEnvironment);
-router.post('/api/v1/apps/:id/configure/data_dir', json, token, routes.apps.load, authorizeAdmin, routes.apps.setDataDir);
+router.post('/api/v1/apps/:id/configure/storage', json, token, routes.apps.load, authorizeAdmin, routes.apps.setStorage);
router.post('/api/v1/apps/:id/configure/location', json, token, routes.apps.load, authorizeAdmin, routes.apps.setLocation);
+router.post('/api/v1/apps/:id/configure/mounts', json, token, routes.apps.load, authorizeAdmin, routes.apps.setMounts);
router.post('/api/v1/apps/:id/configure/crontab', json, token, routes.apps.load, authorizeOperator, routes.apps.setCrontab);
router.post('/api/v1/apps/:id/configure/upstream_uri', json, token, routes.apps.load, authorizeOperator, routes.apps.setUpstreamUri);
router.post('/api/v1/apps/:id/repair', json, token, routes.apps.load, authorizeOperator, routes.apps.repair);
router.post('/api/v1/apps/:id/check_for_updates', json, token, routes.apps.load, authorizeOperator, routes.apps.checkForUpdates);
router.post('/api/v1/apps/:id/update', json, token, routes.apps.load, authorizeOperator, routes.apps.update);
@@ -243,15 +248,25 @@ function initializeExpressSync() {
router.get ('/api/v1/apps/:id/eventlog', token, routes.apps.load, authorizeOperator, routes.apps.listEventlog);
router.get ('/api/v1/apps/:id/limits', token, routes.apps.load, authorizeOperator, routes.apps.getLimits);
router.get ('/api/v1/apps/:id/task', token, routes.apps.load, authorizeOperator, routes.apps.getTask);
-router.get ('/api/v1/apps/:id/graphs', token, routes.apps.load, authorizeOperator, routes.graphs.getGraphs); // TODO: restrict to app graphs
+router.get ('/api/v1/apps/:id/graphs', token, routes.apps.load, authorizeOperator, routes.graphs.getAppGraphs);
router.post('/api/v1/apps/:id/clone', json, token, routes.apps.load, authorizeAdmin, routes.apps.clone);
router.get ('/api/v1/apps/:id/download', token, routes.apps.load, authorizeOperator, routes.apps.downloadFile);
router.post('/api/v1/apps/:id/upload', json, token, multipart, routes.apps.load, authorizeOperator, routes.apps.uploadFile);
router.use ('/api/v1/apps/:id/files/*', token, routes.apps.load, authorizeOperator, routes.filemanager.proxy('app'));
router.get ('/api/v1/apps/:id/exec', token, routes.apps.load, authorizeOperator, routes.apps.exec);
router.post('/api/v1/apps/:id/exec', json, token, routes.apps.load, authorizeOperator, routes.apps.createExec);
router.get ('/api/v1/apps/:id/exec/:execId/start', token, routes.apps.load, authorizeOperator, routes.apps.startExec);
router.get ('/api/v1/apps/:id/exec/:execId', token, routes.apps.load, authorizeOperator, routes.apps.getExec);
+// websocket cannot do bearer authentication
+router.get ('/api/v1/apps/:id/execws', token, routes.apps.load, routes.accesscontrol.authorizeOperator, routes.apps.execWebSocket);
+router.get ('/api/v1/apps/:id/exec/:execId/startws', token, routes.apps.load, authorizeOperator, routes.apps.startExecWebSocket);
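The "websocket cannot do bearer authentication" comment refers to the browser WebSocket API, which cannot set an `Authorization` header. A common workaround is to also accept the access token from the query string; the helper below is a hypothetical sketch of that pattern (the name `extractToken` and the `access_token` query parameter are illustrative assumptions, not the actual Cloudron code):

```javascript
// Hypothetical token extractor: prefers the Authorization header, then
// falls back to a query-string token for websocket clients that cannot
// send "Authorization: Bearer ...". Illustrative only.
function extractToken(req) {
    const auth = req.headers && req.headers.authorization;
    if (auth && auth.startsWith('Bearer ')) return auth.slice('Bearer '.length);
    // e.g. wss://my.example.com/api/v1/apps/:id/execws?access_token=...
    return (req.query && req.query.access_token) || null;
}
```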
+// app links in dashboard
+router.get ('/api/v1/applinks', token, authorizeUser, routes.applinks.listByUser);
+router.post('/api/v1/applinks', json, token, authorizeAdmin, routes.applinks.add);
+router.get ('/api/v1/applinks/:id', token, authorizeAdmin, routes.applinks.get);
+router.post('/api/v1/applinks/:id', json, token, authorizeAdmin, routes.applinks.update);
+router.del ('/api/v1/applinks/:id', token, authorizeAdmin, routes.applinks.remove);
+router.get ('/api/v1/applinks/:id/icon', token, authorizeUser, routes.applinks.getIcon);
// branding routes
router.get ('/api/v1/branding/:setting', token, authorizeOwner, routes.branding.get);
@@ -288,6 +303,8 @@ function initializeExpressSync() {
router.post('/api/v1/mailserver/mailbox_sharing', token, authorizeAdmin, routes.mailserver.proxy, routes.mailserver.restart);
router.get ('/api/v1/mailserver/usage', token, authorizeMailManager, routes.mailserver.proxy);
router.use ('/api/v1/mailserver/queue', token, authorizeAdmin, routes.mailserver.queueProxy);
router.get ('/api/v1/mail/:domain', token, authorizeMailManager, routes.mail.getDomain);
router.post('/api/v1/mail/:domain/enable', json, token, authorizeAdmin, routes.mail.setMailEnabled);
router.get ('/api/v1/mail/:domain/status', token, authorizeMailManager, routes.mail.getStatus);
@@ -317,10 +334,10 @@ function initializeExpressSync() {
// domain routes
router.post('/api/v1/domains', json, token, authorizeAdmin, routes.domains.add);
-router.get ('/api/v1/domains', token, routes.domains.list);
+router.get ('/api/v1/domains', token, authorizeUser, routes.domains.list);
router.get ('/api/v1/domains/:domain', token, authorizeAdmin, routes.domains.get); // this is manage scope because it returns non-restricted fields
router.post('/api/v1/domains/:domain/config', json, token, authorizeAdmin, routes.domains.setConfig);
router.post('/api/v1/domains/:domain/wellknown', json, token, authorizeAdmin, routes.domains.setWellKnown);
router.del ('/api/v1/domains/:domain', token, authorizeAdmin, routes.domains.del);
router.get ('/api/v1/domains/:domain/dns_check', token, authorizeAdmin, routes.domains.checkDnsRecords);
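The `authorizeUser`/`authorizeAdmin`/`authorizeOwner` middlewares threaded through these routes can be pictured as a role-rank check ahead of each handler. The sketch below is a hypothetical simplification under assumed names (the role ranks and the `req.user` shape are illustrative, not the actual Cloudron access-control code):

```javascript
// Hypothetical role-based authorization middleware factory.
// Role names, their ranks and req.user are assumptions for illustration.
function authorizeScope(requiredRole) {
    const ranks = { user: 0, usermanager: 1, admin: 2, owner: 3 };
    return function (req, res, next) {
        if (!req.user) return res.status(401).json({ message: 'Unauthorized' });
        if (ranks[req.user.role] >= ranks[requiredRole]) return next();
        return res.status(403).json({ message: 'Forbidden' });
    };
}

const authorizeUser = authorizeScope('user');
const authorizeAdmin = authorizeScope('admin');
const authorizeOwner = authorizeScope('owner');
```

Because each helper is an ordinary Express middleware, it slots in anywhere after `token` has populated `req.user`, which is why the diff can add authorization to every route by inserting one argument per `router.*` call.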
