Compare commits


248 Commits

Author SHA1 Message Date
Girish Ramakrishnan 2ebe92fec3 Do not chown mail directory 2017-10-16 23:18:37 -07:00
Girish Ramakrishnan 628cf1e3de bump mail container for superfluous sa-update supervisor file 2017-10-16 21:16:58 -07:00
Girish Ramakrishnan 9e9aaf68f0 No need to migrate mail data anymore 2017-10-16 21:13:57 -07:00
Girish Ramakrishnan b595ca422c 1.7.4 changes 2017-10-16 15:28:36 -07:00
Girish Ramakrishnan 9273a6c726 Add option to disable hardlinks
We can probably remove this later based on usage
2017-10-16 15:22:40 -07:00
Johannes Zellner 76d00d4e65 Render changelog markdown as html in app update dialog 2017-10-17 00:07:58 +02:00
Johannes Zellner 668c03a11b Give visual feedback in the restore dialog when fetching backups 2017-10-16 22:31:49 +02:00
Girish Ramakrishnan 1e72d2d651 remove debugs (too noisy) 2017-10-16 12:34:09 -07:00
Girish Ramakrishnan 89fc8efc67 Save as empty array if find output is empty 2017-10-16 10:54:48 -07:00
Girish Ramakrishnan 241dbf160e Remove unused require 2017-10-15 14:07:03 -07:00
Girish Ramakrishnan e46bdc2caa Force the copy just like tar --overwrite 2017-10-13 23:23:36 -07:00
Girish Ramakrishnan e1cb91ca76 bump mail container 2017-10-13 22:36:54 -07:00
Girish Ramakrishnan 709c742c46 Fix tests 2017-10-12 21:14:13 -07:00
Girish Ramakrishnan ecad9c499c Port binding conflict can never happen in update route 2017-10-12 21:04:38 -07:00
Girish Ramakrishnan ed0879ffcd Stop the app only after the backup completed
App backup can take a long time or possibly not work at all. For such
cases, do not stop the app or leave it in some errored state.

newConfigJson is the new config the app is being updated to. This ensures
that the db has the correct app info during the update.
2017-10-12 18:10:41 -07:00
Girish Ramakrishnan 61e2878b08 save/restore exec bit in files
this covers the case where user might stash some executable files
that are used by plugins.
2017-10-12 16:18:11 -07:00
Girish Ramakrishnan d97034bfb2 Follow backup format for box backups as well 2017-10-12 11:02:52 -07:00
Girish Ramakrishnan 21942552d6 Clarify the per-app backup flag 2017-10-12 11:02:52 -07:00
Girish Ramakrishnan dd68c8f91f Various backup fixes 2017-10-12 11:02:48 -07:00
Girish Ramakrishnan 28ce5f41e3 handle errors in log stream 2017-10-11 12:55:56 -07:00
Girish Ramakrishnan 5694e676bd Set default retention to a week 2017-10-11 12:55:55 -07:00
Girish Ramakrishnan db8c5a116f Typo 2017-10-11 10:30:03 -07:00
Girish Ramakrishnan fa39f0fbf3 Add 1.7.3 changes 2017-10-11 00:50:41 -07:00
Girish Ramakrishnan 1444bb038f only upload needs to be retried
copy/delete are already retried in the sdk code
2017-10-11 00:08:41 -07:00
Girish Ramakrishnan ac9e421ecf improved backup progress and logging 2017-10-10 22:49:38 -07:00
Girish Ramakrishnan b60cbe5a55 move constant 2017-10-10 19:47:21 -07:00
Girish Ramakrishnan 56d794745b Sprinkle retries in syncer logic 2017-10-10 14:25:03 -07:00
Girish Ramakrishnan fd3b73bea2 typo in format name 2017-10-10 13:54:54 -07:00
Girish Ramakrishnan 78807782df Various hacks for exoscale-sos
SOS does not like multipart uploads. They just fail randomly.

As a fix, we try to detect filesystem files and skip multipart uploads
for files < 5GB. For files > 5GB, we do a multipart upload anyway (it mostly fails).

The box backup is switched to flat-file for exoscale for the reason
above.
2017-10-10 11:03:20 -07:00
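The workaround above boils down to a size check against the S3 single-PUT limit. A minimal sketch, assuming a 5GB threshold; the helper name is illustrative, not Cloudron's actual code:

```javascript
// Exoscale SOS fails multipart uploads randomly, so skip multipart below
// the 5GB single-PUT limit; above it, multipart is the only option.
// Illustrative sketch, not the actual Cloudron implementation.
const FIVE_GB = 5 * 1024 * 1024 * 1024;

function useMultipart(fileSizeBytes) {
    return fileSizeBytes >= FIVE_GB;
}
```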
Girish Ramakrishnan 754b29b263 Start out empty if the previous run errored 2017-10-09 20:12:21 -07:00
Girish Ramakrishnan 9f97f48634 Add note on s3ForcePathStyle 2017-10-09 18:46:14 -07:00
Girish Ramakrishnan 815e5d9d9a graphs: Compute width of system graph from total memory
Fixes #452
2017-10-09 14:58:32 -07:00
Girish Ramakrishnan 91ec2eaaf5 sos: "/" must separate bucket and key name 2017-10-09 11:50:22 -07:00
Girish Ramakrishnan f8d3a7cadd Bump mail container (fixes spam crash) 2017-10-06 16:45:21 -07:00
Girish Ramakrishnan d04a09b015 Add note on bumping major infra version 2017-10-06 15:52:04 -07:00
Girish Ramakrishnan 5d997bcc89 Just mark DO Spaces as experimental instead 2017-10-06 14:45:14 -07:00
Girish Ramakrishnan f0dd90a1f5 listObjectsV2 does not work on some S3 providers
specifically, cloudscale does not support it
2017-10-05 12:07:14 -07:00
Girish Ramakrishnan ee8ee8e786 KeyCount is not set on some S3 providers 2017-10-05 11:36:54 -07:00
Girish Ramakrishnan ee1a4411f8 Do not crash if prefix is empty string
('' || undefined) will return undefined ...
2017-10-05 11:08:01 -07:00
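The gotcha this commit refers to is JavaScript truthiness: the empty string is falsy, so `||` falls through to the right operand. A minimal illustration; the guard function is hypothetical, not Cloudron's code:

```javascript
// '' is falsy, so || skips it and yields the right-hand operand
const prefix = '' || undefined;   // undefined, not ''

// an explicit type check preserves the empty string
function normalizePrefix(p) {
    return typeof p === 'string' ? p : '';
}
```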
Girish Ramakrishnan df6e6cb071 Allow s3 backend to accept self-signed certs
Fixes #316
2017-10-05 10:14:55 -07:00
Girish Ramakrishnan ba5645a20e Disable DO spaces since it is not yet production ready 2017-10-05 09:21:26 -07:00
Girish Ramakrishnan ca502a2d55 Display error code 2017-10-04 22:34:44 -07:00
Girish Ramakrishnan ecd53b48db Display the backup format 2017-10-04 22:11:11 -07:00
Girish Ramakrishnan b9efb0b50b Fix callback invocation 2017-10-04 19:28:40 -07:00
Johannes Zellner 3fb5034ebd Ensure we setup the correct OAuth redirectURI if altDomain is used 2017-10-05 01:10:25 +02:00
Girish Ramakrishnan afed3f3725 Remove duplicate debug 2017-10-04 15:08:26 -07:00
Girish Ramakrishnan b4f14575d7 Add 1.7.1 changes 2017-10-04 14:31:41 -07:00
Johannes Zellner f437a1f48c Only allow dns setup with subdomain if enterprise query argument is provided 2017-10-04 22:25:14 +02:00
Girish Ramakrishnan c3d7d867be Do not set logCallback 2017-10-04 12:32:12 -07:00
Girish Ramakrishnan 96c16cd5d2 remove debug 2017-10-04 11:54:17 -07:00
Girish Ramakrishnan af182e3df6 caas: cache the creds, otherwise we bombard the server 2017-10-04 11:49:38 -07:00
Girish Ramakrishnan d70ff7cd5b Make copy() return event emitter
This way the storage logic does not need to rely on progress
2017-10-04 11:02:50 -07:00
Johannes Zellner 38331e71e2 Ensure all S3 CopySource properties are URI encoded 2017-10-04 19:07:08 +02:00
Johannes Zellner 322a9a18d7 Use multipart copy for s3 and files larger than 5GB 2017-10-04 18:56:23 +02:00
Johannes Zellner 423ef546a9 Merge branch 'user_agent' into 'master'
Added user agent to health checks

See merge request !19
2017-10-04 11:48:02 +00:00
Dennis Schwerdel e3f3241966 Added user agent to health checks 2017-10-04 13:05:00 +02:00
Johannes Zellner eaef384ea5 Improve the invite link display
Fixes #445
2017-10-04 13:03:32 +02:00
Girish Ramakrishnan b85bc3aa01 s3: Must encode copySource
https://github.com/aws/aws-sdk-js/issues/1302
2017-10-03 15:51:05 -07:00
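The linked aws-sdk issue is that the `CopySource` parameter ("bucket/key") must be URI encoded, or keys containing spaces and special characters fail. A sketch of such an encoder; the helper name is illustrative:

```javascript
// encode each segment of the key while keeping '/' separators intact,
// as the S3 CopySource parameter requires URI-encoded keys
function copySourceFor(bucket, key) {
    return bucket + '/' + key.split('/').map(encodeURIComponent).join('/');
}
```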
Girish Ramakrishnan 01154d0ae6 s3: better error messages 2017-10-03 14:46:59 -07:00
Girish Ramakrishnan 6494050d66 Make removeDir less noisy 2017-10-03 01:22:37 -07:00
Girish Ramakrishnan 8c7223ceed Fix cleanup logic to use the app backup format
box backup and app backup can have different format
2017-10-03 00:56:34 -07:00
Girish Ramakrishnan 21afc71d89 add tests for storage backends 2017-10-02 23:08:16 -07:00
Girish Ramakrishnan 7bf70956a1 fix tests 2017-10-02 18:42:13 -07:00
Girish Ramakrishnan 9e9b8b095e Provide dhparams.pem to the mail container 2017-10-02 01:51:28 -07:00
Girish Ramakrishnan 0f543e6703 s3: add progress detail
this is a bit of a hack and we should add another way to set the progress
(maybe via backups.setProgress or via a progress callback). this is because
some methods like removeDir can be called from backuptask and from box code.
2017-10-01 18:25:51 -07:00
Girish Ramakrishnan f9973e765c Add backup cleanup eventlog 2017-10-01 10:35:50 -07:00
Girish Ramakrishnan e089851ae9 add debugs 2017-09-30 20:36:08 -07:00
Girish Ramakrishnan c524d68c2f fix crash when cleaning up snapshots 2017-09-30 20:31:41 -07:00
Girish Ramakrishnan 5cccb50a31 fix backup cleanup logic 2017-09-30 18:38:45 -07:00
Girish Ramakrishnan 3d375b687a style: Fix quoting 2017-09-30 18:26:38 -07:00
Girish Ramakrishnan a93d453963 rename flat-file to rsync
not a name I like but cannot come up with anything better

https://en.wikipedia.org/wiki/Flat_file_database

the term 'rsync format' seems to be used in a few places
2017-09-30 14:19:19 -07:00
Girish Ramakrishnan f8ac2d4628 1.7.0 changes 2017-09-30 14:02:06 -07:00
Girish Ramakrishnan d5ba73716b add emptydirs test 2017-09-29 15:29:22 -07:00
Girish Ramakrishnan 954224dafb make syncer track directories 2017-09-29 15:29:18 -07:00
Johannes Zellner 8b341e2bf8 Only make nginx listen on ipv6 connections if it is supported by the system
Could not decide on the ejs formatting; it never looks nice to me
2017-09-29 19:43:37 +02:00
Johannes Zellner 78fb9401ee Add config.hasIPv6() 2017-09-29 19:43:37 +02:00
Girish Ramakrishnan 4a5cbab194 Do not remove parent directory in fs.remove()
Do the pruning in the cleanup logic instead
2017-09-28 20:55:45 -07:00
Girish Ramakrishnan 19999abc50 s3: fix restore 2017-09-28 14:35:49 -07:00
Girish Ramakrishnan 5123b669d7 remove options.concurrency 2017-09-28 12:20:15 -07:00
Girish Ramakrishnan 565c8445e1 make backup progress work for per-app backups 2017-09-28 11:17:48 -07:00
Girish Ramakrishnan 404a019c56 s3: Check IsTruncated before accessing Contents 2017-09-28 10:36:56 -07:00
Girish Ramakrishnan 24dee80aa6 Make box backups always tarball based
this makes cloudron easy to restore. in the future, if required,
we can move out the mail data as a separate virtual app backup
2017-09-28 10:22:10 -07:00
Girish Ramakrishnan ce6df4bf96 Disable encryption for flat-file for now 2017-09-28 09:47:18 -07:00
Girish Ramakrishnan f8f6c7d93e Add progress detail when rotating snapshots 2017-09-28 09:29:46 -07:00
Girish Ramakrishnan bafc6dce98 s3: refactor out directory listing 2017-09-27 21:59:51 -07:00
Girish Ramakrishnan 56ee4d8e25 Remove old cache files when backup settings is changed 2017-09-27 21:04:46 -07:00
Girish Ramakrishnan eeef221b4e Fix race where pipe finishes before file is created
When there are 0 length files, this is easily reproducible.
2017-09-27 19:40:26 -07:00
Girish Ramakrishnan 4674653982 compare size and inode as well 2017-09-27 19:39:03 -07:00
Girish Ramakrishnan a34180c27b Add format to backupsdb
Call remove/removeDir based on the format
2017-09-27 18:02:30 -07:00
Girish Ramakrishnan aa8ce2c62e Use graphite 0.12.0
this fixes an issue where carbon does not startup properly
if a previous pid file was present
2017-09-27 15:35:55 -07:00
Girish Ramakrishnan b3c6b8aa15 do not spawn process just for chown 2017-09-27 15:07:19 -07:00
Girish Ramakrishnan 44a7a2579c rework backup status
* show backup progress even if not initiated by UI
* display backup progress in separate line
2017-09-27 15:07:15 -07:00
Girish Ramakrishnan 39f0e476f2 Start out empty if cache file is missing 2017-09-27 12:09:19 -07:00
Girish Ramakrishnan 003dc0dbaf Add todo 2017-09-27 11:50:49 -07:00
Girish Ramakrishnan e39329218d Make tests work 2017-09-27 11:38:43 -07:00
Girish Ramakrishnan 8d3fbc5432 Save backup logs and fix backup progress 2017-09-26 21:09:00 -07:00
Girish Ramakrishnan 2780de631e writable streams emit finish 2017-09-26 16:43:51 -07:00
Girish Ramakrishnan 399c756735 use exec so that filenames do not have to be escaped 2017-09-26 15:53:42 -07:00
Girish Ramakrishnan 859311f9e5 Process delete commands before add commands
This is required for cases where a dir becomes a file (or vice-versa)
2017-09-26 15:33:54 -07:00
Girish Ramakrishnan a9e89b57d9 merge caas storage into s3 backend 2017-09-26 12:28:33 -07:00
Girish Ramakrishnan 4e68abe51d Handle fs errors 2017-09-26 12:10:58 -07:00
Girish Ramakrishnan 12083f5608 Ignore all special files 2017-09-26 11:41:01 -07:00
Girish Ramakrishnan d1efb2db56 remove bogus mkdir 2017-09-26 11:34:24 -07:00
Girish Ramakrishnan adde28523f Add backup format to the backup UI 2017-09-26 10:46:02 -07:00
Girish Ramakrishnan f122f46fe2 Generate new index file by appending to file 2017-09-26 07:57:20 -07:00
Girish Ramakrishnan ad7fadb4a9 display backup id in the ui 2017-09-26 07:45:23 -07:00
Johannes Zellner be383582e0 Do not rely on external resource in the appstatus page 2017-09-26 15:33:05 +02:00
Girish Ramakrishnan 0a60365143 Initial version of flat-file uploader 2017-09-26 00:17:11 -07:00
Girish Ramakrishnan 2f6cb3e913 set format in the backup ui 2017-09-26 00:01:36 -07:00
Girish Ramakrishnan b0f85678d4 Implement downloadDir for flat-file format 2017-09-23 18:07:26 -07:00
Girish Ramakrishnan e43413e063 implement remove dir in storage backends 2017-09-23 12:34:51 -07:00
Girish Ramakrishnan e39a5c8872 preserve env in backuptask.js 2017-09-22 11:19:44 -07:00
Girish Ramakrishnan fb4b75dd2a Fix typo in comment 2017-09-22 11:19:37 -07:00
Girish Ramakrishnan 3c1ccc5cf4 Add exoscale provider 2017-09-21 17:50:03 -07:00
Girish Ramakrishnan abd66d6524 Add cloudscale as a provider 2017-09-21 17:49:26 -07:00
Girish Ramakrishnan b61b7f80b5 Add DO spaces 2017-09-21 12:25:39 -07:00
Girish Ramakrishnan efa850614d Add a s3-v4-compat provider 2017-09-21 12:13:45 -07:00
Girish Ramakrishnan 21c534c806 Ensure format is set in backupConfig 2017-09-21 09:49:55 -07:00
Girish Ramakrishnan 7e4ff2440c Fix text for manual DNS 2017-09-21 09:10:12 -07:00
Johannes Zellner f415e19f6f Do not unnecessarily mention error in the logs
Not so friendly for log searches
2017-09-21 15:00:35 +02:00
Girish Ramakrishnan 97da8717ca Refactor backup strategy logic into backups.js 2017-09-20 14:09:55 -07:00
Girish Ramakrishnan cbddb79d15 Resolve the id in rotateAppBackup 2017-09-20 09:38:55 -07:00
Johannes Zellner bffb935f0f Also send digest to appstore account owner 2017-09-20 16:33:25 +02:00
Johannes Zellner e50e0f730b Make nginx listen on :: for ipv6 2017-09-20 16:33:25 +02:00
Girish Ramakrishnan 26f33a8e9b Send resolved path to the storage APIs 2017-09-19 21:58:35 -07:00
Girish Ramakrishnan 952b1f6304 Make backuptask call back into backups.js 2017-09-19 20:27:49 -07:00
Girish Ramakrishnan a3293c4c35 Fix tests 2017-09-19 12:43:13 -07:00
Girish Ramakrishnan 4892473eff backupIds do not have extension anymore
this code existed for legacy reasons
2017-09-19 12:34:09 -07:00
Girish Ramakrishnan 221d5f95e1 ensure backupFolder is always set 2017-09-19 12:34:09 -07:00
Girish Ramakrishnan 84649b9471 Bring back backuptask
This is required for various small reasons:

* dir iteration with a way to pass messages back to the upload() easily
* can be killed independently of box code
* allows us to run sync (blocking) commands in the upload logic
2017-09-19 12:32:38 -07:00
Girish Ramakrishnan 44435559ab Typo 2017-09-19 10:37:45 -07:00
Girish Ramakrishnan c351660a9a Implement backup rotation
Always upload to 'snapshot' dir and then rotate it. This will allow
us to keep pushing incrementally to 'snapshot' and do server side
rotations.
2017-09-18 21:17:34 -07:00
Girish Ramakrishnan 0a24130fd4 Just reset config instead of clearing cache 2017-09-18 19:41:15 -07:00
Girish Ramakrishnan ea13f8f97e Fix checkInstall script 2017-09-18 18:19:27 -07:00
Johannes Zellner d00801d020 Only require service account key for google dns on setup 2017-09-18 23:50:34 +02:00
Girish Ramakrishnan 8ced0aa78e copy: use hardlinks to preserve space 2017-09-18 14:29:48 -07:00
Girish Ramakrishnan f5d32a9178 copyBackup -> copy 2017-09-18 14:29:15 -07:00
Girish Ramakrishnan 7fc45b3215 Refactor out the backup snapshot logic 2017-09-18 12:43:11 -07:00
Girish Ramakrishnan 9bed14a3e8 Enable IP6 in unbound
On some providers (https://www.nine.ch), disabling IPv6 makes unbound
not respond to DNS queries.

Also, I was unable to test with prefer-ip6 set to 'no' because unbound fails:

unbound[5657]: /etc/unbound/unbound.conf.d/cloudron-network.conf:8: error: unknown keyword 'no'
unbound[5657]: read /etc/unbound/unbound.conf failed: 3 errors in configuration file
2017-09-18 11:41:02 -07:00
Girish Ramakrishnan 71233ecd95 Fix undefined variable 2017-09-18 11:14:04 -07:00
Girish Ramakrishnan 02097298c6 Fix indentation 2017-09-18 10:38:30 -07:00
Girish Ramakrishnan be03dd0821 remove unused require 2017-09-18 10:38:26 -07:00
Girish Ramakrishnan 5b77d2f0cf Add commented out debugging section for unbound 2017-09-18 10:38:22 -07:00
Girish Ramakrishnan 781f543e87 Rename API calls in the storage backend 2017-09-17 18:50:29 -07:00
Girish Ramakrishnan 6525a467a2 Rework backuptask into tar.js
This makes it easy to integrate another backup strategy
as the next step
2017-09-17 18:50:26 -07:00
Girish Ramakrishnan 6cddd61a24 Fix style 2017-09-17 18:50:23 -07:00
Girish Ramakrishnan b0ee116004 targz: make sourceDir a string 2017-09-17 18:50:15 -07:00
Girish Ramakrishnan 867a59d5d8 Pull it all to left 2017-09-15 15:47:37 -07:00
Girish Ramakrishnan 6f5085ebc3 Downcase email 2017-09-15 15:45:26 -07:00
Johannes Zellner e8a93dcb1b Add button to send test email
Fixes #419
2017-09-15 14:42:12 +02:00
Girish Ramakrishnan 09fe957cc7 style 2017-09-15 02:07:06 -07:00
Girish Ramakrishnan 020ccc8a99 gcdns: fix update/del confusion
in the DNS api, we always update/del all records of same type
2017-09-15 01:54:39 -07:00
Girish Ramakrishnan 7ed304bed8 Fix cloudflare domain display 2017-09-15 00:50:29 -07:00
Girish Ramakrishnan db1e39be11 Do not overwrite subdomain when location was changed
* Install in subdomain 'test'
* Move to subdomain 'test2'
* Move to another existing subdomain 'www' (this should be detected as conflict)
* Move to subdomain 'www2' (this should not remove 'www'). This is why dnsRecordId exists.
2017-09-14 22:31:48 -07:00
Girish Ramakrishnan f163577264 Typo 2017-09-14 18:38:48 -07:00
Girish Ramakrishnan 9c7080aea1 Show email text for gcdns 2017-09-14 18:33:07 -07:00
Girish Ramakrishnan c05a7c188f Coding style fixes 2017-09-14 18:15:59 -07:00
Girish Ramakrishnan 72e912770a translate network errors to SubdomainError
fixes #391
2017-09-14 16:14:16 -07:00
Girish Ramakrishnan 28c06d0a72 bump mail container 2017-09-14 12:07:53 -07:00
Girish Ramakrishnan 9805daa835 Add google-cloud/dns to shrinkwrap 2017-09-14 10:45:04 -07:00
Girish Ramakrishnan a920fd011c Merge branch 'feature/gcdns' into 'master'
Adding Google Cloud DNS support

See merge request !17
2017-09-14 17:44:20 +00:00
Girish Ramakrishnan 1b979ee1e9 Send rbl status as part of email check 2017-09-13 23:58:54 -07:00
Girish Ramakrishnan 70eae477dc Fix logstream test 2017-09-13 23:01:04 -07:00
Girish Ramakrishnan c16f7c7891 Fix storage tests 2017-09-13 22:50:38 -07:00
Girish Ramakrishnan 63b8a5b658 Add update pattern of wednesday night
Fixes #432, #435
2017-09-13 14:52:31 -07:00
Aleksandr Bogdanov c0bf51b79f A bit more polish 2017-09-13 21:17:40 +02:00
Aleksandr Bogdanov 3d4178b35c Adding Google Cloud DNS to "setupdns" stage 2017-09-13 21:00:29 +02:00
Aleksandr Bogdanov 34878bbc6a Make sure we don't touch records which are not managed by cloudron, but are in the same zone 2017-09-13 20:53:38 +02:00
Girish Ramakrishnan e78d976c8f Fix backup mapping (mail dir has moved) 2017-09-13 09:51:20 -07:00
Girish Ramakrishnan ba9662f3fa Add 1.6.5 changes 2017-09-12 22:32:57 -07:00
Girish Ramakrishnan c8750a3bed merge the logrotate scripts 2017-09-12 22:03:24 -07:00
Girish Ramakrishnan 9710f74250 remove collectd stats when app is uninstalled 2017-09-12 21:34:15 -07:00
Girish Ramakrishnan 52095cb8ab add debugs for timing backup and restore 2017-09-12 15:37:35 -07:00
Aleksandr Bogdanov c612966b41 Better validation 2017-09-12 22:47:46 +02:00
Aleksandr Bogdanov 90cf4f0784 Allowing to select a service account key as a file for gcdns 2017-09-12 22:35:40 +02:00
Aleksandr Bogdanov ec93d564e9 Adding Google Cloud DNS to webadmin 2017-09-12 19:03:23 +02:00
Aleksandr Bogdanov 37f9e60978 Fixing verifyDns 2017-09-12 16:29:07 +02:00
Johannes Zellner ca199961d5 Make settings.value field TEXT
We already store JSON blobs there and the gce dns backend
will require a larger blob for a certificate.
Since we use InnoDB, the storage format for TEXT only differs
if the data is large
2017-09-11 15:41:07 +02:00
Girish Ramakrishnan fd811ac334 Remove "cloudron" to fit in one line 2017-09-10 17:43:21 -07:00
Girish Ramakrishnan 609c1d3b78 bump mail container
this is also required since we moved the maildir
2017-09-10 00:07:48 -07:00
Girish Ramakrishnan 9906ed37ae Move mail data inside boxdata directory
This also makes the noop backend more useful because it will dump things
in the data directory and the user can back it up as they see fit.
2017-09-10 00:07:44 -07:00
Girish Ramakrishnan dcdce6d995 Use MAIL_DATA_DIR constant 2017-09-09 22:24:16 -07:00
Girish Ramakrishnan 9026c555f9 snapshots dir is not used anymore 2017-09-09 22:13:15 -07:00
Girish Ramakrishnan 547a80f17b make shell.exec options non-optional 2017-09-09 19:54:31 -07:00
Girish Ramakrishnan 300d3dd545 remove unused requires 2017-09-09 19:23:22 -07:00
Aleksandr Bogdanov 6fce729ed2 Adding Google Cloud DNS 2017-09-09 17:45:26 +02:00
Girish Ramakrishnan d233ee2a83 ask password only for destructive actions 2017-09-08 15:14:37 -07:00
Girish Ramakrishnan 3240a71feb wording 2017-09-08 14:42:54 -07:00
Girish Ramakrishnan 322be9e5ba Add ip blacklist check
Fixes #431
2017-09-08 13:29:32 -07:00
Girish Ramakrishnan e67ecae2d2 typo 2017-09-07 22:01:37 -07:00
Girish Ramakrishnan 75b3e7fc78 resolve symlinks correctly for deletion
part of #394
2017-09-07 21:57:08 -07:00
Girish Ramakrishnan 74c8d8cc6b set label on the redis container
this ensures that redis is stopped when app is stopped and also
helps identifying app related containers easily
2017-09-07 20:09:46 -07:00
Girish Ramakrishnan 51659a8d2d set label on the redis container
this ensures that redis is stopped when app is stopped and also
helps identifying app related containers easily
2017-09-07 19:54:05 -07:00
Girish Ramakrishnan 70acf1a719 Allow app volumes to be symlinked
The initial plan was to make app volumes to be set using a database
field but this makes the app backups non-portable. It also complicates
things with respect to app and server restores.

For now, ignore the problem and let them be symlinked.

Fixes #394
2017-09-07 15:50:34 -07:00
Girish Ramakrishnan 8d2f3b0217 Add note on disabling ssh password auth 2017-09-06 11:36:23 -07:00
Girish Ramakrishnan e498678488 Use node 6.11.3 2017-09-06 09:39:22 -07:00
Girish Ramakrishnan 513517b15e cf dns: filter by type and name in the REST API
Otherwise, we will have to implement pagination
2017-09-05 16:07:14 -07:00
Girish Ramakrishnan a96f8abaca DO DNS: list all pages of the domain 2017-09-05 15:52:59 -07:00
Johannes Zellner f7bcd54ef5 Better ui feedback on the repair mode 2017-09-05 23:11:04 +02:00
Johannes Zellner d58e4f58c7 Add hook to react whenever apps have changed 2017-09-05 23:10:45 +02:00
Girish Ramakrishnan 45f0f2adbe Fix wording 2017-09-05 10:38:33 -07:00
Johannes Zellner 36c72dd935 Sendgrid only has an API key, similar to Postmark
Fixes #411
2017-09-05 11:28:28 +02:00
Girish Ramakrishnan df9e2a7856 Use robotsTxt in install route 2017-09-04 12:59:14 -07:00
Girish Ramakrishnan 2b043aa95f remove unused require 2017-09-04 12:59:05 -07:00
Johannes Zellner c0a09d1494 Add 1.6.4 changes 2017-09-04 18:53:11 +02:00
Johannes Zellner 1c5c4b5705 Improve overflow handling in logs and terminal view 2017-09-04 18:40:16 +02:00
Girish Ramakrishnan b56dcaac68 Only run scheduler when app is healthy
Fixes #393
2017-09-03 18:21:13 -07:00
Girish Ramakrishnan fd91ccc844 Update the unbound anchor key
This helps the unbound recover from any previous out of disk space
situation.

part of #269
2017-09-03 17:48:26 -07:00
Johannes Zellner fca1a70eaa Add initial repair button alongside webterminal
Part of #416
2017-09-01 20:08:22 +02:00
Johannes Zellner ed81b7890c Fixup the test for the password requirement change 2017-09-01 20:08:22 +02:00
Johannes Zellner cb8dcbf3dd Lift the password requirement for app configure/update/restore actions 2017-09-01 20:08:22 +02:00
Johannes Zellner 4bdbf1f62e Fix indentation 2017-09-01 20:08:22 +02:00
Johannes Zellner 47a8b4fdc2 After consuming the accessToken query param, remove it
Fixes #415
2017-09-01 10:25:28 +02:00
Johannes Zellner 5720e90580 Guide the user to use ctrl+v for pasting into webterminal
Fixes #413
2017-08-31 20:52:04 +02:00
Johannes Zellner f98e13d701 Better highlight dropdown menu hovers 2017-08-31 20:52:04 +02:00
Johannes Zellner d5d924861b Fix gravatar margin in navbar 2017-08-31 20:52:04 +02:00
Girish Ramakrishnan b81a92d407 disable ip6 in unbound as well
part of #412
2017-08-31 11:41:35 -07:00
Johannes Zellner 22b0100354 Ensure we don't crash if the terminal socket is not ready yet
Upstream patch submitted https://github.com/sourcelair/xterm.js/pull/933
2017-08-31 20:31:31 +02:00
Johannes Zellner 6eb6eab3f4 Let the browser handle paste keyboard shortcuts
Related to #413
2017-08-31 20:31:31 +02:00
Girish Ramakrishnan 57d5c2cc47 Use IPv4 address to connect to mysql
Fixes #412
2017-08-31 10:59:14 -07:00
Johannes Zellner 6a9eac7a24 Use the correct input change event
Fixes #414
2017-08-31 19:06:02 +02:00
Johannes Zellner e4760a07f0 Give feedback if the relay settings have successfully saved 2017-08-30 11:02:13 +02:00
Johannes Zellner 257e594de0 Allow mail relay provider specific UI
Only contains specific UI for postmark

Part of #411
2017-08-30 10:55:36 +02:00
Girish Ramakrishnan 6fea022a04 remove dead code 2017-08-29 14:47:59 -07:00
Girish Ramakrishnan f34840d127 remove old data migration paths 2017-08-29 13:08:31 -07:00
Girish Ramakrishnan f9706d6a05 Always generate nginx config for webadmin
Part of #406
2017-08-28 21:16:47 -07:00
Girish Ramakrishnan 61f7c1af48 Remove unused error codes 2017-08-28 15:27:17 -07:00
Girish Ramakrishnan 00786dda05 Do not crash if DNS creds do not work during startup
If DNS creds are invalid, then platform.start() keeps crashing on a
mail container update. For now, just log the error and move on.

Part of #406
2017-08-28 14:55:36 -07:00
Girish Ramakrishnan 8b9f44addc 1.6.3 changes 2017-08-28 13:49:15 -07:00
Johannes Zellner 56c7dbb6e4 Do not attempt to reconnect if the debug view is already gone
Fixes #408
2017-08-28 21:06:25 +02:00
Girish Ramakrishnan c47f878203 Set priority for MX records
Fixes #410
2017-08-26 15:54:38 -07:00
Girish Ramakrishnan 8a2107e6eb Show email text for Cloudflare 2017-08-26 15:37:24 -07:00
Girish Ramakrishnan cd9f0f69d8 email dialog has moved to its own view 2017-08-26 15:36:12 -07:00
Girish Ramakrishnan 1da91b64f6 Filter out possibly sensitive information for normal users
Fixes #407
2017-08-26 14:47:51 -07:00
Johannes Zellner a87dd65c1d Workaround for firefox flexbox bug
Fixes selection while clicking on empty flexbox space.

This only happens in firefox and seems to be a bug in firefox
flexbox implementation, where the first child element with a
non zero size, in a flexbox managed `block` element, has the
`float` property.

Fixes #405
2017-08-24 23:29:42 +02:00
Johannes Zellner 7c63d9e758 Fix typo in css 2017-08-24 23:16:36 +02:00
Girish Ramakrishnan 329bf596ac Indicate that directories can be downloaded 2017-08-24 13:38:50 -07:00
Girish Ramakrishnan 2a57c4269a handle app not found 2017-08-23 13:23:04 -07:00
Girish Ramakrishnan ca8813dce3 1.6.2 changes 2017-08-23 10:43:27 -07:00
Girish Ramakrishnan 3aebf51360 Fix upload of large files to apps
6a0ef7a1c1 broke the upload for apps

e2e test is being added
2017-08-23 10:22:54 -07:00
Johannes Zellner 103f8db8cb Do not expand to fixed pixel size on mobile 2017-08-23 16:57:34 +02:00
Johannes Zellner 04c127b78d Add changes for 1.6.1
Due to regressions we skip 1.6.0, thus the changelog is the same
2017-08-23 16:14:30 +02:00
Johannes Zellner 9bef1bcf64 Hijack and demux the container exec stream to be compliant with new dockerode
2017-08-23 16:04:50 +02:00
Johannes Zellner 718413c089 autocomplete attribute is not respected for username/password fields
Since the cloudflare email input field is above the password field
some browsers will automatically autofill it with the username
as it looks like a login form. So we add a hidden unused input field
which gets autofilled instead :-/
2017-08-23 13:13:00 +02:00
Girish Ramakrishnan a34691df44 Hide the header as well 2017-08-22 09:30:18 -07:00
Girish Ramakrishnan 795e38fe82 file is an object 2017-08-22 09:15:46 -07:00
Johannes Zellner 1d348fb0f3 Do not lose focus on terminal 2017-08-22 16:24:26 +02:00
Johannes Zellner 91f3318879 Implement rightclick menu for terminal text copy 2017-08-22 16:23:06 +02:00
114 changed files with 7446 additions and 2181 deletions
@@ -959,4 +959,123 @@
* Add webterminal to shell into apps from the admin UI
* Update Haraka for a few crash fixes
[1.6.1]
* Patch release for 1.6.0 to fix regressions
* Allow apps to have 'network' capability (thanks @mehdi)
* Fix crash in collectd disk usage collection script
* Fix layout issues in update and oauth views
* Use maxsize rule instead of size in logrotate configs
* Make it possible to skip backups per-app
* Hide restore button for noop backend
* Add popups and warnings for noop backend
* Add webterminal to shell into apps from the admin UI
* Update Haraka for a few crash fixes
[1.6.2]
* Allow apps to have 'network' capability (thanks @mehdi)
* Fix crash in collectd disk usage collection script
* Fix layout issues in update and oauth views
* Use maxsize rule instead of size in logrotate configs
* Make it possible to skip backups per-app
* Hide restore button for noop backend
* Add popups and warnings for noop backend
* Add webterminal to shell into apps from the admin UI
* Update Haraka for a few crash fixes
[1.6.3]
* Fixes selection issue while clicking on empty flexbox space
* Indicate directories can be downloaded in the web terminal
* Do not show app update indicator for normal users
* Display email notice when using Cloudflare DNS
* Set MX records correctly when using Cloudflare DNS
* Fix bug where webterminal can incorrectly appear in main view
* Do not crash if DNS credentials are invalid
[1.6.4]
* More descriptive Postmark email relay form
* Fix file upload in chrome
* Support Ctrl/Cmd+v webterminal pasting
* Ensure unbound always starts up
* Add option to run app in repair mode
[1.6.5]
* DigitalOcean DNS: Add pagination
* Cloudflare DNS: Optimize listing of DNS entries
* Update node to 6.11.3
* App volumes can now be symlinked individually to external storage
* Periodically check if IP is blacklisted and notify admins
* Do not ask password when re-configuring app (since it is non-destructive)
* Move mail data inside boxdata directory. This makes the no-op backend more useful
* Remove collectd stats when app is uninstalled
[1.7.0]
* Add rsync format for backups. This feature allows incremental backups
* Add Google DNS backend (thanks @syn)
* Add DigitalOcean spaces backup storage backend
* Add Cloudscale and Exoscale as supported VPS providers
* Display backup progress and status in the web interface
* Preliminary IPv6 support
* Add IP RBL status to web interface
* Add auto-update pattern `Every wednesday night`
* Update Haraka to 2.8.15. This fixes the issue where emails were bounced with the message 'Send MAIL FROM first'
* Do not overwrite existing subdomain when app's location is changed
* Add button to send test email
* Fix crash in carbon which made graphs disappear on some Cloudrons
[1.7.1]
* Add rsync format for backups. This feature allows incremental backups
* Add Google DNS backend (thanks @syn)
* Add DigitalOcean spaces backup storage backend
* Add Cloudscale and Exoscale as supported VPS providers
* Display backup progress and status in the web interface
* Preliminary IPv6 support
* Add IP RBL status to web interface
* Add auto-update pattern `Every wednesday night`
* Update Haraka to 2.8.15. This fixes the issue where emails were bounced with the message 'Send MAIL FROM first'
* Do not overwrite existing subdomain when app's location is changed
* Add button to send test email
* Fix crash in carbon which made graphs disappear on some Cloudrons
[1.7.2]
* Add rsync format for backups. This feature allows incremental backups
* Add Google DNS backend (thanks @syn)
* Add Cloudscale and Exoscale as supported VPS providers
* Display backup progress and status in the web interface
* Preliminary IPv6 support
* Add IP RBL status to web interface
* Add auto-update pattern `Every wednesday night`
* Update Haraka to 2.8.15. This fixes the issue where emails were bounced with the message 'Send MAIL FROM first'
* Do not overwrite existing subdomain when app's location is changed
* Add button to send test email
* Fix crash in carbon which made graphs disappear on some Cloudrons
* Fix issue where OAuth SSO did not work when alternate domain was used
[1.7.3]
* Add rsync format for backups. This feature allows incremental backups
* Add Google DNS backend (thanks @syn)
* Add Cloudscale and Exoscale as supported VPS providers
* Display backup progress and status in the web interface
* Preliminary IPv6 support
* Add IP RBL status to web interface
* Add auto-update pattern `Every wednesday night`
* Update Haraka to 2.8.15. This fixes the issue where emails were bounced with the message 'Send MAIL FROM first'
* Do not overwrite existing subdomain when app's location is changed
* Add button to send test email
* Fix crash in carbon which made graphs disappear on some Cloudrons
* Fix issue where OAuth SSO did not work when alternate domain was used
[1.7.4]
* Add rsync format for backups. This feature allows incremental backups
* Add Google DNS backend (thanks @syn)
* Add DigitalOcean spaces backup storage backend
* Add Cloudscale and Exoscale as supported VPS providers
* Display backup progress and status in the web interface
* Preliminary IPv6 support
* Add IP RBL status to web interface
* Add auto-update pattern `Every wednesday night`
* Update Haraka to 2.8.15. This fixes the issue where emails were bounced with the message 'Send MAIL FROM first'
* Do not overwrite existing subdomain when app's location is changed
* Add button to send test email
* Fix crash in carbon which made graphs disappear on some Cloudrons
* Fix issue where OAuth SSO did not work when alternate domain was used
* Changelog is now rendered in markdown format
+4 -4
@@ -47,10 +47,10 @@ apt-get -y install \
cp /usr/share/unattended-upgrades/20auto-upgrades /etc/apt/apt.conf.d/20auto-upgrades
echo "==> Installing node.js"
mkdir -p /usr/local/node-6.11.2
curl -sL https://nodejs.org/dist/v6.11.2/node-v6.11.2-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-6.11.2
ln -sf /usr/local/node-6.11.2/bin/node /usr/bin/node
ln -sf /usr/local/node-6.11.2/bin/npm /usr/bin/npm
mkdir -p /usr/local/node-6.11.3
curl -sL https://nodejs.org/dist/v6.11.3/node-v6.11.3-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-6.11.3
ln -sf /usr/local/node-6.11.3/bin/node /usr/bin/node
ln -sf /usr/local/node-6.11.3/bin/npm /usr/bin/npm
apt-get install -y python # Install python which is required for npm rebuild
[[ "$(python --version 2>&1)" == "Python 2.7."* ]] || die "Expecting python version to be 2.7.x"
@@ -0,0 +1,15 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE settings MODIFY value TEXT', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE settings MODIFY value VARCHAR(512)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,25 @@
'use strict';
// ensure backupFolder and format are not empty
exports.up = function(db, callback) {
db.all('SELECT * FROM settings WHERE name=?', [ 'backup_config' ], function (error, result) {
if (error || result.length === 0) return callback(error);
var value = JSON.parse(result[0].value);
value.format = 'tgz'; // set the format
if (value.provider === 'filesystem' && !value.backupFolder) {
value.backupFolder = '/var/backups'; // set the backupFolder
}
db.runSql('UPDATE settings SET value = ? WHERE name = ?', [ JSON.stringify(value), 'backup_config' ], function (error) {
if (error) console.error('Error updating backup_config: ' + error);
callback();
});
});
};
exports.down = function(db, callback) {
callback();
};
@@ -0,0 +1,15 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE backups ADD COLUMN format VARCHAR(16) DEFAULT "tgz"', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE backups DROP COLUMN format', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,15 @@
'use strict';
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN newConfigJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN newConfigJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
+5 -3
@@ -60,7 +60,7 @@ CREATE TABLE IF NOT EXISTS apps(
manifestJson TEXT,
httpPort INTEGER, // this is the nginx proxy port and not manifest.httpPort
location VARCHAR(128) NOT NULL UNIQUE,
dnsRecordId VARCHAR(512), // tracks any id that we got back to track dns updates (unused)
dnsRecordId VARCHAR(512), // tracks any id that we got back to track dns updates
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
createdAt TIMESTAMP(2) NOT NULL DEFAULT CURRENT_TIMESTAMP,
memoryLimit BIGINT DEFAULT 0,
@@ -73,7 +73,8 @@ CREATE TABLE IF NOT EXISTS apps(
// the following fields do not belong here, they can be removed when we use a queue for apptask
lastBackupId VARCHAR(128), // used to pass backupId to restore from to apptask
oldConfigJson TEXT, // used to pass old config for apptask
oldConfigJson TEXT, // used to pass old config for apptask (configure, restore)
newConfigJson TEXT, // used to pass new config for apptask (update)
PRIMARY KEY(id));
@@ -93,7 +94,7 @@ CREATE TABLE IF NOT EXISTS authcodes(
CREATE TABLE IF NOT EXISTS settings(
name VARCHAR(128) NOT NULL UNIQUE,
value VARCHAR(512),
value TEXT,
PRIMARY KEY(name));
CREATE TABLE IF NOT EXISTS appAddonConfigs(
@@ -111,6 +112,7 @@ CREATE TABLE IF NOT EXISTS backups(
dependsOn TEXT, /* comma separated list of objects this backup depends on */
state VARCHAR(16) NOT NULL,
restoreConfigJson TEXT, /* JSON including the manifest of the backed up app */
format VARCHAR(16) DEFAULT "tgz",
PRIMARY KEY (id));
+2815 -6
File diff suppressed because it is too large
+5 -3
@@ -14,9 +14,10 @@
"node": ">=4.0.0 <=4.1.1"
},
"dependencies": {
"@google-cloud/dns": "^0.6.2",
"@sindresorhus/df": "^2.1.0",
"async": "^2.5.0",
"aws-sdk": "^2.97.0",
"aws-sdk": "^2.132.0",
"body-parser": "^1.17.2",
"cloudron-manifestformat": "^2.9.0",
"connect-ensure-login": "^0.1.1",
@@ -39,6 +40,7 @@
"hock": "https://registry.npmjs.org/hock/-/hock-1.3.2.tgz",
"json": "^9.0.3",
"ldapjs": "^1.0.0",
"lodash.chunk": "^4.2.0",
"mime": "^1.3.4",
"moment-timezone": "^0.5.5",
"morgan": "^1.7.0",
@@ -58,7 +60,7 @@
"progress-stream": "^2.0.0",
"proxy-middleware": "^0.13.0",
"s3-block-read-stream": "^0.2.0",
"safetydance": "^0.2.0",
"safetydance": "^0.7.1",
"semver": "^4.3.6",
"showdown": "^1.6.0",
"split": "^1.0.0",
@@ -89,7 +91,7 @@
"istanbul": "*",
"js2xmlparser": "^1.0.0",
"mocha": "*",
"mock-aws-s3": "^2.4.0",
"mock-aws-s3": "git+https://github.com/cloudron-io/mock-aws-s3.git",
"nock": "^9.0.14",
"node-sass": "^3.13.1",
"readdirp": "https://registry.npmjs.org/readdirp/-/readdirp-2.1.0.tgz",
+5 -2
@@ -105,13 +105,15 @@ done
# validate arguments in the absence of data
if [[ -z "${dataJson}" ]]; then
if [[ -z "${provider}" ]]; then
echo "--provider is required (azure, digitalocean, ec2, lightsail, linode, ovh, rosehosting, scaleway, vultr or generic)"
echo "--provider is required (azure, cloudscale, digitalocean, ec2, exoscale, lightsail, linode, ovh, rosehosting, scaleway, vultr or generic)"
exit 1
elif [[ \
"${provider}" != "ami" && \
"${provider}" != "azure" && \
"${provider}" != "cloudscale" && \
"${provider}" != "digitalocean" && \
"${provider}" != "ec2" && \
"${provider}" != "exoscale" && \
"${provider}" != "gce" && \
"${provider}" != "lightsail" && \
"${provider}" != "linode" && \
@@ -121,7 +123,7 @@ if [[ -z "${dataJson}" ]]; then
"${provider}" != "vultr" && \
"${provider}" != "generic" \
]]; then
echo "--provider must be one of: azure, digitalocean, ec2, gce, lightsail, linode, ovh, rosehosting, scaleway, vultr or generic"
echo "--provider must be one of: azure, cloudscale, digitalocean, ec2, exoscale, gce, lightsail, linode, ovh, rosehosting, scaleway, vultr or generic"
exit 1
fi
@@ -209,6 +211,7 @@ if [[ -z "${dataJson}" ]]; then
"provider": "filesystem",
"backupFolder": "/var/backups",
"key": "${encryptionKey}",
"format": "tgz",
"retentionSecs": 172800
},
"updateConfig": {
+2 -2
@@ -31,8 +31,8 @@ if ! $(cd "${SOURCE_DIR}" && git diff --exit-code >/dev/null); then
exit 1
fi
if [[ "$(node --version)" != "v6.11.2" ]]; then
echo "This script requires node 6.11.2"
if [[ "$(node --version)" != "v6.11.3" ]]; then
echo "This script requires node 6.11.3"
exit 1
fi
+6 -6
@@ -35,12 +35,12 @@ while true; do
done
echo "==> installer: updating node"
if [[ "$(node --version)" != "v6.11.2" ]]; then
mkdir -p /usr/local/node-6.11.2
$curl -sL https://nodejs.org/dist/v6.11.2/node-v6.11.2-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-6.11.2
ln -sf /usr/local/node-6.11.2/bin/node /usr/bin/node
ln -sf /usr/local/node-6.11.2/bin/npm /usr/bin/npm
rm -rf /usr/local/node-6.11.1
if [[ "$(node --version)" != "v6.11.3" ]]; then
mkdir -p /usr/local/node-6.11.3
$curl -sL https://nodejs.org/dist/v6.11.3/node-v6.11.3-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-6.11.3
ln -sf /usr/local/node-6.11.3/bin/node /usr/bin/node
ln -sf /usr/local/node-6.11.3/bin/npm /usr/bin/npm
rm -rf /usr/local/node-6.11.2
fi
for try in `seq 1 10`; do
+2 -2
@@ -34,11 +34,11 @@ if [[ "${arg_retire_reason}" != "" || "${existing_infra}" != "${current_infra}"
echo "Showing progress bar on all subdomains in retired mode or infra update. retire: ${arg_retire_reason} existing: ${existing_infra} current: ${current_infra}"
rm -f ${PLATFORM_DATA_DIR}/nginx/applications/*
${box_src_dir}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\", \"robotsTxtQuoted\": null }" > "${PLATFORM_DATA_DIR}/nginx/applications/admin.conf"
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\", \"robotsTxtQuoted\": null, \"hasIPv6\": false }" > "${PLATFORM_DATA_DIR}/nginx/applications/admin.conf"
else
echo "Show progress bar only on admin domain for normal update"
${box_src_dir}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\", \"robotsTxtQuoted\": null }" > "${PLATFORM_DATA_DIR}/nginx/applications/admin.conf"
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\", \"robotsTxtQuoted\": null, \"hasIPv6\": false }" > "${PLATFORM_DATA_DIR}/nginx/applications/admin.conf"
fi
if [[ "${arg_retire_reason}" == "migrate" ]]; then
+46 -52
@@ -7,7 +7,6 @@ echo "==> Cloudron Start"
readonly USER="yellowtent"
readonly HOME_DIR="/home/${USER}"
readonly BOX_SRC_DIR="${HOME_DIR}/box"
readonly OLD_DATA_DIR="${HOME_DIR}/data";
readonly PLATFORM_DATA_DIR="${HOME_DIR}/platformdata" # platform data
readonly APPS_DATA_DIR="${HOME_DIR}/appsdata" # app data
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata" # box data
@@ -73,54 +72,29 @@ fi
mkdir -p "${BOX_DATA_DIR}"
mkdir -p "${APPS_DATA_DIR}"
mkdir -p "${PLATFORM_DATA_DIR}"
# keep these in sync with paths.js
echo "==> Ensuring directories"
if [[ ! -d "${PLATFORM_DATA_DIR}/mail" ]]; then
if [[ -d "${OLD_DATA_DIR}/mail" ]]; then
echo "==> Migrate old mail data"
# Migrate mail data to new format
docker stop mail || true # otherwise the move below might fail if mail container writes in the middle
mkdir -p "${PLATFORM_DATA_DIR}/mail"
# we can't move the whole folder as it is a btrfs subvolume mount
mv -f "${OLD_DATA_DIR}/mail/"* "${PLATFORM_DATA_DIR}/mail/" # this used to be mail container's run directory
else
echo "==> Create new mail data dir"
mkdir -p "${PLATFORM_DATA_DIR}/mail"
fi
fi
mkdir -p "${PLATFORM_DATA_DIR}/graphite"
mkdir -p "${PLATFORM_DATA_DIR}/mail/dkim"
mkdir -p "${PLATFORM_DATA_DIR}/mysql"
mkdir -p "${PLATFORM_DATA_DIR}/postgresql"
mkdir -p "${PLATFORM_DATA_DIR}/mongodb"
mkdir -p "${PLATFORM_DATA_DIR}/snapshots"
mkdir -p "${PLATFORM_DATA_DIR}/addons/mail"
mkdir -p "${PLATFORM_DATA_DIR}/collectd/collectd.conf.d"
mkdir -p "${PLATFORM_DATA_DIR}/logrotate.d"
mkdir -p "${PLATFORM_DATA_DIR}/acme"
mkdir -p "${PLATFORM_DATA_DIR}/backup"
mkdir -p "${BOX_DATA_DIR}/appicons"
mkdir -p "${BOX_DATA_DIR}/certs"
mkdir -p "${BOX_DATA_DIR}/acme" # acme keys
mkdir -p "${BOX_DATA_DIR}/mail/dkim"
# ensure backups folder exists and is writeable
mkdir -p /var/backups
chmod 777 /var/backups
echo "==> Check for old btrfs volumes"
if mountpoint -q "${OLD_DATA_DIR}"; then
echo "==> Cleanup btrfs volumes"
# First stop all container to be able to unmount
docker ps -q | xargs docker stop
umount "${OLD_DATA_DIR}"
rm -rf "/root/user_data.img"
else
echo "==> No btrfs volumes found";
fi
echo "==> Configuring journald"
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
@@ -146,7 +120,10 @@ echo "==> Setting up unbound"
# DO uses Google nameservers by default. This causes RBL queries to fail (host 2.0.0.127.zen.spamhaus.org)
# We do not use dnsmasq because it is not a recursive resolver and defaults to the value in the interfaces file (which is Google DNS!)
# We listen on 0.0.0.0 because there is no way to control the ordering of docker (which creates the 172.18.0.0/16 network) and unbound
echo -e "server:\n\tinterface: 0.0.0.0\n\taccess-control: 127.0.0.1 allow\n\taccess-control: 172.18.0.1/16 allow\n\tcache-max-negative-ttl: 30\n\tcache-max-ttl: 300" > /etc/unbound/unbound.conf.d/cloudron-network.conf
# If IPv6 is not enabled, DNS queries seem to fail on some hosts
echo -e "server:\n\tinterface: 0.0.0.0\n\tdo-ip6: yes\n\taccess-control: 127.0.0.1 allow\n\taccess-control: 172.18.0.1/16 allow\n\tcache-max-negative-ttl: 30\n\tcache-max-ttl: 300\n\t#logfile: /var/log/unbound.log\n\t#verbosity: 10" > /etc/unbound/unbound.conf.d/cloudron-network.conf
# update the root anchor after a out-of-disk-space situation (see #269)
unbound-anchor -a /var/lib/unbound/root.key
echo "==> Adding systemd services"
cp -r "${script_dir}/start/systemd/." /etc/systemd/system/
@@ -221,20 +198,31 @@ mysql -u root -p${mysql_root_password} -e 'CREATE DATABASE IF NOT EXISTS box'
if [[ -n "${arg_restore_url}" ]]; then
set_progress "30" "Downloading restore data"
decrypt=""
if [[ "${arg_restore_url}" == *.tar.gz.enc || -n "${arg_restore_key}" ]]; then
echo "==> Downloading encrypted backup: ${arg_restore_url} and key: ${arg_restore_key}"
decrypt=(openssl aes-256-cbc -d -nosalt -pass "pass:${arg_restore_key}")
else
echo "==> Downloading backup: ${arg_restore_url}"
decrypt=(cat -)
fi
readonly restore_dir="${arg_restore_url#file://}"
while true; do
if $curl -L "${arg_restore_url}" | "${decrypt[@]}" \
| tar -zxf - --overwrite --transform="s,^box/\?,boxdata/," --transform="s,^mail/\?,platformdata/mail/," --show-transformed-names -C "${HOME_DIR}"; then break; fi
echo "Failed to download data, trying again"
done
if [[ -d "${restore_dir}" ]]; then # rsync backup
echo "==> Copying backup: ${restore_dir}"
if [[ $(stat -c "%d" "${BOX_DATA_DIR}") == $(stat -c "%d" "${restore_dir}") ]]; then
cp -rfl "${restore_dir}/." "${BOX_DATA_DIR}"
else
cp -rf "${restore_dir}/." "${BOX_DATA_DIR}"
fi
else # tgz backup
decrypt=""
if [[ "${arg_restore_url}" == *.tar.gz.enc || -n "${arg_restore_key}" ]]; then
echo "==> Downloading encrypted backup: ${arg_restore_url} and key: ${arg_restore_key}"
decrypt=(openssl aes-256-cbc -d -nosalt -pass "pass:${arg_restore_key}")
elif [[ "${arg_restore_url}" == *.tar.gz ]]; then
echo "==> Downloading backup: ${arg_restore_url}"
decrypt=(cat -)
fi
while true; do
if $curl -L "${arg_restore_url}" | "${decrypt[@]}" \
| tar -zxf - --overwrite -C "${BOX_DATA_DIR}"; then break; fi
echo "Failed to download data, trying again"
done
fi
set_progress "35" "Setting up MySQL"
if [[ -f "${BOX_DATA_DIR}/box.mysqldump" ]]; then
@@ -247,7 +235,7 @@ set_progress "40" "Migrating data"
sudo -u "${USER}" -H bash <<EOF
set -eu
cd "${BOX_SRC_DIR}"
BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@localhost/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up
BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@127.0.0.1/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up
EOF
echo "==> Creating cloudron.conf"
@@ -263,7 +251,7 @@ cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
"provider": "${arg_provider}",
"isDemo": ${arg_is_demo},
"database": {
"hostname": "localhost",
"hostname": "127.0.0.1",
"username": "root",
"password": "${mysql_root_password}",
"port": 3306,
@@ -285,14 +273,25 @@ cat > "${BOX_SRC_DIR}/webadmin/dist/config.json" <<CONF_END
}
CONF_END
if [[ ! -f "${BOX_DATA_DIR}/dhparams.pem" ]]; then
echo "==> Generating dhparams (takes forever)"
openssl dhparam -out "${BOX_DATA_DIR}/dhparams.pem" 2048
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
else
cp "${BOX_DATA_DIR}/dhparams.pem" "${PLATFORM_DATA_DIR}/addons/mail/dhparams.pem"
fi
echo "==> Changing ownership"
chown "${USER}:${USER}" -R "${CONFIG_DIR}"
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/logrotate.d" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme"
chown "${USER}:${USER}" -R "${BOX_DATA_DIR}"
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/mail/dkim" # this is owned by box currently since it generates the keys
chown "${USER}:${USER}" -R "${PLATFORM_DATA_DIR}/nginx" "${PLATFORM_DATA_DIR}/collectd" "${PLATFORM_DATA_DIR}/logrotate.d" "${PLATFORM_DATA_DIR}/addons" "${PLATFORM_DATA_DIR}/acme" "${PLATFORM_DATA_DIR}/backup"
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}/INFRA_VERSION" 2>/dev/null || true
chown "${USER}:${USER}" "${PLATFORM_DATA_DIR}"
# do not chown the boxdata/mail directory; dovecot gets upset
chown "${USER}:${USER}" "${BOX_DATA_DIR}"
find "${BOX_DATA_DIR}" -mindepth 1 -maxdepth 1 -not -path "${BOX_DATA_DIR}/mail" -exec chown -R "${USER}:${USER}" {} \;
chown "${USER}:${USER}" -R "${BOX_DATA_DIR}/mail/dkim" # this is owned by box currently since it generates the keys
echo "==> Adding automated configs"
if [[ ! -z "${arg_backup_config}" ]]; then
mysql -u root -p${mysql_root_password} \
@@ -314,11 +313,6 @@ if [[ ! -z "${arg_tls_config}" ]]; then
-e "REPLACE INTO settings (name, value) VALUES (\"tls_config\", '$arg_tls_config')" box
fi
echo "==> Generating dhparams (takes forever)"
if [[ ! -f "${BOX_DATA_DIR}/dhparams.pem" ]]; then
openssl dhparam -out "${BOX_DATA_DIR}/dhparams.pem" 2048
fi
set_progress "60" "Starting Cloudron"
systemctl start cloudron.target
+6
@@ -7,3 +7,9 @@ printf "Cloudron relies on and may break your installation. Ubuntu security upda
printf "are automatically installed on this server every night.\n"
printf "\n"
printf "Read more at https://cloudron.io/documentation/security/#os-updates\n"
if grep -q "^PasswordAuthentication yes" /etc/ssh/sshd_config; then
printf "\nPlease disable password based SSH access to secure your server. Read more at\n"
printf "https://cloudron.io/documentation/security/#securing-ssh-access\n"
fi
+13 -4
@@ -5,12 +5,18 @@ map $http_upgrade $connection_upgrade {
}
server {
<% if (vhost) { %>
listen 443 http2;
<% if (vhost) { -%>
server_name <%= vhost %>;
<% } else { %>
listen 443 http2;
<% if (hasIPv6) { -%>
listen [::]:443 http2;
<% } -%>
<% } else { -%>
listen 443 http2 default_server;
<% } %>
<% if (hasIPv6) { -%>
listen [::]:443 http2 default_server;
<% } -%>
<% } -%>
ssl on;
# paths are relative to prefix and not to this file
@@ -80,6 +86,9 @@ server {
# No buffering to temp files, it fails for large downloads
proxy_max_temp_file_size 0;
# Disable check to allow unlimited body sizes. this allows apps to accept whatever size they want
client_max_body_size 0;
<% if (robotsTxtQuoted) { %>
location = /robots.txt {
return 200 <%- robotsTxtQuoted %>;
+1
@@ -39,6 +39,7 @@ http {
# HTTP server
server {
listen 80;
listen [::]:80;
# collectd
location /nginx_status {
+6 -8
@@ -13,8 +13,8 @@ yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reloadnginx.sh
Defaults!/home/yellowtent/box/src/scripts/reboot.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reboot.sh
Defaults!/home/yellowtent/box/src/scripts/reloadcollectd.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reloadcollectd.sh
Defaults!/home/yellowtent/box/src/scripts/configurecollectd.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/configurecollectd.sh
Defaults!/home/yellowtent/box/src/scripts/collectlogs.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/collectlogs.sh
@@ -28,11 +28,9 @@ yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/update.sh
Defaults!/home/yellowtent/box/src/scripts/authorized_keys.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/authorized_keys.sh
Defaults!/home/yellowtent/box/src/scripts/node.sh env_keep="HOME BOX_ENV NODE_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/node.sh
Defaults!/home/yellowtent/box/src/scripts/configurelogrotate.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/configurelogrotate.sh
Defaults!/home/yellowtent/box/src/scripts/mvlogrotateconfig.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/mvlogrotateconfig.sh
Defaults!/home/yellowtent/box/src/backuptask.js env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD:SETENV: /home/yellowtent/box/src/backuptask.js
Defaults!/home/yellowtent/box/src/scripts/rmlogrotateconfig.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/rmlogrotateconfig.sh
+4 -3
@@ -20,7 +20,6 @@ var appdb = require('./appdb.js'),
async = require('async'),
clients = require('./clients.js'),
config = require('./config.js'),
constants = require('./constants.js'),
ClientsError = clients.ClientsError,
debug = require('debug')('box:addons'),
docker = require('./docker.js'),
@@ -249,7 +248,7 @@ function setupOauth(app, options, callback) {
if (!app.sso) return callback(null);
var appId = app.id;
var redirectURI = 'https://' + config.appFqdn(app.location);
var redirectURI = 'https://' + (app.altDomain || config.appFqdn(app.location));
var scope = 'profile';
clients.delByAppIdAndType(appId, clients.TYPE_OAUTH, function (error) { // remove existing creds
@@ -645,7 +644,9 @@ function setupRedis(app, options, callback) {
}
const tag = infra.images.redis.tag, redisName = 'redis-' + app.id;
// note that we do not add appId label because this interferes with the stop/start app logic
const cmd = `docker run --restart=always -d --name=${redisName} \
--label=location=${app.location} \
--net cloudron \
--net-alias ${redisName} \
-m ${memoryLimit/2} \
@@ -692,7 +693,7 @@ function teardownRedis(app, options, callback) {
safe.fs.unlinkSync(paths.ADDON_CONFIG_DIR, 'redis-' + app.id + '_vars.sh');
shell.sudo('teardownRedis', [ RMAPPDIR_CMD, app.id + '/redis' ], function (error, stdout, stderr) {
shell.sudo('teardownRedis', [ RMAPPDIR_CMD, app.id + '/redis', true /* delete directory */ ], function (error, stdout, stderr) {
if (error) return callback(new Error('Error removing redis data:' + error));
appdb.unsetAddonConfig(app.id, 'redis', callback);
+7 -12
@@ -59,7 +59,7 @@ var assert = require('assert'),
var APPS_FIELDS_PREFIXED = [ 'apps.id', 'apps.appStoreId', 'apps.installationState', 'apps.installationProgress', 'apps.runState',
'apps.health', 'apps.containerId', 'apps.manifestJson', 'apps.httpPort', 'apps.location', 'apps.dnsRecordId',
'apps.accessRestrictionJson', 'apps.lastBackupId', 'apps.oldConfigJson', 'apps.memoryLimit', 'apps.altDomain',
'apps.accessRestrictionJson', 'apps.lastBackupId', 'apps.oldConfigJson', 'apps.newConfigJson', 'apps.memoryLimit', 'apps.altDomain',
'apps.xFrameOptions', 'apps.sso', 'apps.debugModeJson', 'apps.robotsTxt', 'apps.enableBackup' ].join(',');
var PORT_BINDINGS_FIELDS = [ 'hostPort', 'environmentVariable', 'appId' ].join(',');
@@ -75,6 +75,10 @@ function postProcess(result) {
result.oldConfig = safe.JSON.parse(result.oldConfigJson);
delete result.oldConfigJson;
assert(result.newConfigJson === null || typeof result.newConfigJson === 'string');
result.newConfig = safe.JSON.parse(result.newConfigJson);
delete result.newConfigJson;
assert(result.hostPorts === null || typeof result.hostPorts === 'string');
assert(result.environmentVariables === null || typeof result.environmentVariables === 'string');
@@ -305,17 +309,8 @@ function updateWithConstraints(id, app, constraints, callback) {
var fields = [ ], values = [ ];
for (var p in app) {
if (p === 'manifest') {
fields.push('manifestJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'oldConfig') {
fields.push('oldConfigJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'accessRestriction') {
fields.push('accessRestrictionJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'debugMode') {
fields.push('debugModeJson = ?');
if (p === 'manifest' || p === 'oldConfig' || p === 'newConfig' || p === 'accessRestriction' || p === 'debugMode') {
fields.push(`${p}Json = ?`);
values.push(JSON.stringify(app[p]));
} else if (p !== 'portBindings') {
fields.push(p + ' = ?');
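The consolidated branch above relies on a naming convention: JSON-valued app properties are stored in a column named `<prop>Json`, while every other property maps to a column of the same name. A minimal sketch of that mapping (hypothetical helper mirroring the diff):

```javascript
'use strict';

// Properties serialized to JSON columns, per the convention in appdb.js.
var JSON_FIELDS = [ 'manifest', 'oldConfig', 'newConfig', 'accessRestriction', 'debugMode' ];

// Map a property name to its database column name.
function toColumn(p) {
    return JSON_FIELDS.indexOf(p) !== -1 ? p + 'Json' : p;
}
```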
+11 -11
@@ -94,19 +94,20 @@ function checkAppHealth(app, callback) {
superagent
.get(healthCheckUrl)
.set('Host', app.fqdn) // required for some apache configs with rewrite rules
.set('User-Agent', 'Mozilla') // required for some apps (e.g. minio)
.redirects(0)
.timeout(HEALTHCHECK_INTERVAL)
.end(function (error, res) {
if (error && !error.response) {
debugApp(app, 'not alive (network error): %s', error.message);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else if (res.statusCode >= 400) { // 2xx and 3xx are ok
debugApp(app, 'not alive : %s', error || res.status);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else {
setHealth(app, appdb.HEALTH_HEALTHY, callback);
}
});
if (error && !error.response) {
debugApp(app, 'not alive (network error): %s', error.message);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else if (res.statusCode >= 400) { // 2xx and 3xx are ok
debugApp(app, 'not alive : %s', error || res.status);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else {
setHealth(app, appdb.HEALTH_HEALTHY, callback);
}
});
});
}
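The callback above classifies health as follows: a network error (no response) is unhealthy, a status of 400 and up is unhealthy, and 2xx and 3xx (redirects are not followed) count as healthy. Condensed into a sketch (hypothetical helper, not the actual healthmonitor code):

```javascript
'use strict';

// Sketch of the health classification above: network errors and 4xx/5xx
// responses are unhealthy; 2xx and 3xx responses are healthy.
function isHealthy(error, statusCode) {
    if (error) return false; // no response at all
    return statusCode < 400; // 2xx and 3xx are ok
}
```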
@@ -156,7 +157,6 @@ function processDockerEvents() {
stream.setEncoding('utf8');
stream.on('data', function (data) {
var ev = JSON.parse(data);
debug('Container ' + ev.id + ' went OOM');
appdb.getByContainerId(ev.id, function (error, app) { // this can error for addons
var program = error || !app.appStoreId ? ev.id : app.appStoreId;
var context = JSON.stringify(ev);
+14 -14
@@ -490,7 +490,8 @@ function install(data, auditSource, callback) {
debugMode: debugMode,
mailboxName: (location ? location : manifest.title.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app',
lastBackupId: backupId,
enableBackup: enableBackup
enableBackup: enableBackup,
robotsTxt: robotsTxt
};
appdb.add(appId, appStoreId, manifest, location, portBindings, data, function (error) {
@@ -628,7 +629,7 @@ function update(appId, data, auditSource, callback) {
downloadManifest(data.appStoreId, data.manifest, function (error, appStoreId, manifest) {
if (error) return callback(error);
var values = { };
var newConfig = { };
error = manifestFormat.parse(manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest error:' + error.message));
@@ -636,11 +637,13 @@ function update(appId, data, auditSource, callback) {
error = checkManifestConstraints(manifest);
if (error) return callback(error);
values.manifest = manifest;
newConfig.manifest = manifest;
// TODO: disallow portBindings when an app updates and let ports simply be disabled. the new ports
// might conflict when the update is actually carried out as we do not 'reserve' them in the db
if ('portBindings' in data) {
values.portBindings = data.portBindings;
error = validatePortBindings(data.portBindings, values.manifest.tcpPorts);
newConfig.portBindings = data.portBindings;
error = validatePortBindings(data.portBindings, newConfig.manifest.tcpPorts);
if (error) return callback(error);
}
@@ -662,26 +665,23 @@ function update(appId, data, auditSource, callback) {
// prevent user from installing an app with a different manifest id over an existing app
// this allows cloudron install -f --app <appid> for an app installed from the appStore
if (app.manifest.id !== values.manifest.id) {
if (app.manifest.id !== newConfig.manifest.id) {
if (!data.force) return callback(new AppsError(AppsError.BAD_FIELD, 'manifest id does not match. force to override'));
// clear appStoreId so that this app does not get updates anymore
values.appStoreId = '';
newConfig.appStoreId = '';
}
// do not update apps in debug mode
if (app.debugMode && !data.force) return callback(new AppsError(AppsError.BAD_STATE, 'debug mode enabled. force to override'));
// Ensure we update the memory limit in case the new app requires more memory as a minimum
// 0 and -1 are special values for memory limit indicating unset and unlimited
if (app.memoryLimit > 0 && values.manifest.memoryLimit && app.memoryLimit < values.manifest.memoryLimit) {
values.memoryLimit = values.manifest.memoryLimit;
// 0 and -1 are special values for memory limit indicating unset and unlimited
if (app.memoryLimit > 0 && newConfig.manifest.memoryLimit && app.memoryLimit < newConfig.manifest.memoryLimit) {
newConfig.memoryLimit = newConfig.manifest.memoryLimit;
}
values.oldConfig = getAppConfig(app);
appdb.setInstallationCommand(appId, data.force ? appdb.ISTATE_PENDING_FORCE_UPDATE : appdb.ISTATE_PENDING_UPDATE, values, function (error) {
appdb.setInstallationCommand(appId, data.force ? appdb.ISTATE_PENDING_FORCE_UPDATE : appdb.ISTATE_PENDING_UPDATE, { newConfig: newConfig }, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails('' /* location cannot conflict */, values.portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
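The memory-limit handling above follows a small rule: 0 (unset) and -1 (unlimited) are left untouched, and a positive limit is raised to the manifest's minimum when the new app requires more. A hedged sketch (hypothetical helper, not the actual update() code):

```javascript
'use strict';

// 0 means unset and -1 means unlimited; both are passed through as-is.
// A positive limit is bumped up to the manifest's minimum if needed.
function bumpMemoryLimit(appLimit, manifestLimit) {
    if (appLimit > 0 && manifestLimit && appLimit < manifestLimit) return manifestLimit;
    return appLimit;
}
```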
+22 -1
@@ -11,6 +11,8 @@ exports = module.exports = {
getAppUpdate: getAppUpdate,
getBoxUpdate: getBoxUpdate,
getAccount: getAccount,
AppstoreError: AppstoreError
};
@@ -162,7 +164,8 @@ function sendAliveStatus(data, callback) {
provider: result[settings.TLS_CONFIG_KEY].provider
},
backupConfig: {
provider: result[settings.BACKUP_CONFIG_KEY].provider
provider: result[settings.BACKUP_CONFIG_KEY].provider,
hardlinks: !result[settings.BACKUP_CONFIG_KEY].noHardlinks
},
mailConfig: {
enabled: result[settings.MAIL_CONFIG_KEY].enabled
@@ -245,3 +248,21 @@ function getAppUpdate(app, callback) {
});
});
}
function getAccount(callback) {
assert.strictEqual(typeof callback, 'function');
getAppstoreConfig(function (error, appstoreConfig) {
if (error) return callback(error);
var url = config.apiServerOrigin() + '/api/v1/users/' + appstoreConfig.userId;
superagent.get(url).query({ accessToken: appstoreConfig.token }).timeout(10 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new AppstoreError(AppstoreError.EXTERNAL_ERROR, error));
if (result.statusCode !== 200) return callback(new AppstoreError(AppstoreError.EXTERNAL_ERROR, util.format('Bad response: %s %s', result.statusCode, result.text)));
// { profile: { id, email, groupId, billing, firstName, lastName, company, street, city, zip, state, country } }
callback(null, result.body.profile);
});
});
}
+57 -47
@@ -56,10 +56,9 @@ var addons = require('./addons.js'),
_ = require('underscore');
var COLLECTD_CONFIG_EJS = fs.readFileSync(__dirname + '/collectd.config.ejs', { encoding: 'utf8' }),
RELOAD_COLLECTD_CMD = path.join(__dirname, 'scripts/reloadcollectd.sh'),
CONFIGURE_COLLECTD_CMD = path.join(__dirname, 'scripts/configurecollectd.sh'),
LOGROTATE_CONFIG_EJS = fs.readFileSync(__dirname + '/logrotate.ejs', { encoding: 'utf8' }),
MV_LOGROTATE_CONFIG_CMD = path.join(__dirname, 'scripts/mvlogrotateconfig.sh'),
RM_LOGROTATE_CONFIG_CMD = path.join(__dirname, 'scripts/rmlogrotateconfig.sh'),
CONFIGURE_LOGROTATE_CMD = path.join(__dirname, 'scripts/configurelogrotate.sh'),
RMAPPDIR_CMD = path.join(__dirname, 'scripts/rmappdir.sh'),
CREATEAPPDIR_CMD = path.join(__dirname, 'scripts/createappdir.sh');
@@ -131,7 +130,7 @@ function deleteContainers(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'deleting containers');
debugApp(app, 'deleting app containers (app, scheduler)');
docker.deleteContainers(app.id, function (error) {
if (error) return callback(new Error('Error deleting container: ' + error));
@@ -147,11 +146,12 @@ function createVolume(app, callback) {
shell.sudo('createVolume', [ CREATEAPPDIR_CMD, app.id ], callback);
}
function deleteVolume(app, callback) {
function deleteVolume(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
shell.sudo('deleteVolume', [ RMAPPDIR_CMD, app.id ], callback);
shell.sudo('deleteVolume', [ RMAPPDIR_CMD, app.id, !!options.removeDirectory ], callback);
}
function addCollectdProfile(app, callback) {
@@ -161,7 +161,7 @@ function addCollectdProfile(app, callback) {
var collectdConf = ejs.render(COLLECTD_CONFIG_EJS, { appId: app.id, containerId: app.containerId });
fs.writeFile(path.join(paths.COLLECTD_APPCONFIG_DIR, app.id + '.conf'), collectdConf, function (error) {
if (error) return callback(error);
shell.sudo('addCollectdProfile', [ RELOAD_COLLECTD_CMD ], callback);
shell.sudo('addCollectdProfile', [ CONFIGURE_COLLECTD_CMD, 'add', app.id ], callback);
});
}
@@ -171,7 +171,7 @@ function removeCollectdProfile(app, callback) {
fs.unlink(path.join(paths.COLLECTD_APPCONFIG_DIR, app.id + '.conf'), function (error) {
if (error && error.code !== 'ENOENT') debugApp(app, 'Error removing collectd profile', error);
shell.sudo('removeCollectdProfile', [ RELOAD_COLLECTD_CMD ], callback);
shell.sudo('removeCollectdProfile', [ CONFIGURE_COLLECTD_CMD, 'remove', app.id ], callback);
});
}
@@ -185,11 +185,12 @@ function addLogrotateConfig(app, callback) {
var runVolume = result.Mounts.find(function (mount) { return mount.Destination === '/run'; });
if (!runVolume) return callback(new Error('App does not have /run mounted'));
// logrotate configs can have arbitrary commands, so the config files must be owned by root
var logrotateConf = ejs.render(LOGROTATE_CONFIG_EJS, { volumePath: runVolume.Source });
var tmpFilePath = path.join(os.tmpdir(), app.id + '.logrotate');
fs.writeFile(tmpFilePath, logrotateConf, function (error) {
if (error) return callback(error);
shell.sudo('addLogrotateConfig', [ MV_LOGROTATE_CONFIG_CMD, tmpFilePath, app.id ], callback);
shell.sudo('addLogrotateConfig', [ CONFIGURE_LOGROTATE_CMD, 'add', app.id, tmpFilePath ], callback);
});
});
}
@@ -198,16 +199,13 @@ function removeLogrotateConfig(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
shell.sudo('removeLogrotateConfig', [ RM_LOGROTATE_CONFIG_CMD, app.id ], callback);
shell.sudo('removeLogrotateConfig', [ CONFIGURE_LOGROTATE_CMD, 'remove', app.id ], callback);
}
function verifyManifest(app, callback) {
assert.strictEqual(typeof app, 'object');
function verifyManifest(manifest, callback) {
assert.strictEqual(typeof manifest, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Verifying manifest');
var manifest = app.manifest;
var error = manifestFormat.parse(manifest);
if (error) return callback(new Error(util.format('Manifest error: %s', error.message)));
@@ -240,7 +238,7 @@ function downloadIcon(app, callback) {
if (!safe.fs.writeFileSync(path.join(paths.APP_ICONS_DIR, app.id + '.png'), res.body)) return retryCallback(new Error('Error saving icon:' + safe.error.message));
retryCallback(null);
});
});
}, callback);
}
@@ -272,6 +270,7 @@ function registerSubdomain(app, overwrite, callback) {
}, function (error, result) {
if (error || result instanceof Error) return callback(error || result);
// dnsRecordId tracks whether we created this DNS record so that we can unregister later
updateApp(app, { dnsRecordId: result }, callback);
});
});
@@ -288,6 +287,11 @@ function unregisterSubdomain(app, location, callback) {
return callback(null);
}
if (!app.dnsRecordId) {
debugApp(app, 'Skip unregister of record not created by cloudron');
return callback(null);
}
sysinfo.getPublicIp(function (error, ip) {
if (error) return callback(error);
@@ -379,6 +383,7 @@ function updateApp(app, values, callback) {
// - setup addons (requires the above volume)
// - setup the container (requires image, volumes, addons)
// - setup collectd (requires container id)
// restore is also handled here since restore is just an install with some oldConfig to clean up
function install(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
@@ -386,7 +391,9 @@ function install(app, callback) {
const backupId = app.lastBackupId, isRestoring = app.installationState === appdb.ISTATE_PENDING_RESTORE;
async.series([
verifyManifest.bind(null, app),
// this protects against the theoretical possibility of an app being marked for install/restore from
// a previous version of box code
verifyManifest.bind(null, app.manifest),
// teardown for re-installs
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
@@ -397,13 +404,13 @@ function install(app, callback) {
deleteContainers.bind(null, app),
// oldConfig can be null during upgrades
addons.teardownAddons.bind(null, app, app.oldConfig ? app.oldConfig.manifest.addons : app.manifest.addons),
deleteVolume.bind(null, app),
deleteVolume.bind(null, app, { removeDirectory: false }), // do not remove any symlinked volume
// for restore case
function deleteImageIfChanged(done) {
if (!app.oldConfig || (app.oldConfig.manifest.dockerImage === app.manifest.dockerImage)) return done();
docker.deleteImage(app.oldConfig.manifest, done);
},
reserveHttpPort.bind(null, app),
@@ -472,11 +479,9 @@ function backup(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
var prefix = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
async.series([
updateApp.bind(null, app, { installationProgress: '10, Backing up' }),
backups.backupApp.bind(null, app, app.manifest, prefix),
backups.backupApp.bind(null, app, app.manifest),
// done!
function (callback) {
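For reference, the `prefix` expression removed above turns an ISO timestamp into a filename-safe string; a sketch of what it produces:

```javascript
// "2017-10-16T23:18:37.123Z" -> "2017-10-16-231837-123":
// 'T' and '.' become '-', while ':' and the trailing 'Z' are dropped.
function backupPrefix(date) {
    return date.toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g, '');
}
```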
@@ -497,6 +502,9 @@ function configure(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
// oldConfig can be null during an infra update
var locationChanged = app.oldConfig && app.oldConfig.location !== app.location;
async.series([
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
unconfigureNginx.bind(null, app),
@@ -505,8 +513,7 @@ function configure(app, callback) {
stopApp.bind(null, app),
deleteContainers.bind(null, app),
function (next) {
// oldConfig can be null during an infra update
if (!app.oldConfig || app.oldConfig.location === app.location) return next();
if (!locationChanged) return next();
unregisterSubdomain(app, app.oldConfig.location, next);
},
@@ -516,7 +523,7 @@ function configure(app, callback) {
downloadIcon.bind(null, app),
updateApp.bind(null, app, { installationProgress: '35, Registering subdomain' }),
registerSubdomain.bind(null, app, true /* overwrite */),
registerSubdomain.bind(null, app, !locationChanged /* overwrite */), // if location changed, do not overwrite to detect conflicts
updateApp.bind(null, app, { installationProgress: '40, Downloading image' }),
docker.downloadImage.bind(null, app.manifest),
@@ -567,48 +574,51 @@ function update(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Updating to %s', safe.query(app, 'manifest.version'));
debugApp(app, `Updating to ${app.newConfig.manifest.version}`);
// app does not want these addons anymore
// FIXME: this does not handle option changes (like multipleDatabases)
var unusedAddons = _.omit(app.oldConfig.manifest.addons, Object.keys(app.manifest.addons));
var unusedAddons = _.omit(app.manifest.addons, Object.keys(app.newConfig.manifest.addons));
async.series([
// this protects against the theoretical possibility of an app being marked for update from
// a previous version of box code
updateApp.bind(null, app, { installationProgress: '0, Verify manifest' }),
verifyManifest.bind(null, app),
verifyManifest.bind(null, app.newConfig.manifest),
function (next) {
if (app.installationState === appdb.ISTATE_PENDING_FORCE_UPDATE) return next(null);
async.series([
updateApp.bind(null, app, { installationProgress: '15, Backing up app' }),
backups.backupApp.bind(null, app, app.manifest)
], next);
},
// download new image before app is stopped. this is so we can reduce downtime
// and also not remove the 'common' layers when the old image is deleted
updateApp.bind(null, app, { installationProgress: '15, Downloading image' }),
docker.downloadImage.bind(null, app.manifest),
updateApp.bind(null, app, { installationProgress: '25, Downloading image' }),
docker.downloadImage.bind(null, app.newConfig.manifest),
// note: we cleanup first and then backup. this is done so that the app is not running should backup fail
// we cannot easily 'recover' from backup failures because we have to revert manifest and portBindings
updateApp.bind(null, app, { installationProgress: '25, Cleaning up old install' }),
updateApp.bind(null, app, { installationProgress: '35, Cleaning up old install' }),
removeCollectdProfile.bind(null, app),
removeLogrotateConfig.bind(null, app),
stopApp.bind(null, app),
deleteContainers.bind(null, app),
function deleteImageIfChanged(done) {
if (app.oldConfig.manifest.dockerImage === app.manifest.dockerImage) return done();
if (app.manifest.dockerImage === app.newConfig.manifest.dockerImage) return done();
docker.deleteImage(app.oldConfig.manifest, done);
},
function (next) {
if (app.installationState === appdb.ISTATE_PENDING_FORCE_UPDATE) return next(null);
var prefix = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
async.series([
updateApp.bind(null, app, { installationProgress: '30, Backing up app' }),
backups.backupApp.bind(null, app, app.oldConfig.manifest, prefix)
], next);
docker.deleteImage(app.manifest, done);
},
// only delete unused addons after backup
addons.teardownAddons.bind(null, app, unusedAddons),
// switch over to the new config. manifest, memoryLimit, portBindings, appstoreId are updated here
updateApp.bind(null, app, app.newConfig),
updateApp.bind(null, app, { installationProgress: '45, Downloading icon' }),
downloadIcon.bind(null, app),
@@ -629,7 +639,7 @@ function update(app, callback) {
// done!
function (callback) {
debugApp(app, 'updated');
updateApp(app, { installationState: appdb.ISTATE_INSTALLED, installationProgress: '', health: null }, callback);
updateApp(app, { installationState: appdb.ISTATE_INSTALLED, installationProgress: '', health: null, newConfig: null }, callback);
}
], function seriesDone(error) {
if (error) {
@@ -663,7 +673,7 @@ function uninstall(app, callback) {
addons.teardownAddons.bind(null, app, app.manifest.addons),
updateApp.bind(null, app, { installationProgress: '40, Deleting volume' }),
deleteVolume.bind(null, app),
deleteVolume.bind(null, app, { removeDirectory: true }),
updateApp.bind(null, app, { installationProgress: '50, Deleting image' }),
docker.deleteImage.bind(null, app.manifest),
+28 -38
@@ -6,7 +6,7 @@ var assert = require('assert'),
safe = require('safetydance'),
util = require('util');
var BACKUPS_FIELDS = [ 'id', 'creationTime', 'version', 'type', 'dependsOn', 'state', 'restoreConfigJson' ];
var BACKUPS_FIELDS = [ 'id', 'creationTime', 'version', 'type', 'dependsOn', 'state', 'restoreConfigJson', 'format' ];
exports = module.exports = {
add: add,
@@ -47,12 +47,12 @@ function getByTypeAndStatePaged(type, state, page, perPage, callback) {
database.query('SELECT ' + BACKUPS_FIELDS + ' FROM backups WHERE type = ? AND state = ? ORDER BY creationTime DESC LIMIT ?,?',
[ type, state, (page-1)*perPage, perPage ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results.forEach(function (result) { postProcess(result); });
callback(null, results);
});
}
function getByTypePaged(type, page, perPage, callback) {
@@ -63,12 +63,12 @@ function getByTypePaged(type, page, perPage, callback) {
database.query('SELECT ' + BACKUPS_FIELDS + ' FROM backups WHERE type = ? ORDER BY creationTime DESC LIMIT ?,?',
[ type, (page-1)*perPage, perPage ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results.forEach(function (result) { postProcess(result); });
callback(null, results);
});
}
function getByAppIdPaged(page, perPage, appId, callback) {
@@ -80,12 +80,12 @@ function getByAppIdPaged(page, perPage, appId, callback) {
// box versions (0.93.x and below) used to use appbackup_ prefix
database.query('SELECT ' + BACKUPS_FIELDS + ' FROM backups WHERE type = ? AND state = ? AND id LIKE ? ORDER BY creationTime DESC LIMIT ?,?',
[ exports.BACKUP_TYPE_APP, exports.BACKUP_STATE_NORMAL, '%app%\\_' + appId + '\\_%', (page-1)*perPage, perPage ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results.forEach(function (result) { postProcess(result); });
callback(null, results);
});
}
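The `LIKE ?` pattern above escapes underscores because `_` is a single-character wildcard in MySQL; a sketch of the pattern builder:

```javascript
// Match ids like "appbackup_<appId>_..." or "app_<appId>_..." while keeping
// literal underscores: '\\_' in JS source is a backslash-escaped '_' in SQL LIKE.
function appBackupLikePattern(appId) {
    return '%app%\\_' + appId + '\\_%';
}
```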
function get(id, callback) {
@@ -94,13 +94,13 @@ function get(id, callback) {
database.query('SELECT ' + BACKUPS_FIELDS + ' FROM backups WHERE id = ? ORDER BY creationTime DESC',
[ id ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
postProcess(result[0]);
callback(null, result[0]);
});
}
function add(backup, callback) {
@@ -110,19 +110,20 @@ function add(backup, callback) {
assert(backup.type === exports.BACKUP_TYPE_APP || backup.type === exports.BACKUP_TYPE_BOX);
assert(util.isArray(backup.dependsOn));
assert.strictEqual(typeof backup.restoreConfig, 'object');
assert.strictEqual(typeof backup.format, 'string');
assert.strictEqual(typeof callback, 'function');
var creationTime = backup.creationTime || new Date(); // allow tests to set the time
var restoreConfig = backup.restoreConfig ? JSON.stringify(backup.restoreConfig) : '';
database.query('INSERT INTO backups (id, version, type, creationTime, state, dependsOn, restoreConfigJson) VALUES (?, ?, ?, ?, ?, ?, ?)',
[ backup.id, backup.version, backup.type, creationTime, exports.BACKUP_STATE_NORMAL, backup.dependsOn.join(','), restoreConfig ],
database.query('INSERT INTO backups (id, version, type, creationTime, state, dependsOn, restoreConfigJson, format) VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
[ backup.id, backup.version, backup.type, creationTime, exports.BACKUP_STATE_NORMAL, backup.dependsOn.join(','), restoreConfig, backup.format ],
function (error) {
if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
}
function update(id, backup, callback) {
@@ -158,19 +159,8 @@ function del(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
get(id, function (error, result) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback();
if (error) return callback(error);
var whereClause = [ 'id=?' ], whereArgs = [ result.id ];
result.dependsOn.forEach(function (id) {
whereClause.push('id=?');
whereArgs.push(id);
});
database.query('DELETE FROM backups WHERE ' + whereClause.join(' OR '), whereArgs, function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
database.query('DELETE FROM backups WHERE id=?', [ id ], function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
}
+627 -179
File diff suppressed because it is too large
+15 -69
@@ -1,7 +1,12 @@
#!/usr/bin/env node
#!/bin/bash
':' //# comment; exec /usr/bin/env node --max_old_space_size=300 "$0" "$@"
// to understand the above hack read http://sambal.org/2014/02/passing-options-node-shebang-line/
'use strict';
if (process.argv[2] === '--check') return console.log('OK');
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
@@ -10,29 +15,11 @@ require('debug').formatArgs = function formatArgs(args) {
};
var assert = require('assert'),
BackupsError = require('./backups.js').BackupsError,
caas = require('./storage/caas.js'),
backups = require('./backups.js'),
database = require('./database.js'),
debug = require('debug')('box:backuptask'),
filesystem = require('./storage/filesystem.js'),
noop = require('./storage/noop.js'),
path = require('path'),
paths = require('./paths.js'),
s3 = require('./storage/s3.js'),
safe = require('safetydance'),
settings = require('./settings.js');
function api(provider) {
switch (provider) {
case 'caas': return caas;
case 's3': return s3;
case 'filesystem': return filesystem;
case 'minio': return s3;
case 'exoscale-sos': return s3;
case 'noop': return noop;
default: return null;
}
}
safe = require('safetydance');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
@@ -40,52 +27,12 @@ function initialize(callback) {
database.initialize(callback);
}
function backupApp(backupId, appId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Start app backup with id %s for %s', backupId, appId);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
var backupMapping = [{
source: path.join(paths.APPS_DATA_DIR, appId),
destination: '.'
}];
api(backupConfig.provider).backup(backupConfig, backupId, backupMapping, callback);
});
}
function backupBox(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Start box backup with id %s', backupId);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
var backupMapping = [{
source: paths.BOX_DATA_DIR,
destination: 'box'
}, {
source: path.join(paths.PLATFORM_DATA_DIR, 'mail'),
destination: 'mail'
}];
api(backupConfig.provider).backup(backupConfig, backupId, backupMapping, callback);
});
}
// Main process starts here
var backupId = process.argv[2];
var appId = process.argv[3];
var format = process.argv[3];
var dataDir = process.argv[4];
if (appId) debug('Backuptask for the app %s with id %s', appId, backupId);
else debug('Backuptask for the whole Cloudron with id %s', backupId);
debug(`Backing up ${dataDir} to ${backupId}`);
process.on('SIGTERM', function () {
process.exit(0);
@@ -94,7 +41,9 @@ process.on('SIGTERM', function () {
initialize(function (error) {
if (error) throw error;
function resultHandler(error) {
safe.fs.writeFileSync(paths.BACKUP_RESULT_FILE, '');
backups.upload(backupId, format, dataDir, function resultHandler(error) {
if (error) debug('completed with error', error);
debug('completed');
@@ -104,8 +53,5 @@ initialize(function (error) {
// https://nodejs.org/api/process.html are exit codes used by node. apps.js uses the value below
// to check apptask crashes
process.exit(error ? 50 : 0);
}
if (appId) backupApp(backupId, appId, resultHandler);
else backupBox(backupId, resultHandler);
});
});
+22 -14
@@ -63,7 +63,6 @@ var appdb = require('./appdb.js'),
updateChecker = require('./updatechecker.js'),
user = require('./user.js'),
UserError = user.UserError,
user = require('./user.js'),
util = require('util'),
_ = require('underscore');
@@ -236,24 +235,32 @@ function configureWebadmin(callback) {
callback(error);
}
sysinfo.getPublicIp(function (error, ip) {
if (error) return done(error);
function configureNginx(error) {
debug('configureNginx: dns update:%j', error);
addDnsRecords(ip, function (error) {
certificates.ensureCertificate({ location: constants.ADMIN_LOCATION }, function (error, certFilePath, keyFilePath) {
if (error) return done(error);
gWebadminStatus.tls = true;
nginx.configureAdmin(certFilePath, keyFilePath, constants.NGINX_ADMIN_CONFIG_FILE_NAME, config.adminFqdn(), done);
});
}
// update the DNS. configure nginx regardless of whether it succeeded so that
// box is accessible even if dns creds are invalid
sysinfo.getPublicIp(function (error, ip) {
if (error) return configureNginx(error);
addDnsRecords(ip, function (error) {
if (error) return configureNginx(error);
subdomains.waitForDns(config.adminFqdn(), ip, 'A', { interval: 30000, times: 50000 }, function (error) {
if (error) return done(error);
if (error) return configureNginx(error);
gWebadminStatus.dns = true;
certificates.ensureCertificate({ location: constants.ADMIN_LOCATION }, function (error, certFilePath, keyFilePath) {
if (error) return done(error);
gWebadminStatus.tls = true;
nginx.configureAdmin(certFilePath, keyFilePath, constants.NGINX_ADMIN_CONFIG_FILE_NAME, config.adminFqdn(), done);
});
configureNginx();
});
});
});
@@ -412,7 +419,7 @@ function getConfig(callback) {
fqdn: config.fqdn(),
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
progress: progress.getAll(),
isCustomDomain: config.isCustomDomain(),
isDemo: config.isDemo(),
developerMode: developerMode,
@@ -568,7 +575,8 @@ function addDnsRecords(ip, callback) {
});
});
}, function (error) {
debug('addDnsRecords: done updating records with error:', error);
if (error) debug('addDnsRecords: done updating records with error:', error);
else debug('addDnsRecords: done');
callback(error);
});
+7 -1
@@ -33,6 +33,7 @@ exports = module.exports = {
appFqdn: appFqdn,
zoneName: zoneName,
setZoneName: setZoneName,
hasIPv6: hasIPv6,
isDemo: isDemo,
@@ -97,7 +98,7 @@ function initConfig() {
data.port = 5454;
data.apiServerOrigin = 'http://localhost:6060'; // hock doesn't support https
data.database = {
hostname: 'localhost',
hostname: '127.0.0.1',
username: 'root',
password: '',
port: 3306,
@@ -232,3 +233,8 @@ function tlsKey() {
var keyFile = path.join(baseDir(), 'configs/host.key');
return safe.fs.readFileSync(keyFile, 'utf8');
}
function hasIPv6() {
const IPV6_PROC_FILE = '/proc/net/if_inet6';
return fs.existsSync(IPV6_PROC_FILE);
}
+1 -1
@@ -139,7 +139,7 @@ function recreateJobs(tz) {
if (gCleanupBackupsJob) gCleanupBackupsJob.stop();
gCleanupBackupsJob = new CronJob({
cronTime: '00 45 */6 * * *', // every 6 hours. try not to overlap with ensureBackup job
onTick: backups.cleanup,
onTick: backups.cleanup.bind(null, AUDIT_SOURCE, NOOP_CALLBACK),
start: true,
timeZone: tz
});
+3 -3
@@ -38,7 +38,7 @@ function add(dnsConfig, zoneName, subdomain, type, values, callback) {
.send(data)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 400) return callback(new SubdomainError(SubdomainError.BAD_FIELD, result.body.message));
if (result.statusCode === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.statusCode !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
@@ -63,7 +63,7 @@ function get(dnsConfig, zoneName, subdomain, type, callback) {
.query({ token: dnsConfig.token, type: type })
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null, result.body.values);
@@ -102,7 +102,7 @@ function del(dnsConfig, zoneName, subdomain, type, values, callback) {
.send(data)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 400) return callback(new SubdomainError(SubdomainError.BAD_FIELD, result.body.message));
if (result.statusCode === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.statusCode === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND));
+12 -4
@@ -66,18 +66,18 @@ function getDNSRecordsByZoneId(dnsConfig, zoneId, zoneName, subdomain, type, cal
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
var fqdn = subdomain === '' ? zoneName : subdomain + '.' + zoneName;
superagent.get(CLOUDFLARE_ENDPOINT + '/zones/' + zoneId + '/dns_records')
.set('X-Auth-Key',dnsConfig.token)
.set('X-Auth-Email',dnsConfig.email)
.query({ type: type, name: fqdn })
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200 || result.body.success !== true) return translateRequestError(result, callback);
var fqdn = subdomain === '' ? zoneName : subdomain + '.' + zoneName;
var tmp = result.body.result.filter(function (record) {
return (record.type === type && record.name === fqdn);
});
var tmp = result.body.result;
return callback(null, tmp);
});
@@ -109,10 +109,18 @@ function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
var i = 0;
async.eachSeries(values, function (value, callback) {
var priority = null;
if (type === 'MX') {
priority = value.split(' ')[0];
value = value.split(' ')[1];
}
var data = {
type: type,
name: fqdn,
content: value,
priority: priority,
ttl: 120 // 1 means "automatic" (i.e. 300 seconds) and 120 is the lowest supported
};
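The MX handling added above packs the priority into the stored value string (`"<priority> <exchange>"`, as the diff shows); a sketch of the split:

```javascript
// MX values are stored as "<priority> <exchange>"; Cloudflare wants the
// priority in its own field, so split the value once on the space.
function parseMxValue(value) {
    var parts = value.split(' ');
    return { priority: parts[0], content: parts[1] };
}
```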
+35 -18
@@ -13,6 +13,7 @@ var assert = require('assert'),
constants = require('../constants.js'),
debug = require('debug')('box:dns/digitalocean'),
dns = require('dns'),
safe = require('safetydance'),
SubdomainError = require('../subdomains.js').SubdomainError,
superagent = require('superagent'),
util = require('util');
@@ -30,22 +31,34 @@ function getInternal(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
superagent.get(DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records')
.set('Authorization', 'Bearer ' + dnsConfig.token)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND, formatError(result)));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, formatError(result)));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, formatError(result)));
var nextPage = null, matchingRecords = [];
var tmp = result.body.domain_records.filter(function (record) {
return (record.type === type && record.name === subdomain);
async.doWhilst(function (iteratorDone) {
var url = nextPage ? nextPage : DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records';
superagent.get(url)
.set('Authorization', 'Bearer ' + dnsConfig.token)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND, formatError(result)));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, formatError(result)));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, formatError(result)));
matchingRecords = matchingRecords.concat(result.body.domain_records.filter(function (record) {
return (record.type === type && record.name === subdomain);
}));
nextPage = (result.body.links && result.body.links.pages) ? result.body.links.pages.next : null;
iteratorDone();
});
}, function () { return !!nextPage; }, function (error) {
if (error) return callback(error);
debug('getInternal: %j', tmp);
debug('getInternal: %j', matchingRecords);
return callback(null, tmp);
return callback(null, matchingRecords);
});
}
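The `async.doWhilst` loop above follows DigitalOcean's `links.pages.next` URL until it is absent; the same pattern without the HTTP layer (the hypothetical `fetchPage(url, cb)` stands in for the superagent call, and `page.records`/`page.nextUrl` for the response shape):

```javascript
// Accumulate records across pages by chasing next-page links until none remain.
function fetchAll(fetchPage, firstUrl, done) {
    var records = [];
    (function next(url) {
        fetchPage(url, function (error, page) {
            if (error) return done(error);
            records = records.concat(page.records);
            if (page.nextUrl) return next(page.nextUrl);
            done(null, records);
        });
    })(firstUrl);
}
```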
@@ -65,7 +78,7 @@ function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
if (error) return callback(error);
// used to track available records to update instead of create
var i = 0;
var i = 0, recordIds = [];
async.eachSeries(values, function (value, callback) {
var priority = null;
@@ -89,11 +102,13 @@ function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
.send(data)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, formatError(result)));
if (result.statusCode === 422) return callback(new SubdomainError(SubdomainError.BAD_FIELD, result.body.message));
if (result.statusCode !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, formatError(result)));
+                    recordIds.push(safe.query(result.body, 'domain_record.id'));
return callback(null);
});
} else {
@@ -105,18 +120,20 @@ function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
// increment, as we have consumed the record
++i;
-                    if (error && !error.response) return callback(error);
+                    if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, formatError(result)));
if (result.statusCode === 422) return callback(new SubdomainError(SubdomainError.BAD_FIELD, result.body.message));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, formatError(result)));
+                    recordIds.push(safe.query(result.body, 'domain_record.id'));
return callback(null);
});
}
-            }, function (error) {
+            }, function (error, id) {
if (error) return callback(error);
-                callback(null, 'unused');
+                callback(null, '' + recordIds[0]); // DO ids are integers
});
});
}
@@ -169,7 +186,7 @@ function del(dnsConfig, zoneName, subdomain, type, values, callback) {
.set('Authorization', 'Bearer ' + dnsConfig.token)
.timeout(30 * 1000)
.end(function (error, result) {
-                if (error && !error.response) return callback(error);
+                if (error && !error.response) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('Network error %s', error.message)));
if (result.statusCode === 404) return callback(null);
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, formatError(result)));
if (result.statusCode !== 204) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, formatError(result)));
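The rewritten `get` path above walks DigitalOcean's paginated records API with `async.doWhilst`, accumulating matching records until `links.pages.next` is empty. A minimal sketch of that loop without the async library — `fetchPage` is a hypothetical stand-in for the superagent call, yielding `{ records, nextPage }`:

```javascript
'use strict';

// Hypothetical stand-in for the paginated record fetch: fetchPage(url, cb)
// yields { records: [...], nextPage: url-or-null } for each page.
function collectRecords(fetchPage, firstUrl, matches, callback) {
    var collected = [];

    (function step(url) {
        fetchPage(url, function (error, page) {
            if (error) return callback(error);
            collected = collected.concat(page.records.filter(matches));
            if (page.nextPage) return step(page.nextPage); // keep following pagination links
            callback(null, collected);
        });
    })(firstUrl);
}
```

The real code additionally maps HTTP status codes to `SubdomainError` reasons before deciding whether to continue.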
+201
View File
@@ -0,0 +1,201 @@
'use strict';
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
GCDNS = require('@google-cloud/dns'),
constants = require('../constants.js'),
debug = require('debug')('box:dns/gcdns'),
dns = require('dns'),
SubdomainError = require('../subdomains.js').SubdomainError,
util = require('util'),
_ = require('underscore');
function getDnsCredentials(dnsConfig) {
assert.strictEqual(typeof dnsConfig, 'object');
var config = {
provider: dnsConfig.provider,
projectId: dnsConfig.projectId,
keyFilename: dnsConfig.keyFilename,
email: dnsConfig.email
};
if (dnsConfig.credentials) {
config.credentials = {
client_email: dnsConfig.credentials.client_email,
private_key: dnsConfig.credentials.private_key
};
}
return config;
}
function getZoneByName(dnsConfig, zoneName, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof callback, 'function');
var gcdns = GCDNS(getDnsCredentials(dnsConfig));
gcdns.getZones(function (error, zones) {
if (error && error.message === 'invalid_grant') return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, 'The key was probably revoked'));
if (error && error.reason === 'No such domain') return callback(new SubdomainError(SubdomainError.NOT_FOUND, error.message));
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error && error.code === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND, error.message));
if (error) {
debug('gcdns.getZones', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error));
}
var zone = zones.filter(function (zone) {
return zone.metadata.dnsName.slice(0, -1) === zoneName; // the zone name contains a '.' at the end
})[0];
if (!zone) return callback(new SubdomainError(SubdomainError.NOT_FOUND, 'no such zone'));
callback(null, zone); //zone.metadata ~= {name="", dnsName="", nameServers:[]}
});
}
function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
debug('add: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
getZoneByName(getDnsCredentials(dnsConfig), zoneName, function (error, zone) {
if (error) return callback(error);
var domain = (subdomain ? subdomain + '.' : '') + zoneName + '.';
zone.getRecords({ type: type, name: domain }, function (error, oldRecords) {
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error) {
debug('upsert->zone.getRecords', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
var newRecord = zone.record(type, {
name: domain,
data: values,
ttl: 1
});
zone.createChange({ delete: oldRecords, add: newRecord }, function(error, change) {
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error && error.code === 412) return callback(new SubdomainError(SubdomainError.STILL_BUSY, error.message));
if (error) {
debug('upsert->zone.createChange', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
callback(null, change.id);
});
});
});
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
getZoneByName(getDnsCredentials(dnsConfig), zoneName, function (error, zone) {
if (error) return callback(error);
var params = {
name: (subdomain ? subdomain + '.' : '') + zoneName + '.',
type: type
};
zone.getRecords(params, function (error, records) {
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error));
if (records.length === 0) return callback(null, [ ]);
return callback(null, records[0].data);
});
});
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
getZoneByName(getDnsCredentials(dnsConfig), zoneName, function (error, zone) {
if (error) return callback(error);
var domain = (subdomain ? subdomain + '.' : '') + zoneName + '.';
zone.getRecords({ type: type, name: domain }, function(error, oldRecords) {
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error) {
debug('del->zone.getRecords', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
zone.deleteRecords(oldRecords, function (error, change) {
if (error && error.code === 403) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error && error.code === 412) return callback(new SubdomainError(SubdomainError.STILL_BUSY, error.message));
if (error) {
debug('del->zone.createChange', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
callback(null, change.id);
});
});
});
}
function verifyDnsConfig(dnsConfig, fqdn, zoneName, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
var credentials = getDnsCredentials(dnsConfig);
if (process.env.BOX_ENV === 'test') return callback(null, credentials); // this shouldn't be here
dns.resolveNs(zoneName, function (error, resolvedNS) {
if (error && error.code === 'ENOTFOUND') return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Unable to resolve nameservers for this domain'));
if (error || !resolvedNS) return callback(new SubdomainError(SubdomainError.BAD_FIELD, error ? error.message : 'Unable to get nameservers'));
getZoneByName(credentials, zoneName, function (error, zone) {
if (error) return callback(error);
var definedNS = zone.metadata.nameServers.sort().map(function(r) { return r.replace(/\.$/, ''); });
if (!_.isEqual(definedNS, resolvedNS.sort())) {
debug('verifyDnsConfig: %j and %j do not match', resolvedNS, definedNS);
return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Domain nameservers are not set to Google Cloud DNS'));
}
const name = constants.ADMIN_LOCATION + (fqdn === zoneName ? '' : '.' + fqdn.slice(0, - zoneName.length - 1));
upsert(credentials, zoneName, name, 'A', [ ip ], function (error, changeId) {
if (error) return callback(error);
debug('verifyDnsConfig: A record added with change id %s', changeId);
callback(null, credentials);
});
});
});
}
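`getZoneByName` matches zones by trimming the trailing dot that Cloud DNS appends to `dnsName`. That comparison can be sketched in isolation (`findZone` is an illustrative helper, not part of the module):

```javascript
'use strict';

// Illustrative helper: Google Cloud DNS reports dnsName with a trailing dot
// ('example.com.'), so the comparison trims it before matching.
function findZone(zones, zoneName) {
    var matching = zones.filter(function (zone) {
        return zone.metadata.dnsName.slice(0, -1) === zoneName;
    });
    return matching[0] || null;
}
```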
+8 -12
View File
@@ -186,9 +186,9 @@ function createSubcontainer(app, name, cmd, options, callback) {
'/run': {}
},
Labels: {
"location": app.location,
"appId": app.id,
"isSubcontainer": String(!isAppContainer)
'location': app.location,
'appId': app.id,
'isSubcontainer': String(!isAppContainer)
},
HostConfig: {
Binds: addons.getBindsSync(app, app.manifest.addons),
@@ -198,15 +198,15 @@ function createSubcontainer(app, name, cmd, options, callback) {
PublishAllPorts: false,
ReadonlyRootfs: app.debugMode ? !!app.debugMode.readonlyRootfs : true,
RestartPolicy: {
"Name": isAppContainer ? "always" : "no",
"MaximumRetryCount": 0
'Name': isAppContainer ? 'always' : 'no',
'MaximumRetryCount': 0
},
CpuShares: 512, // relative to 1024 for system processes
-            VolumesFrom: isAppContainer ? null : [ app.containerId + ":rw" ],
+            VolumesFrom: isAppContainer ? null : [ app.containerId + ':rw' ],
NetworkMode: 'cloudron',
Dns: ['172.18.0.1'], // use internal dns
DnsSearch: ['.'], // use internal dns
-            SecurityOpt: enableSecurityOpt ? [ "apparmor=docker-cloudron-app" ] : null // profile available only on cloudron
+            SecurityOpt: enableSecurityOpt ? [ 'apparmor=docker-cloudron-app' ] : null // profile available only on cloudron
}
};
@@ -219,7 +219,7 @@ function createSubcontainer(app, name, cmd, options, callback) {
containerOptions = _.extend(containerOptions, options);
-        debugApp(app, 'Creating container for %s with options %j', app.manifest.dockerImage, containerOptions);
+        debugApp(app, 'Creating container for %s', app.manifest.dockerImage);
docker.createContainer(containerOptions, callback);
});
@@ -367,8 +367,6 @@ function getContainerIdByIp(ip, callback) {
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
-    debug('get container by ip %s', ip);
var docker = exports.connection;
docker.listNetworks({}, function (error, result) {
@@ -390,8 +388,6 @@ function getContainerIdByIp(ip, callback) {
}
if (!containerId) return callback(new Error('No container with that ip'));
-        debug('found container %s with ip %s', containerId, ip);
callback(null, containerId);
});
}
+100 -1
View File
@@ -3,6 +3,7 @@
exports = module.exports = {
verifyRelay: verifyRelay,
getStatus: getStatus,
+    checkRblStatus: checkRblStatus,
EmailError: EmailError
};
@@ -14,6 +15,7 @@ var assert = require('assert'),
constants = require('./constants.js'),
debug = require('debug')('box:email'),
dig = require('./dig.js'),
+    mailer = require('./mailer.js'),
net = require('net'),
nodemailer = require('nodemailer'),
safe = require('safetydance'),
@@ -23,6 +25,8 @@ var assert = require('assert'),
util = require('util'),
_ = require('underscore');
var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
+const digOptions = { server: '127.0.0.1', port: 53, timeout: 5000 };
function EmailError(reason, errorOrMessage) {
@@ -259,6 +263,100 @@ function checkPtr(callback) {
});
}
// https://raw.githubusercontent.com/jawsome/node-dnsbl/master/list.json
const RBL_LIST = [
{
"name": "Barracuda",
"dns": "b.barracudacentral.org",
"site": "http://www.barracudacentral.org/rbl/removal-request"
},
{
"name": "SpamCop",
"dns": "bl.spamcop.net",
"site": "http://spamcop.net"
},
{
"name": "Sorbs Aggregate Zone",
"dns": "dnsbl.sorbs.net",
"site": "http://dnsbl.sorbs.net/"
},
{
"name": "Sorbs spam.dnsbl Zone",
"dns": "spam.dnsbl.sorbs.net",
"site": "http://sorbs.net"
},
{
"name": "Composite Blocking List",
"dns": "cbl.abuseat.org",
"site": "http://www.abuseat.org"
},
{
"name": "SpamHaus Zen",
"dns": "zen.spamhaus.org",
"site": "http://spamhaus.org"
},
{
"name": "Multi SURBL",
"dns": "multi.surbl.org",
"site": "http://www.surbl.org"
},
{
"name": "Spam Cannibal",
"dns": "bl.spamcannibal.org",
"site": "http://www.spamcannibal.org/cannibal.cgi"
},
{
"name": "dnsbl.abuse.ch",
"dns": "spam.abuse.ch",
"site": "http://dnsbl.abuse.ch/"
},
{
"name": "The Unsubscribe Blacklist(UBL)",
"dns": "ubl.unsubscore.com ",
"site": "http://www.lashback.com/blacklist/"
},
{
"name": "UCEPROTECT Network",
"dns": "dnsbl-1.uceprotect.net",
"site": "http://www.uceprotect.net/en"
}
];
function checkRblStatus(callback) {
assert.strictEqual(typeof callback, 'function');
sysinfo.getPublicIp(function (error, ip) {
if (error) return callback(error, ip);
var flippedIp = ip.split('.').reverse().join('.');
// https://tools.ietf.org/html/rfc5782
async.map(RBL_LIST, function (rblServer, iteratorDone) {
dig.resolve(flippedIp + '.' + rblServer.dns, 'A', digOptions, function (error, records) {
if (error || !records) return iteratorDone(null, null); // not listed
debug('checkRblStatus: %s (ip: %s) is in the blacklist of %j', config.fqdn(), flippedIp, rblServer);
var result = _.extend({ }, rblServer);
dig.resolve(flippedIp + '.' + rblServer.dns, 'TXT', digOptions, function (error, txtRecords) {
result.txtRecords = error || !txtRecords ? 'No txt record' : txtRecords;
debug('checkRblStatus: %s (error: %s) (txtRecords: %j)', config.fqdn(), error, txtRecords);
return iteratorDone(null, result);
});
});
}, function (ignoredError, blacklistedServers) {
blacklistedServers = blacklistedServers.filter(function(b) { return b !== null; });
debug('checkRblStatus: %s (ip: %s) servers: %j', config.fqdn(), ip, blacklistedServers);
return callback(null, { status: blacklistedServers.length === 0, ip: ip, servers: blacklistedServers });
});
});
}
function getStatus(callback) {
assert.strictEqual(typeof callback, 'function');
@@ -290,7 +388,8 @@ function getStatus(callback) {
recordResult('dns.spf', checkSpf),
recordResult('dns.dkim', checkDkim),
recordResult('dns.ptr', checkPtr),
-                recordResult('relay', checkOutboundPort25)
+                recordResult('relay', checkOutboundPort25),
+                recordResult('rbl', checkRblStatus)
);
} else {
checks.push(recordResult('relay', checkSmtpRelay.bind(null, relay)));
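`checkRblStatus` follows RFC 5782: the public IP's octets are reversed and prefixed to each blacklist zone, and an A-record answer means the IP is listed (a TXT lookup then fetches the listing reason). The query-name construction can be sketched in isolation (`dnsblQueryName` is an illustrative helper, not part of the module):

```javascript
'use strict';

// Mirrors the flippedIp construction in checkRblStatus: per RFC 5782, an
// IPv4 address is listed in a DNSBL zone if <reversed-octets>.<zone>
// resolves to an A record.
function dnsblQueryName(ip, zone) {
    return ip.split('.').reverse().join('.') + '.' + zone;
}
```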
+1
View File
@@ -20,6 +20,7 @@ exports = module.exports = {
ACTION_APP_LOGIN: 'app.login',
ACTION_BACKUP_FINISH: 'backup.finish',
ACTION_BACKUP_START: 'backup.start',
+    ACTION_BACKUP_CLEANUP: 'backup.cleanup',
ACTION_CERTIFICATE_RENEWAL: 'certificate.renew',
ACTION_CLI_MODE: 'settings.climode',
ACTION_START: 'cloudron.start',
+4 -4
View File
@@ -5,9 +5,9 @@
// Do not require anything here!
exports = module.exports = {
-    // a major version makes all apps restore from backup
+    // a major version makes all apps restore from backup. #451 must be fixed before we do this.
    // a minor version makes all apps re-configure themselves
-    'version': '48.5.0',
+    'version': '48.6.0',
'baseImages': [ 'cloudron/base:0.10.0' ],
@@ -18,7 +18,7 @@ exports = module.exports = {
'postgresql': { repo: 'cloudron/postgresql', tag: 'cloudron/postgresql:0.17.0' },
'mongodb': { repo: 'cloudron/mongodb', tag: 'cloudron/mongodb:0.13.0' },
'redis': { repo: 'cloudron/redis', tag: 'cloudron/redis:0.11.0' },
-        'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:0.36.3' },
-        'graphite': { repo: 'cloudron/graphite', tag: 'cloudron/graphite:0.11.0' }
+        'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:0.37.4' },
+        'graphite': { repo: 'cloudron/graphite', tag: 'cloudron/graphite:0.12.0' }
}
};
+2 -3
View File
@@ -70,14 +70,13 @@ function cleanupTmpVolume(containerInfo, callback) {
docker.getContainer(containerInfo.Id).exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true, Tty: false }, function (error, execContainer) {
if (error) return callback(new Error('Failed to exec container : ' + error.message));
-        execContainer.start(function(err, stream) {
+        execContainer.start({ hijack: true }, function (error, stream) {
if (error) return callback(new Error('Failed to start exec container : ' + error.message));
stream.on('error', callback);
stream.on('end', callback);
-            stream.setEncoding('utf8');
-            stream.pipe(process.stdout);
+            docker.modem.demuxStream(stream, process.stdout, process.stderr);
});
});
}
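`docker.modem.demuxStream` is needed because, with `Tty: false`, Docker multiplexes stdout and stderr onto one stream: each frame carries an 8-byte header (byte 0 is the stream type, bytes 4-7 a big-endian payload length) followed by the payload. A rough parser for a single complete buffer, assuming well-formed frames (`demuxBuffer` is a sketch, not the dockerode implementation):

```javascript
'use strict';

// Sketch of what docker.modem.demuxStream undoes: with Tty false, each
// frame is [type, 0, 0, 0, len32be] followed by len payload bytes, where
// type 1 = stdout and type 2 = stderr.
function demuxBuffer(buf) {
    var out = { stdout: '', stderr: '' };
    var offset = 0;
    while (offset + 8 <= buf.length) {
        var type = buf[offset];
        var size = buf.readUInt32BE(offset + 4);
        var payload = buf.slice(offset + 8, offset + 8 + size).toString('utf8');
        if (type === 2) out.stderr += payload;
        else out.stdout += payload;
        offset += 8 + size; // advance to the next frame header
    }
    return out;
}
```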
+11
View File
@@ -0,0 +1,11 @@
<%if (format === 'text') { %>
Test email from <%= fqdn %>,
If you can read this, your Cloudron email settings are good.
Sent at: <%= new Date().toUTCString() %>
<% } else { %>
<% } %>
+44 -22
View File
@@ -30,11 +30,15 @@ exports = module.exports = {
FEEDBACK_TYPE_UPGRADE_REQUEST: 'upgrade_request',
sendFeedback: sendFeedback,
+    sendTestMail: sendTestMail,
_getMailQueue: _getMailQueue,
_clearMailQueue: _clearMailQueue
};
-var assert = require('assert'),
+var appstore = require('./appstore.js'),
+    AppstoreError = appstore.AppstoreError,
+    assert = require('assert'),
async = require('async'),
config = require('./config.js'),
debug = require('debug')('box:mailer'),
@@ -424,29 +428,34 @@ function sendDigest(info) {
cloudronName = 'Cloudron';
}
-    var templateData = {
-        fqdn: config.fqdn(),
-        webadminUrl: config.adminOrigin(),
-        cloudronName: cloudronName,
-        cloudronAvatarUrl: config.adminOrigin() + '/api/v1/cloudron/avatar',
-        info: info
-    };
+    appstore.getAccount(function (error, appstoreProfile) {
+        if (error && error.reason !== AppstoreError.BILLING_REQUIRED) console.error(error);
+        if (appstoreProfile) adminEmails.push(appstoreProfile.email);
-    var templateDataText = JSON.parse(JSON.stringify(templateData));
-    templateDataText.format = 'text';
+        var templateData = {
+            fqdn: config.fqdn(),
+            webadminUrl: config.adminOrigin(),
+            cloudronName: cloudronName,
+            cloudronAvatarUrl: config.adminOrigin() + '/api/v1/cloudron/avatar',
+            info: info
+        };
-    var templateDataHTML = JSON.parse(JSON.stringify(templateData));
-    templateDataHTML.format = 'html';
+        var templateDataText = JSON.parse(JSON.stringify(templateData));
+        templateDataText.format = 'text';
-    var mailOptions = {
-        from: mailConfig().from,
-        to: adminEmails.join(', '),
-        subject: util.format('[%s] Cloudron - Weekly activity digest', config.fqdn()),
-        text: render('digest.ejs', templateDataText),
-        html: render('digest.ejs', templateDataHTML)
-    };
+        var templateDataHTML = JSON.parse(JSON.stringify(templateData));
+        templateDataHTML.format = 'html';
-    enqueue(mailOptions);
+        var mailOptions = {
+            from: mailConfig().from,
+            to: adminEmails.join(', '),
+            subject: util.format('[%s] Cloudron - Weekly activity digest', config.fqdn()),
+            text: render('digest.ejs', templateDataText),
+            html: render('digest.ejs', templateDataHTML)
+        };
+        enqueue(mailOptions);
+    });
});
});
}
@@ -552,8 +561,8 @@ function sendFeedback(user, type, subject, description) {
type === exports.FEEDBACK_TYPE_UPGRADE_REQUEST ||
type === exports.FEEDBACK_TYPE_APP_ERROR);
-    var mailOptions = {
-        from: mailConfig().from,
+    var mailOptions = {
+        from: mailConfig().from,
to: 'support@cloudron.io',
subject: util.format('[%s] %s - %s', type, config.fqdn(), subject),
text: render('feedback.ejs', { fqdn: config.fqdn(), type: type, user: user, subject: subject, description: description, format: 'text'})
@@ -562,6 +571,19 @@ function sendFeedback(user, type, subject, description) {
enqueue(mailOptions);
}
function sendTestMail(email) {
assert.strictEqual(typeof email, 'string');
var mailOptions = {
from: mailConfig().from,
to: email,
subject: util.format('Test Email from %s', config.fqdn()),
text: render('test.ejs', { fqdn: config.fqdn(), format: 'text'})
};
enqueue(mailOptions);
}
function _getMailQueue() {
return gMailQueue;
}
+2
View File
@@ -32,6 +32,7 @@ function configureAdmin(certFilePath, keyFilePath, configFileName, vhost, callba
sourceDir: path.resolve(__dirname, '..'),
adminOrigin: config.adminOrigin(),
vhost: vhost, // if vhost is empty it will become the default_server
+        hasIPv6: config.hasIPv6(),
endpoint: 'admin',
certFilePath: certFilePath,
keyFilePath: keyFilePath,
@@ -60,6 +61,7 @@ function configureApp(app, certFilePath, keyFilePath, callback) {
sourceDir: sourceDir,
adminOrigin: config.adminOrigin(),
vhost: vhost,
+        hasIPv6: config.hasIPv6(),
port: app.httpPort,
endpoint: endpoint,
certFilePath: certFilePath,
+6 -3
View File
@@ -7,7 +7,8 @@ var config = require('./config.js'),
exports = module.exports = {
CLOUDRON_DEFAULT_AVATAR_FILE: path.join(__dirname + '/../assets/avatar.png'),
INFRA_VERSION_FILE: path.join(config.baseDir(), 'platformdata/INFRA_VERSION'),
-    BACKUP_RESULT_FILE: path.join(config.baseDir(), 'platformdata/backupresult'),
+    BACKUP_RESULT_FILE: path.join(config.baseDir(), 'platformdata/backup/result.txt'),
+    BACKUP_LOG_FILE: path.join(config.baseDir(), 'platformdata/backup/logs.txt'),
OLD_DATA_DIR: path.join(config.baseDir(), 'data'),
PLATFORM_DATA_DIR: path.join(config.baseDir(), 'platformdata'),
@@ -18,14 +19,16 @@ exports = module.exports = {
ADDON_CONFIG_DIR: path.join(config.baseDir(), 'platformdata/addons'),
COLLECTD_APPCONFIG_DIR: path.join(config.baseDir(), 'platformdata/collectd/collectd.conf.d'),
LOGROTATE_CONFIG_DIR: path.join(config.baseDir(), 'platformdata/logrotate.d'),
-    MAIL_DATA_DIR: path.join(config.baseDir(), 'platformdata/mail'),
NGINX_CONFIG_DIR: path.join(config.baseDir(), 'platformdata/nginx'),
NGINX_APPCONFIG_DIR: path.join(config.baseDir(), 'platformdata/nginx/applications'),
NGINX_CERT_DIR: path.join(config.baseDir(), 'platformdata/nginx/cert'),
+    BACKUP_INFO_DIR: path.join(config.baseDir(), 'platformdata/backup'),
+    SNAPSHOT_INFO_FILE: path.join(config.baseDir(), 'platformdata/backup/snapshot-info.json'),
// this is not part of appdata because an icon may be set before install
-    ACME_ACCOUNT_KEY_FILE: path.join(config.baseDir(), 'boxdata/acme/acme.key'),
    APP_ICONS_DIR: path.join(config.baseDir(), 'boxdata/appicons'),
+    MAIL_DATA_DIR: path.join(config.baseDir(), 'boxdata/mail'),
+    ACME_ACCOUNT_KEY_FILE: path.join(config.baseDir(), 'boxdata/acme/acme.key'),
APP_CERTS_DIR: path.join(config.baseDir(), 'boxdata/certs'),
CLOUDRON_AVATAR_FILE: path.join(config.baseDir(), 'boxdata/avatar.png'),
FIRST_RUN_FILE: path.join(config.baseDir(), 'boxdata/first_run'),
+11 -9
View File
@@ -162,7 +162,7 @@ function startMysql(callback) {
const memoryLimit = (1 + Math.round(os.totalmem()/(1024*1024*1024)/4)) * 256;
if (!safe.fs.writeFileSync(paths.ADDON_CONFIG_DIR + '/mysql_vars.sh',
-        'MYSQL_ROOT_PASSWORD=' + rootPassword +'\nMYSQL_ROOT_HOST=172.18.0.1', 'utf8')) {
+        'MYSQL_ROOT_PASSWORD=' + rootPassword +'\nMYSQL_ROOT_HOST=172.18.0.1', 'utf8')) {
return callback(new Error('Could not create mysql var file:' + safe.error.message));
}
@@ -261,10 +261,10 @@ function createMailConfig(callback) {
var relay = result[settings.MAIL_RELAY_KEY];
const enabled = relay.provider !== 'cloudron-smtp' ? true : false,
-              host = relay.host || '',
-              port = relay.port || 25,
-              username = relay.username || '',
-              password = relay.password || '';
+            host = relay.host || '',
+            port = relay.port || 25,
+            username = relay.username || '',
+            password = relay.password || '';
if (!safe.fs.writeFileSync(paths.ADDON_CONFIG_DIR + '/mail/smtp_forward.ini',
`enable_outbound=${enabled}\nhost=${host}\nport=${port}\nenable_tls=true\nauth_type=plain\nauth_user=${username}\nauth_pass=${password}`, 'utf8')) {
@@ -283,13 +283,13 @@ function startMail(callback) {
// mail container uses /app/data for backed up data and /run for restart-able data
const tag = infra.images.mail.tag;
const dataDir = paths.PLATFORM_DATA_DIR;
const memoryLimit = Math.max((1 + Math.round(os.totalmem()/(1024*1024*1024)/4)) * 128, 256);
// admin and mail share the same certificate
certificates.getAdminCertificate(function (error, cert, key) {
if (error) return callback(error);
// the setup script copies dhparams.pem to /addons/mail
if (!safe.fs.writeFileSync(paths.ADDON_CONFIG_DIR + '/mail/tls_cert.pem', cert)) return callback(new Error('Could not create cert file:' + safe.error.message));
if (!safe.fs.writeFileSync(paths.ADDON_CONFIG_DIR + '/mail/tls_key.pem', key)) return callback(new Error('Could not create key file:' + safe.error.message));
@@ -311,8 +311,8 @@ function startMail(callback) {
--dns 172.18.0.1 \
--dns-search=. \
--env ENABLE_MDA=${mailConfig.enabled} \
-v "${dataDir}/mail:/app/data" \
-v "${dataDir}/addons/mail:/etc/mail" \
-v "${paths.MAIL_DATA_DIR}:/app/data" \
-v "${paths.PLATFORM_DATA_DIR}/addons/mail:/etc/mail" \
${ports} \
--read-only -v /run -v /tmp ${tag}`;
@@ -329,7 +329,9 @@ function startMail(callback) {
async.mapSeries(records, function (record, iteratorCallback) {
subdomains.upsert(record.subdomain, record.type, record.values, iteratorCallback);
-            }, callback);
+            }, NOOP_CALLBACK); // do not crash if DNS creds do not work in startup sequence
+            callback();
});
});
});
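The container memory limits in this file scale with host RAM: mysql gets roughly 256MB per 4GB, and mail half that with a 256MB floor. A sketch of the two formulas, with `totalGb` standing in for `os.totalmem()/(1024*1024*1024)` (the helper names are illustrative):

```javascript
'use strict';

// mysql: (1 + round(totalGb / 4)) * 256 MB.
function mysqlMemoryLimitMb(totalGb) {
    return (1 + Math.round(totalGb / 4)) * 256;
}

// mail: half the mysql formula, but never below 256 MB.
function mailMemoryLimitMb(totalGb) {
    return Math.max((1 + Math.round(totalGb / 4)) * 128, 256);
}
```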
+14 -3
View File
@@ -2,8 +2,9 @@
exports = module.exports = {
set: set,
+    setDetail: setDetail,
    clear: clear,
-    get: get,
+    getAll: getAll,
UPDATE: 'update',
BACKUP: 'backup',
@@ -29,12 +30,22 @@ function set(tag, percent, message) {
progress[tag] = {
percent: percent,
-        message: message
+        message: message,
+        detail: ''
};
debug('%s: %s %s', tag, percent, message);
}
function setDetail(tag, detail) {
assert.strictEqual(typeof tag, 'string');
assert.strictEqual(typeof detail, 'string');
if (!progress[tag]) return debug('unable to set detail %s', detail);
progress[tag].detail = detail;
}
function clear(tag) {
assert.strictEqual(typeof tag, 'string');
@@ -43,6 +54,6 @@ function clear(tag) {
debug('clearing %s', tag);
}
-function get() {
+function getAll() {
return progress;
}
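The reworked progress module pairs `set()` with `setDetail()`: a detail can only be attached to a tag whose entry `set()` has already created, otherwise the call is a logged no-op. A condensed sketch of that contract:

```javascript
'use strict';

// Condensed sketch of the progress API above: set() creates the entry,
// setDetail() refuses to invent one.
var progress = {};

function set(tag, percent, message) {
    progress[tag] = { percent: percent, message: message, detail: '' };
}

function setDetail(tag, detail) {
    if (!progress[tag]) return; // no entry yet; nothing to attach the detail to
    progress[tag].detail = detail;
}

function getAll() {
    return progress;
}
```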
+2 -2
View File
@@ -323,7 +323,6 @@ function updateApp(req, res, next) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error && error.reason === AppsError.BAD_STATE) return next(new HttpError(409, error.message));
-        if (error && error.reason === AppsError.PORT_CONFLICT) return next(new HttpError(409, 'Port ' + error.message + ' is already in use.'));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, { }));
@@ -536,12 +535,13 @@ function listBackups(req, res, next) {
function uploadFile(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
-    debug('uploadFile: %s %s -> %s', req.params.id, req.files, req.query.file);
+    debug('uploadFile: %s %j -> %s', req.params.id, req.files, req.query.file);
if (typeof req.query.file !== 'string' || !req.query.file) return next(new HttpError(400, 'file query argument must be provided'));
if (!req.files.file) return next(new HttpError(400, 'file must be provided as multipart'));
apps.uploadFile(req.params.id, req.files.file.path, req.query.file, function (error) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, error.message));
if (error) return next(new HttpError(500, error));
debug('uploadFile: done');
+17 -2
View File
@@ -15,7 +15,8 @@ exports = module.exports = {
feedback: feedback,
checkForUpdates: checkForUpdates,
getLogs: getLogs,
-    getLogStream: getLogStream
+    getLogStream: getLogStream,
+    sendTestMail: sendTestMail
};
var assert = require('assert'),
@@ -141,7 +142,7 @@ function getStatus(req, res, next) {
}
function getProgress(req, res, next) {
-    return next(new HttpSuccess(200, progress.get()));
+    return next(new HttpSuccess(200, progress.getAll()));
}
function reboot(req, res, next) {
@@ -184,6 +185,10 @@ function getConfig(req, res, next) {
cloudron.getConfig(function (error, cloudronConfig) {
if (error) return next(new HttpError(500, error));
if (!req.user.admin) {
cloudronConfig = _.pick(cloudronConfig, 'apiServerOrigin', 'webServerOrigin', 'fqdn', 'version', 'progress', 'isCustomDomain', 'isDemo', 'cloudronName', 'provider');
}
next(new HttpSuccess(200, cloudronConfig));
});
}
@@ -297,3 +302,13 @@ function getLogStream(req, res, next) {
logStream.on('error', res.end.bind(res, null));
});
}
function sendTestMail(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.email || typeof req.body.email !== 'string') return next(new HttpError(400, 'email must be a non-empty string'));
mailer.sendTestMail(req.body.email);
next(new HttpSuccess(202));
}
+2
View File
@@ -274,6 +274,8 @@ function setBackupConfig(req, res, next) {
if (typeof req.body.provider !== 'string') return next(new HttpError(400, 'provider is required'));
if (typeof req.body.retentionSecs !== 'number') return next(new HttpError(400, 'retentionSecs is required'));
if ('key' in req.body && typeof req.body.key !== 'string') return next(new HttpError(400, 'key must be a string'));
+    if (typeof req.body.format !== 'string') return next(new HttpError(400, 'format must be a string'));
+    if ('acceptSelfSignedCerts' in req.body && typeof req.body.acceptSelfSignedCerts !== 'boolean') return next(new HttpError(400, 'format must be a boolean'));
settings.setBackupConfig(req.body, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return next(new HttpError(400, error.message));
+26 -27
View File
@@ -269,7 +269,6 @@ describe('App API', function () {
it('app install fails - missing manifest', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
-            .send({ password: PASSWORD })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('appStoreId or manifest is required');
@@ -280,7 +279,7 @@ describe('App API', function () {
it('app install fails - null manifest', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
-            .send({ manifest: null, password: PASSWORD })
+            .send({ manifest: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('appStoreId or manifest is required');
@@ -291,7 +290,7 @@ describe('App API', function () {
it('app install fails - bad manifest format', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
-            .send({ manifest: 'epic', password: PASSWORD })
+            .send({ manifest: 'epic' })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('manifest must be an object');
@@ -302,7 +301,7 @@ describe('App API', function () {
it('app install fails - empty appStoreId format', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
-            .send({ manifest: null, appStoreId: '', password: PASSWORD })
+            .send({ manifest: null, appStoreId: '' })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('appStoreId or manifest is required');
@@ -323,7 +322,7 @@ describe('App API', function () {
it('app install fails - invalid location', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
-            .send({ manifest: APP_MANIFEST, password: PASSWORD, location: '!awesome', accessRestriction: null })
+            .send({ manifest: APP_MANIFEST, location: '!awesome', accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('Hostname can only contain alphanumerics and hyphen');
@@ -334,7 +333,7 @@ describe('App API', function () {
it('app install fails - invalid location type', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: 42, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: 42, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('location is required');
@@ -345,7 +344,7 @@ describe('App API', function () {
it('app install fails - reserved admin location', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: constants.ADMIN_LOCATION, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: constants.ADMIN_LOCATION, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql(constants.ADMIN_LOCATION + ' is reserved');
@@ -356,7 +355,7 @@ describe('App API', function () {
it('app install fails - reserved api location', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: constants.API_LOCATION, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: constants.API_LOCATION, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql(constants.API_LOCATION + ' is reserved');
@@ -367,7 +366,7 @@ describe('App API', function () {
it('app install fails - portBindings must be object', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: APP_LOCATION, portBindings: 23, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: APP_LOCATION, portBindings: 23, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('portBindings must be an object');
@@ -378,7 +377,7 @@ describe('App API', function () {
it('app install fails - accessRestriction is required', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: APP_LOCATION, portBindings: {} })
.send({ manifest: APP_MANIFEST, location: APP_LOCATION, portBindings: {} })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('accessRestriction is required');
@@ -389,7 +388,7 @@ describe('App API', function () {
it('app install fails - accessRestriction type is wrong', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: APP_LOCATION, portBindings: {}, accessRestriction: '' })
.send({ manifest: APP_MANIFEST, location: APP_LOCATION, portBindings: {}, accessRestriction: '' })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(res.body.message).to.eql('accessRestriction is required');
@@ -400,7 +399,7 @@ describe('App API', function () {
it('app install fails for non admin', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token_1 })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(403);
done();
@@ -412,7 +411,7 @@ describe('App API', function () {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ appStoreId: APP_STORE_ID, password: PASSWORD, location: APP_LOCATION, portBindings: null, accessRestriction: { users: [ 'someuser' ], groups: [] } })
.send({ appStoreId: APP_STORE_ID, location: APP_LOCATION, portBindings: null, accessRestriction: { users: [ 'someuser' ], groups: [] } })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
expect(fake.isDone()).to.be.ok();
@@ -426,7 +425,7 @@ describe('App API', function () {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ appStoreId: APP_STORE_ID, password: PASSWORD, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.send({ appStoreId: APP_STORE_ID, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(503);
expect(fake1.isDone()).to.be.ok();
@@ -442,7 +441,7 @@ describe('App API', function () {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ appStoreId: APP_STORE_ID, password: PASSWORD, location: APP_LOCATION, portBindings: null, accessRestriction: { users: [ 'someuser' ], groups: [] } })
.send({ appStoreId: APP_STORE_ID, location: APP_LOCATION, portBindings: null, accessRestriction: { users: [ 'someuser' ], groups: [] } })
.end(function (err, res) {
expect(res.statusCode).to.equal(202);
expect(res.body.id).to.be.a('string');
@@ -455,7 +454,7 @@ describe('App API', function () {
it('app install fails because of conflicting location', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ manifest: APP_MANIFEST, password: PASSWORD, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.send({ manifest: APP_MANIFEST, location: APP_LOCATION, portBindings: null, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(409);
done();
@@ -565,7 +564,7 @@ describe('App API', function () {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ appStoreId: APP_STORE_ID, password: PASSWORD, location: APP_LOCATION_2, portBindings: null, accessRestriction: null })
.send({ appStoreId: APP_STORE_ID, location: APP_LOCATION_2, portBindings: null, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(202);
expect(res.body.id).to.be.a('string');
@@ -695,7 +694,7 @@ describe('App installation', function () {
superagent.post(SERVER_URL + '/api/v1/apps/install')
.query({ access_token: token })
.send({ appStoreId: APP_STORE_ID, password: PASSWORD, location: APP_LOCATION, portBindings: { ECHO_SERVER_PORT: 7171 }, accessRestriction: null })
.send({ appStoreId: APP_STORE_ID, location: APP_LOCATION, portBindings: { ECHO_SERVER_PORT: 7171 }, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(202);
expect(fake1.isDone()).to.be.ok();
@@ -986,7 +985,7 @@ describe('App installation', function () {
it('cannot reconfigure app with bad location', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: 1234, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null })
.send({ location: 1234, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -996,7 +995,7 @@ describe('App installation', function () {
it('cannot reconfigure app with bad accessRestriction', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: false })
.send({ portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: false })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -1006,7 +1005,7 @@ describe('App installation', function () {
it('cannot reconfigure app with only the cert, no key', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: validCert1 })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: validCert1 })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -1016,7 +1015,7 @@ describe('App installation', function () {
it('cannot reconfigure app with only the key, no cert', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, key: validKey1 })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, key: validKey1 })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -1026,7 +1025,7 @@ describe('App installation', function () {
it('cannot reconfigure app with cert not being a string', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: 1234, key: validKey1 })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: 1234, key: validKey1 })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -1036,7 +1035,7 @@ describe('App installation', function () {
it('cannot reconfigure app with key not being a string', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, cert: validCert1, key: 1234 })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, cert: validCert1, key: 1234 })
.end(function (err, res) {
expect(res.statusCode).to.equal(400);
done();
@@ -1046,7 +1045,7 @@ describe('App installation', function () {
it('non admin cannot reconfigure app', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token_1 })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null })
.end(function (err, res) {
expect(res.statusCode).to.equal(403);
done();
@@ -1056,7 +1055,7 @@ describe('App installation', function () {
it('can reconfigure app', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 } })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 } })
.end(function (err, res) {
expect(res.statusCode).to.equal(202);
checkConfigureStatus(0, done);
@@ -1098,7 +1097,7 @@ describe('App installation', function () {
it('can reconfigure app with custom certificate', function (done) {
superagent.post(SERVER_URL + '/api/v1/apps/' + APP_ID + '/configure')
.query({ access_token: token })
.send({ password: PASSWORD, location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: validCert1, key: validKey1 })
.send({ location: APP_LOCATION_NEW, portBindings: { ECHO_SERVER_PORT: 7172 }, accessRestriction: null, cert: validCert1, key: validKey1 })
.end(function (err, res) {
expect(res.statusCode).to.equal(202);
checkConfigureStatus(0, done);
+32 -45
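The hunks above all make the same change: drop the now-unneeded `password` field from the app install/configure payloads, while keeping the assertions on the route's validation messages. As a rough sketch of the validation order those tests pin down (a hypothetical re-implementation derived from the expected messages, not the actual route code):

```javascript
// Hypothetical sketch of the payload checks the tests above exercise.
// Returns the first validation error message, or null if the payload passes.
function validateInstallPayload(data) {
    if (!data.appStoreId && !data.manifest) return 'appStoreId or manifest is required';
    if (data.manifest != null && typeof data.manifest !== 'object') return 'manifest must be an object';
    if (typeof data.location !== 'string') return 'location is required';
    if (!/^[a-zA-Z0-9-]*$/.test(data.location)) return 'Hostname can only contain alphanumerics and hyphen';
    if (data.portBindings != null && typeof data.portBindings !== 'object') return 'portBindings must be an object';
    // null is a valid "no restriction" value; missing, string or boolean is not
    if (data.accessRestriction !== null && typeof data.accessRestriction !== 'object') return 'accessRestriction is required';
    return null;
}
```

Each 400-path test in the diff corresponds to exactly one of these early returns.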
@@ -10,20 +10,16 @@ var appdb = require('../../appdb.js'),
config = require('../../config.js'),
database = require('../../database.js'),
expect = require('expect.js'),
hock = require('hock'),
http = require('http'),
nock = require('nock'),
superagent = require('superagent'),
server = require('../../server.js'),
settings = require('../../settings.js'),
url = require('url');
settings = require('../../settings.js');
var SERVER_URL = 'http://localhost:' + config.get('port');
var USERNAME = 'superadmin', PASSWORD = 'Foobar?1337', EMAIL = 'silly@me.com';
var token = null;
var server;
function setup(done) {
nock.cleanAll();
config._reset();
@@ -40,19 +36,19 @@ function setup(done) {
var scope2 = nock(config.apiServerOrigin()).post('/api/v1/boxes/' + config.fqdn() + '/setup/done?setupToken=somesetuptoken').reply(201, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(result.statusCode).to.eql(201);
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(result.statusCode).to.eql(201);
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
// stash token for further use
token = result.body.token;
callback();
});
callback();
});
},
function addApp(callback) {
@@ -61,7 +57,7 @@ function setup(done) {
},
function createSettings(callback) {
settings.setBackupConfig({ provider: 'caas', token: 'BACKUP_TOKEN', bucket: 'Bucket', prefix: 'Prefix' }, callback);
settings.setBackupConfig({ provider: 'filesystem', backupFolder: '/tmp', format: 'tgz' }, callback);
}
], done);
}
@@ -75,19 +71,12 @@ function cleanup(done) {
}
describe('Backups API', function () {
var apiHockInstance = hock.createHock({ throwOnUnmatched: false }), apiHockServer;
var scope1 = nock(config.apiServerOrigin()).post('/api/v1/boxes/' + config.fqdn() + '/awscredentials?token=BACKUP_TOKEN')
.reply(201, { credentials: { AccessKeyId: 'accessKeyId', SecretAccessKey: 'secretAccessKey' } }, { 'Content-Type': 'application/json' });
before(setup);
before(function (done) {
apiHockInstance
.post('/api/v1/boxes/' + config.fqdn() + '/awscredentials?token=BACKUP_TOKEN')
.reply(201, { credentials: { AccessKeyId: 'accessKeyId', SecretAccessKey: 'secretAccessKey' } }, { 'Content-Type': 'application/json' });
var port = parseInt(url.parse(config.apiServerOrigin()).port, 10);
apiHockServer = http.createServer(apiHockInstance.handler).listen(port, done);
});
after(function (done) {
apiHockServer.close();
done();
});
after(cleanup);
@@ -95,37 +84,35 @@ describe('Backups API', function () {
describe('create', function () {
it('fails due to missing token', function (done) {
superagent.post(SERVER_URL + '/api/v1/backups')
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
});
it('fails due to wrong token', function (done) {
superagent.post(SERVER_URL + '/api/v1/backups')
.query({ access_token: token.toUpperCase() })
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
.query({ access_token: token.toUpperCase() })
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
});
it('succeeds', function (done) {
superagent.post(SERVER_URL + '/api/v1/backups')
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
function checkAppstoreServerCalled() {
apiHockInstance.done(function (error) {
if (!error) return done();
function checkAppstoreServerCalled() {
if (scope1.isDone()) return done();
setTimeout(checkAppstoreServerCalled, 100);
});
}
}
checkAppstoreServerCalled();
});
checkAppstoreServerCalled();
});
});
});
});
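The hock-to-nock switch above also changes how the test waits for the asynchronous backup to reach the mocked appstore endpoint: instead of `apiHockInstance.done`, it polls `scope.isDone()` on a timer. That polling idiom, extracted as a standalone helper (a sketch; `pollUntil` is not a name from the repo):

```javascript
// Poll a condition until it holds, then invoke the callback.
// Mirrors the checkAppstoreServerCalled() loop in the test above.
function pollUntil(isDone, onDone, intervalMs) {
    (function check() {
        if (isDone()) return onDone();
        setTimeout(check, intervalMs);
    })();
}

// e.g. pollUntil(function () { return scope.isDone(); }, done, 100);
```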
+332 -290
@@ -16,14 +16,15 @@ var async = require('async'),
superagent = require('superagent'),
server = require('../../server.js'),
settings = require('../../settings.js'),
shell = require('../../shell.js');
shell = require('../../shell.js'),
tokendb = require('../../tokendb.js');
var SERVER_URL = 'http://localhost:' + config.get('port');
var USERNAME = 'superadmin', PASSWORD = 'Foobar?1337', EMAIL = 'silly@me.com';
var token = null; // authentication token
var USERNAME_1 = 'userTheFirst', EMAIL_1 = 'taO@zen.mac', userId_1, token_1;
var server;
function setup(done) {
nock.cleanAll();
config._reset();
@@ -32,7 +33,7 @@ function setup(done) {
server.start(function (error) {
if (error) return done(error);
settings.setBackupConfig({ provider: 'caas', token: 'BACKUP_TOKEN', bucket: 'Bucket', prefix: 'Prefix' }, done);
settings.setBackupConfig({ provider: 'filesystem', backupFolder: '/tmp', format: 'tgz' }, done);
});
}
@@ -65,89 +66,89 @@ describe('Cloudron', function () {
it('fails due to missing setupToken', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.send({ username: '', password: 'somepassword', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
.send({ username: '', password: 'somepassword', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails due to internal server error on appstore side', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(500, { message: 'this is wrong' });
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'strong#A3asdf', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(500);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'strong#A3asdf', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(500);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('fails due to empty username', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: '', password: 'ADSFsdf$%436', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: '', password: 'ADSFsdf$%436', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('fails due to empty password', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: '', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: '', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('fails due to empty email', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: '' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: '' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('fails due to wrong displayName type', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF?#asd546', email: 'admin@foo.bar', displayName: 1234 })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF?#asd546', email: 'admin@foo.bar', displayName: 1234 })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('fails due to invalid email', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'invalidemail' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'invalidemail' })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
expect(scope.isDone()).to.be.ok();
done();
});
});
it('succeeds', function (done) {
@@ -155,27 +156,27 @@ describe('Cloudron', function () {
var scope2 = nock(config.apiServerOrigin()).post('/api/v1/boxes/' + config.fqdn() + '/setup/done?setupToken=somesetuptoken').reply(201, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'admin@foo.bar', displayName: 'tester' })
.end(function (error, result) {
expect(result.statusCode).to.equal(201);
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'admin@foo.bar', displayName: 'tester' })
.end(function (error, result) {
expect(result.statusCode).to.equal(201);
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
done();
});
});
it('fails the second time', function (done) {
var scope = nock(config.apiServerOrigin()).get('/api/v1/boxes/' + config.fqdn() + '/setup/verify?setupToken=somesetuptoken').reply(200, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(409);
expect(scope.isDone()).to.be.ok();
done();
});
.query({ setupToken: 'somesetuptoken' })
.send({ username: 'someuser', password: 'ADSF#asd546', email: 'admin@foo.bar' })
.end(function (error, result) {
expect(result.statusCode).to.equal(409);
expect(scope.isDone()).to.be.ok();
done();
});
});
});
@@ -191,19 +192,35 @@ describe('Cloudron', function () {
config._reset();
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
// stash token for further use
token = result.body.token;
callback();
});
callback();
});
},
function (callback) {
superagent.post(SERVER_URL + '/api/v1/users')
.query({ access_token: token })
.send({ username: USERNAME_1, email: EMAIL_1, invite: false })
.end(function (error, result) {
expect(result).to.be.ok();
expect(result.statusCode).to.eql(201);
token_1 = tokendb.generateToken();
userId_1 = result.body.id;
// HACK to get a token for second user (passwords are generated and the user should have gotten a password setup link...)
tokendb.add(token_1, userId_1, 'test-client-id', Date.now() + 100000, '*', callback);
});
}
], done);
});
@@ -211,60 +228,85 @@ describe('Cloudron', function () {
it('cannot get without token', function (done) {
superagent.get(SERVER_URL + '/api/v1/cloudron/config')
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
});
it('succeeds without appstore', function (done) {
superagent.get(SERVER_URL + '/api/v1/cloudron/config')
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(200);
expect(result.body.apiServerOrigin).to.eql('http://localhost:6060');
expect(result.body.webServerOrigin).to.eql(null);
expect(result.body.fqdn).to.eql(config.fqdn());
expect(result.body.isCustomDomain).to.eql(true);
expect(result.body.progress).to.be.an('object');
expect(result.body.update).to.be.an('object');
expect(result.body.version).to.eql(config.version());
expect(result.body.developerMode).to.be.a('boolean');
expect(result.body.size).to.eql(null);
expect(result.body.region).to.eql(null);
expect(result.body.memory).to.eql(os.totalmem());
expect(result.body.cloudronName).to.be.a('string');
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(200);
expect(result.body.apiServerOrigin).to.eql('http://localhost:6060');
expect(result.body.webServerOrigin).to.eql(null);
expect(result.body.fqdn).to.eql(config.fqdn());
expect(result.body.isCustomDomain).to.eql(true);
expect(result.body.progress).to.be.an('object');
expect(result.body.update).to.be.an('object');
expect(result.body.version).to.eql(config.version());
expect(result.body.developerMode).to.be.a('boolean');
expect(result.body.size).to.eql(null);
expect(result.body.region).to.eql(null);
expect(result.body.memory).to.eql(os.totalmem());
expect(result.body.cloudronName).to.be.a('string');
done();
});
done();
});
});
it('succeeds', function (done) {
it('succeeds (admin)', function (done) {
var scope = nock(config.apiServerOrigin())
.get('/api/v1/boxes/localhost?token=' + config.token())
.reply(200, { box: { region: 'sfo', size: '1gb' }, user: { }});
.get('/api/v1/boxes/localhost?token=' + config.token())
.reply(200, { box: { region: 'sfo', size: '1gb' }, user: { }});
superagent.get(SERVER_URL + '/api/v1/cloudron/config')
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(200);
expect(result.body.apiServerOrigin).to.eql('http://localhost:6060');
expect(result.body.webServerOrigin).to.eql(null);
expect(result.body.fqdn).to.eql(config.fqdn());
expect(result.body.isCustomDomain).to.eql(true);
expect(result.body.progress).to.be.an('object');
expect(result.body.update).to.be.an('object');
expect(result.body.version).to.eql(config.version());
expect(result.body.developerMode).to.be.a('boolean');
expect(result.body.size).to.eql('1gb');
expect(result.body.region).to.eql('sfo');
expect(result.body.memory).to.eql(os.totalmem());
expect(result.body.cloudronName).to.be.a('string');
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(200);
expect(result.body.apiServerOrigin).to.eql('http://localhost:6060');
expect(result.body.webServerOrigin).to.eql(null);
expect(result.body.fqdn).to.eql(config.fqdn());
expect(result.body.isCustomDomain).to.eql(true);
expect(result.body.progress).to.be.an('object');
expect(result.body.update).to.be.an('object');
expect(result.body.version).to.eql(config.version());
expect(result.body.developerMode).to.be.a('boolean');
expect(result.body.size).to.eql('1gb');
expect(result.body.region).to.eql('sfo');
expect(result.body.memory).to.eql(os.totalmem());
expect(result.body.cloudronName).to.be.a('string');
expect(result.body.provider).to.be.a('string');
expect(scope.isDone()).to.be.ok();
expect(scope.isDone()).to.be.ok();
done();
});
done();
});
});
it('succeeds (non-admin)', function (done) {
superagent.get(SERVER_URL + '/api/v1/cloudron/config')
.query({ access_token: token_1 })
.end(function (error, result) {
expect(result.statusCode).to.equal(200);
expect(result.body.apiServerOrigin).to.eql('http://localhost:6060');
expect(result.body.webServerOrigin).to.eql(null);
expect(result.body.fqdn).to.eql(config.fqdn());
expect(result.body.isCustomDomain).to.eql(true);
expect(result.body.progress).to.be.an('object');
expect(result.body.version).to.eql(config.version());
expect(result.body.cloudronName).to.be.a('string');
expect(result.body.provider).to.be.a('string');
expect(result.body.update).to.be(undefined);
expect(result.body.size).to.be(undefined);
expect(result.body.region).to.be(undefined);
expect(result.body.memory).to.be(undefined);
done();
});
});
});
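The new admin/non-admin test pair above pins down which config fields are public: non-admins still see the origins, fqdn, version and name, while `update`, `size`, `region` and `memory` come back undefined. A hypothetical sketch of that filtering (field list taken from the test assertions; not the actual route code):

```javascript
// Hypothetical filter matching the two /cloudron/config tests above:
// admins get everything, non-admins only the public subset.
var PUBLIC_CONFIG_FIELDS = [ 'apiServerOrigin', 'webServerOrigin', 'fqdn', 'isCustomDomain',
    'progress', 'version', 'cloudronName', 'provider' ];

function configForUser(fullConfig, isAdmin) {
    if (isAdmin) return fullConfig;
    var result = {};
    PUBLIC_CONFIG_FIELDS.forEach(function (field) {
        if (field in fullConfig) result[field] = fullConfig[field];
    });
    return result;
}
```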
@@ -279,18 +321,18 @@ describe('Cloudron', function () {
var scope2 = nock(config.apiServerOrigin()).post('/api/v1/boxes/' + config.fqdn() + '/setup/done?setupToken=somesetuptoken').reply(201, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
// stash token for further use
token = result.body.token;
callback();
});
callback();
});
}
], done);
});
@@ -302,73 +344,73 @@ describe('Cloudron', function () {
it('fails without token', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', region: 'sfo'})
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
.send({ size: 'small', region: 'sfo'})
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
});
it('fails without password', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', region: 'sfo'})
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
.send({ size: 'small', region: 'sfo'})
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('succeeds without size', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ region: 'sfo', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
done();
});
.send({ region: 'sfo', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
done();
});
});
it('fails with wrong size type', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 4, region: 'sfo', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('succeeds without region', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
done();
});
});
it('fails with wrong region type', function (done) {
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', region: 4, password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails when in wrong state', function (done) {
var scope2 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/awscredentials?token=BACKUP_TOKEN')
.reply(201, { credentials: { AccessKeyId: 'accessKeyId', SecretAccessKey: 'secretAccessKey', SessionToken: 'sessionToken' } });
var scope3 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/backupDone?token=APPSTORE_TOKEN', function (body) {
return body.boxVersion && body.restoreKey && !body.appId && !body.appVersion && body.appBackupIds.length === 0;
})
.reply(200, { id: 'someid' });
var scope1 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/migrate?token=APPSTORE_TOKEN', function (body) {
@@ -378,22 +420,22 @@ describe('Cloudron', function () {
injectShellMock();
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', region: 'sfo', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
function checkAppstoreServerCalled() {
if (scope1.isDone() && scope2.isDone() && scope3.isDone()) {
restoreShellMock();
return done();
}
setTimeout(checkAppstoreServerCalled, 100);
}
checkAppstoreServerCalled();
});
});
it('succeeds', function (done) {
@@ -402,34 +444,34 @@ describe('Cloudron', function () {
}).reply(202, {});
var scope2 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/backupDone?token=APPSTORE_TOKEN', function (body) {
return body.boxVersion && body.restoreKey && !body.appId && !body.appVersion && body.appBackupIds.length === 0;
})
.reply(200, { id: 'someid' });
var scope3 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/awscredentials?token=BACKUP_TOKEN')
.reply(201, { credentials: { AccessKeyId: 'accessKeyId', SecretAccessKey: 'secretAccessKey', SessionToken: 'sessionToken' } });
injectShellMock();
superagent.post(SERVER_URL + '/api/v1/cloudron/migrate')
.send({ size: 'small', region: 'sfo', password: PASSWORD })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
function checkAppstoreServerCalled() {
if (scope1.isDone() && scope2.isDone() && scope3.isDone()) {
restoreShellMock();
return done();
}
setTimeout(checkAppstoreServerCalled, 100);
}
checkAppstoreServerCalled();
});
});
});
@@ -445,18 +487,18 @@ describe('Cloudron', function () {
config._reset();
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
callback();
});
},
], done);
});
@@ -465,111 +507,111 @@ describe('Cloudron', function () {
it('fails without token', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', subject: 'some subject', description: 'some description' })
.end(function (error, result) {
expect(result.statusCode).to.equal(401);
done();
});
});
it('fails without type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails with empty type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: '', subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails with unknown type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'foobar', subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('succeeds with ticket type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(201);
done();
});
});
it('succeeds with app type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'app_missing', subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(201);
done();
});
});
it('fails without description', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', subject: 'some subject' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails with empty subject', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', subject: '', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('fails with empty description', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', subject: 'some subject', description: '' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
it('succeeds with feedback type', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'feedback', subject: 'some subject', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(201);
done();
});
});
it('fails without subject', function (done) {
superagent.post(SERVER_URL + '/api/v1/feedback')
.send({ type: 'ticket', description: 'some description' })
.query({ access_token: token })
.end(function (error, result) {
expect(result.statusCode).to.equal(400);
done();
});
});
});
@@ -585,18 +627,18 @@ describe('Cloudron', function () {
config._reset();
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
callback();
});
},
], done);
});
@@ -607,9 +649,9 @@ describe('Cloudron', function () {
superagent.get(SERVER_URL + '/api/v1/cloudron/logstream')
.query({ access_token: token, fromLine: 0 })
.end(function (err, res) {
expect(res.statusCode).to.be(400);
done();
});
});
it('logStream - stream logs', function (done) {
@@ -630,8 +672,8 @@ describe('Cloudron', function () {
if (line.indexOf('id: ') === 0) {
expect(parseInt(line.substr('id: '.length), 10)).to.be.a('number');
} else if (line.indexOf('data: ') === 0) {
expect(JSON.parse(line.slice('data: '.length)).message).to.be.a('string');
dataMessageFound = true;
var message = JSON.parse(line.slice('data: '.length)).message;
if (Array.isArray(message) || typeof message === 'string') dataMessageFound = true;
}
});
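The log stream test above reads server-sent-events frames line by line ("id: " and "data: " prefixes). A minimal standalone sketch of that parsing; `parseSseLine` is a hypothetical helper for illustration, not part of the box codebase:

```javascript
// Parse a single SSE line the way the test above does: numeric ids and
// JSON-encoded data payloads; anything else (comments, blanks) yields null.
function parseSseLine(line) {
    if (line.indexOf('id: ') === 0) {
        return { id: parseInt(line.substr('id: '.length), 10) };
    }
    if (line.indexOf('data: ') === 0) {
        return { data: JSON.parse(line.slice('data: '.length)) };
    }
    return null;
}

console.log(parseSseLine('id: 42'));
console.log(parseSseLine('data: {"message":"box started"}'));
```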
+36 -26
@@ -12,18 +12,23 @@ var appdb = require('../../appdb.js'),
expect = require('expect.js'),
hock = require('hock'),
http = require('http'),
MockS3 = require('mock-aws-s3'),
nock = require('nock'),
os = require('os'),
path = require('path'),
rimraf = require('rimraf'),
s3 = require('../../storage/s3.js'),
safe = require('safetydance'),
server = require('../../server.js'),
settings = require('../../settings.js'),
settingsdb = require('../../settingsdb.js'),
superagent = require('superagent'),
url = require('url');
var SERVER_URL = 'http://localhost:' + config.get('port');
var USERNAME = 'superadmin', PASSWORD = 'Foobar?1337', EMAIL = 'silly@me.com';
var token = null;
var server;
function setup(done) {
config.setVersion('1.2.3');
@@ -37,19 +42,16 @@ function setup(done) {
var scope2 = nock(config.apiServerOrigin()).post('/api/v1/boxes/' + config.fqdn() + '/setup/done?setupToken=somesetuptoken').reply(201, {});
superagent.post(SERVER_URL + '/api/v1/cloudron/activate')
.query({ setupToken: 'somesetuptoken' })
.send({ username: USERNAME, password: PASSWORD, email: EMAIL })
.end(function (error, result) {
expect(result).to.be.ok();
expect(result.statusCode).to.eql(201);
expect(scope1.isDone()).to.be.ok();
expect(scope2.isDone()).to.be.ok();
// stash token for further use
token = result.body.token;
callback();
});
},
function addApp(callback) {
@@ -58,12 +60,20 @@ function setup(done) {
},
function createSettings(callback) {
settings.setBackupConfig({ provider: 'caas', token: 'BACKUP_TOKEN', bucket: 'Bucket', prefix: 'Prefix' }, callback);
MockS3.config.basePath = path.join(os.tmpdir(), 's3-sysadmin-test-buckets/');
s3._mockInject(MockS3);
safe.fs.mkdirSync('/tmp/box-sysadmin-test');
settingsdb.set(settings.BACKUP_CONFIG_KEY, JSON.stringify({ provider: 'caas', token: 'BACKUP_TOKEN', key: 'key', prefix: 'boxid', format: 'tgz'}), callback);
}
], done);
}
function cleanup(done) {
s3._mockRestore();
rimraf.sync(MockS3.config.basePath);
database._clear(function (error) {
expect(!error).to.be.ok();
@@ -93,19 +103,19 @@ describe('Internal API', function () {
describe('backup', function () {
it('succeeds', function (done) {
superagent.post(config.sysadminOrigin() + '/api/v1/backup')
.end(function (error, result) {
expect(result.statusCode).to.equal(202);
function checkAppstoreServerCalled() {
apiHockInstance.done(function (error) {
if (!error) return done();
setTimeout(checkAppstoreServerCalled, 100);
});
}
checkAppstoreServerCalled();
});
});
});
});
+2 -2
@@ -145,8 +145,8 @@ function doTask(appId, taskName, callback) {
apps.get(appId, function (error, app) {
if (error) return callback(error);
if (app.installationState !== appdb.ISTATE_INSTALLED || app.runState !== appdb.RSTATE_RUNNING) {
debug('task %s skipped. app %s is not installed/running', taskName, app.id);
if (app.installationState !== appdb.ISTATE_INSTALLED || app.runState !== appdb.RSTATE_RUNNING || app.health !== appdb.HEALTH_HEALTHY) {
debug('task %s skipped. app %s is not installed/running/healthy', taskName, app.id);
return callback();
}
@@ -12,6 +12,9 @@ if [[ $# == 1 && "$1" == "--check" ]]; then
exit 0
fi
cmd="$1"
appid="$2"
if [[ "${BOX_ENV}" == "cloudron" ]]; then
# when restoring the cloudron with many apps, the apptasks rush in to restart
# collectd which makes systemd/collectd very unhappy and puts the collectd in
@@ -19,10 +22,17 @@ if [[ "${BOX_ENV}" == "cloudron" ]]; then
for i in {1..10}; do
echo "Restarting collectd"
if systemctl restart collectd; then
exit 0
break
fi
echo "Failed to reload collectd. Maybe some other apptask is restarting it"
sleep $((RANDOM%30))
done
# delete old stats when uninstalling an app
if [[ "${cmd}" == "remove" ]]; then
echo "Removing collectd stats of ${appid}"
rm -rf ${HOME}/platformdata/graphite/whisper/collectd/localhost/*${appid}*
fi
fi
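The loop above retries `systemctl restart collectd` up to ten times with a random sleep so concurrent apptasks do not hammer systemd at once. The control flow, sketched in JS with the sleep omitted for brevity (`retry` is a made-up helper, not from the codebase):

```javascript
// Try fn up to maxAttempts times; a truthy return means success.
// Returns the attempt number that succeeded, or -1 if all failed.
// Real code would sleep a random interval between attempts, like
// `sleep $((RANDOM%30))` in the script above.
function retry(fn, maxAttempts) {
    for (var i = 1; i <= maxAttempts; i++) {
        if (fn()) return i;
    }
    return -1;
}

var calls = 0;
console.log(retry(function () { return ++calls === 3; }, 10)); // 3
```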
+40
@@ -0,0 +1,40 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $# -eq 0 ]]; then
echo "No arguments supplied"
exit 1
fi
if [[ "$1" == "--check" ]]; then
echo "OK"
exit 0
fi
cmd="$1"
appid="$2"
if [[ "${cmd}" == "add" ]]; then
# TODO prevent this script from moving the source file into an arbitrary dir via a relative ../ path
if [[ "${BOX_ENV}" == "cloudron" ]]; then
readonly destination_file_path="${HOME}/platformdata/logrotate.d/${appid}"
else
readonly destination_file_path="${HOME}/.cloudron_test/platformdata/logrotate.d/${appid}"
fi
mv "${3}" "${destination_file_path}"
chown root:root "${destination_file_path}"
elif [[ "${cmd}" == "remove" ]]; then
if [[ "${BOX_ENV}" == "cloudron" ]]; then
rm -rf "${HOME}/platformdata/logrotate.d/${appid}"
else
rm -rf "${HOME}/.cloudron_test/platformdata/logrotate.d/${appid}"
fi
fi
-28
@@ -1,28 +0,0 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $# -eq 0 ]]; then
echo "No arguments supplied"
exit 1
fi
if [[ "$1" == "--check" ]]; then
echo "OK"
exit 0
fi
# TODO prevent this script from moving the file from $1 into a random dir with using a relative ../ path
if [[ "${BOX_ENV}" == "cloudron" ]]; then
readonly destination_file_path="${HOME}/platformdata/logrotate.d/$2"
else
readonly destination_file_path="${HOME}/.cloudron_test/platformdata/logrotate.d/$2"
fi
mv "${1}" "${destination_file_path}"
chown root:root "${destination_file_path}"
-23
@@ -1,23 +0,0 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $# -eq 0 ]]; then
echo "No arguments supplied"
exit 1
fi
if [[ "$1" == "--check" ]]; then
echo "OK"
exit 0
fi
echo "Running node with memory constraints"
# note BOX_ENV and NODE_ENV are derived from parent process
exec env "DEBUG=box*,connect-lastmile" /usr/bin/node --max_old_space_size=300 "$@"
+14 -3
@@ -17,10 +17,21 @@ if [[ "$1" == "--check" ]]; then
exit 0
fi
# this script is called from redis addon as well!
appid="$1"
rmdir="$2"
if [[ "${BOX_ENV}" == "cloudron" ]]; then
readonly app_data_dir="${HOME}/appsdata/$1"
rm -rf "${app_data_dir}"
readonly app_data_dir="${HOME}/appsdata/${appid}"
else
readonly app_data_dir="${HOME}/.cloudron_test/appsdata/$1"
readonly app_data_dir="${HOME}/.cloudron_test/appsdata/${appid}"
fi
# the approach below ensures symlinked contents are also deleted
find -H "${app_data_dir}" -mindepth 1 -delete || true # -H means resolve symlink in args
if [[ "${rmdir}" == "true" ]]; then
rm -rf "${app_data_dir}"
fi
-24
@@ -1,24 +0,0 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $# -eq 0 ]]; then
echo "No arguments supplied"
exit 1
fi
if [[ "$1" == "--check" ]]; then
echo "OK"
exit 0
fi
if [[ "${BOX_ENV}" == "cloudron" ]]; then
rm -rf "${HOME}/platformdata/logrotate.d/$1"
else
rm -rf "${HOME}/.cloudron_test/platformdata/logrotate.d/$1"
fi
+23
@@ -0,0 +1,23 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
var tar = require('tar-fs');
var sourceDir = process.argv[2];
if (sourceDir === '--check') return console.log('OK');
process.stderr.write('Packing ' + sourceDir + '\n');
tar.pack('/', {
dereference: false, // pack the symlink and not what it points to
entries: [ sourceDir ],
map: function(header) {
header.name = header.name.replace(new RegExp('^' + sourceDir + '(/?)'), '.$1'); // make paths relative
return header;
},
strict: false // do not error for unknown types (skip fifo, char/block devices)
}).pipe(process.stdout);
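The `map()` callback above rewrites each tar header name so entries are relative to the packed directory. The same rewrite in isolation; the `sourceDir` value is a made-up example:

```javascript
// Strip the source dir prefix from an entry name and anchor it at '.',
// mirroring the map() callback in the packing script above.
var sourceDir = 'home/yellowtent/appsdata/app1'; // example value only

function makeRelative(name) {
    return name.replace(new RegExp('^' + sourceDir + '(/?)'), '.$1');
}

console.log(makeRelative(sourceDir));                       // .
console.log(makeRelative(sourceDir + '/data/config.json')); // ./data/config.json
```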
+5 -4
@@ -112,7 +112,7 @@ function initializeExpressSync() {
// cloudron routes
router.get ('/api/v1/cloudron/config', cloudronScope, routes.cloudron.getConfig);
router.post('/api/v1/cloudron/update', cloudronScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.cloudron.update);
router.post('/api/v1/cloudron/update', cloudronScope, routes.user.requireAdmin, routes.cloudron.update);
router.post('/api/v1/cloudron/check_for_updates', cloudronScope, routes.user.requireAdmin, routes.cloudron.checkForUpdates);
router.post('/api/v1/cloudron/reboot', cloudronScope, routes.user.requireAdmin, routes.cloudron.reboot);
router.post('/api/v1/cloudron/migrate', cloudronScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.cloudron.migrate);
@@ -124,7 +124,8 @@ function initializeExpressSync() {
router.put ('/api/v1/cloudron/ssh/authorized_keys', cloudronScope, routes.user.requireAdmin, routes.ssh.addAuthorizedKey);
router.get ('/api/v1/cloudron/ssh/authorized_keys/:identifier', cloudronScope, routes.user.requireAdmin, routes.ssh.getAuthorizedKey);
router.del ('/api/v1/cloudron/ssh/authorized_keys/:identifier', cloudronScope, routes.user.requireAdmin, routes.ssh.delAuthorizedKey);
router.get ('/api/v1/cloudron/eventlog', settingsScope, routes.user.requireAdmin, routes.eventlog.get);
router.get ('/api/v1/cloudron/eventlog', cloudronScope, routes.user.requireAdmin, routes.eventlog.get);
router.post('/api/v1/cloudron/send_test_mail', cloudronScope, routes.user.requireAdmin, routes.cloudron.sendTestMail);
// profile api, working off the user behind the provided token
router.get ('/api/v1/profile', profileScope, routes.profile.get);
@@ -182,8 +183,8 @@ function initializeExpressSync() {
router.post('/api/v1/apps/install', appsScope, routes.user.requireAdmin, routes.apps.installApp);
router.post('/api/v1/apps/:id/uninstall', appsScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.apps.uninstallApp);
router.post('/api/v1/apps/:id/configure', appsScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.apps.configureApp);
router.post('/api/v1/apps/:id/update', appsScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.apps.updateApp);
router.post('/api/v1/apps/:id/configure', appsScope, routes.user.requireAdmin, routes.apps.configureApp);
router.post('/api/v1/apps/:id/update', appsScope, routes.user.requireAdmin, routes.apps.updateApp);
router.post('/api/v1/apps/:id/restore', appsScope, routes.user.requireAdmin, routes.user.verifyPassword, routes.apps.restoreApp);
router.post('/api/v1/apps/:id/backup', appsScope, routes.user.requireAdmin, routes.apps.backupApp);
router.get ('/api/v1/apps/:id/backups', appsScope, routes.user.requireAdmin, routes.apps.listBackups);
+5 -1
@@ -410,11 +410,15 @@ function setBackupConfig(backupConfig, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof callback, 'function');
if (backupConfig.key && backupConfig.format !== 'tgz') return callback(new SettingsError(SettingsError.BAD_FIELD, 'format does not support encryption'));
backups.testConfig(backupConfig, function (error) {
if (error && error.reason === BackupsError.BAD_FIELD) return callback(new SettingsError(SettingsError.BAD_FIELD, error.message));
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new SettingsError(SettingsError.EXTERNAL_ERROR, error.message));
if (error) return callback(new SettingsError(SettingsError.INTERNAL_ERROR, error));
backups.cleanupCacheFilesSync();
settingsdb.set(exports.BACKUP_CONFIG_KEY, JSON.stringify(backupConfig), function (error) {
if (error) return callback(new SettingsError(SettingsError.INTERNAL_ERROR, error));
@@ -664,7 +668,7 @@ function getAll(callback) {
// convert JSON objects
[exports.DNS_CONFIG_KEY, exports.TLS_CONFIG_KEY, exports.BACKUP_CONFIG_KEY, exports.MAIL_CONFIG_KEY,
exports.UPDATE_CONFIG_KEY, exports.APPSTORE_CONFIG_KEY, exports.MAIL_RELAY_KEY, exports.CATCH_ALL_ADDRESS_KEY].forEach(function (key) {
result[key] = typeof result[key] === 'object' ? result[key] : safe.JSON.parse(result[key]);
});
+14 -13
@@ -10,6 +10,7 @@ exports = module.exports = {
var assert = require('assert'),
child_process = require('child_process'),
debug = require('debug')('box:shell'),
fs = require('fs'),
once = require('once'),
util = require('util');
@@ -33,26 +34,26 @@ function exec(tag, file, args, options, callback) {
assert.strictEqual(typeof tag, 'string');
assert.strictEqual(typeof file, 'string');
assert(util.isArray(args));
if (typeof options === 'function') {
callback = options;
options = { };
}
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
callback = once(callback); // exit may or may not be called after an 'error'
debug(tag + ' execFile: %s', file); // do not dump args as it might have sensitive info
var cp = child_process.spawn(file, args, options);
if (options.logStream) {
cp.stdout.pipe(options.logStream);
cp.stderr.pipe(options.logStream);
} else {
cp.stdout.on('data', function (data) {
debug(tag + ' (stdout): %s', data.toString('utf8'));
});
cp.stderr.on('data', function (data) {
debug(tag + ' (stderr): %s', data.toString('utf8'));
});
}
cp.on('exit', function (code, signal) {
if (code || signal) debug(tag + ' code: %s, signal: %s', code, signal);
@@ -83,7 +84,7 @@ function sudo(tag, args, options, callback) {
assert.strictEqual(typeof options, 'object');
// -S makes sudo read stdin for password. -E preserves arguments
// -S makes sudo read stdin for password. -E preserves environment
var cp = exec(tag, SUDO, [ options.env ? '-SE' : '-S' ].concat(args), options, callback);
cp.stdin.end();
return cp;
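The corrected comment above describes the flag choice in `sudo()`: `-S` always (read the password from stdin), plus `-E` only when the caller passes an environment through. That selection, pulled out as a tiny sketch:

```javascript
// -S: sudo reads the password from stdin; -E: preserve the caller's
// environment. Mirrors the flag selection in sudo() above.
function sudoFlags(options) {
    return [ options.env ? '-SE' : '-S' ];
}

console.log(sudoFlags({}));                      // [ '-S' ]
console.log(sudoFlags({ env: { DEBUG: '1' } })); // [ '-SE' ]
```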
-229
@@ -1,229 +0,0 @@
'use strict';
exports = module.exports = {
backup: backup,
restore: restore,
copyBackup: copyBackup,
removeBackups: removeBackups,
backupDone: backupDone,
testConfig: testConfig,
};
var assert = require('assert'),
AWS = require('aws-sdk'),
BackupsError = require('../backups.js').BackupsError,
config = require('../config.js'),
debug = require('debug')('box:storage/caas'),
once = require('once'),
PassThrough = require('stream').PassThrough,
path = require('path'),
S3BlockReadStream = require('s3-block-read-stream'),
superagent = require('superagent'),
targz = require('./targz.js');
var FILE_TYPE = '.tar.gz.enc';
// internal only
function getBackupCredentials(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
assert(apiConfig.token);
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/awscredentials';
superagent.post(url).query({ token: apiConfig.token }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 201) return callback(new Error(result.text));
if (!result.body || !result.body.credentials) return callback(new Error('Unexpected response: ' + JSON.stringify(result.headers)));
var credentials = {
signatureVersion: 'v4',
accessKeyId: result.body.credentials.AccessKeyId,
secretAccessKey: result.body.credentials.SecretAccessKey,
sessionToken: result.body.credentials.SessionToken,
region: apiConfig.region || 'us-east-1'
};
if (apiConfig.endpoint) credentials.endpoint = new AWS.Endpoint(apiConfig.endpoint);
callback(null, credentials);
});
}
function getBackupFilePath(apiConfig, backupId) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
const FILE_TYPE = apiConfig.key ? '.tar.gz.enc' : '.tar.gz';
return path.join(apiConfig.prefix, backupId.endsWith(FILE_TYPE) ? backupId : backupId + FILE_TYPE);
}
// storage api
function backup(apiConfig, backupId, sourceDirectories, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(sourceDirectories));
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
var backupFilePath = getBackupFilePath(apiConfig, backupId);
debug('[%s] backup: %j -> %s', backupId, sourceDirectories, backupFilePath);
getBackupCredentials(apiConfig, function (error, credentials) {
if (error) return callback(error);
var passThrough = new PassThrough();
var params = {
Bucket: apiConfig.bucket,
Key: backupFilePath,
Body: passThrough
};
var s3 = new AWS.S3(credentials);
// s3.upload automatically does a multi-part upload. we set queueSize to 1 to reduce memory usage
s3.upload(params, { partSize: 10 * 1024 * 1024, queueSize: 1 }, function (error) {
if (error) {
debug('[%s] backup: s3 upload error.', backupId, error);
return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
}
callback(null);
});
targz.create(sourceDirectories, apiConfig.key || null, passThrough, callback);
});
}
function restore(apiConfig, backupId, destination, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
var backupFilePath = getBackupFilePath(apiConfig, backupId);
debug('[%s] restore: %s -> %s', backupId, backupFilePath, destination);
getBackupCredentials(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: apiConfig.bucket,
Key: backupFilePath
};
var s3 = new AWS.S3(credentials);
var multipartDownload = new S3BlockReadStream(s3, params, { blockSize: 64 * 1024 * 1024, logCallback: debug });
multipartDownload.on('error', function (error) {
// TODO ENOENT for the mock, fix upstream!
if (error.code === 'NoSuchKey' || error.code === 'ENOENT') return callback(new BackupsError(BackupsError.NOT_FOUND));
debug('[%s] restore: s3 stream error.', backupId, error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
targz.extract(multipartDownload, destination, apiConfig.key || null, callback);
});
}
function copyBackup(apiConfig, oldBackupId, newBackupId, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldBackupId, 'string');
assert.strictEqual(typeof newBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
getBackupCredentials(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: apiConfig.bucket,
Key: getBackupFilePath(apiConfig, newBackupId),
CopySource: path.join(apiConfig.bucket, getBackupFilePath(apiConfig, oldBackupId))
};
var s3 = new AWS.S3(credentials);
s3.copyObject(params, function (error) {
if (error && error.code === 'NoSuchKey') return callback(new BackupsError(BackupsError.NOT_FOUND));
if (error) {
debug('copyBackup: s3 copy error.', error);
return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
}
callback(null);
});
});
}
function removeBackups(apiConfig, backupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert(Array.isArray(backupIds));
assert.strictEqual(typeof callback, 'function');
getBackupCredentials(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: apiConfig.bucket,
Delete: {
Objects: [ ] // { Key }
}
};
backupIds.forEach(function (backupId) {
params.Delete.Objects.push({ Key: getBackupFilePath(apiConfig, backupId) });
});
var s3 = new AWS.S3(credentials);
s3.deleteObjects(params, function (error, data) {
if (error) debug('removeBackups: Unable to remove %j. Not fatal.', params.Delete.Objects, error);
else debug('removeBackups: Deleted: %j Errors: %j', data.Deleted, data.Errors);
callback(null);
});
});
}
function testConfig(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
if (config.provider() !== 'caas') return callback(new BackupsError(BackupsError.BAD_FIELD, 'instance provider must be caas'));
callback();
}
function backupDone(backupId, appBackupIds, callback) {
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
// Caas expects filenames instead of backupIds; this means no prefix but a file type extension
var boxBackupFilename = backupId + FILE_TYPE;
var appBackupFilenames = appBackupIds.map(function (id) { return id + FILE_TYPE; });
debug('[%s] backupDone: %s apps %j', backupId, boxBackupFilename, appBackupFilenames);
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/backupDone';
var data = {
boxVersion: config.version(),
restoreKey: boxBackupFilename,
appId: null, // now unused
appVersion: null, // now unused
appBackupIds: appBackupFilenames
};
superagent.post(url).send(data).query({ token: config.token() }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 200) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, result.text));
return callback(null);
});
}
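The id-to-filename mapping above is a plain suffix append. A minimal sketch, assuming the unencrypted `.tar.gz` suffix (`toBackupFilename` is a hypothetical helper name, not from the source):

```javascript
'use strict';

// Hypothetical sketch of the backupId -> filename mapping backupDone performs
// for caas: no path prefix, just the file type extension appended.
var FILE_TYPE = '.tar.gz';

function toBackupFilename(backupId) {
    return backupId + FILE_TYPE;
}

var appBackupFilenames = [ 'app_one', 'app_two' ].map(toBackupFilename);
```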
+94 -94
@@ -1,10 +1,14 @@
'use strict';
exports = module.exports = {
backup: backup,
restore: restore,
copyBackup: copyBackup,
removeBackups: removeBackups,
upload: upload,
download: download,
downloadDir: downloadDir,
copy: copy,
remove: remove,
removeDir: removeDir,
backupDone: backupDone,
@@ -12,156 +16,151 @@ exports = module.exports = {
};
var assert = require('assert'),
async = require('async'),
BackupsError = require('../backups.js').BackupsError,
config = require('../config.js'),
debug = require('debug')('box:storage/filesystem'),
EventEmitter = require('events'),
fs = require('fs'),
mkdirp = require('mkdirp'),
once = require('once'),
path = require('path'),
safe = require('safetydance'),
targz = require('./targz.js');
var FALLBACK_BACKUP_FOLDER = '/var/backups';
var BACKUP_USER = config.TEST ? process.env.USER : 'yellowtent';
// internal only
function getBackupFilePath(apiConfig, backupId) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
const FILE_TYPE = apiConfig.key ? '.tar.gz.enc' : '.tar.gz';
return path.join(apiConfig.backupFolder || FALLBACK_BACKUP_FOLDER, backupId.endsWith(FILE_TYPE) ? backupId : backupId+FILE_TYPE);
}
shell = require('../shell.js');
// storage api
function backup(apiConfig, backupId, sourceDirectories, callback) {
function upload(apiConfig, backupFilePath, sourceStream, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(sourceDirectories));
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof sourceStream, 'object');
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
var backupFilePath = getBackupFilePath(apiConfig, backupId);
debug('[%s] backup: %j -> %s', backupId, sourceDirectories, backupFilePath);
mkdirp(path.dirname(backupFilePath), function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
safe.fs.unlinkSync(backupFilePath); // remove any hardlink
var fileStream = fs.createWriteStream(backupFilePath);
// this pattern is required to ensure that the file got created before 'finish'
fileStream.on('open', function () {
sourceStream.pipe(fileStream);
});
fileStream.on('error', function (error) {
debug('[%s] backup: out stream error.', backupId, error);
debug('[%s] upload: out stream error.', backupFilePath, error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
fileStream.on('close', function () {
debug('[%s] backup: changing ownership.', backupId);
fileStream.on('finish', function () {
// in test, upload() may or may not be called via sudo script
const BACKUP_UID = parseInt(process.env.SUDO_UID, 10) || process.getuid();
if (!safe.child_process.execSync('chown -R ' + BACKUP_USER + ':' + BACKUP_USER + ' ' + path.dirname(backupFilePath))) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.chownSync(backupFilePath, BACKUP_UID, BACKUP_UID)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, 'Unable to chown:' + safe.error.message));
if (!safe.fs.chownSync(path.dirname(backupFilePath), BACKUP_UID, BACKUP_UID)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, 'Unable to chown:' + safe.error.message));
debug('[%s] backup: done.', backupId);
debug('upload %s: done.', backupFilePath);
callback(null);
});
targz.create(sourceDirectories, apiConfig.key || null, fileStream, callback);
});
}
function restore(apiConfig, backupId, destination, callback) {
function download(apiConfig, sourceFilePath, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(typeof sourceFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
debug('download: %s', sourceFilePath);
var sourceFilePath = getBackupFilePath(apiConfig, backupId);
debug('[%s] restore: %s -> %s', backupId, sourceFilePath, destination);
if (!fs.existsSync(sourceFilePath)) return callback(new BackupsError(BackupsError.NOT_FOUND, 'backup file does not exist'));
if (!safe.fs.existsSync(sourceFilePath)) return callback(new BackupsError(BackupsError.NOT_FOUND, 'File not found'));
var fileStream = fs.createReadStream(sourceFilePath);
fileStream.on('error', function (error) {
debug('restore: file stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
targz.extract(fileStream, destination, apiConfig.key || null, callback);
callback(null, fileStream);
}
function copyBackup(apiConfig, oldBackupId, newBackupId, callback) {
function downloadDir(apiConfig, backupFilePath, destDir) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldBackupId, 'string');
assert.strictEqual(typeof newBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof destDir, 'string');
callback = once(callback);
var events = new EventEmitter();
var oldFilePath = getBackupFilePath(apiConfig, oldBackupId);
var newFilePath = getBackupFilePath(apiConfig, newBackupId);
events.emit('progress', `downloadDir: ${backupFilePath} to ${destDir}`);
debug('copyBackup: %s -> %s', oldFilePath, newFilePath);
shell.exec('downloadDir', '/bin/cp', [ '-r', backupFilePath + '/.', destDir ], { }, function (error) {
if (error) return events.emit('done', new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
events.emit('done', null);
});
return events;
}
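The `cp` invocation in `downloadDir` appends `/.` to the source path, which makes `cp -r` copy the directory's contents into `destDir` rather than nesting the source directory inside it. A minimal sketch of the argument construction (`cpArgs` is a hypothetical helper, not from the source):

```javascript
'use strict';

// Sketch of the cp argv built by downloadDir: '/.' after the source directory
// tells cp -r to copy the contents, not the directory itself.
function cpArgs(srcDir, destDir) {
    return [ '-r', srcDir + '/.', destDir ];
}
```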
function copy(apiConfig, oldFilePath, newFilePath) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldFilePath, 'string');
assert.strictEqual(typeof newFilePath, 'string');
debug('copy: %s -> %s', oldFilePath, newFilePath);
var events = new EventEmitter();
mkdirp(path.dirname(newFilePath), function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
if (error) return events.emit('done', new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
var readStream = fs.createReadStream(oldFilePath);
var writeStream = fs.createWriteStream(newFilePath);
// this will hardlink backups saving space
var cpOptions = apiConfig.noHardlinks ? '-a' : '-al';
shell.exec('copy', '/bin/cp', [ cpOptions, oldFilePath, newFilePath ], { }, function (error) {
if (error) return events.emit('done', new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
readStream.on('error', function (error) {
debug('copyBackup: read stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
events.emit('done', null);
});
writeStream.on('error', function (error) {
debug('copyBackup: write stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
writeStream.on('close', function () {
if (!safe.child_process.execSync('chown -R ' + BACKUP_USER + ':' + BACKUP_USER + ' ' + path.dirname(newFilePath))) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, safe.error.message));
callback();
});
readStream.pipe(writeStream);
});
return events;
}
function removeBackups(apiConfig, backupIds, callback) {
function remove(apiConfig, filename, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert(Array.isArray(backupIds));
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
async.eachSeries(backupIds, function (id, iteratorCallback) {
var filePath = getBackupFilePath(apiConfig, id);
var stat = safe.fs.statSync(filename);
if (!stat) return callback();
if (!safe.fs.unlinkSync(filePath)) {
debug('removeBackups: Unable to remove %s : %s', filePath, safe.error.message);
}
if (stat.isFile()) {
if (!safe.fs.unlinkSync(filename)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, safe.error.message));
} else if (stat.isDirectory()) {
if (!safe.fs.rmdirSync(filename)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, safe.error.message));
}
safe.fs.rmdirSync(path.dirname(filePath)); // try to cleanup empty directories
callback(null);
}
iteratorCallback();
}, callback);
function removeDir(apiConfig, pathPrefix) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof pathPrefix, 'string');
var events = new EventEmitter();
events.emit('progress', `removeDir: ${pathPrefix}`);
shell.exec('removeDir', '/bin/rm', [ '-rf', pathPrefix ], { }, function (error) {
if (error) return events.emit('done', new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
events.emit('done', null);
});
return events;
}
function testConfig(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
if ('backupFolder' in apiConfig && typeof apiConfig.backupFolder !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'backupFolder must be string'));
if (typeof apiConfig.backupFolder !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'backupFolder must be string'));
// default value will be used
if (!apiConfig.backupFolder) return callback();
if (!apiConfig.backupFolder) return callback(new BackupsError(BackupsError.BAD_FIELD, 'backupFolder is required'));
if ('noHardlinks' in apiConfig && typeof apiConfig.noHardlinks !== 'boolean') return callback(new BackupsError(BackupsError.BAD_FIELD, 'noHardlinks must be boolean'));
fs.stat(apiConfig.backupFolder, function (error, result) {
if (error) {
@@ -175,7 +174,8 @@ function testConfig(apiConfig, callback) {
});
}
function backupDone(backupId, appBackupIds, callback) {
function backupDone(apiConfig, backupId, appBackupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
+50 -28
@@ -7,22 +7,26 @@
// -------------------------------------------
exports = module.exports = {
backup: backup,
restore: restore,
copyBackup: copyBackup,
removeBackups: removeBackups,
upload: upload,
download: download,
downloadDir: downloadDir,
copy: copy,
remove: remove,
removeDir: removeDir,
backupDone: backupDone,
testConfig: testConfig
};
var assert = require('assert');
var assert = require('assert'),
EventEmitter = require('events');
function backup(apiConfig, backupId, sourceDirectories, callback) {
function upload(apiConfig, backupFilePath, sourceStream, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(sourceDirectories));
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof sourceStream, 'object');
assert.strictEqual(typeof callback, 'function');
// Result: none
@@ -30,10 +34,38 @@ function backup(apiConfig, backupId, sourceDirectories, callback) {
callback(new Error('not implemented'));
}
function restore(apiConfig, backupId, destination, callback) {
function download(apiConfig, backupFilePath, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
// Result: download stream
callback(new Error('not implemented'));
}
function downloadDir(apiConfig, backupFilePath, destDir) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof destDir, 'string');
var events = new EventEmitter();
process.nextTick(function () { events.emit('done', null); });
return events;
}
function copy(apiConfig, oldFilePath, newFilePath) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldFilePath, 'string');
assert.strictEqual(typeof newFilePath, 'string');
var events = new EventEmitter();
process.nextTick(function () { events.emit('done', null); });
return events;
}
function remove(apiConfig, filename, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
// Result: none
@@ -41,25 +73,14 @@ function restore(apiConfig, backupId, destination, callback) {
callback(new Error('not implemented'));
}
function copyBackup(apiConfig, oldBackupId, newBackupId, callback) {
function removeDir(apiConfig, pathPrefix) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldBackupId, 'string');
assert.strictEqual(typeof newBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
assert.strictEqual(typeof pathPrefix, 'string');
// Result: none
callback(new Error('not implemented'));
}
function removeBackups(apiConfig, backupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert(Array.isArray(backupIds));
assert.strictEqual(typeof callback, 'function');
// Result: none
callback(new Error('not implemented'));
var events = new EventEmitter();
process.nextTick(function () { events.emit('done', new Error('not implemented')); });
return events;
}
function testConfig(apiConfig, callback) {
@@ -71,7 +92,8 @@ function testConfig(apiConfig, callback) {
callback(new Error('not implemented'));
}
function backupDone(backupId, appBackupIds, callback) {
function backupDone(apiConfig, backupId, appBackupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
+58 -28
@@ -1,10 +1,13 @@
'use strict';
exports = module.exports = {
backup: backup,
restore: restore,
copyBackup: copyBackup,
removeBackups: removeBackups,
upload: upload,
download: download,
downloadDir: downloadDir,
copy: copy,
remove: remove,
removeDir: removeDir,
backupDone: backupDone,
@@ -12,62 +15,89 @@ exports = module.exports = {
};
var assert = require('assert'),
debug = require('debug')('box:storage/noop');
debug = require('debug')('box:storage/noop'),
EventEmitter = require('events');
function backup(apiConfig, backupId, sourceDirectories, callback) {
function upload(apiConfig, backupFilePath, sourceStream, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(sourceDirectories));
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof sourceStream, 'object');
assert.strictEqual(typeof callback, 'function');
debug('backup: %s %j', backupId, sourceDirectories);
debug('upload: %s', backupFilePath);
callback();
callback(null);
}
function restore(apiConfig, backupId, destination, callback) {
function download(apiConfig, backupFilePath, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
debug('restore: %s %s', backupId, destination);
debug('download: %s', backupFilePath);
callback(new Error('Cannot restore from noop backend'));
callback(new Error('Cannot download from noop backend'));
}
function copyBackup(apiConfig, oldBackupId, newBackupId, callback) {
function downloadDir(apiConfig, backupFilePath, destDir) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldBackupId, 'string');
assert.strictEqual(typeof newBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof destDir, 'string');
debug('copyBackup: %s -> %s', oldBackupId, newBackupId);
var events = new EventEmitter();
process.nextTick(function () {
debug('downloadDir: %s -> %s', backupFilePath, destDir);
callback();
events.emit('done', new Error('Cannot download from noop backend'));
});
return events;
}
function removeBackups(apiConfig, backupIds, callback) {
function copy(apiConfig, oldFilePath, newFilePath) {
assert.strictEqual(typeof apiConfig, 'object');
assert(Array.isArray(backupIds));
assert.strictEqual(typeof oldFilePath, 'string');
assert.strictEqual(typeof newFilePath, 'string');
debug('copy: %s -> %s', oldFilePath, newFilePath);
var events = new EventEmitter();
process.nextTick(function () { events.emit('done', null); });
return events;
}
function remove(apiConfig, filename, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
debug('removeBackups: %j', backupIds);
debug('remove: %s', filename);
callback();
callback(null);
}
function removeDir(apiConfig, pathPrefix) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof pathPrefix, 'string');
debug('removeDir: %s', pathPrefix);
var events = new EventEmitter();
process.nextTick(function () { events.emit('done', null); });
return events;
}
function testConfig(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
callback();
callback(null);
}
function backupDone(backupId, appBackupIds, callback) {
function backupDone(apiConfig, backupId, appBackupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
callback();
callback(null);
}
+384 -98
@@ -1,10 +1,13 @@
'use strict';
exports = module.exports = {
backup: backup,
restore: restore,
copyBackup: copyBackup,
removeBackups: removeBackups,
upload: upload,
download: download,
downloadDir: downloadDir,
copy: copy,
remove: remove,
removeDir: removeDir,
backupDone: backupDone,
@@ -16,14 +19,21 @@ exports = module.exports = {
};
var assert = require('assert'),
async = require('async'),
AWS = require('aws-sdk'),
BackupsError = require('../backups.js').BackupsError,
chunk = require('lodash.chunk'),
config = require('../config.js'),
debug = require('debug')('box:storage/s3'),
once = require('once'),
EventEmitter = require('events'),
fs = require('fs'),
https = require('https'),
mkdirp = require('mkdirp'),
PassThrough = require('stream').PassThrough,
path = require('path'),
S3BlockReadStream = require('s3-block-read-stream'),
targz = require('./targz.js');
safe = require('safetydance'),
superagent = require('superagent');
// test only
var originalAWS;
@@ -36,87 +46,123 @@ function mockRestore() {
AWS = originalAWS;
}
// internal only
function getBackupCredentials(apiConfig, callback) {
var gCachedCaasCredentials = { issueDate: null, credentials: null };
function getCaasConfig(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
assert(apiConfig.token);
if ((new Date() - gCachedCaasCredentials.issueDate) <= (1.75 * 60 * 60 * 1000)) { // caas gives tokens with 2 hour limit
return callback(null, gCachedCaasCredentials.credentials);
}
debug('getCaasCredentials: getting new credentials');
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/awscredentials';
superagent.post(url).query({ token: apiConfig.token }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 201) return callback(new Error(result.text));
if (!result.body || !result.body.credentials) return callback(new Error('Unexpected response: ' + JSON.stringify(result.headers)));
var credentials = {
signatureVersion: 'v4',
accessKeyId: result.body.credentials.AccessKeyId,
secretAccessKey: result.body.credentials.SecretAccessKey,
sessionToken: result.body.credentials.SessionToken,
region: apiConfig.region || 'us-east-1',
maxRetries: 5,
retryDelayOptions: {
base: 20000 // 2^5 * 20 seconds
}
};
if (apiConfig.endpoint) credentials.endpoint = new AWS.Endpoint(apiConfig.endpoint);
gCachedCaasCredentials = {
issueDate: new Date(),
credentials: credentials
};
callback(null, credentials);
});
}
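The cache check in `getCaasConfig` reuses credentials for 1.75 hours, comfortably inside the 2 hour token lifetime the comment mentions. A minimal sketch of that freshness test with explicit timestamps (`isFresh` is a hypothetical helper name):

```javascript
'use strict';

// Sketch of the credential cache freshness check: reuse for at most 1.75 hours,
// leaving a safety margin under the 2 hour caas token limit.
const TOKEN_REUSE_MS = 1.75 * 60 * 60 * 1000; // 6,300,000 ms

function isFresh(issueDateMs, nowMs) {
    return (nowMs - issueDateMs) <= TOKEN_REUSE_MS;
}
```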
function getS3Config(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
assert(apiConfig.accessKeyId && apiConfig.secretAccessKey);
if (apiConfig.provider === 'caas') return getCaasConfig(apiConfig, callback);
var credentials = {
signatureVersion: apiConfig.signatureVersion || 'v4',
s3ForcePathStyle: true,
s3ForcePathStyle: true, // Force use path-style url (http://endpoint/bucket/path) instead of host-style (http://bucket.endpoint/path)
accessKeyId: apiConfig.accessKeyId,
secretAccessKey: apiConfig.secretAccessKey,
region: apiConfig.region || 'us-east-1'
region: apiConfig.region || 'us-east-1',
maxRetries: 5,
retryDelayOptions: {
base: 20000 // 2^5 * 20 seconds
}
};
if (apiConfig.endpoint) credentials.endpoint = apiConfig.endpoint;
if (apiConfig.acceptSelfSignedCerts === true) {
credentials.httpOptions = {
agent: new https.Agent({ rejectUnauthorized: false })
};
}
callback(null, credentials);
}
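The `retryDelayOptions` comment ("2^5 * 20 seconds") refers to exponential backoff. A sketch of the delay computation, assuming the SDK's default formula of `base * 2^attempt` (this is an assumption about the SDK behavior, stated in the comment):

```javascript
'use strict';

// Assumed sketch of the aws-sdk retry delay formula: delay = base * 2^attempt.
// With base 20000 ms and maxRetries 5, the final retry waits 2^5 * 20 seconds.
function retryDelayMs(attempt, baseMs) {
    return baseMs * Math.pow(2, attempt);
}
```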
function getBackupFilePath(apiConfig, backupId) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
const FILE_TYPE = apiConfig.key ? '.tar.gz.enc' : '.tar.gz';
return path.join(apiConfig.prefix, backupId.endsWith(FILE_TYPE) ? backupId : backupId+FILE_TYPE);
}
// storage api
function backup(apiConfig, backupId, sourceDirectories, callback) {
function upload(apiConfig, backupFilePath, sourceStream, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(sourceDirectories));
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof sourceStream, 'object');
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
function done(error) {
if (error) {
debug('[%s] upload: s3 upload error.', backupFilePath, error);
return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, `Error uploading ${backupFilePath}. Message: ${error.message} HTTP Code: ${error.code}`));
}
var backupFilePath = getBackupFilePath(apiConfig, backupId);
callback(null);
}
debug('[%s] backup: %j -> %s', backupId, sourceDirectories, backupFilePath);
getBackupCredentials(apiConfig, function (error, credentials) {
getS3Config(apiConfig, function (error, credentials) {
if (error) return callback(error);
var passThrough = new PassThrough();
var params = {
Bucket: apiConfig.bucket,
Key: backupFilePath,
Body: passThrough
Body: sourceStream
};
var s3 = new AWS.S3(credentials);
// Exoscale does not like multi-part uploads, so avoid them for filesystem streams < 5GB
if (apiConfig.provider === 'exoscale-sos' && typeof sourceStream.path === 'string') {
var stat = safe.fs.statSync(sourceStream.path);
if (!stat) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, `Error detecting size ${sourceStream.path}. Message: ${safe.error.message}`));
if (stat.size <= 5 * 1024 * 1024 * 1024) return s3.putObject(params, done);
}
// s3.upload automatically does a multi-part upload. we set queueSize to 1 to reduce memory usage
s3.upload(params, { partSize: 10 * 1024 * 1024, queueSize: 1 }, function (error) {
if (error) {
debug('[%s] backup: s3 upload error.', backupId, error);
return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
}
callback(null);
});
targz.create(sourceDirectories, apiConfig.key || null, passThrough, callback);
// uploader will buffer at most queueSize * partSize bytes into memory at any given time.
return s3.upload(params, { partSize: 10 * 1024 * 1024, queueSize: 1 }, done);
});
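The upload path above picks between a single `putObject` (Exoscale SOS, file size known and at most 5 GiB) and a streaming multi-part `s3.upload`, which buffers at most `queueSize * partSize` bytes. A sketch of that strategy choice (`uploadStrategy` is a hypothetical helper; the thresholds mirror the code above):

```javascript
'use strict';

const FIVE_GIB = 5 * 1024 * 1024 * 1024;

// Sketch of the upload strategy choice: exoscale-sos gets a single putObject
// for sizable-but-bounded files; everything else uses multi-part upload, which
// buffers at most queueSize * partSize bytes (1 * 10 MiB in the code above).
function uploadStrategy(provider, sizeBytes) {
    if (provider === 'exoscale-sos' && sizeBytes !== null && sizeBytes <= FIVE_GIB) return 'putObject';
    return 'multipart';
}
```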
}
function restore(apiConfig, backupId, destination, callback) {
function download(apiConfig, backupFilePath, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
callback = once(callback);
var backupFilePath = getBackupFilePath(apiConfig, backupId);
debug('[%s] restore: %s -> %s', backupId, backupFilePath, destination);
getBackupCredentials(apiConfig, function (error, credentials) {
getS3Config(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
@@ -126,90 +172,306 @@ function restore(apiConfig, backupId, destination, callback) {
var s3 = new AWS.S3(credentials);
var multipartDownload = new S3BlockReadStream(s3, params, { blockSize: 64 * 1024 * 1024, logCallback: debug });
var ps = new PassThrough();
var multipartDownload = new S3BlockReadStream(s3, params, { blockSize: 64 * 1024 * 1024 /*, logCallback: debug */ });
multipartDownload.on('error', function (error) {
// TODO ENOENT for the mock, fix upstream!
if (error.code === 'NoSuchKey' || error.code === 'ENOENT') return callback(new BackupsError(BackupsError.NOT_FOUND));
debug('[%s] restore: s3 stream error.', backupId, error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
if (error.code === 'NoSuchKey' || error.code === 'ENOENT') {
ps.emit('error', new BackupsError(BackupsError.NOT_FOUND));
} else {
debug('[%s] download: s3 stream error.', backupFilePath, error);
ps.emit('error', new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
}
});
targz.extract(multipartDownload, destination, apiConfig.key || null, callback);
multipartDownload.pipe(ps);
callback(null, ps);
});
}
function copyBackup(apiConfig, oldBackupId, newBackupId, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldBackupId, 'string');
assert.strictEqual(typeof newBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
getBackupCredentials(apiConfig, function (error, credentials) {
function listDir(apiConfig, backupFilePath, batchSize, iteratorCallback, callback) {
getS3Config(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: apiConfig.bucket,
Key: getBackupFilePath(apiConfig, newBackupId),
CopySource: path.join(apiConfig.bucket, getBackupFilePath(apiConfig, oldBackupId))
};
var s3 = new AWS.S3(credentials);
s3.copyObject(params, function (error) {
if (error && error.code === 'NoSuchKey') return callback(new BackupsError(BackupsError.NOT_FOUND, 'Old backup not found'));
if (error) {
debug('copyBackup: s3 copy error.', error);
return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
}
var listParams = {
Bucket: apiConfig.bucket,
Prefix: backupFilePath
};
callback(null);
async.forever(function listAndDownload(foreverCallback) {
s3.listObjects(listParams, function (error, listData) {
if (error) {
debug('listDir: Failed to list %s. Not fatal.', listParams.Prefix, error);
return foreverCallback(error);
}
var arr = batchSize === 1 ? listData.Contents : chunk(listData.Contents, batchSize);
if (arr.length === 0) return foreverCallback(new Error('Done'));
iteratorCallback(s3, arr, function (error) {
if (error) return foreverCallback(error);
if (!listData.IsTruncated) return foreverCallback(new Error('Done'));
listParams.Marker = listData.Contents[listData.Contents.length - 1].Key; // NextMarker is returned only with delimiter
foreverCallback();
});
});
}, function (error) {
if (error.message === 'Done') return callback(null);
callback(error);
});
});
}
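`listDir` pages through the bucket by setting `Marker` to the last key of each page, since `NextMarker` is only returned when a delimiter is used. A synchronous sketch of that pagination loop over an in-memory key array (`listAll` is a hypothetical helper, not the S3 API):

```javascript
'use strict';

// Sketch of Marker-based pagination: each "page" starts after the last key of
// the previous one; we track the marker ourselves because NextMarker is only
// returned when a delimiter is set.
function listAll(keys, pageSize) {
    const out = [];
    let marker = null;
    for (;;) {
        const start = marker === null ? 0 : keys.indexOf(marker) + 1;
        const page = keys.slice(start, start + pageSize);
        out.push(...page);
        const isTruncated = start + pageSize < keys.length;
        if (!isTruncated) return out;
        marker = page[page.length - 1];
    }
}
```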
function removeBackups(apiConfig, backupIds, callback) {
function downloadDir(apiConfig, backupFilePath, destDir) {
assert.strictEqual(typeof apiConfig, 'object');
assert(Array.isArray(backupIds));
assert.strictEqual(typeof backupFilePath, 'string');
assert.strictEqual(typeof destDir, 'string');
var events = new EventEmitter();
var total = 0;
function downloadFile(s3, content, iteratorCallback) {
var relativePath = path.relative(backupFilePath, content.Key);
events.emit('progress', `Downloading ${relativePath}`);
mkdirp(path.dirname(path.join(destDir, relativePath)), function (error) {
if (error) return iteratorCallback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
download(apiConfig, content.Key, function (error, sourceStream) {
if (error) return iteratorCallback(error);
var destStream = fs.createWriteStream(path.join(destDir, relativePath));
destStream.on('open', function () {
sourceStream.pipe(destStream);
});
destStream.on('error', function (error) {
return iteratorCallback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
destStream.on('finish', iteratorCallback);
});
});
}
const concurrency = 10, batchSize = 1;
listDir(apiConfig, backupFilePath, batchSize, function (s3, objects, done) {
total += objects.length;
async.eachLimit(objects, concurrency, downloadFile.bind(null, s3), done);
}, function (error) {
events.emit('progress', `Downloaded ${total} files`);
events.emit('done', error);
});
return events;
}
function copy(apiConfig, oldFilePath, newFilePath) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof oldFilePath, 'string');
assert.strictEqual(typeof newFilePath, 'string');
var events = new EventEmitter(), retryCount = 0;
function copyFile(s3, content, iteratorCallback) {
var relativePath = path.relative(oldFilePath, content.Key);
function done(error) {
if (error && error.code === 'NoSuchKey') return iteratorCallback(new BackupsError(BackupsError.NOT_FOUND, `Old backup not found: ${content.Key}`));
if (error) {
debug('copy: s3 copy error when copying %s %s', content.Key, error);
return iteratorCallback(new BackupsError(BackupsError.EXTERNAL_ERROR, `Error copying ${content.Key} : ${error.message} ${error.code}`));
}
iteratorCallback(null);
}
var copyParams = {
Bucket: apiConfig.bucket,
Key: path.join(newFilePath, relativePath)
};
// S3 copyObject has a file size limit of 5GB so if we have larger files, we do a multipart copy
if (content.Size < 5 * 1024 * 1024 * 1024 || apiConfig.provider === 'digitalocean-spaces') { // DO has not implemented this yet
events.emit('progress', `Copying ${relativePath}`);
// for exoscale, '/' should not be encoded
copyParams.CopySource = path.join(apiConfig.bucket, encodeURIComponent(content.Key)); // See aws-sdk-js/issues/1302
s3.copyObject(copyParams, done).on('retry', function (response) {
++retryCount;
events.emit('progress', `Retrying (${response.retryCount+1}) copy of ${relativePath}. Status code: ${response.httpResponse.statusCode}`);
});
return;
}
events.emit('progress', `Copying (multipart) ${relativePath}`);
s3.createMultipartUpload(copyParams, function (error, result) {
if (error) return done(error);
const CHUNK_SIZE = 1024 * 1024 * 1024; // 1GB - an arbitrary chunk size
var uploadId = result.UploadId;
var uploadedParts = [];
var partNumber = 1;
var startBytes = 0;
var endBytes = 0;
var size = content.Size-1;
function copyNextChunk() {
endBytes = startBytes + CHUNK_SIZE;
if (endBytes > size) endBytes = size;
var params = {
Bucket: apiConfig.bucket,
Key: path.join(newFilePath, relativePath),
CopySource: path.join(apiConfig.bucket, encodeURIComponent(content.Key)), // See aws-sdk-js/issues/1302
CopySourceRange: 'bytes=' + startBytes + '-' + endBytes,
PartNumber: partNumber,
UploadId: uploadId
};
s3.uploadPartCopy(params, function (error, result) {
if (error) return done(error);
uploadedParts.push({ ETag: result.CopyPartResult.ETag, PartNumber: partNumber });
if (endBytes < size) {
startBytes = endBytes + 1;
partNumber++;
return copyNextChunk();
}
var params = {
Bucket: apiConfig.bucket,
Key: path.join(newFilePath, relativePath),
MultipartUpload: { Parts: uploadedParts },
UploadId: uploadId
};
s3.completeMultipartUpload(params, done);
}).on('retry', function (response) {
++retryCount;
events.emit('progress', `Retrying (${response.retryCount+1}) multipart copy of ${relativePath}. Status code: ${response.httpResponse.statusCode}`);
});
}
copyNextChunk();
});
}
const batchSize = 1;
var total = 0, concurrency = 4;
listDir(apiConfig, oldFilePath, batchSize, function (s3, objects, done) {
total += objects.length;
if (retryCount === 0) concurrency = Math.min(concurrency + 1, 10); else concurrency = Math.max(concurrency - 1, 5);
events.emit('progress', `${retryCount} errors. concurrency set to ${concurrency}`);
retryCount = 0;
async.eachLimit(objects, concurrency, copyFile.bind(null, s3), done);
}, function (error) {
events.emit('progress', `Copied ${total} files`);
events.emit('done', error);
});
return events;
}
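Two pieces of copy()'s arithmetic are easy to check in isolation: the inclusive byte ranges that uploadPartCopy's CopySourceRange expects, and the per-batch concurrency adaptation. A sketch reproducing both (chunk sizes shrunk for illustration; these helpers are not part of the module):

```javascript
// Reproduce copyNextChunk's range arithmetic: size is content.Size - 1 (the
// last valid byte offset), endBytes is clamped to it, and the next chunk
// starts at endBytes + 1. Ranges are inclusive, so the first part actually
// spans chunkSize + 1 bytes - faithful to the code above.
function partRanges(totalSize, chunkSize) {
    var ranges = [];
    var size = totalSize - 1;
    var startBytes = 0, endBytes = 0;
    do {
        endBytes = startBytes + chunkSize;
        if (endBytes > size) endBytes = size;
        ranges.push('bytes=' + startBytes + '-' + endBytes);
        startBytes = endBytes + 1;
    } while (endBytes < size);
    return ranges;
}

// Per-batch concurrency tuning: bump toward 10 while a batch finishes with no
// retries, back off toward a floor of 5 otherwise (the floor sits above the
// starting value of 4, as in the code above).
function nextConcurrency(current, retryCount) {
    return retryCount === 0 ? Math.min(current + 1, 10) : Math.max(current - 1, 5);
}

console.log(partRanges(25, 10)); // → [ 'bytes=0-10', 'bytes=11-21', 'bytes=22-24' ]
console.log(nextConcurrency(4, 0)); // → 5
```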
function remove(apiConfig, filename, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
getBackupCredentials(apiConfig, function (error, credentials) {
getS3Config(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
var s3 = new AWS.S3(credentials);
var deleteParams = {
Bucket: apiConfig.bucket,
Delete: {
Objects: [ ] // { Key }
Objects: [{ Key: filename }]
}
};
backupIds.forEach(function (backupId) {
params.Delete.Objects.push({ Key: getBackupFilePath(apiConfig, backupId) });
});
var s3 = new AWS.S3(credentials);
s3.deleteObjects(params, function (error, data) {
if (error) debug('removeBackups: Unable to remove %s. Not fatal.', params.Key, error);
else debug('removeBackups: Deleted: %j Errors: %j', data.Deleted, data.Errors);
s3.deleteObjects(deleteParams, function (error) {
if (error) debug('remove: Unable to remove %s. Not fatal.', deleteParams.Key, error);
callback(null);
});
});
}
function removeDir(apiConfig, pathPrefix) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof pathPrefix, 'string');
var events = new EventEmitter();
var total = 0;
function deleteFiles(s3, contents, iteratorCallback) {
var deleteParams = {
Bucket: apiConfig.bucket,
Delete: {
Objects: contents.map(function (c) { return { Key: c.Key }; })
}
};
total += contents.length;
events.emit('progress', `Removing ${contents.length} files from ${contents[0].Key} to ${contents[contents.length-1].Key}`);
s3.deleteObjects(deleteParams, function (error /*, deleteData */) {
if (error) {
events.emit('progress', `Unable to remove ${deleteParams.Key} ${error.message}`);
return iteratorCallback(error);
}
iteratorCallback();
});
}
const batchSize = apiConfig.provider !== 'digitalocean-spaces' ? 1000 : 100; // throttle requests per second
listDir(apiConfig, pathPrefix, batchSize, deleteFiles, function (error) {
events.emit('progress', `Removed ${total} files`);
events.emit('done', error);
});
return events;
}
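deleteObjects accepts at most 1000 keys per request, which is why removeDir deletes in listDir batches. The mapping from listed objects into the request shape can be sketched on its own (makeDeleteParams is illustrative, not part of the module):

```javascript
// Build the same params shape deleteFiles() passes to s3.deleteObjects():
// every listed object contributes one { Key } entry.
function makeDeleteParams(bucket, contents) {
    return {
        Bucket: bucket,
        Delete: {
            Objects: contents.map(function (c) { return { Key: c.Key }; })
        }
    };
}

var params = makeDeleteParams('my-bucket', [ { Key: 'backup/a' }, { Key: 'backup/b' } ]);
console.log(params.Delete.Objects); // → [ { Key: 'backup/a' }, { Key: 'backup/b' } ]
```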
function testConfig(apiConfig, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof callback, 'function');
if (typeof apiConfig.accessKeyId !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'accessKeyId must be a string'));
if (typeof apiConfig.secretAccessKey !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'secretAccessKey must be a string'));
if (apiConfig.provider === 'caas') {
if (typeof apiConfig.token !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'token must be a string'));
} else {
if (typeof apiConfig.accessKeyId !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'accessKeyId must be a string'));
if (typeof apiConfig.secretAccessKey !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'secretAccessKey must be a string'));
}
if (typeof apiConfig.bucket !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'bucket must be a string'));
if (typeof apiConfig.prefix !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'prefix must be a string'));
if ('signatureVersion' in apiConfig && typeof apiConfig.prefix !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'signatureVersion must be a string'));
if ('endpoint' in apiConfig && typeof apiConfig.prefix !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'endpoint must be a string'));
if ('signatureVersion' in apiConfig && typeof apiConfig.signatureVersion !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'signatureVersion must be a string'));
if ('endpoint' in apiConfig && typeof apiConfig.endpoint !== 'string') return callback(new BackupsError(BackupsError.BAD_FIELD, 'endpoint must be a string'));
// attempt to upload and delete a file with new credentials
getBackupCredentials(apiConfig, function (error, credentials) {
getS3Config(apiConfig, function (error, credentials) {
if (error) return callback(error);
var params = {
@@ -236,10 +498,34 @@ function testConfig(apiConfig, callback) {
});
}
function backupDone(backupId, appBackupIds, callback) {
function backupDone(apiConfig, backupId, appBackupIds, callback) {
assert.strictEqual(typeof apiConfig, 'object');
assert.strictEqual(typeof backupId, 'string');
assert(Array.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
callback();
if (apiConfig.provider !== 'caas') return callback();
// CaaS expects filenames instead of backupIds; filenames carry no prefix but have a file type extension
var FILE_TYPE = '.tar.gz.enc';
var boxBackupFilename = backupId + FILE_TYPE;
var appBackupFilenames = appBackupIds.map(function (id) { return id + FILE_TYPE; });
debug('[%s] backupDone: %s apps %j', backupId, boxBackupFilename, appBackupFilenames);
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/backupDone';
var data = {
boxVersion: config.version(),
restoreKey: boxBackupFilename,
appId: null, // now unused
appVersion: null, // now unused
appBackupIds: appBackupFilenames
};
superagent.post(url).send(data).query({ token: config.token() }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 200) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, result.text));
return callback(null);
});
}
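backupDone only translates ids to filenames for the caas provider; the mapping itself is trivial and can be shown standalone (toFilenames is an illustrative helper, the ids are made up):

```javascript
// CaaS identifies backups by filename: the fixed '.tar.gz.enc' extension is
// appended to the box backup id and to every app backup id.
var FILE_TYPE = '.tar.gz.enc';

function toFilenames(backupId, appBackupIds) {
    return {
        restoreKey: backupId + FILE_TYPE,
        appBackupIds: appBackupIds.map(function (id) { return id + FILE_TYPE; })
    };
}

console.log(toFilenames('2017-10-12-000000', [ 'app-1', 'app-2' ]).restoreKey); // → '2017-10-12-000000.tar.gz.enc'
```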
@@ -1,105 +0,0 @@
'use strict';
exports = module.exports = {
create: create,
extract: extract
};
var assert = require('assert'),
BackupsError = require('../backups.js').BackupsError,
crypto = require('crypto'),
debug = require('debug')('box:storage/targz'),
mkdirp = require('mkdirp'),
progress = require('progress-stream'),
tar = require('tar-fs'),
zlib = require('zlib');
function create(sourceDirectories, key, outStream, callback) {
assert(Array.isArray(sourceDirectories));
assert(key === null || typeof key === 'string');
assert.strictEqual(typeof callback, 'function');
var pack = tar.pack('/', {
dereference: false, // pack the symlink and not what it points to
entries: sourceDirectories.map(function (m) { return m.source; }),
map: function(header) {
sourceDirectories.forEach(function (m) {
header.name = header.name.replace(new RegExp('^' + m.source + '(/?)'), m.destination + '$1');
});
return header;
},
strict: false // do not error for unknown types (skip fifo, char/block devices)
});
var gzip = zlib.createGzip({});
var progressStream = progress({ time: 10000 }); // report progress every 10 seconds
pack.on('error', function (error) {
debug('backup: tar stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
gzip.on('error', function (error) {
debug('backup: gzip stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
progressStream.on('progress', function(progress) {
debug('backup: %s@%s', Math.round(progress.transferred/1024/1024) + 'M', Math.round(progress.speed/1024/1024) + 'MBps');
});
if (key !== null) {
var encrypt = crypto.createCipher('aes-256-cbc', key);
encrypt.on('error', function (error) {
debug('backup: encrypt stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
pack.pipe(gzip).pipe(encrypt).pipe(progressStream).pipe(outStream);
} else {
pack.pipe(gzip).pipe(progressStream).pipe(outStream);
}
}
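The map() callback above rewrites each tar header name from its source path to the configured destination prefix. The same rewrite in isolation (paths here are illustrative; note the source path is injected into the regex unescaped, so this assumes paths without regex metacharacters):

```javascript
// Strip the source prefix from a tar header name and substitute the
// destination, preserving a trailing slash via the captured group.
function mapHeaderName(name, sourceDirectories) {
    sourceDirectories.forEach(function (m) {
        name = name.replace(new RegExp('^' + m.source + '(/?)'), m.destination + '$1');
    });
    return name;
}

var dirs = [ { source: '/home/yellowtent/appsdata', destination: 'data' } ]; // made-up mapping
console.log(mapHeaderName('/home/yellowtent/appsdata/config.json', dirs)); // → 'data/config.json'
```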
function extract(inStream, destination, key, callback) {
assert.strictEqual(typeof destination, 'string');
assert(key === null || typeof key === 'string');
assert.strictEqual(typeof callback, 'function');
mkdirp(destination, function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
var gunzip = zlib.createGunzip({});
var progressStream = progress({ time: 10000 }); // report progress every 10 seconds
var extract = tar.extract(destination);
progressStream.on('progress', function(progress) {
debug('restore: %s@%s', Math.round(progress.transferred/1024/1024) + 'M', Math.round(progress.speed/1024/1024) + 'MBps');
});
gunzip.on('error', function (error) {
debug('restore: gunzip stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
extract.on('error', function (error) {
debug('restore: extract stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
extract.on('finish', function () {
debug('restore: done.');
callback(null);
});
if (key !== null) {
var decrypt = crypto.createDecipher('aes-256-cbc', key);
decrypt.on('error', function (error) {
debug('restore: decrypt stream error.', error);
callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
});
inStream.pipe(progressStream).pipe(decrypt).pipe(gunzip).pipe(extract);
} else {
inStream.pipe(progressStream).pipe(gunzip).pipe(extract);
}
});
}
@@ -40,10 +40,9 @@ SubdomainError.NOT_FOUND = 'No such domain';
SubdomainError.EXTERNAL_ERROR = 'External error';
SubdomainError.BAD_FIELD = 'Bad Field';
SubdomainError.STILL_BUSY = 'Still busy';
SubdomainError.MISSING_CREDENTIALS = 'Missing credentials';
SubdomainError.INTERNAL_ERROR = 'Internal error';
SubdomainError.ACCESS_DENIED = 'Access denied';
SubdomainError.INVALID_PROVIDER = 'provider must be route53, digitalocean, cloudflare, noop, manual or caas';
SubdomainError.INVALID_PROVIDER = 'provider must be route53, gcdns, digitalocean, cloudflare, noop, manual or caas';
// choose which subdomain backend we use for test purpose we use route53
function api(provider) {
@@ -53,6 +52,7 @@ function api(provider) {
case 'caas': return require('./dns/caas.js');
case 'cloudflare': return require('./dns/cloudflare.js');
case 'route53': return require('./dns/route53.js');
case 'gcdns': return require('./dns/gcdns.js');
case 'digitalocean': return require('./dns/digitalocean.js');
case 'noop': return require('./dns/noop.js');
case 'manual': return require('./dns/manual.js');
@@ -0,0 +1,151 @@
'use strict';
var assert = require('assert'),
async = require('async'),
debug = require('debug')('box:syncer'),
fs = require('fs'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance');
exports = module.exports = {
sync: sync
};
function readCache(cacheFile) {
assert.strictEqual(typeof cacheFile, 'string');
var cache = safe.fs.readFileSync(cacheFile, 'utf8');
if (!cache) return [ ];
var result = cache.trim().split('\n').map(JSON.parse);
return result;
}
function readTree(dir) {
assert.strictEqual(typeof dir, 'string');
var list = safe.fs.readdirSync(dir).sort();
if (!list) return [ ];
return list.map(function (e) { return { stat: safe.fs.lstatSync(path.join(dir, e)), name: e }; });
}
function ISDIR(x) {
return (x & fs.constants.S_IFDIR) === fs.constants.S_IFDIR;
}
function ISFILE(x) {
return (x & fs.constants.S_IFREG) === fs.constants.S_IFREG;
}
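ISDIR and ISFILE mask st_mode against the POSIX file-type bits. A standalone sketch with the constant values written out (note these are mask tests, not equality against S_IFMT, so e.g. a socket mode would also satisfy ISFILE - same behavior as the code above):

```javascript
// POSIX file-type bits as they appear in fs.constants.
var S_IFDIR = 0o040000, S_IFREG = 0o100000;

function ISDIR(mode) { return (mode & S_IFDIR) === S_IFDIR; }
function ISFILE(mode) { return (mode & S_IFREG) === S_IFREG; }

console.log(ISDIR(0o040755));  // → true  (drwxr-xr-x)
console.log(ISFILE(0o100644)); // → true  (-rw-r--r--)
console.log(ISDIR(0o100644));  // → false
```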
function sync(dir, taskProcessor, concurrency, callback) {
assert.strictEqual(typeof dir, 'string');
assert.strictEqual(typeof taskProcessor, 'function');
assert.strictEqual(typeof concurrency, 'number');
assert.strictEqual(typeof callback, 'function');
var curCacheIndex = 0, addQueue = [ ], delQueue = [ ];
var cacheFile = path.join(paths.BACKUP_INFO_DIR, path.basename(dir) + '.sync.cache'),
newCacheFile = path.join(paths.BACKUP_INFO_DIR, path.basename(dir) + '.sync.cache.new');
var cache = [ ];
// if cache is missing or if we crashed/errored in previous run, start out empty. TODO: do a remote listDir and rebuild
if (!safe.fs.existsSync(cacheFile)) {
delQueue.push({ operation: 'removedir', path: '', reason: 'nocache' });
} else if (safe.fs.existsSync(newCacheFile)) {
delQueue.push({ operation: 'removedir', path: '', reason: 'crash' });
} else {
cache = readCache(cacheFile);
}
var newCacheFd = safe.fs.openSync(newCacheFile, 'w'); // truncates any existing file
if (newCacheFd === -1) return callback(new Error('Error opening new cache file: ' + safe.error.message));
function advanceCache(entryPath) {
var lastRemovedDir = null;
for (; curCacheIndex !== cache.length && (entryPath === '' || cache[curCacheIndex].path < entryPath); ++curCacheIndex) {
// ignore subdirs of lastRemovedDir since it was removed already
if (lastRemovedDir && cache[curCacheIndex].path.startsWith(lastRemovedDir)) continue;
if (ISDIR(cache[curCacheIndex].stat.mode)) {
delQueue.push({ operation: 'removedir', path: cache[curCacheIndex].path, reason: 'missing' });
lastRemovedDir = cache[curCacheIndex].path;
} else {
delQueue.push({ operation: 'remove', path: cache[curCacheIndex].path, reason: 'missing' });
lastRemovedDir = null;
}
}
}
function traverse(relpath) {
var entries = readTree(path.join(dir, relpath));
for (var i = 0; i < entries.length; i++) {
var entryPath = path.join(relpath, entries[i].name);
var entryStat = entries[i].stat;
if (!entryStat) continue; // stat error. pretend the entry does not exist
if (!entryStat.isDirectory() && !entryStat.isFile()) continue; // ignore anything that is not a regular file or directory
if (entryStat.isSymbolicLink()) continue;
safe.fs.appendFileSync(newCacheFd, JSON.stringify({ path: entryPath, stat: { mtime: entryStat.mtime.getTime(), size: entryStat.size, inode: entryStat.inode, mode: entryStat.mode } }) + '\n');
if (curCacheIndex !== cache.length && cache[curCacheIndex].path < entryPath) { // files disappeared. first advance cache as needed
advanceCache(entryPath);
}
const cachePath = curCacheIndex === cache.length ? null : cache[curCacheIndex].path;
const cacheStat = curCacheIndex === cache.length ? null : cache[curCacheIndex].stat;
if (cachePath === null || cachePath > entryPath) { // new files appeared
if (entryStat.isDirectory()) {
traverse(entryPath);
} else {
addQueue.push({ operation: 'add', path: entryPath, reason: 'new' });
}
} else if (ISDIR(cacheStat.mode) && entryStat.isDirectory()) { // dir names match
++curCacheIndex;
traverse(entryPath);
} else if (ISFILE(cacheStat.mode) && entryStat.isFile()) { // file names match
if (entryStat.mtime.getTime() !== cacheStat.mtime || entryStat.size != cacheStat.size || entryStat.inode !== cacheStat.inode) { // file changed
addQueue.push({ operation: 'add', path: entryPath, reason: 'changed' });
}
++curCacheIndex;
} else if (entryStat.isDirectory()) { // was a file, now a directory
delQueue.push({ operation: 'remove', path: cachePath, reason: 'wasfile' });
++curCacheIndex;
traverse(entryPath);
} else { // was a dir, now a file
delQueue.push({ operation: 'removedir', path: cachePath, reason: 'wasdir' });
while (curCacheIndex !== cache.length && cache[curCacheIndex].path.startsWith(cachePath)) ++curCacheIndex;
addQueue.push({ operation: 'add', path: entryPath, reason: 'wasdir' });
}
}
}
traverse('');
advanceCache(''); // remove rest of the cache entries
safe.fs.closeSync(newCacheFd);
debug('Processing %s deletes and %s additions', delQueue.length, addQueue.length);
async.eachLimit(delQueue, concurrency, taskProcessor, function (error) {
debug('Done processing deletes', error);
async.eachLimit(addQueue, concurrency, taskProcessor, function (error) {
debug('Done processing adds', error);
if (error) return callback(error);
safe.fs.unlinkSync(cacheFile);
if (!safe.fs.renameSync(newCacheFile, cacheFile)) debug('Unable to save new cache file');
callback();
});
});
}
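The sync cache read back by readCache() is newline-delimited JSON: one { path, stat } record per line, appended as entries are traversed. A round-trip sketch of that format (helper names and the sample records are illustrative):

```javascript
// One JSON record per line with a trailing newline - the format
// appendFileSync writes above and readCache() parses back.
function serializeCache(entries) {
    return entries.map(function (e) { return JSON.stringify(e); }).join('\n') + '\n';
}

function parseCache(text) {
    if (!text) return [ ];
    return text.trim().split('\n').map(JSON.parse);
}

var entries = [
    { path: 'data', stat: { mtime: 1508200000000, size: 4096, mode: 16877 } },
    { path: 'data/file', stat: { mtime: 1508200001000, size: 12, mode: 33188 } }
];
console.log(parseCache(serializeCache(entries)).length); // → 2
```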
@@ -138,9 +138,19 @@ describe('apptask', function () {
});
});
it('delete volume', function (done) {
apptask._deleteVolume(APP, function (error) {
it('delete volume - removeDirectory (false) ', function (done) {
apptask._deleteVolume(APP, { removeDirectory: false }, function (error) {
expect(!fs.existsSync(paths.APPS_DATA_DIR + '/' + APP.id + '/data')).to.be(true);
expect(fs.existsSync(paths.APPS_DATA_DIR + '/' + APP.id)).to.be(true);
expect(fs.readdirSync(paths.APPS_DATA_DIR + '/' + APP.id).length).to.be(0); // empty
expect(error).to.be(null);
done();
});
});
it('delete volume - removeDirectory (true) ', function (done) {
apptask._deleteVolume(APP, { removeDirectory: true }, function (error) {
expect(!fs.existsSync(paths.APPS_DATA_DIR + '/' + APP.id)).to.be(true);
expect(error).to.be(null);
done();
});
@@ -171,7 +181,7 @@ describe('apptask', function () {
var badApp = _.extend({ }, APP);
badApp.manifest = { };
apptask._verifyManifest(badApp, function (error) {
apptask._verifyManifest(badApp.manifest, function (error) {
expect(error).to.be.ok();
done();
});
@@ -182,7 +192,7 @@ describe('apptask', function () {
badApp.manifest = _.extend({ }, APP.manifest);
delete badApp.manifest.id;
apptask._verifyManifest(badApp, function (error) {
apptask._verifyManifest(badApp.manifest, function (error) {
expect(error).to.be.ok();
done();
});
@@ -193,7 +203,7 @@ describe('apptask', function () {
badApp.manifest = _.extend({ }, APP.manifest);
badApp.manifest.maxBoxVersion = '0.0.0'; // max box version is too small
apptask._verifyManifest(badApp, function (error) {
apptask._verifyManifest(badApp.manifest, function (error) {
expect(error).to.be.ok();
done();
});
@@ -202,7 +212,7 @@ describe('apptask', function () {
it('verifies manifest', function (done) {
var goodApp = _.extend({ }, APP);
apptask._verifyManifest(goodApp, function (error) {
apptask._verifyManifest(goodApp.manifest, function (error) {
expect(error).to.be(null);
done();
});
@@ -7,13 +7,88 @@
'use strict';
var async = require('async'),
appdb = require('../appdb.js'),
backupdb = require('../backupdb.js'),
backups = require('../backups.js'),
createTree = require('./common.js').createTree,
database = require('../database'),
DatabaseError = require('../databaseerror.js'),
expect = require('expect.js'),
settings = require('../settings.js');
fs = require('fs'),
os = require('os'),
mkdirp = require('mkdirp'),
readdirp = require('readdirp'),
path = require('path'),
progress = require('../progress.js'),
rimraf = require('rimraf'),
settings = require('../settings.js'),
SettingsError = require('../settings.js').SettingsError;
function compareDirectories(one, two, callback) {
readdirp({ root: one }, function (error, treeOne) {
if (error) return callback(error);
readdirp({ root: two }, function (error, treeTwo) {
if (error) return callback(error);
var mismatch = [];
function compareDirs(a, b) {
a.forEach(function (tmpA) {
var found = b.find(function (tmpB) {
return tmpA.path === tmpB.path;
});
if (!found) mismatch.push(tmpA);
});
}
function compareFiles(a, b) {
a.forEach(function (tmpA) {
var found = b.find(function (tmpB) {
// TODO check file or symbolic link
return tmpA.path === tmpB.path && tmpA.mode === tmpB.mode;
});
if (!found) mismatch.push(tmpA);
});
}
compareDirs(treeOne.directories, treeTwo.directories);
compareDirs(treeTwo.directories, treeOne.directories);
compareFiles(treeOne.files, treeTwo.files);
compareFiles(treeTwo.files, treeOne.files);
if (mismatch.length) {
console.error('Files not found in both: %j', mismatch);
return callback(new Error('file mismatch'));
}
callback(null);
});
});
}
function createBackup(callback) {
backups.backup({ username: 'test' }, function (error) { // this call does not wait for the backup!
if (error) return callback(error);
function waitForBackup() {
var p = progress.getAll();
if (p.backup.percent !== 100) return setTimeout(waitForBackup, 1000);
if (p.backup.message) return callback(new Error('backup failed:' + p.backup.message));
backups.getByStatePaged(backupdb.BACKUP_STATE_NORMAL, 1, 1, function (error, result) {
if (error) return callback(error);
if (result.length !== 1) return callback(new Error('result is not of length 1'));
callback(null, result[0]);
});
}
setTimeout(waitForBackup, 1000);
});
}
describe('backups', function () {
before(function (done) {
@@ -24,7 +99,9 @@ describe('backups', function () {
settings.setBackupConfig.bind(null, {
provider: 'filesystem',
key: 'enckey',
retentionSecs: 1
backupFolder: '/var/backups',
retentionSecs: 1,
format: 'tgz'
})
], done);
});
@@ -44,7 +121,8 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_BOX,
dependsOn: [ 'backup-app-00', 'backup-app-01' ],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
var BACKUP_0_APP_0 = {
@@ -52,7 +130,8 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_APP,
dependsOn: [],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
var BACKUP_0_APP_1 = {
@@ -60,7 +139,8 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_APP,
dependsOn: [],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
var BACKUP_1 = {
@@ -68,7 +148,8 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_BOX,
dependsOn: [ 'backup-app-10', 'backup-app-11' ],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
var BACKUP_1_APP_0 = {
@@ -76,7 +157,8 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_APP,
dependsOn: [],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
var BACKUP_1_APP_1 = {
@@ -84,11 +166,12 @@ describe('backups', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_APP,
dependsOn: [],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
it('succeeds without backups', function (done) {
backups.cleanup(done);
backups.cleanup({ username: 'test' }, done);
});
it('succeeds with box backups, keeps latest', function (done) {
@@ -100,7 +183,7 @@ describe('backups', function () {
}, function (error) {
expect(error).to.not.be.ok();
backups.cleanup(function (error) {
backups.cleanup({ username: 'test' }, function (error) {
expect(error).to.not.be.ok();
backupdb.getByTypePaged(backupdb.BACKUP_TYPE_BOX, 1, 1000, function (error, result) {
@@ -121,7 +204,7 @@ describe('backups', function () {
});
it('does not remove expired backups if only one left', function (done) {
backups.cleanup(function (error) {
backups.cleanup({ username: 'test' }, function (error) {
expect(error).to.not.be.ok();
backupdb.getByTypePaged(backupdb.BACKUP_TYPE_BOX, 1, 1000, function (error, result) {
@@ -146,7 +229,7 @@ describe('backups', function () {
// wait for expiration
setTimeout(function () {
backups.cleanup(function (error) {
backups.cleanup({ username: 'test' }, function (error) {
expect(error).to.not.be.ok();
backupdb.getByTypePaged(backupdb.BACKUP_TYPE_APP, 1, 1000, function (error, result) {
@@ -160,4 +243,115 @@ describe('backups', function () {
});
});
});
describe('fs meta data', function () {
var tmpdir;
before(function () {
tmpdir = fs.mkdtempSync(path.join(os.tmpdir(), 'backups-test'));
});
after(function () {
rimraf.sync(tmpdir);
});
it('saves special files', function (done) {
createTree(tmpdir, { 'data': { 'subdir': { 'emptydir': { } } }, 'dir2': { 'file': 'stuff' } });
fs.chmodSync(path.join(tmpdir, 'dir2/file'), parseInt('0755', 8));
backups._saveFsMetadata(tmpdir, function (error) {
expect(error).to.not.be.ok();
var emptyDirs = JSON.parse(fs.readFileSync(path.join(tmpdir, 'fsmetadata.json'), 'utf8')).emptyDirs;
expect(emptyDirs).to.eql(['./data/subdir/emptydir']);
var execFiles = JSON.parse(fs.readFileSync(path.join(tmpdir, 'fsmetadata.json'), 'utf8')).execFiles;
expect(execFiles).to.eql(['./dir2/file']);
done();
});
});
it('restores special files', function (done) {
rimraf.sync(path.join(tmpdir, 'data'));
expect(fs.existsSync(path.join(tmpdir, 'data/subdir/emptydir'))).to.be(false); // just make sure rimraf worked
backups._restoreFsMetadata(tmpdir, function (error) {
expect(error).to.not.be.ok();
expect(fs.existsSync(path.join(tmpdir, 'data/subdir/emptydir'))).to.be(true);
var mode = fs.statSync(path.join(tmpdir, 'dir2/file')).mode;
expect(mode & ~fs.constants.S_IFREG).to.be(parseInt('0755', 8));
done();
});
});
});
describe('filesystem', function () {
var backupInfo1;
var gBackupConfig = {
provider: 'filesystem',
backupFolder: path.join(os.tmpdir(), 'backups-test-filesystem'),
format: 'tgz'
};
before(function (done) {
rimraf.sync(gBackupConfig.backupFolder);
done();
});
after(function (done) {
rimraf.sync(gBackupConfig.backupFolder);
progress.clear(progress.BACKUP);
done();
});
it('fails to set backup config for non-existing folder', function (done) {
settings.setBackupConfig(gBackupConfig, function (error) {
expect(error).to.be.a(SettingsError);
expect(error.reason).to.equal(SettingsError.BAD_FIELD);
done();
});
});
it('succeeds to set backup config', function (done) {
mkdirp.sync(gBackupConfig.backupFolder);
settings.setBackupConfig(gBackupConfig, function (error) {
expect(error).to.be(null);
done();
});
});
it('can backup', function (done) {
this.timeout(6000);
createBackup(function (error, result) {
expect(error).to.be(null);
expect(fs.statSync(path.join(gBackupConfig.backupFolder, 'snapshot/box.tar.gz')).nlink).to.be(2); // hard linked to a rotated backup
expect(fs.statSync(path.join(gBackupConfig.backupFolder, `${result.id}.tar.gz`)).nlink).to.be(2);
backupInfo1 = result;
done();
});
});
it('can take another backup', function (done) {
this.timeout(6000);
createBackup(function (error, result) {
expect(error).to.be(null);
expect(fs.statSync(path.join(gBackupConfig.backupFolder, 'snapshot/box.tar.gz')).nlink).to.be(2); // hard linked to a rotated backup
expect(fs.statSync(path.join(gBackupConfig.backupFolder, `${result.id}.tar.gz`)).nlink).to.be(2); // hard linked to new backup
expect(fs.statSync(path.join(gBackupConfig.backupFolder, `${backupInfo1.id}.tar.gz`)).nlink).to.be(1); // not hard linked anymore
done();
});
});
});
});
@@ -15,11 +15,10 @@ scripts=("${SOURCE_DIR}/src/scripts/rmappdir.sh" \
"${SOURCE_DIR}/src/scripts/reboot.sh" \
"${SOURCE_DIR}/src/scripts/update.sh" \
"${SOURCE_DIR}/src/scripts/collectlogs.sh" \
"${SOURCE_DIR}/src/scripts/reloadcollectd.sh" \
"${SOURCE_DIR}/src/scripts/configurecollectd.sh" \
"${SOURCE_DIR}/src/scripts/authorized_keys.sh" \
"${SOURCE_DIR}/src/scripts/node.sh" \
"${SOURCE_DIR}/src/scripts/mvlogrotateconfig.sh" \
"${SOURCE_DIR}/src/scripts/rmlogrotateconfig.sh")
"${SOURCE_DIR}/src/backuptask.js" \
"${SOURCE_DIR}/src/scripts/configurelogrotate.sh")
for script in "${scripts[@]}"; do
if [[ $(sudo -n "${script}" --check 2>/dev/null) != "OK" ]]; then
@@ -0,0 +1,33 @@
'use strict';
var fs = require('fs'),
mkdirp = require('mkdirp'),
path = require('path'),
rimraf = require('rimraf');
exports = module.exports = {
createTree: createTree
};
function createTree(root, obj) {
rimraf.sync(root);
mkdirp.sync(root);
function createSubTree(tree, curpath) {
for (var key in tree) {
if (typeof tree[key] === 'string') {
if (key.startsWith('link:')) {
fs.symlinkSync(tree[key], path.join(curpath, key.slice(5)));
} else {
fs.writeFileSync(path.join(curpath, key), tree[key], 'utf8');
}
} else {
fs.mkdirSync(path.join(curpath, key));
createSubTree(tree[key], path.join(curpath, key));
}
}
}
createSubTree(obj, root);
}
@@ -17,7 +17,7 @@ describe('config', function () {
});
after(function () {
delete require.cache[require.resolve('../config.js')];
config._reset();
});
it('baseDir() is set', function (done) {
@@ -94,4 +94,7 @@ describe('config', function () {
done();
});
it('test machine has IPv6 support', function () {
expect(config.hasIPv6()).to.equal(true);
});
});
@@ -538,6 +538,7 @@ describe('database', function () {
accessRestriction: null,
lastBackupId: null,
oldConfig: null,
newConfig: null,
memoryLimit: 4294967296,
altDomain: null,
xFrameOptions: 'DENY',
@@ -562,6 +563,7 @@ describe('database', function () {
accessRestriction: { users: [ 'foobar' ] },
lastBackupId: null,
oldConfig: null,
newConfig: null,
memoryLimit: 0,
altDomain: null,
xFrameOptions: 'SAMEORIGIN',
@@ -1025,7 +1027,8 @@ describe('database', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_BOX,
dependsOn: [ 'dep1' ],
restoreConfig: null
restoreConfig: null,
format: 'tgz'
};
backupdb.add(backup, function (error) {
@@ -1090,7 +1093,8 @@ describe('database', function () {
version: '1.0.0',
type: backupdb.BACKUP_TYPE_APP,
dependsOn: [ ],
restoreConfig: { manifest: { foo: 'bar' } }
restoreConfig: { manifest: { foo: 'bar' } },
format: 'tgz'
};
backupdb.add(backup, function (error) {
@@ -12,6 +12,7 @@ var async = require('async'),
eventlog = require('../eventlog.js'),
expect = require('expect.js'),
mailer = require('../mailer.js'),
nock = require('nock'),
paths = require('../paths.js'),
safe = require('safetydance'),
settings = require('../settings.js'),
@@ -30,10 +31,15 @@ var AUDIT_SOURCE = {
ip: '1.2.3.4'
};
function checkMails(number, done) {
function checkMails(number, email, done) {
// mails are enqueued async
setTimeout(function () {
expect(mailer._getMailQueue().length).to.equal(number);
if (number && email) {
expect(mailer._getMailQueue()[0].to.indexOf(email)).to.not.equal(-1);
}
mailer._clearMailQueue();
done();
}, 500);
@@ -78,7 +84,7 @@ describe('digest', function () {
it('does not send mail with digest disabled', function (done) {
digest.maybeSend(function (error) {
if (error) return done(error);
checkMails(0, done);
checkMails(0, '', done);
});
});
@@ -93,7 +99,7 @@ describe('digest', function () {
digest.maybeSend(function (error) {
if (error) return done(error);
checkMails(1, done);
checkMails(1, '', done);
});
});
@@ -103,7 +109,40 @@ describe('digest', function () {
digest.maybeSend(function (error) {
if (error) return done(error);
checkMails(1, done);
checkMails(1, '', done);
});
});
it('sends mail for pending update to appstore account email (caas)', function (done) {
var subscription = {
id: 'caas',
created: 0,
canceled_at: 0,
status: 'active',
plan: { id: 'caas' }
};
updatechecker._setUpdateInfo({ box: null, apps: { 'appid': { manifest: { version: '1.2.5', changelog: 'noop\nreally' } } } });
var fake1 = nock(config.apiServerOrigin()).post(function (uri) { return uri.indexOf('/api/v1/users/test-user/cloudrons') >= 0; }).reply(201, { cloudron: { id: 'test-cloudron' }});
var fake2 = nock(config.apiServerOrigin()).get(function (uri) { return uri.indexOf('/api/v1/users/test-user/cloudrons/test-cloudron/subscription') >= 0; }).reply(200, { subscription: subscription });
var fake3 = nock(config.apiServerOrigin()).get('/api/v1/users/test-user?accessToken=test-token').reply(200, { profile: { id: 'test-user', email: 'test@email.com' } });
settings.setAppstoreConfig({ userId: 'test-user', token: 'test-token', cloudronId: 'test-cloudron' }, function (error) {
if (error) return done(error);
digest.maybeSend(function (error) {
if (error) return done(error);
checkMails(1, 'test@email.com', function (error) {
if (error) return done(error);
expect(fake1.isDone()).to.be.ok();
expect(fake2.isDone()).to.be.ok();
expect(fake3.isDone()).to.be.ok();
done();
});
});
});
});
});
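The reworked `checkMails(number, email, done)` helper above asserts both the queue length and, optionally, the first mail's recipient. A minimal standalone sketch of that assertion against a plain in-memory queue (the queue shape `{ to: [...] }` mirrors what `mailer._getMailQueue()` returns; names here are illustrative):

```javascript
// Pure check: does the mail queue hold `number` mails, the first of
// which (when `email` is given) is addressed to `email`?
function queueMatches(queue, number, email) {
    if (queue.length !== number) return false;
    if (number && email && queue[0].to.indexOf(email) === -1) return false;
    return true;
}

// Mails are enqueued asynchronously, so the real helper defers the
// check with setTimeout before clearing the queue; this mirrors that.
function checkMails(queue, number, email, done) {
    setTimeout(function () {
        done(queueMatches(queue, number, email) ? null : new Error('mail mismatch'));
    }, 10);
}
```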
+138 -7
@@ -8,6 +8,7 @@
var async = require('async'),
AWS = require('aws-sdk'),
GCDNS = require('@google-cloud/dns'),
config = require('../config.js'),
database = require('../database.js'),
expect = require('expect.js'),
@@ -108,7 +109,7 @@ describe('dns provider', function () {
subdomains.upsert('test', 'A', [ '1.2.3.4' ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('unused');
expect(result).to.eql('3352892');
expect(req1.isDone()).to.be.ok();
expect(req2.isDone()).to.be.ok();
@@ -154,11 +155,11 @@ describe('dns provider', function () {
.reply(200, { domain_records: [ DOMAIN_RECORD_0, DOMAIN_RECORD_1 ] });
var req2 = nock(DIGITALOCEAN_ENDPOINT).filteringRequestBody(function () { return false; })
.put('/v2/domains/' + config.zoneName() + '/records/' + DOMAIN_RECORD_1.id)
.reply(200, { domain_records: DOMAIN_RECORD_1_NEW });
.reply(200, { domain_record: DOMAIN_RECORD_1_NEW });
subdomains.upsert('test', 'A', [ DOMAIN_RECORD_1_NEW.data ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('unused');
expect(result).to.eql('3352893');
expect(req1.isDone()).to.be.ok();
expect(req2.isDone()).to.be.ok();
@@ -234,17 +235,17 @@ describe('dns provider', function () {
.reply(200, { domain_records: [ DOMAIN_RECORD_0, DOMAIN_RECORD_1, DOMAIN_RECORD_2 ] });
var req2 = nock(DIGITALOCEAN_ENDPOINT).filteringRequestBody(function () { return false; })
.put('/v2/domains/' + config.zoneName() + '/records/' + DOMAIN_RECORD_1.id)
.reply(200, { domain_records: DOMAIN_RECORD_1_NEW });
.reply(200, { domain_record: DOMAIN_RECORD_1_NEW });
var req3 = nock(DIGITALOCEAN_ENDPOINT).filteringRequestBody(function () { return false; })
.put('/v2/domains/' + config.zoneName() + '/records/' + DOMAIN_RECORD_2.id)
.reply(200, { domain_records: DOMAIN_RECORD_2_NEW });
.reply(200, { domain_record: DOMAIN_RECORD_2_NEW });
var req4 = nock(DIGITALOCEAN_ENDPOINT).filteringRequestBody(function () { return false; })
.post('/v2/domains/' + config.zoneName() + '/records')
.reply(201, { domain_records: DOMAIN_RECORD_2_NEW });
.reply(201, { domain_record: DOMAIN_RECORD_2_NEW });
subdomains.upsert('', 'TXT', [ DOMAIN_RECORD_2_NEW.data, DOMAIN_RECORD_1_NEW.data, DOMAIN_RECORD_3_NEW.data ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('unused');
expect(result).to.eql('3352893');
expect(req1.isDone()).to.be.ok();
expect(req2.isDone()).to.be.ok();
expect(req3.isDone()).to.be.ok();
@@ -517,4 +518,134 @@ describe('dns provider', function () {
});
});
});
describe('gcdns', function () {
var HOSTED_ZONES = [];
var zoneQueue = [];
var _OriginalGCDNS;
before(function (done) {
var domain = 'example.com';
config.setFqdn(domain);
config.setZoneName(domain);
var dnsConfig = {
provider: 'gcdns',
projectId: 'my-dns-proj',
keyFilename: __dirname + '/syn-im-1ec6f9f870bf.json'
};
function mockery (queue) {
return function() {
var callback = arguments[--arguments.length];
var elem = queue.shift();
if (!util.isArray(elem)) throw(new Error('Mock answer required'));
// if no callback passed, return a req object with send();
if (typeof callback !== 'function') {
return {
httpRequest: { headers: {} },
send: function (callback) {
expect(callback).to.be.a(Function);
callback.apply(callback, elem);
}
};
} else {
callback.apply(callback, elem);
}
};
}
function fakeZone(name, ns, recordQueue) {
var zone = GCDNS().zone(name.replace('.', '-'));
zone.metadata.dnsName = name + '.';
zone.metadata.nameServers = ns || ['8.8.8.8', '8.8.4.4'];
zone.getRecords = mockery(recordQueue || zoneQueue);
zone.createChange = mockery(recordQueue || zoneQueue);
zone.replaceRecords = mockery(recordQueue || zoneQueue);
zone.deleteRecords = mockery(recordQueue || zoneQueue);
return zone;
}
HOSTED_ZONES = [fakeZone(domain), fakeZone('cloudron.us')];
_OriginalGCDNS = GCDNS.prototype.getZones;
GCDNS.prototype.getZones = mockery(zoneQueue);
settings.setDnsConfig(dnsConfig, config.fqdn(), config.zoneName(), done);
});
after(function () {
GCDNS.prototype.getZones = _OriginalGCDNS;
_OriginalGCDNS = null;
});
it('upsert non-existing record succeeds', function (done) {
zoneQueue.push([null, HOSTED_ZONES]); // getZone
zoneQueue.push([null, [ ]]); // getRecords
zoneQueue.push([null, {id: '1'}]);
subdomains.upsert('test', 'A', [ '1.2.3.4' ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('1');
expect(zoneQueue.length).to.eql(0);
done();
});
});
it('upsert existing record succeeds', function (done) {
zoneQueue.push([null, HOSTED_ZONES]);
zoneQueue.push([null, [GCDNS().zone('test').record('A', {'name': 'test', data:['5.6.7.8'], ttl: 1})]]);
zoneQueue.push([null, {id: '2'}]);
subdomains.upsert('test', 'A', [ '1.2.3.4' ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('2');
expect(zoneQueue.length).to.eql(0);
done();
});
});
it('upsert multiple record succeeds', function (done) {
zoneQueue.push([null, HOSTED_ZONES]);
zoneQueue.push([null, [ ]]); // getRecords
zoneQueue.push([null, {id: '3'}]);
subdomains.upsert('', 'TXT', [ 'first', 'second', 'third' ], function (error, result) {
expect(error).to.eql(null);
expect(result).to.eql('3');
expect(zoneQueue.length).to.eql(0);
done();
});
});
it('get succeeds', function (done) {
zoneQueue.push([null, HOSTED_ZONES]);
zoneQueue.push([null, [GCDNS().zone('test').record('A', {'name': 'test', data:['1.2.3.4', '5.6.7.8'], ttl: 1})]]);
subdomains.get('test', 'A', function (error, result) {
expect(error).to.eql(null);
expect(result).to.be.an(Array);
expect(result.length).to.eql(2);
expect(result).to.eql(['1.2.3.4', '5.6.7.8']);
expect(zoneQueue.length).to.eql(0);
done();
});
});
it('del succeeds', function (done) {
zoneQueue.push([null, HOSTED_ZONES]);
zoneQueue.push([null, [GCDNS().zone('test').record('A', {'name': 'test', data:['5.6.7.8'], ttl: 1})]]);
zoneQueue.push([null, {id: '5'}]);
subdomains.remove('test', 'A', ['1.2.3.4'], function (error) {
expect(error).to.eql(null);
expect(zoneQueue.length).to.eql(0);
done();
});
});
});
});
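The `mockery` helper in the gcdns tests above stubs each GCDNS method by answering calls from a shared queue of canned `[error, result]` pairs. A self-contained sketch of that pattern (simplified: the real helper also returns a `send()` request object when no callback is passed):

```javascript
// Returns a stub that answers each invocation with the next canned
// [error, result] pair from `queue`, applied to the trailing callback.
function mockery(queue) {
    return function () {
        var callback = arguments[arguments.length - 1];
        var elem = queue.shift();
        if (!Array.isArray(elem)) throw new Error('Mock answer required');
        callback.apply(null, elem);
    };
}

// usage: push one answer per expected API call, then invoke the stub
var zoneQueue = [ [ null, { id: '1' } ] ];
var createChange = mockery(zoneQueue);
createChange({ add: [] }, function (error, change) {
    // error is null, change.id is '1', zoneQueue is now empty
});
```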
+2 -3
@@ -6,12 +6,11 @@
'use strict';
var async = require('async'),
progress = require('../progress.js'),
config = require('../config.js'),
var config = require('../config.js'),
database = require('../database.js'),
expect = require('expect.js'),
nock = require('nock'),
progress = require('../progress.js'),
superagent = require('superagent'),
server = require('../server.js');
+20 -2
@@ -9,18 +9,29 @@ var async = require('async'),
config = require('../config.js'),
database = require('../database.js'),
expect = require('expect.js'),
MockS3 = require('mock-aws-s3'),
nock = require('nock'),
os = require('os'),
path = require('path'),
rimraf = require('rimraf'),
s3 = require('../storage/s3.js'),
settings = require('../settings.js'),
settingsdb = require('../settingsdb.js');
function setup(done) {
config.set('provider', 'caas');
nock.cleanAll();
async.series([
database.initialize,
settings.initialize,
function (callback) {
MockS3.config.basePath = path.join(os.tmpdir(), 's3-settings-test-buckets/');
s3._mockInject(MockS3);
// a cloudron must have a backup config to startup
settings.setBackupConfig({ provider: 'caas', token: 'foo', key: 'key'}, function (error) {
settingsdb.set(settings.BACKUP_CONFIG_KEY, JSON.stringify({ provider: 'caas', token: 'foo', key: 'key', format: 'tgz'}), function (error) {
expect(error).to.be(null);
callback();
});
@@ -29,6 +40,9 @@ function setup(done) {
}
function cleanup(done) {
s3._mockRestore();
rimraf.sync(MockS3.config.basePath);
async.series([
settings.uninitialize,
database._clear
@@ -129,7 +143,11 @@ describe('Settings', function () {
});
it('can set backup config', function (done) {
settings.setBackupConfig({ provider: 'caas', token: 'TOKEN' }, function (error) {
var scope2 = nock(config.apiServerOrigin())
.post('/api/v1/boxes/' + config.fqdn() + '/awscredentials?token=TOKEN')
.reply(201, { credentials: { AccessKeyId: 'accessKeyId', SecretAccessKey: 'secretAccessKey', SessionToken: 'sessionToken' } });
settings.setBackupConfig({ provider: 'caas', token: 'TOKEN', format: 'tgz', prefix: 'boxid', bucket: 'bucket' }, function (error) {
expect(error).to.be(null);
done();
});
+4 -2
@@ -10,11 +10,13 @@ readonly source_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)"
rm -rf $HOME/.cloudron_test 2>/dev/null || true # some of those docker container data requires sudo to be removed
mkdir -p $HOME/.cloudron_test
cd $HOME/.cloudron_test
mkdir -p appsdata boxdata/appicons platformdata/mail platformdata/addons/mail platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons configs boxdata/certs platformdata/mail/dkim/localhost platformdata/mail/dkim/foobar.com platformdata/logrotate.d/
mkdir -p configs
mkdir -p appsdata
mkdir -p boxdata/appicons boxdata/mail boxdata/certs boxdata/mail/dkim/localhost boxdata/mail/dkim/foobar.com
mkdir -p platformdata/addons/mail platformdata/nginx/cert platformdata/nginx/applications platformdata/collectd/collectd.conf.d platformdata/addons platformdata/logrotate.d platformdata/backup
# put cert
openssl req -x509 -newkey rsa:2048 -keyout platformdata/nginx/cert/host.key -out platformdata/nginx/cert/host.cert -days 3650 -subj '/CN=localhost' -nodes
# create docker network (while the infra code does this, most tests skip infra setup)
docker network create --subnet=172.18.0.0/16 cloudron || true
+3 -4
@@ -12,7 +12,7 @@ var expect = require('expect.js'),
describe('shell', function () {
it('can run valid program', function (done) {
var cp = shell.exec('test', 'ls', [ '-l' ], function (error) {
var cp = shell.exec('test', 'ls', [ '-l' ], { }, function (error) {
expect(cp).to.be.ok();
expect(error).to.be(null);
done();
@@ -20,14 +20,14 @@ describe('shell', function () {
});
it('fails on invalid program', function (done) {
var cp = shell.exec('test', 'randomprogram', [ ], function (error) {
var cp = shell.exec('test', 'randomprogram', [ ], { }, function (error) {
expect(error).to.be.ok();
done();
});
});
it('fails on failing program', function (done) {
var cp = shell.exec('test', '/usr/bin/false', [ ], function (error) {
var cp = shell.exec('test', '/usr/bin/false', [ ], { }, function (error) {
expect(error).to.be.ok();
done();
});
@@ -63,4 +63,3 @@ describe('shell', function () {
done();
});
});
+197 -246
@@ -5,227 +5,199 @@
'use strict';
var async = require('async'),
var BackupsError = require('../backups.js').BackupsError,
expect = require('expect.js'),
filesystem = require('../storage/filesystem.js'),
fs = require('fs'),
MockS3 = require('mock-aws-s3'),
noop = require('../storage/noop.js'),
os = require('os'),
path = require('path'),
readdirp = require('readdirp'),
MockS3 = require('mock-aws-s3'),
rimraf = require('rimraf'),
mkdirp = require('mkdirp'),
BackupsError = require('../backups.js').BackupsError,
config = require('../config.js'),
database = require('../database.js'),
caas = require('../storage/caas.js'),
s3 = require('../storage/s3.js'),
filesystem = require('../storage/filesystem.js'),
expect = require('expect.js'),
settings = require('../settings.js'),
SettingsError = settings.SettingsError,
stream = require('stream');
function setup(done) {
config.set('provider', 'caas');
async.series([
database.initialize,
settings.initialize,
function (callback) {
// a cloudron must have a backup config to startup
settings.setBackupConfig({ provider: 'caas', token: 'foo', key: 'key'}, function (error) {
expect(error).to.be(null);
callback();
});
}
], done);
}
function cleanup(done) {
async.series([
settings.uninitialize,
database._clear
], done);
}
function compareDirectories(one, two, callback) {
readdirp({ root: one }, function (error, treeOne) {
if (error) return callback(error);
readdirp({ root: two }, function (error, treeTwo) {
if (error) return callback(error);
var mismatch = [];
function compareDirs(a, b) {
a.forEach(function (tmpA) {
var found = b.find(function (tmpB) {
return tmpA.path === tmpB.path;
});
if (!found) mismatch.push(tmpA);
});
}
function compareFiles(a, b) {
a.forEach(function (tmpA) {
var found = b.find(function (tmpB) {
// TODO check file or symbolic link
return tmpA.path === tmpB.path && tmpA.mode === tmpB.mode;
});
if (!found) mismatch.push(tmpA);
});
}
compareDirs(treeOne.directories, treeTwo.directories);
compareDirs(treeTwo.directories, treeOne.directories);
compareFiles(treeOne.files, treeTwo.files);
compareFiles(treeTwo.files, treeOne.files);
if (mismatch.length) {
console.error('Files not found in both: %j', mismatch);
return callback(new Error('file mismatch'));
}
callback(null);
});
});
}
s3 = require('../storage/s3.js');
describe('Storage', function () {
describe('filesystem', function () {
var gBackupId_1 = 'someprefix/one';
var gBackupId_2 = 'someprefix/two';
var gTmpFolder;
var gSourceFolder;
var gDestinationFolder;
var gBackupConfig = {
provider: 'filesystem',
key: 'key',
backupFolder: null
backupFolder: null,
format: 'tgz'
};
before(function (done) {
setup(function (error) {
expect(error).to.be(null);
gTmpFolder = fs.mkdtempSync(path.join(os.tmpdir(), 'filesystem-storage-test_'));
gTmpFolder = fs.mkdtempSync(path.join(os.tmpdir(), 'filesystem-backup-test_'));
gBackupConfig.backupFolder = path.join(gTmpFolder, 'backups/');
gBackupConfig.backupFolder = path.join(gTmpFolder, 'backups/');
gSourceFolder = path.join(__dirname, 'storage');
gDestinationFolder = path.join(gTmpFolder, 'destination/');
done();
});
done();
});
after(function (done) {
cleanup(function (error) {
expect(error).to.be(null);
done()
// rimraf(gTmpFolder, done);
});
rimraf.sync(gTmpFolder);
done();
});
it('fails to set backup config for non-existing folder', function (done) {
settings.setBackupConfig(gBackupConfig, function (error) {
expect(error).to.be.a(SettingsError);
expect(error.reason).to.equal(SettingsError.BAD_FIELD);
it('can upload', function (done) {
var sourceFile = path.join(__dirname, 'storage/data/test.txt');
var sourceStream = fs.createReadStream(sourceFile);
var destFile = gTmpFolder + '/uploadtest/test.txt';
filesystem.upload(gBackupConfig, destFile, sourceStream, function (error) {
expect(error).to.be(null);
expect(fs.existsSync(destFile));
expect(fs.statSync(sourceFile).size).to.be(fs.statSync(destFile).size);
done();
});
});
it('succeeds to set backup config', function (done) {
mkdirp.sync(gBackupConfig.backupFolder);
settings.setBackupConfig(gBackupConfig, function (error) {
it('upload waits for empty file to be created', function (done) {
var sourceFile = path.join(__dirname, 'storage/data/empty');
var sourceStream = fs.createReadStream(sourceFile);
var destFile = gTmpFolder + '/uploadtest/empty';
filesystem.upload(gBackupConfig, destFile, sourceStream, function (error) {
expect(error).to.be(null);
expect(fs.existsSync(destFile));
expect(fs.statSync(sourceFile).size).to.be(fs.statSync(destFile).size);
done();
});
});
it('can backup', function (done) {
var backupMapping = [{
source: path.join(gSourceFolder, 'data'),
destination: '/datadir'
}, {
source: path.join(gSourceFolder, 'addon'),
destination: '/addondir/'
}];
filesystem.backup(gBackupConfig, gBackupId_1, backupMapping, function (error) {
it('upload unlinks old file', function (done) {
var sourceFile = path.join(__dirname, 'storage/data/test.txt');
var sourceStream = fs.createReadStream(sourceFile);
var destFile = gTmpFolder + '/uploadtest/test.txt';
var oldStat = fs.statSync(destFile);
filesystem.upload(gBackupConfig, destFile, sourceStream, function (error) {
expect(error).to.be(null);
expect(fs.existsSync(destFile)).to.be(true);
expect(fs.statSync(sourceFile).size).to.be(fs.statSync(destFile).size);
expect(oldStat.ino).to.not.be(fs.statSync(destFile).ino); // file was replaced, not overwritten in place
done();
});
});
it('can restore', function (done) {
filesystem.restore(gBackupConfig, gBackupId_1, gDestinationFolder, function (error) {
it('can download file', function (done) {
var sourceFile = gTmpFolder + '/uploadtest/test.txt';
filesystem.download(gBackupConfig, sourceFile, function (error, stream) {
expect(error).to.be(null);
compareDirectories(path.join(gSourceFolder, 'data'), path.join(gDestinationFolder, 'datadir'), function (error) {
expect(error).to.equal(null);
compareDirectories(path.join(gSourceFolder, 'addon'), path.join(gDestinationFolder, 'addondir'), function (error) {
expect(error).to.equal(null);
rimraf(gDestinationFolder, done);
});
});
});
});
it('can copy backup', function (done) {
// will be verified after removing the first and restoring from the copy
filesystem.copyBackup(gBackupConfig, gBackupId_1, gBackupId_2, done);
});
it('can remove backup', function (done) {
// will be verified with next test trying to restore the removed one
filesystem.removeBackups(gBackupConfig, [ gBackupId_1 ], done);
});
it('cannot restore deleted backup', function (done) {
filesystem.restore(gBackupConfig, gBackupId_1, gDestinationFolder, function (error) {
expect(error).to.be.an('object');
expect(error.reason).to.equal(BackupsError.NOT_FOUND);
expect(stream).to.be.an('object');
done();
});
});
it('can restore backup copy', function (done) {
filesystem.restore(gBackupConfig, gBackupId_2, gDestinationFolder, function (error) {
expect(error).to.be(null);
it('download errors for missing file', function (done) {
var sourceFile = gTmpFolder + '/uploadtest/missing';
compareDirectories(path.join(gSourceFolder, 'data'), path.join(gDestinationFolder, 'datadir'), function (error) {
expect(error).to.equal(null);
compareDirectories(path.join(gSourceFolder, 'addon'), path.join(gDestinationFolder, 'addondir'), function (error) {
expect(error).to.equal(null);
rimraf(gDestinationFolder, done);
});
});
filesystem.download(gBackupConfig, sourceFile, function (error) {
expect(error.reason).to.be(BackupsError.NOT_FOUND);
done();
});
});
it('can remove backup copy', function (done) {
filesystem.removeBackups(gBackupConfig, [ gBackupId_2 ], done);
it('download dir copies contents of source dir', function (done) {
var sourceDir = path.join(__dirname, 'storage');
var events = filesystem.downloadDir(gBackupConfig, sourceDir, gTmpFolder);
events.on('done', function (error) {
expect(error).to.be(null);
expect(fs.statSync(path.join(gTmpFolder, 'data/empty')).size).to.be(0);
done();
});
});
it('can copy', function (done) {
var sourceFile = gTmpFolder + '/uploadtest/test.txt'; // keep the test within the same device
var destFile = gTmpFolder + '/uploadtest/test-hardlink.txt';
var events = filesystem.copy(gBackupConfig, sourceFile, destFile);
events.on('done', function (error) {
expect(error).to.be(null);
expect(fs.statSync(destFile).nlink).to.be(2); // created a hardlink
done();
});
});
it('can remove file', function (done) {
var sourceFile = gTmpFolder + '/uploadtest/test-hardlink.txt';
filesystem.remove(gBackupConfig, sourceFile, function (error) {
expect(error).to.be(null);
expect(fs.existsSync(sourceFile)).to.be(false);
done();
});
});
it('can remove empty dir', function (done) {
var sourceDir = gTmpFolder + '/emptydir';
fs.mkdirSync(sourceDir);
filesystem.remove(gBackupConfig, sourceDir, function (error) {
expect(error).to.be(null);
expect(fs.existsSync(sourceDir)).to.be(false);
done();
});
});
});
describe('noop', function () {
var gBackupConfig = {
provider: 'noop',
format: 'tgz'
};
it('upload works', function (done) {
noop.upload(gBackupConfig, 'file', { }, function (error) {
expect(error).to.be(null);
done();
});
});
it('can download file', function (done) {
noop.download(gBackupConfig, 'file', function (error) {
expect(error).to.be.an(Error);
done();
});
});
it('download dir copies contents of source dir', function (done) {
var events = noop.downloadDir(gBackupConfig, 'sourceDir', 'destDir');
events.on('done', function (error) {
expect(error).to.be.an(Error);
done();
});
});
it('can copy', function (done) {
var events = noop.copy(gBackupConfig, 'sourceFile', 'destFile');
events.on('done', function (error) {
expect(error).to.be(null);
done();
});
});
it('can remove file', function (done) {
noop.remove(gBackupConfig, 'sourceFile', function (error) {
expect(error).to.be(null);
done();
});
});
it('can remove empty dir', function (done) {
noop.remove(gBackupConfig, 'sourceDir', function (error) {
expect(error).to.be(null);
done();
});
});
});
describe('s3', function () {
this.timeout(10000);
var gBackupId_1 = 'someprefix/one';
var gBackupId_2 = 'someprefix/two';
var gTmpFolder;
var gSourceFolder;
var gDestinationFolder;
var gS3Folder;
var gBackupConfig = {
provider: 's3',
key: 'key',
@@ -233,108 +205,87 @@ describe('Storage', function () {
bucket: 'cloudron-storage-test',
accessKeyId: 'testkeyid',
secretAccessKey: 'testsecret',
region: 'eu-central-1'
region: 'eu-central-1',
format: 'tgz'
};
before(function (done) {
before(function () {
MockS3.config.basePath = path.join(os.tmpdir(), 's3-backup-test-buckets/');
rimraf.sync(MockS3.config.basePath);
gS3Folder = path.join(MockS3.config.basePath, gBackupConfig.bucket);
s3._mockInject(MockS3);
setup(function (error) {
expect(error).to.be(null);
gTmpFolder = fs.mkdtempSync(path.join(os.tmpdir(), 's3-backup-test_'));
gSourceFolder = path.join(__dirname, 'storage');
gDestinationFolder = path.join(gTmpFolder, 'destination/');
settings.setBackupConfig(gBackupConfig, function (error) {
expect(error).to.be(null);
done();
});
});
});
after(function (done) {
after(function () {
s3._mockRestore();
rimraf.sync(MockS3.config.basePath);
});
cleanup(function (error) {
it('can upload', function (done) {
var sourceFile = path.join(__dirname, 'storage/data/test.txt');
var sourceStream = fs.createReadStream(sourceFile);
var destKey = 'uploadtest/test.txt';
s3.upload(gBackupConfig, destKey, sourceStream, function (error) {
expect(error).to.be(null);
rimraf(gTmpFolder, done);
expect(fs.existsSync(path.join(gS3Folder, destKey))).to.be(true);
expect(fs.statSync(path.join(gS3Folder, destKey)).size).to.be(fs.statSync(sourceFile).size);
done();
});
});
it('can backup', function (done) {
var backupMapping = [{
source: path.join(gSourceFolder, 'data'),
destination: '/datadir'
}, {
source: path.join(gSourceFolder, 'addon'),
destination: '/addondir/'
}];
s3.backup(gBackupConfig, gBackupId_1, backupMapping, function (error) {
it('can download file', function (done) {
var sourceKey = 'uploadtest/test.txt';
s3.download(gBackupConfig, sourceKey, function (error, stream) {
expect(error).to.be(null);
expect(stream).to.be.an('object');
done();
});
});
it('download dir copies contents of source dir', function (done) {
var sourceFile = path.join(__dirname, 'storage/data/test.txt');
var sourceKey = '';
var destDir = path.join(os.tmpdir(), 's3-destdir');
var events = s3.downloadDir(gBackupConfig, sourceKey, destDir);
events.on('done', function (error) {
expect(error).to.be(null);
expect(fs.statSync(path.join(destDir, 'uploadtest/test.txt')).size).to.be(fs.statSync(sourceFile).size);
done();
});
});
it('can copy', function (done) {
fs.writeFileSync(path.join(gS3Folder, 'uploadtest/C++.gitignore'), 'special', 'utf8');
var sourceKey = 'uploadtest';
var events = s3.copy(gBackupConfig, sourceKey, 'uploadtest-copy');
events.on('done', function (error) {
var sourceFile = path.join(__dirname, 'storage/data/test.txt');
expect(error).to.be(null);
expect(fs.statSync(path.join(gS3Folder, 'uploadtest-copy/test.txt')).size).to.be(fs.statSync(sourceFile).size);
expect(fs.statSync(path.join(gS3Folder, 'uploadtest-copy/C++.gitignore')).size).to.be(7);
done();
});
});
it('can restore', function (done) {
s3.restore(gBackupConfig, gBackupId_1, gDestinationFolder, function (error) {
it('can remove file', function (done) {
s3.remove(gBackupConfig, 'uploadtest-copy/test.txt', function (error) {
expect(error).to.be(null);
compareDirectories(path.join(gSourceFolder, 'data'), path.join(gDestinationFolder, 'datadir'), function (error) {
expect(error).to.equal(null);
compareDirectories(path.join(gSourceFolder, 'addon'), path.join(gDestinationFolder, 'addondir'), function (error) {
expect(error).to.equal(null);
rimraf(gDestinationFolder, done);
});
});
});
});
it('can copy backup', function (done) {
// will be verified after removing the first and restoring from the copy
s3.copyBackup(gBackupConfig, gBackupId_1, gBackupId_2, done);
});
it('can remove backup', function (done) {
// will be verified with next test trying to restore the removed one
s3.removeBackups(gBackupConfig, [ gBackupId_1 ], done);
});
it('cannot restore deleted backup', function (done) {
s3.restore(gBackupConfig, gBackupId_1, gDestinationFolder, function (error) {
expect(error).to.be.an('object');
expect(error.reason).to.equal(BackupsError.NOT_FOUND);
expect(fs.existsSync(path.join(gS3Folder, 'uploadtest-copy/test.txt'))).to.be(false);
done();
});
});
it('can restore backup copy', function (done) {
s3.restore(gBackupConfig, gBackupId_2, gDestinationFolder, function (error) {
it('can remove non-existent dir', function (done) {
noop.remove(gBackupConfig, 'blah', function (error) {
expect(error).to.be(null);
compareDirectories(path.join(gSourceFolder, 'data'), path.join(gDestinationFolder, 'datadir'), function (error) {
expect(error).to.equal(null);
compareDirectories(path.join(gSourceFolder, 'addon'), path.join(gDestinationFolder, 'addondir'), function (error) {
expect(error).to.equal(null);
rimraf(gDestinationFolder, done);
});
});
done();
});
});
it('can remove backup copy', function (done) {
s3.removeBackups(gBackupConfig, [ gBackupId_2 ], done);
});
});
});
+12
@@ -0,0 +1,12 @@
{
"type": "service_account",
"project_id": "second-metrics-123822",
"private_key_id": "12345678912345678659bacc6a9ba2bd495304e8",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvwIBADANBgkqhkiG9w0BAQEFAA123456789lAgEAAoIBAQDbogLCpOPRIgFD\nQGMB1A3RO234567898+gQIEzr3wM+kgDuh5lsm5C5YBGHppC5dD2GdWDGFxBhSGP\nYWW4F8593TDsSoeg3tsffIHQ4gTA3q/V4q5M4fEi+XWL123450KUjLYVs6DgZk28\nF5TbHFHxQjRvbQTj1245lQXsqR/WEBIShA0HwrF6ayz2n123456FuR7kCQ6Srxv6\n4d3i2S3KrkabcEzCHoo3NKeXj3f/cbWapoJaaaaaaaNfK3dEyuicoJaoFYspK194\n9qStl9opEwWzeiW95Ps5P4WFdQq5bSWw4LatEcdxnre7IR11ZVjktZwg6bU19Aes\ntfvSOiMhAgMBAAECggEAUm5y5MToMDS4DpqazjPdX7pFdSDnZgzxfy7WjyR8xY4l\n+ygegoK+eWMTir2vng4NKGC3zvUUow6pctvWRorA2GJtGzI5xzn9OcsMGe7KY+zw\nR7FFQ9vFGiBQasOdNfh8/630JR7+8VnUMRUUrEvrwUXc0jkzjYYaZuCtkY5EQZ2i\nfGm4I25uNckd2BNCvyc+bMT/zXD33vaMUe9B/Dy8lnKHstKds6C9lweocu/7XpMd\nRhzOeIxLYoPpdNveVNsfztj5y3bKaopxaEQoGIdXrNrBANiHumbEjF6VcOmQ6xnp\n5L+5ht1c4hLeydn/O7/xQeuik72+aD9PQE3Xz1aOlQKBgQDuGf/PURwVC/h2Jiz6\nSdZNAeCpeE/7S4iumksjXdp1gT+6rLv4UumpDX7DXNbRELwFBJMOZu9tM6DrHnMZ\nt+5U9qkS4r5tKoOBHv3YvabH5hkj5oeSFQfi2CDHS9hURuQUwvTYfZluJXDseswX\nAHF3NJzsnGxYqPQ+t3SMLvCi7wKBgQDsJJoMzkfsxExlhGH3zwmhcp5t7DBAYCKC\nz/IpZhY48iFdEAS1NA35F8EofN5d3TWPzVW2oT4f4Qz7VtKp9OHwtFpHnsSpBLGX\nB66gh5wO+yscvnRTFIGPNBg9fHNn70OQJaixelhDp7/BBfpcusF6ByND4F29vRiv\ng9n99lZa7wKBgQCsjpseHKJFfo9q0O/31FtDJAE10MPmUy+Tmq6pyvLwBeOx3k28\nAhrlMaqU20uz6HTbDh2lamRKuAf/Xen80Zgga0LNNRbc9tqnUVaXJZshdFjz87Z8\n4FD+zbOzu/vj2BykD0ZzP1NayDe2qqgOY3vX8IFp2VOMTaX1be9BSSOMcQKBgQCw\nObBtFhQ+8U9CA0VJNcyuG2d4COcJY7Tdgmnp0zGKVcfoN2gMAkjbN4sCuA0KZ2bt\nZgMtQ6+lAsI5X1XfV8y1YSJuiGGi8MnHOAht7EXeODq4PLl6trbpM6tTV2iYi8oT\n7Mazi+YKt0k2t0tboFN6yZDburi6PEAL2433JLrVKQKBgQDZOwr1XOEuOos1LeQC\nMqO6c4bHo615jtX0NcWLhpN/Z8oHEgIbz0/O0KpcuLrxaCquxpDX9n0pALzg3AoX\nXnvuuq4r9pjoD5LN2qD4+SCIwUhB8ubGnlc97vWtkWnK7Wn49QMnqo2OfeCctk6S\nF36saiVnMV5alSVlU+ThqI8aLw==\n-----END PRIVATE KEY-----\n",
"client_email": "123456789349-compute@developer.gserviceaccount.com",
"client_id": "123456789012345506547",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/735900230349-compute%40developer.gserviceaccount.com"
}
+330
@@ -0,0 +1,330 @@
/* global it:false */
/* global describe:false */
/* global before:false */
/* global after:false */
'use strict';
var createTree = require('./common.js').createTree,
execSync = require('child_process').execSync,
expect = require('expect.js'),
fs = require('fs'),
os = require('os'),
path = require('path'),
paths = require('../paths.js'),
safe = require('safetydance'),
syncer = require('../syncer.js');
var gTasks = [ ],
gTmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'syncer-test')),
gCacheFile = path.join(paths.BACKUP_INFO_DIR, path.basename(gTmpDir) + '.sync.cache');
function collectTasks(task, callback) {
gTasks.push(task);
callback();
}
describe('Syncer', function () {
before(function () {
console.log('Tests are run in %s with cache file %s', gTmpDir, gCacheFile);
});
it('missing cache - removes remote dir', function (done) {
gTasks = [ ];
safe.fs.unlinkSync(gCacheFile);
createTree(gTmpDir, { });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'removedir', path: '', reason: 'nocache' }
]);
done();
});
});
it('empty cache - adds all', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'add', path: 'src/index.js', reason: 'new' },
{ operation: 'add', path: 'test/test.js', reason: 'new' },
{ operation: 'add', path: 'walrus', reason: 'new' }
]);
done();
});
});
it('empty cache - deep', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { a: { b: { c: { d: { e: 'some code' } } } } });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'add', path: 'a/b/c/d/e', reason: 'new' }
]);
done();
});
});
it('ignores special files', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'link:file': '/tmp', 'readme': 'this is readme' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'add', path: 'readme', reason: 'new' }
]);
done();
});
});
it('adds changed files', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(3);
execSync(`touch src/index.js test/test.js`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'add', path: 'src/index.js', reason: 'changed' },
{ operation: 'add', path: 'test/test.js', reason: 'changed' }
]);
done();
});
});
});
it('removes missing files', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(3);
execSync(`rm src/index.js walrus`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'remove', path: 'src/index.js', reason: 'missing' },
{ operation: 'remove', path: 'walrus', reason: 'missing' }
]);
done();
});
});
});
it('removes missing dirs', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(3);
execSync(`rm -rf src test`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'removedir', path: 'src', reason: 'missing' },
{ operation: 'removedir', path: 'test', reason: 'missing' }
]);
done();
});
});
});
it('all files disappeared', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(3);
execSync(`find . -delete`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'removedir', path: 'src', reason: 'missing' },
{ operation: 'removedir', path: 'test', reason: 'missing' },
{ operation: 'remove', path: 'walrus', reason: 'missing' }
]);
done();
});
});
});
it('no redundant deletes', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { a: { b: { c: { d: { e: 'some code' } } } } });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(1);
execSync(`rm -r a/b; touch a/f`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'removedir', path: 'a/b', reason: 'missing' },
{ operation: 'add', path: 'a/f', reason: 'new' }
]);
done();
});
});
});
it('file became dir', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'data': { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'This is a README' }, 'walrus': 'animal' } });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(3);
execSync(`rm data/test/test.js; mkdir data/test/test.js; touch data/test/test.js/trick`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'remove', path: 'data/test/test.js', reason: 'wasfile' },
{ operation: 'add', path: 'data/test/test.js/trick', reason: 'new' }
]);
done();
});
});
});
it('dir became file', function (done) {
gTasks = [ ];
fs.writeFileSync(gCacheFile, '', 'utf8');
createTree(gTmpDir, { 'src': { 'index.js': 'some code' }, 'test': { 'test.js': 'this', 'test2.js': 'test' }, 'walrus': 'animal' });
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(4);
execSync(`rm -r test; touch test`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'removedir', path: 'test', reason: 'wasdir' },
{ operation: 'add', path: 'test', reason: 'wasdir' }
]);
done();
});
});
});
it('is complicated', function (done) {
gTasks = [ ];
createTree(gTmpDir, {
a: 'data',
a2: 'data',
b: 'data',
file: 'data',
g: {
file: 'data'
},
j: {
k: { },
l: {
file: 'data'
},
m: { }
}
});
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
execSync(`rm a; \
mkdir a; \
touch a/file; \
rm a2; \
touch b; \
rm file g/file; \
ln -s /tmp h; \
rm -r j/l; \
touch j/k/file; \
rmdir j/m;`, { cwd: gTmpDir });
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks).to.eql([
{ operation: 'remove', path: 'a', reason: 'wasfile' },
{ operation: 'remove', path: 'a2', reason: 'missing' },
{ operation: 'remove', path: 'file', reason: 'missing' },
{ operation: 'remove', path: 'g/file', reason: 'missing' },
{ operation: 'removedir', path: 'j/l', reason: 'missing' },
{ operation: 'removedir', path: 'j/m', reason: 'missing' },
{ operation: 'add', path: 'a/file', reason: 'new' },
{ operation: 'add', path: 'b', reason: 'changed' },
{ operation: 'add', path: 'j/k/file', reason: 'new' }
]);
gTasks = [ ];
syncer.sync(gTmpDir, collectTasks, 10, function (error) {
expect(error).to.not.be.ok();
expect(gTasks.length).to.be(0);
done();
});
});
});
});
});
File diff suppressed because one or more lines are too long
+4
@@ -66,6 +66,10 @@
};
term._sendData = function (data) {
if (socket.readyState !== 1) {
return;
}
socket.send(data);
};
+14 -23
@@ -3,16 +3,9 @@
<head>
<meta charset="utf-8" />
<meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height" />
<meta http-equiv="Content-Security-Policy" content="default-src 'unsafe-inline' 'self' <%= apiOriginHostname %>; img-src 'self' <%= apiOriginHostname %>;" />
<title> Cloudron App Error </title>
<!-- fonts and CSS -->
<link href="3rdparty/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="3rdparty/bootstrap-3.3.7.min.css" type="text/css">
<!-- Use static style as we can't include local stylesheets -->
<style>
@@ -31,17 +24,12 @@
text-align: center;
}
.wrapper {
display: table;
width: 100%;
height: 100%;
}
.content {
display: table-cell;
display: flex;
width: 100%;
height: 100%;
vertical-align: middle;
flex-direction: column;
justify-content: center;
}
footer {
@@ -58,6 +46,11 @@
footer a {
color: #62bdfc;
padding: 10px;
text-decoration: none;
}
h1, h2, p {
margin: 10px;
}
</style>
@@ -66,17 +59,15 @@
<body>
<div class="wrapper">
<div class="content">
<h1><i class="fa fa-frown-o fa-fw text-danger"></i></h1>
<h2>Something has gone wrong</h2>
This app is currently not running. Try refreshing the page.
</div>
<div class="content">
<h1>&#128577;</h1>
<h2>Something has gone wrong</h2>
<p>This app is currently not responding. Try refreshing the page.</p>
</div>
<footer class="text-center">
<span class="text-muted">&copy;2017 <a href="https://cloudron.io" target="_blank">Cloudron</a></span>
<span class="text-muted"><a href="https://twitter.com/cloudron_io" target="_blank">Twitter <i class="fa fa-twitter"></i></a></span>
<span class="text-muted"><a href="https://cloudron.io" target="_blank">Cloudron</a></span>
<span class="text-muted"><a href="https://twitter.com/cloudron_io" target="_blank">Twitter</a></span>
<span class="text-muted"><a href="https://chat.cloudron.io" target="_blank">Chat <i class="fa fa-comments"></i></a></span>
</footer>
+2 -9
@@ -52,6 +52,7 @@
<script src="3rdparty/js/Chart.js"></script>
<script src="3rdparty/js/ansi_up.js"></script>
<script src="3rdparty/js/clipboard.min.js"></script>
<!-- Showdown (markdown converter) -->
<script src="3rdparty/js/showdown-1.6.4.min.js"></script>
@@ -130,14 +131,6 @@
<div ng-hide="config.provider !== 'caas' && config.update.box.upgrade">
<fieldset>
<form name="update_form" role="form" ng-submit="doUpdate()" autocomplete="off">
<div class="form-group" ng-class="{ 'has-error': (update_form.password.$dirty && update_form.password.$invalid) || (!update_form.password.$dirty && update.error.password) }">
<label class="control-label" for="inputUpdatePassword">Give your password to verify that you are performing that action</label>
<div class="control-label" ng-show="(update_form.password.$dirty && update_form.password.$invalid) || (!update_form.password.$dirty && update.error.password)">
<small ng-show=" update_form.password.$dirty && update_form.password.$invalid">Password required</small>
<small ng-show="!update_form.password.$dirty && update.error.password">Wrong password</small>
</div>
<input type="password" class="form-control" ng-model="update.password" id="inputUpdatePassword" name="password" placeholder="Password" required autofocus>
</div>
<input class="ng-hide" type="submit" ng-disabled="update_form.$invalid || update.busy"/>
</form>
</fieldset>
@@ -217,7 +210,7 @@
<a ng-class="{ active: isActive('/users')}" href="#/users"><i class="fa fa-users fa-fw"></i> Users</a>
</li>
<li class="dropdown">
<a href="" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><img ng-src="{{user.gravatar}}"/> {{user.username}} <span class="caret"></span></a>
<a href="" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><img ng-src="{{user.gravatar}}" style="margin-top: -4px;"/> {{user.username}} <span class="caret"></span></a>
<ul class="dropdown-menu" role="menu">
<li><a href="#/account"><i class="fa fa-user fa-fw"></i> Account</a></li>
<li ng-show="user.admin"><a href="#/activity"><i class="fa fa-list-alt fa-fw"></i> Activity</a></li>
+40 -6
@@ -105,6 +105,7 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
function Client() {
this._ready = false;
this._configListener = [];
this._appsListener = [];
this._readyListener = [];
this._userInfo = {
id: null,
@@ -198,6 +199,11 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
if (this._config && this._config.apiServerOrigin) callback(this._config);
};
Client.prototype.onApps = function (callback) {
this._appsListener.push(callback);
callback(this._installedApps);
};
Client.prototype.resetAvatar = function () {
this.avatar = this.apiOrigin + '/api/v1/cloudron/avatar?' + String(Math.random()).slice(2);
@@ -339,10 +345,9 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}).error(defaultErrorHandler(callback));
};
Client.prototype.configureApp = function (id, password, config, callback) {
Client.prototype.configureApp = function (id, config, callback) {
var data = {
appId: id,
password: password,
location: config.location,
portBindings: config.portBindings,
accessRestriction: config.accessRestriction,
@@ -361,10 +366,9 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}).error(defaultErrorHandler(callback));
};
Client.prototype.updateApp = function (id, manifest, portBindings, password, callback) {
Client.prototype.updateApp = function (id, manifest, portBindings, callback) {
var data = {
appStoreId: manifest.id + '@' + manifest.version,
password: password,
portBindings: portBindings
};
@@ -388,6 +392,21 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}).error(defaultErrorHandler(callback));
};
Client.prototype.debugApp = function (id, state, callback) {
var data = {
appId: id,
debugMode: state ? {
readonlyRootfs: true,
cmd: [ '/bin/bash', '-c', 'echo "Repair mode. Use the webterminal or cloudron exec to repair. Sleeping" && sleep infinity' ]
} : null
};
post('/api/v1/apps/' + id + '/configure', data).success(function (data, status) {
if (status !== 202) return callback(new ClientError(status, data));
callback(null);
}).error(defaultErrorHandler(callback));
};
Client.prototype.progress = function (callback, errorCallback) {
// this is used in the defaultErrorHandler itself, and avoids a loop
if (typeof errorCallback !== 'function') errorCallback = defaultErrorHandler(callback);
@@ -800,8 +819,8 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}).error(defaultErrorHandler(callback));
};
Client.prototype.update = function (password, callback) {
var data = { password: password };
Client.prototype.update = function (callback) {
var data = { };
post('/api/v1/cloudron/update', data).success(function(data, status) {
if (status !== 202 || typeof data !== 'object') return callback(new ClientError(status, data));
@@ -1053,6 +1072,10 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}
}
that._appsListener.forEach(function (callback) {
callback(that._installedApps);
});
callback(null);
});
};
@@ -1134,6 +1157,17 @@ angular.module('Application').service('Client', ['$http', 'md5', 'Notification',
}).error(defaultErrorHandler(callback));
};
Client.prototype.sentTestMail = function (email, callback) {
var data = {
email: email
};
post('/api/v1/cloudron/send_test_mail', data).success(function(data, status) {
if (status !== 202) return callback(new ClientError(status, data));
callback(null);
}).error(defaultErrorHandler(callback));
};
client = new Client();
return client;
}]);
+13 -3
@@ -3,9 +3,18 @@
/* global angular:false */
/* global showdown:false */
// deal with accessToken in the query, this is passed for example on password reset
// deal with accessToken in the query, this is passed for example on password reset and account setup upon invite
var search = decodeURIComponent(window.location.search).slice(1).split('&').map(function (item) { return item.split('='); }).reduce(function (o, k) { o[k[0]] = k[1]; return o; }, {});
if (search.accessToken) localStorage.token = search.accessToken;
if (search.accessToken) {
localStorage.token = search.accessToken;
// strip the accessToken and expiresAt, then preserve the rest
delete search.accessToken;
delete search.expiresAt;
// this will reload the page as this is not a hash change
window.location.search = encodeURIComponent(Object.keys(search).map(function (key) { return key + '=' + search[key]; }).join('&'));
}
// create main application module
@@ -286,10 +295,10 @@ var ACTION_APP_INSTALL = 'app.install';
var ACTION_APP_RESTORE = 'app.restore';
var ACTION_APP_UNINSTALL = 'app.uninstall';
var ACTION_APP_UPDATE = 'app.update';
var ACTION_APP_UPDATE = 'app.update';
var ACTION_APP_LOGIN = 'app.login';
var ACTION_BACKUP_FINISH = 'backup.finish';
var ACTION_BACKUP_START = 'backup.start';
var ACTION_BACKUP_CLEANUP = 'backup.cleanup';
var ACTION_CERTIFICATE_RENEWAL = 'certificate.renew';
var ACTION_CLI_MODE = 'settings.climode';
var ACTION_START = 'cloudron.start';
@@ -316,6 +325,7 @@ app.filter('eventLogDetails', function() {
case ACTION_APP_LOGIN: return 'App ' + data.appId + ' logged in';
case ACTION_BACKUP_START: return 'Backup started';
case ACTION_BACKUP_FINISH: return 'Backup finished. ' + (errorMessage ? ('error: ' + errorMessage) : ('id: ' + data.filename));
case ACTION_BACKUP_CLEANUP: return 'Backup ' + data.backup.id + ' removed';
case ACTION_CERTIFICATE_RENEWAL: return 'Certificate renewal for ' + data.domain + (errorMessage ? ' failed' : ' succeeded');
case ACTION_CLI_MODE: return 'CLI mode was ' + (data.enabled ? 'enabled' : 'disabled');
case ACTION_START: return 'Cloudron started with version ' + data.version;
+3 -15
@@ -12,8 +12,7 @@ angular.module('Application').controller('MainController', ['$scope', '$route',
$scope.update = {
busy: false,
error: {},
password: ''
error: {}
};
$scope.isActive = function (url) {
@@ -77,8 +76,6 @@ angular.module('Application').controller('MainController', ['$scope', '$route',
$scope.showUpdateModal = function (form) {
$scope.update.error.generic = null;
$scope.update.error.password = null;
$scope.update.password = '';
form.$setPristine();
form.$setUntouched();
@@ -98,21 +95,12 @@ angular.module('Application').controller('MainController', ['$scope', '$route',
$scope.doUpdate = function () {
$scope.update.error.generic = null;
$scope.update.error.password = null;
$scope.update.busy = true;
Client.update($scope.update.password, function (error) {
Client.update(function (error) {
if (error) {
if (error.statusCode === 403) {
$scope.update.error.password = true;
$scope.update.password = '';
$scope.update_form.password.$setPristine();
$('#inputUpdatePassword').focus();
} else if (error.statusCode === 409) {
if (error.statusCode === 409) {
$scope.update.error.generic = 'Please try again later. The Cloudron is creating a backup at the moment.';
$scope.update.password = '';
$scope.update_form.password.$setPristine();
$('#inputUpdatePassword').focus();
} else {
$scope.update.error.generic = error.message;
console.error('Unable to update.', error);
+43 -2
@@ -21,6 +21,7 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
$scope.showDNSSetup = false;
$scope.instanceId = '';
$scope.explicitZone = search.zone || '';
$scope.isEnterprise = !!search.enterprise;
$scope.isDomain = false;
$scope.isSubdomain = false;
@@ -40,8 +41,9 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
// keep in sync with certs.js
$scope.dnsProvider = [
{ name: 'AWS Route53', value: 'route53' },
{ name: 'Digital Ocean', value: 'digitalocean' },
{ name: 'Cloudflare (DNS only)', value: 'cloudflare' },
{ name: 'Digital Ocean', value: 'digitalocean' },
{ name: 'Google Cloud DNS', value: 'gcdns' },
{ name: 'Wildcard', value: 'wildcard' },
{ name: 'Manual (not recommended)', value: 'manual' },
{ name: 'No-op (only for development)', value: 'noop' }
@@ -52,10 +54,29 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
domain: '',
accessKeyId: '',
secretAccessKey: '',
gcdnsKey: { keyFileName: '', content: '' },
digitalOceanToken: '',
provider: 'route53'
};
function readFileLocally(obj, file, fileName) {
return function (event) {
$scope.$apply(function () {
obj[file] = null;
obj[fileName] = event.target.files[0].name;
var reader = new FileReader();
reader.onload = function (result) {
if (!result.target || !result.target.result) return console.error('Unable to read local file');
obj[file] = result.target.result;
};
reader.readAsText(event.target.files[0]);
});
};
}
document.getElementById('gcdnsKeyFileInput').onchange = readFileLocally($scope.dnsCredentials.gcdnsKey, 'content', 'keyFileName');
$scope.setDnsCredentials = function () {
$scope.dnsCredentials.busy = true;
$scope.dnsCredentials.error = null;
@@ -77,6 +98,23 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
if (data.provider === 'route53') {
data.accessKeyId = $scope.dnsCredentials.accessKeyId;
data.secretAccessKey = $scope.dnsCredentials.secretAccessKey;
} else if (data.provider === 'gcdns'){
try {
var serviceAccountKey = JSON.parse($scope.dnsCredentials.gcdnsKey.content);
data.projectId = serviceAccountKey.project_id;
data.credentials = {
client_email: serviceAccountKey.client_email,
private_key: serviceAccountKey.private_key
};
if (!data.projectId || !data.credentials || !data.credentials.client_email || !data.credentials.private_key) {
throw "fields_missing";
}
} catch(e) {
$scope.dnsCredentials.error = "Cannot parse Google Service Account Key";
$scope.dnsCredentials.busy = false;
return;
}
} else if (data.provider === 'digitalocean') {
data.token = $scope.dnsCredentials.digitalOceanToken;
} else if (data.provider === 'cloudflare') {
@@ -103,7 +141,9 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
$scope.busy = true;
Client.getStatus(function (error, status) {
if (!error && status.adminFqdn && status.webadminStatus.dns && status.webadminStatus.tls) {
// webadminStatus.dns is intentionally not tested. it can be false if dns creds are invalid
// runConfigurationChecks() in main.js will pick the .dns and show a notification
if (!error && status.adminFqdn && status.webadminStatus.tls) {
window.location.href = 'https://' + status.adminFqdn + '/setup.html';
}
@@ -121,6 +161,7 @@ app.controller('SetupDNSController', ['$scope', '$http', 'Client', function ($sc
if (status.adminFqdn) return waitForDnsSetup();
if (status.provider === 'digitalocean') $scope.dnsCredentials.provider = 'digitalocean';
if (status.provider === 'gcp') $scope.dnsCredentials.provider = 'gcdns';
if (status.provider === 'ami') {
// remove route53 on ami
$scope.dnsProvider.shift();
+16 -3
@@ -58,7 +58,7 @@
<div class="form-group" style="margin-bottom: 0;" ng-class="{ 'has-error': dnsCredentialsForm.domain.$dirty && dnsCredentialsForm.domain.$invalid }">
<input type="text" class="form-control" ng-model="dnsCredentials.domain" name="domain" placeholder="example.com" required autofocus ng-disabled="dnsCredentials.busy">
</div>
<p ng-show="isSubdomain" class="text-bold">Installing Cloudron on a subdomain requires an enterprise subscription.</p>
<p ng-show="isSubdomain" class="text-bold">Installing on a subdomain requires an enterprise subscription. <a href="mailto:support@cloudron.io">Contact us</a></p>
</div>
</div>
<div class="row">
@@ -80,6 +80,19 @@
<span ng-show="isDomain || explicitZone"><b>{{ explicitZone ? explicitZone : (dnsCredentials.domain | zoneName) }}</b> must be hosted on <a href="https://aws.amazon.com/route53/?nc2=h_m1" target="_blank">AWS Route53</a>.</span>
</div>
<!-- Google Cloud DNS -->
<div class="form-group" ng-class="{ 'has-error': false }" ng-show="dnsCredentials.provider === 'gcdns'">
<div class="input-group">
<input type="file" id="gcdnsKeyFileInput" style="display:none"/>
<input type="text" class="form-control" placeholder="Service Account Key" ng-model="dnsCredentials.gcdnsKey.keyFileName" id="gcdnsKeyInput" name="cert" onclick="getElementById('gcdnsKeyFileInput').click();" style="cursor: pointer;" ng-required="dnsCredentials.provider === 'gcdns'" ng-disabled="dnsCredentials.busy">
<span class="input-group-addon">
<i class="fa fa-upload" onclick="getElementById('gcdnsKeyFileInput').click();"></i>
</span>
</div>
<br/>
<span ng-show="isDomain || explicitZone"><b>{{ explicitZone ? explicitZone : (dnsCredentials.domain | zoneName) }}</b> must be hosted on <a href="https://console.cloud.google.com/net-services/dns/zones" target="_blank">Google Cloud DNS</a>.</span>
</div>
<!-- DigitalOcean -->
<div class="form-group" ng-class="{ 'has-error': dnsCredentialsForm.digitalOceanToken.$dirty && dnsCredentialsForm.digitalOceanToken.$invalid }" ng-show="dnsCredentials.provider === 'digitalocean'">
<input type="text" class="form-control" ng-model="dnsCredentials.digitalOceanToken" name="digitalOceanToken" placeholder="API Token" ng-required="dnsCredentials.provider === 'digitalocean'" ng-disabled="dnsCredentials.busy">
@@ -94,7 +107,7 @@
<div class="form-group" ng-class="{ 'has-error': dnsCredentialsForm.cloudflareEmail.$dirty && dnsCredentialsForm.cloudflareEmail.$invalid }" ng-show="dnsCredentials.provider === 'cloudflare'">
<input type="email" class="form-control" ng-model="dnsCredentials.cloudflareEmail" name="cloudflareEmail" placeholder="Cloudflare Account Email" ng-required="dnsCredentials.provider === 'cloudflare'" ng-disabled="dnsCredentials.busy">
<br/>
<span>{{ dnsCredentials.domain || 'The domain' }} must be hosted on <a href="https://www.cloudflare.com" target="_blank">Cloudflare</a>.</span>
<span ng-show="isDomain || explicitZone"><b>{{ explicitZone ? explicitZone : (dnsCredentials.domain | zoneName) }}</b> must be hosted on <a href="https://www.cloudflare.com" target="_blank">Cloudflare</a>.</span>
</div>
<!-- Wildcard -->
@@ -124,7 +137,7 @@
</div>
<div class="row">
<div class="col-md-12 text-center">
<button type="submit" class="btn btn-primary" ng-disabled="dnsCredentialsForm.$invalid"/><i class="fa fa-circle-o-notch fa-spin" ng-show="dnsCredentials.busy"></i> Next</button>
<button type="submit" class="btn btn-primary" ng-disabled="dnsCredentialsForm.$invalid || (isSubdomain && !isEnterprise)"/><i class="fa fa-circle-o-notch fa-spin" ng-show="dnsCredentials.busy"></i> Next</button>
</div>
</div>
</form>
+36 -6
@@ -102,6 +102,12 @@ $table-border-color: transparent !default;
width: 300px;
}
.dropdown-menu>li>a:focus, .dropdown-menu>li>a:hover {
background-color: $brand-primary;
color: white;
}
// ----------------------------
// Main classes
// ----------------------------
@@ -142,13 +148,17 @@ html, body {
}
.content {
width: 720px;
width: 100%;
max-width: 720px;
margin: 0 auto;
&.content-large {
width: 970px;
max-width: 970px;
@media(min-width:768px) {
width: 720px;
&.content-large {
width: 970px;
max-width: 970px;
}
}
}
@@ -514,7 +524,7 @@ h1, h2, h3 {
.grid-item-top .progress {
border-radius: 0;
box-shadown: none;
box-shadow: none;
margin-top: 10px;
width: inherit;
height: 10px;
@@ -522,7 +532,7 @@ h1, h2, h3 {
.grid-item-top .progress-bar {
border-radius: 0;
box-shadown: none;
box-shadow: none;
}
.app-icon {
@@ -1136,4 +1146,24 @@ footer {
color: #00FFFF;
}
}
.dont-overflow {
overflow: hidden;
}
&.placeholder {
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
}
}
.contextMenuBackdrop {
display: none;
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
+1 -1
@@ -10,7 +10,7 @@
<div class="filter">
<input type="text" class="form-control" ng-model="search" ng-model-options="{ debounce: 1000 }" ng-change="updateFilter()" placeholder="Search"/>
<select class="form-control" ng-model="action" ng-options="a.name for a in actions" ng-change="updateFilter()">
<option value="">-- all actions --</option>
<option value="">-- All actions --</option>
</select>
<select class="form-control" ng-model="pageItems" ng-options="a.name for a in pageItemCount" ng-change="updateFilter(true)">
</select>
+1
@@ -15,6 +15,7 @@ angular.module('Application').controller('ActivityController', ['$scope', '$loca
{ name: 'app.uninstall', value: 'app.uninstall' },
{ name: 'app.update', value: 'app.update' },
{ name: 'app.login', value: 'app.login' },
{ name: 'backup.cleanup', value: 'backup.cleanup' },
{ name: 'backup.finish', value: 'backup.finish' },
{ name: 'backup.start', value: 'backup.start' },
{ name: 'certificate.renew', value: 'certificate.renew' },
+14 -24
@@ -42,8 +42,8 @@
<br>
</p>
<p class="text-center" ng-show="appConfigure.location && dnsConfig.provider === 'manual' && !dnsConfig.wildcard">
<b>Do not forget, to add an A record for {{ appConfigure.location }}.{{ config.fqdn }}</b>
<p class="text-center" ng-show="!appConfigure.usingAltDomain && appConfigure.location && dnsConfig.provider === 'manual' && !dnsConfig.wildcard">
<b>Do not forget to add an A record for {{ appConfigure.location }}.{{ config.fqdn }}</b>
<br>
</p>
@@ -57,8 +57,6 @@
</ng-form>
</div>
<br/>
<div class="form-group" ng-show="appConfigure.customAuth && !appConfigure.app.manifest.addons.email">
<label class="control-label">User management</label>
<p>
@@ -135,7 +133,7 @@
<div class="form-group">
<input type="checkbox" id="appConfigureEnableBackup" ng-model="appConfigure.enableBackup">
<label class="control-label" for="appConfigureEnableBackup">Enable backups</label>
<label class="control-label" for="appConfigureEnableBackup">Enable automatic daily backups</label>
</div>
<div class="hide">
@@ -162,16 +160,6 @@
</div>
</div>
<br/>
<br/>
<div class="form-group" ng-class="{ 'has-error': (appConfigureForm.password.$dirty && appConfigureForm.password.$invalid) || (!appConfigureForm.password.$dirty && appConfigure.error.password) }">
<label class="control-label" for="appConfigurePasswordInput">Provide your password to confirm this action</label>
<div class="control-label" ng-show="(appConfigureForm.password.$dirty && appConfigureForm.password.$invalid) || (!appConfigureForm.password.$dirty && appConfigure.error.password)">
<small ng-show=" appConfigureForm.password.$dirty && appConfigureForm.password.$invalid">Password required</small>
<small ng-show="!appConfigureForm.password.$dirty && appConfigure.error.password">Wrong password</small>
</div>
<input type="password" class="form-control" ng-model="appConfigure.password" id="appConfigurePasswordInput" name="password" required>
</div>
<input class="ng-hide" type="submit" ng-disabled="appConfigureForm.$invalid || appConfigure.busy || (appConfigure.accessRestrictionOption === 'groups' && !appConfigure.isAccessRestrictionValid()) || (appConfigure.usingAltDomain && !appConfigure.isAltDomainValid())"/>
</form>
</fieldset>
@@ -191,10 +179,13 @@
<div class="modal-header">
<h4 class="modal-title">Restore {{ appRestore.app.fqdn }}</h4>
</div>
<div class="modal-body" ng-show="appRestore.backups.length === 0">
<p class="text-danger">This app has no backups.</p>
<div class="modal-body" ng-show="appRestore.backups.length === 0 && appRestore.busy">
<h4 class="text-center"><i class="fa fa-circle-o-notch fa-spin"></i> Fetching backups</h4>
</div>
<div class="modal-body" ng-show="appRestore.backups.length !== 0">
<div class="modal-body" ng-show="appRestore.backups.length === 0 && !appRestore.busy">
<h4 class="text-danger">This app has no backups.</h4>
</div>
<div class="modal-body" ng-show="appRestore.backups.length !== 0 && !appRestore.busy">
<p>Restoring the app will lose all content generated since the backup.</p>
<label class="control-label">Select backup</label>
<div class="dropdown">
@@ -223,7 +214,7 @@
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-success" ng-click="doRestore()" ng-disabled="appRestoreForm.$invalid || appRestore.busy || !appRestore.selectedBackup"><i class="fa fa-circle-o-notch fa-spin" ng-show="appRestore.busy"></i> Restore</button>
<button type="button" class="btn btn-success" ng-click="doRestore()" ng-show="appRestore.backups.length !== 0" ng-disabled="appRestoreForm.$invalid || appRestore.busy || !appRestore.selectedBackup"><i class="fa fa-circle-o-notch fa-spin" ng-show="appRestore.busy"></i> Restore</button>
</div>
</div>
</div>
@@ -307,7 +298,7 @@
</div>
<div class="modal-body">
<p>Recent Changes for new version <b>{{ appUpdate.manifest.version}}</b>:</p>
<pre>{{ appUpdate.manifest.changelog }}</pre>
<div ng-bind-html="appUpdate.manifest.changelog | markdown2html"></div>
<br/>
<fieldset>
<form role="form" name="appUpdateForm" ng-submit="doUpdate(appUpdateForm)" autocomplete="off">
@@ -328,10 +319,6 @@
</div>
</ng-form>
</div>
<div class="form-group" ng-class="{ 'has-error': (!appUpdateForm.password.$dirty && appUpdate.error.password) || (appUpdateForm.password.$dirty && appUpdateForm.password.$invalid) }">
<label class="control-label" for="inputUpdatePassword">Provide your password to confirm this action</label>
<input type="password" class="form-control" ng-model="appUpdate.password" id="inputUpdatePassword" name="password" required autofocus>
</div>
<input class="ng-hide" type="submit" ng-disabled="appUpdateForm.$invalid || busy"/>
</form>
</fieldset>
@@ -362,6 +349,9 @@
<div class="content content-large">
<!-- Workaround for select-all issue, see commit message -->
<div style="font-size: 1px;">&nbsp;</div>
<div class="animateMeOpacity ng-hide" ng-show="installedApps.length === 0 && user.admin">
<div class="col-md-12" style="text-align: center;">
<br/><br/><br/><br/>
Some files were not shown because too many files have changed in this diff.