Compare commits

...

1156 Commits

Author SHA1 Message Date
Johannes Zellner
10f1ad5cfe Until we can distinguish by the return data, mark both fields red 2016-01-26 19:44:33 +01:00
Johannes Zellner
fd11eb8da0 Ensure the user add form is reset on show 2016-01-26 19:40:57 +01:00
girish@cloudron.io
62d5e99802 Version 0.7.2 changes 2016-01-26 10:34:09 -08:00
girish@cloudron.io
48305f0e95 call retire after sending off the http response 2016-01-26 10:29:32 -08:00
girish@cloudron.io
8170b490f2 add retire.sh
this is a sudo script that retires the box
2016-01-26 09:37:25 -08:00
girish@cloudron.io
072962bbc3 add internal route to retire 2016-01-26 08:46:48 -08:00
girish@cloudron.io
33bc1cf7d9 Add cloudron.retire 2016-01-26 08:39:42 -08:00
Johannes Zellner
85df9d1472 We use 'wrong password' now everywhere 2016-01-26 16:09:14 +01:00
Johannes Zellner
109ba3bf56 Do not perform any client side password validation in user delete form 2016-01-26 16:06:42 +01:00
Johannes Zellner
8083362e71 Ensure we enforce the password pattern for setup view 2016-01-26 15:55:02 +01:00
Johannes Zellner
9b4c385a64 Ensure we send proper password requirements on password reset 2016-01-26 15:21:03 +01:00
girish@cloudron.io
ee9c8ba4eb fix dead comment 2016-01-25 20:15:54 -08:00
girish@cloudron.io
000a64d54a remove installer.retire 2016-01-25 17:52:35 -08:00
girish@cloudron.io
eba74d77a6 remove installer certs
the installer is no longer a public server
2016-01-25 17:47:57 -08:00
girish@cloudron.io
714a1bcb1d installer: remove provisioning server
the box will retire itself
2016-01-25 17:46:24 -08:00
girish@cloudron.io
02d17dc2e4 fix typo 2016-01-25 16:51:14 -08:00
girish@cloudron.io
4b54e776cc reset update info after short-circuit 2016-01-25 16:46:54 -08:00
girish@cloudron.io
ba6f05b119 Clear the progress for web ui to work 2016-01-25 16:35:03 -08:00
girish@cloudron.io
1d9ae120dc strip prerelease from box version and not the released version 2016-01-25 16:27:22 -08:00
girish@cloudron.io
3ce841e050 add test for checkDiskSpace 2016-01-25 16:03:12 -08:00
girish@cloudron.io
436fc2ba13 checking capacity does not work well for /
Just check if we have around 1.25GB left.

/root (data image file) is 26G
/home (our code) is 1G
/apps.swap is 2G
/backup.swap is 1G
/var is 6.2G (docker)

Rest of the system is 1.5G

Fixes #577
2016-01-25 15:56:46 -08:00
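The "around 1.25GB left" heuristic described in the commit message above can be sketched as follows. This is a minimal illustration assuming a statvfs-style free-space query; the real checkDiskSpace in the box code is not shown here and may differ:

```python
import shutil

# ~1.25GB floor, per the commit message above.
MIN_FREE_BYTES = int(1.25 * 1024 ** 3)

def enough_disk_space(path="/", min_free=MIN_FREE_BYTES):
    # shutil.disk_usage() queries the filesystem holding `path`;
    # this ignores per-mount quirks like the loopback data image
    # and swap files listed in the commit message.
    free = shutil.disk_usage(path).free
    return free >= min_free

print(enough_disk_space("/"))
```

Checking free bytes on `/` directly sidesteps the per-partition capacity accounting that the commit says "does not work well".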
girish@cloudron.io
77d652fc2b add test for config.setVersion 2016-01-25 15:12:22 -08:00
girish@cloudron.io
ac3681296e short-circuit updates from prerelease to release version
Fixes #575
2016-01-25 14:48:12 -08:00
girish@cloudron.io
5254d3325f add comment on fields on box update info object 2016-01-25 13:57:59 -08:00
girish@cloudron.io
ce0a24a95d comment out public graphite paths 2016-01-25 12:51:37 -08:00
Johannes Zellner
1bb596bf58 Add changes for 0.7.1 2016-01-25 16:40:57 +01:00
Johannes Zellner
c384ac6080 Use local cache when looking up apps 2016-01-25 16:25:25 +01:00
Johannes Zellner
61c2ce0f47 Keep appstore url bar up-to-date with the selected app
This allows sharing links directly

Fixes #526
2016-01-25 16:21:44 +01:00
Johannes Zellner
7a71315d33 Adjust angular's to allow non reload path changes 2016-01-25 16:21:20 +01:00
Johannes Zellner
0a658e5862 Ensure the focus is setup correctly 2016-01-25 15:30:29 +01:00
Johannes Zellner
5f8c99aa0e There is no body here 2016-01-25 15:29:52 +01:00
Johannes Zellner
4c6f1e4b4a Allow admins or users to operate on themselves 2016-01-25 15:29:52 +01:00
Johannes Zellner
226ae627f9 Move updateUser() to where it belongs 2016-01-25 15:29:52 +01:00
Johannes Zellner
27a02aa918 Make user edit form submit with enter 2016-01-25 15:29:52 +01:00
Johannes Zellner
3c43503df8 Add businesslogic for user edit form 2016-01-25 14:58:12 +01:00
Johannes Zellner
35c926d504 Ensure we actually update the correct user, not the user holding the token 2016-01-25 14:58:02 +01:00
Johannes Zellner
ea18ca5c60 Fix copy and paste error in the user add form 2016-01-25 14:46:51 +01:00
Johannes Zellner
55a56355d5 Add email change fields to user edit form 2016-01-25 14:28:47 +01:00
Johannes Zellner
dc83ba2686 Require displayName in updateUser() 2016-01-25 14:26:42 +01:00
Johannes Zellner
62615dfd0f Make email in user change optional 2016-01-25 14:12:09 +01:00
Johannes Zellner
a6998550a7 Add displayName change unit tests 2016-01-25 14:08:35 +01:00
Johannes Zellner
3b199170be Support changing the displayName 2016-01-25 14:08:11 +01:00
Johannes Zellner
1f93787a63 Also send displayName for users 2016-01-25 13:36:51 +01:00
Johannes Zellner
199c5b926a Show tooltip for user action buttons 2016-01-25 13:31:49 +01:00
Johannes Zellner
d9ad7085c3 Shorten the user action buttons 2016-01-25 13:28:48 +01:00
Johannes Zellner
df12f31800 Add ui components for user edit 2016-01-25 13:28:30 +01:00
Johannes Zellner
ad205da3db Match any loop back device
This is just a quick fix and will potentially match all
loop* devices.
2016-01-25 13:06:38 +01:00
Johannes Zellner
34aab65db3 Use the first part of the dn to get the common name in ldap
The first part need not be named 'cn', but it is always
the id we want to verify
2016-01-25 11:31:57 +01:00
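The "first part of the dn" lookup described above can be sketched like this. It is a naive illustration that ignores escaped commas and multi-valued RDNs; the actual ldap code is not shown in this log:

```python
# Take the first RDN of an LDAP dn and use its value as the id,
# regardless of whether the attribute is named 'cn'.
def first_rdn_value(dn):
    first = dn.split(",", 1)[0]           # e.g. "cn=alice"
    attr, _, value = first.partition("=") # attr is ignored on purpose
    return value.strip()

print(first_rdn_value("cn=alice,ou=users,dc=example,dc=com"))  # alice
print(first_rdn_value("uid=bob,ou=users,dc=example,dc=com"))   # bob
```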
Johannes Zellner
63c06a508e Make /api available on just the IP
We might want to show something other than
the naked domain placeholder page when just
accessing the IP
2016-01-24 12:08:10 +01:00
Girish Ramakrishnan
a2899c9c65 add appupdate tests 2016-01-24 00:44:46 -08:00
Girish Ramakrishnan
ff6d5e9efc complete the box update tests 2016-01-24 00:06:32 -08:00
Girish Ramakrishnan
f48fe0a7c0 Get started with updatechecker tests 2016-01-23 22:38:46 -08:00
Girish Ramakrishnan
5f6c8ca520 fix crash 2016-01-23 13:37:21 -08:00
Girish Ramakrishnan
0eaa3a8d94 clear update info so that we use the latest settings 2016-01-23 11:08:01 -08:00
Girish Ramakrishnan
8ad190fa83 make updateConfig a provision argument 2016-01-23 10:15:09 -08:00
girish@cloudron.io
70f096c820 check settingsdb whether to notify for app prerelease
fixes #573
2016-01-23 05:56:08 -08:00
girish@cloudron.io
2840251862 check if updateInfo is null earlier 2016-01-23 05:37:22 -08:00
girish@cloudron.io
b43966df22 code without callback is hard to read 2016-01-23 05:35:57 -08:00
girish@cloudron.io
cc22285beb Check settingsdb whether to notify for prereleases
part of #573
2016-01-23 05:33:16 -08:00
girish@cloudron.io
b72d48b49f set default update config 2016-01-23 05:07:12 -08:00
girish@cloudron.io
3a6b9c23c6 settings: add update config 2016-01-23 05:06:09 -08:00
girish@cloudron.io
b2da364345 fix typo in comment 2016-01-22 17:58:38 -08:00
girish@cloudron.io
de7a6abc50 Check for out of disk space
Fixes #567
2016-01-22 17:46:23 -08:00
girish@cloudron.io
10f74349ca collectd: disable vmem plugin 2016-01-22 15:44:46 -08:00
girish@cloudron.io
05a771c365 collectd: disable process plugin 2016-01-22 15:43:47 -08:00
girish@cloudron.io
cfa2089d7b collectd: Remove ping metric 2016-01-22 15:36:13 -08:00
girish@cloudron.io
d56abd94a9 collectd uses the data lo partition that is resized by box-setup.sh 2016-01-22 15:06:43 -08:00
girish@cloudron.io
2f20ff8def use loop1 instead of loop0
box-setup.sh always resizes the loopback partition on reboot. This
means that it will always be loop1.
2016-01-22 15:03:23 -08:00
girish@cloudron.io
9706daf330 Just track ext4 and btrfs file systems 2016-01-22 14:33:02 -08:00
girish@cloudron.io
a246b3e90c box-setup needs to run after mounting to prevent race in script 2016-01-22 14:21:36 -08:00
girish@cloudron.io
e28e1b239f fix comment 2016-01-22 14:21:20 -08:00
girish@cloudron.io
4aead483de This hack is not needed in 15.10 anymore
collectd is still the same version in 15.10 but it collects info correctly
as df-vda1 now.
2016-01-22 14:00:40 -08:00
girish@cloudron.io
f8cc6e471e 0.7.0 changes 2016-01-22 11:02:04 -08:00
girish@cloudron.io
6b9ed9472d upgrade to 15.10 2016-01-22 10:46:13 -08:00
girish@cloudron.io
a763b08c41 pin packages
fixes #558
2016-01-22 10:46:13 -08:00
Johannes Zellner
178f904143 Put installer.sh location in README 2016-01-22 13:03:31 +01:00
Girish Ramakrishnan
bb88fa3620 Restart node processes if journald crashes
Note that we cannot simply ignore EPIPE in the node programs.
Doing so results in no logs anymore :-( This is supposedly
fixed in systemd 228.

Fixes #550
2016-01-21 22:13:19 -08:00
Girish Ramakrishnan
1e1249d8e0 Give journald more time to sync
Part of #550
2016-01-21 21:43:49 -08:00
girish@cloudron.io
bcb0e61bfc Kill child processes
On Unix, child processes are not killed when the parent dies.

Each process is part of a process group (pgid). When pgid == pid,
it is the process group leader.

node creates child processes with the parent as the group leader
(detached = false).

You can send a signal to entire group using kill(-pgid), as in,
negative value in argument. Systemd can be made to do this by
setting the KillMode=control-group.

Unrelated: Process groups reside inside session groups. Each session
group has a controlling terminal. Only one process in the session
group has access to the terminal. Process group is basically like
a bash pipeline. A session group is the entire login session with only
one process having terminal access at a time.

Fixes #543
2016-01-21 17:44:17 -08:00
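The kill(-pgid) behaviour that this commit message describes can be demonstrated with a short sketch. This is illustrative only, using a `sleep` child on Linux rather than the box's actual node processes:

```python
import os
import signal
import subprocess

# A child spawned in its own session becomes a process-group leader
# (pgid == pid) -- the group that systemd's KillMode=control-group
# signals as a whole.
child = subprocess.Popen(
    ["sleep", "60"],
    start_new_session=True,  # setsid(): new session + new process group
)

# The child now leads its own group.
assert os.getpgid(child.pid) == child.pid

# os.killpg() is kill(-pgid): the signal reaches every group member.
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
child.wait()
print(child.returncode)  # negative of the signal number on Unix
```

Without `start_new_session=True` the child stays in the parent's group (detached = false in node terms), which is why signalling the whole group, not just the direct child, is what actually cleans everything up.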
girish@cloudron.io
022ff89836 Add 0.6.6 changes 2016-01-21 15:57:22 -08:00
girish@cloudron.io
b9d4b8f6e8 Remove docker images by tag
docker pull previously used to pull down all tags.
    docker pull tag1 # might pull down tag2, tag3 if they are all same!
    docker rm tag1 # oops, tag2 and tag3 still hold on to the image

However, the above feature was not possible with registry v2 (some
technical stuff to do with each tag being separately signed). As a result,
this feature was removed from v1 as well - https://github.com/docker/docker/pull/10571

This means we can now do:
    docker pull tag1 # nice
    docker rm tag1 # image goes away if no one else is using it

references:
https://github.com/docker/docker/issues/8689
https://github.com/docker/docker/pull/8193 (added this feature to v1)
https://github.com/docker/docker/issues/8141 (the request)
https://github.com/docker/docker/pull/10571 (removes the v1 feature)

Fixes #563
2016-01-21 15:53:51 -08:00
Johannes Zellner
0f5ce651cc Show errors if passwords do not match for reset and setup 2016-01-21 16:33:51 +01:00
Johannes Zellner
6b8d5f92de Set meaningful page title for oauth rendered pages 2016-01-21 16:19:38 +01:00
Johannes Zellner
55e556c725 Also provide client side password validation for password setup and reset forms 2016-01-21 16:08:51 +01:00
Johannes Zellner
19bb0a6ec2 Move form feedback below in setup screen 2016-01-21 16:03:46 +01:00
Johannes Zellner
290132f432 Add warning in password.js to update the UI parts 2016-01-21 16:00:12 +01:00
Johannes Zellner
4a8be8e62d Add changes for 0.6.5 2016-01-21 15:57:48 +01:00
Johannes Zellner
23b61aef0c Use client side pattern validation for setup password 2016-01-21 15:52:24 +01:00
Johannes Zellner
24cc433a3d Also take care of the developermode toggle form 2016-01-21 15:17:49 +01:00
Johannes Zellner
e014b7de81 It should be called 'Wrong password' 2016-01-21 15:11:42 +01:00
Johannes Zellner
0895a2bdea Same procedure for the app uninstall dialog 2016-01-21 15:09:22 +01:00
Johannes Zellner
03ca4887ba Streamline the app restore password validation 2016-01-21 15:06:22 +01:00
Johannes Zellner
9eeb17c397 Fixup app configure password validation 2016-01-21 15:03:51 +01:00
Johannes Zellner
6a5da2745a Simplify password validation in email edit 2016-01-21 14:59:39 +01:00
Johannes Zellner
e1111ba2bb Simplify password validation for cloudron update 2016-01-21 14:57:21 +01:00
Johannes Zellner
d186084835 Use password regexp instead of min-max to do client side validation also 2016-01-21 14:47:21 +01:00
Johannes Zellner
06c2ba9fa9 Fixup email edit form with password changes 2016-01-21 14:34:36 +01:00
Johannes Zellner
b82e5fd8c6 Remove console.log() 2016-01-21 14:29:04 +01:00
Johannes Zellner
6e1f96a832 Set min and max length for all password fields 2016-01-21 14:26:24 +01:00
Johannes Zellner
f68135c7aa Fixup password requirement feedback in account settings 2016-01-21 14:03:24 +01:00
Johannes Zellner
f48cbb457b Call reset avatar to trigger favicon change 2016-01-20 17:15:13 +01:00
Johannes Zellner
8d192dc992 Add Client.resetAvatar() 2016-01-20 17:14:50 +01:00
Johannes Zellner
b70324aa24 Give favicon an id 2016-01-20 17:14:36 +01:00
Johannes Zellner
390afaf614 Remove dead code 2016-01-20 16:56:14 +01:00
Johannes Zellner
5112322e7d Ensure the avatar is always updated in all places
Fixes #549
2016-01-20 16:55:44 +01:00
Johannes Zellner
2cb498d500 Improve singleUser configure dialog
Fixes #565
2016-01-20 16:27:43 +01:00
Johannes Zellner
2bd6e02cdc Do not show oauth proxy settings for singleUser apps 2016-01-20 16:23:31 +01:00
Johannes Zellner
85423cbc20 Actually send displayName instead of name in cloudron activation tests 2016-01-20 16:14:44 +01:00
Johannes Zellner
1c0d027bd3 Fix error message if displayName has wrong type 2016-01-20 16:14:21 +01:00
Johannes Zellner
5a8a023039 Fixup all the route tests with new password requirement 2016-01-20 16:06:51 +01:00
Johannes Zellner
196b059cfb Immediately set icon fallback to avoid flickering 2016-01-20 16:01:39 +01:00
Johannes Zellner
2d930b9c3d Explicitly set icon urls to null if we dont have them 2016-01-20 16:01:21 +01:00
Johannes Zellner
a5ba3faa49 Correctly report password errors 2016-01-20 15:41:29 +01:00
Johannes Zellner
02ba91f1bb Move password generation into separate file and ensure we generate strong passwords 2016-01-20 15:33:11 +01:00
Johannes Zellner
bfa917e057 Add password strength unit tests 2016-01-20 14:50:06 +01:00
Johannes Zellner
909dd0725a Fix copy and paste error 2016-01-20 14:49:45 +01:00
Johannes Zellner
74860f2d16 Fix tests for password strength change 2016-01-20 14:39:08 +01:00
Johannes Zellner
132ebb4e74 Require strong passwords
Fixes #568
2016-01-20 14:38:41 +01:00
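Several commits above move to a single password regexp used for both server and client side validation. That approach can be sketched as follows; the pattern below is an assumption for illustration, since the actual rules in password.js are not shown in this log:

```python
import re

# Hypothetical strength pattern: at least 8 chars with a lowercase
# letter, an uppercase letter, a digit, and a special character.
# The real regexp in password.js may differ.
PASSWORD_RE = re.compile(
    r'^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z\d]).{8,}$'
)

def is_strong(password):
    return PASSWORD_RE.match(password) is not None

print(is_strong('Secr3t!pass'))  # True
print(is_strong('short'))        # False
```

Keeping the rule as one regexp (rather than separate min/max lengths) is what lets the same pattern be handed to the browser for client side validation, as commit d186084835 describes.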
Johannes Zellner
698158cd93 Change some of the mail added email text 2016-01-20 13:14:19 +01:00
Johannes Zellner
5bfc684f1b Fix crash due to missing event 2016-01-20 13:10:16 +01:00
Johannes Zellner
c944c9b65b Add changelog for 0.6.4 2016-01-20 12:43:40 +01:00
Johannes Zellner
d61698b894 Send user_added email instead of generic user event to admins
Fixes #569
2016-01-20 12:40:56 +01:00
Johannes Zellner
a4d32009ad Make it clear why this if condition is there 2016-01-20 12:39:28 +01:00
Johannes Zellner
3007875e35 Add user_added.ejs email template
This allows us to also send an invitation link to admins
in case the user was not invited already.
2016-01-20 12:38:07 +01:00
Johannes Zellner
b4aad138fc Fixup the update badge for mobile 2016-01-20 12:00:19 +01:00
Johannes Zellner
8df7eb2acb Check if the versions for app updates match
Fixes #566
2016-01-20 11:56:46 +01:00
girish@cloudron.io
18cab6f861 initialize displayName from activation link 2016-01-20 00:16:48 -08:00
girish@cloudron.io
b2071c65d8 Fix typo 2016-01-20 00:05:06 -08:00
girish@cloudron.io
402dba096e webadmin: display name for users 2016-01-19 23:58:52 -08:00
girish@cloudron.io
abf0c81de4 provide displayName in createAdmin route 2016-01-19 23:58:08 -08:00
girish@cloudron.io
613985a17c Set default displayName as empty 2016-01-19 23:47:29 -08:00
girish@cloudron.io
bfc9801699 provide displayName in ldap response when available 2016-01-19 23:47:24 -08:00
girish@cloudron.io
ee705eb979 Add displayName to create user and activate routes 2016-01-19 23:34:49 -08:00
girish@cloudron.io
67b94c7fde give message for development mode 2016-01-19 10:20:24 -08:00
Johannes Zellner
77e5d3f4bb Retry checking for app start state in test 2016-01-19 16:45:58 +01:00
Johannes Zellner
30618b8644 add missing argument 2016-01-19 14:03:01 +01:00
Johannes Zellner
57a2613286 Remove leftover js-error target reference in gulpfile 2016-01-19 13:36:32 +01:00
Johannes Zellner
e15bd89ba2 Add route to list application backups 2016-01-19 13:35:28 +01:00
Johannes Zellner
d2ed816f44 Add apps.listBackups() 2016-01-19 13:35:18 +01:00
Johannes Zellner
e51234928b Add FIXME for selfhost backup listing 2016-01-19 13:32:11 +01:00
Johannes Zellner
3aa668aea3 Fixup tests 2016-01-19 12:42:19 +01:00
Johannes Zellner
870edab78a Set empty displayName for users 2016-01-19 12:40:50 +01:00
Johannes Zellner
ebc9d9185d Use displayName in userdb 2016-01-19 12:39:54 +01:00
Johannes Zellner
093150d4e3 Add displayName to users table 2016-01-19 12:37:22 +01:00
Johannes Zellner
de80a6692d Remove unused error.js the code is inline 2016-01-19 11:22:06 +01:00
Johannes Zellner
c28f564a47 Remove unused gulp target for error.js 2016-01-19 11:21:43 +01:00
Johannes Zellner
eb6a09c2bd Try to fetch the cloudron status to offer a way to reload 2016-01-19 11:20:32 +01:00
Johannes Zellner
19f404e092 Changes for 0.6.3 2016-01-19 11:00:29 +01:00
girish@cloudron.io
55799ebb2d cli tool demuxes stream now 2016-01-18 21:36:05 -08:00
girish@cloudron.io
fdf4d8fdcf maybe stream is duplex 2016-01-18 13:39:18 -08:00
girish@cloudron.io
6dc11edafe make exec route more debugging friendly
allow up to 30 minutes of idle connection
2016-01-18 12:49:06 -08:00
girish@cloudron.io
c82ca1c69d disable http server timeout 2016-01-18 12:28:53 -08:00
girish@cloudron.io
7ef3d55cbf add tty option to exec 2016-01-18 11:39:09 -08:00
Johannes Zellner
44e4f53827 Change user creation api to require the invite flag 2016-01-18 16:53:51 +01:00
Johannes Zellner
643e490cbb Allow to specify if a new user gets invited immediately 2016-01-18 16:44:11 +01:00
Johannes Zellner
e61498c3b6 Ensure the avatar is also based on the apiOrigin 2016-01-18 16:35:25 +01:00
Johannes Zellner
bb6b61d810 Remove unused Client.prototype.getAppLogUrl() 2016-01-18 16:29:33 +01:00
Johannes Zellner
cff173c2e6 Ensure all api calls are absolute 2016-01-18 16:29:13 +01:00
Johannes Zellner
226501d103 Clear user remove form 2016-01-18 16:25:42 +01:00
Johannes Zellner
c5b8b0e3db Split up userAdd and sendInvite mailer calls 2016-01-18 16:11:00 +01:00
Johannes Zellner
46878e4363 Add button to trigger the invite email 2016-01-18 15:45:54 +01:00
Johannes Zellner
f77682365e Add client.sendInvite() 2016-01-18 15:45:44 +01:00
Johannes Zellner
d9850fa660 Add send invite route 2016-01-18 15:37:03 +01:00
Johannes Zellner
9258585746 add user.sendInvite() with tests 2016-01-18 15:16:18 +01:00
Johannes Zellner
e635aaaa58 Fix oauth2 tests 2016-01-18 14:31:25 +01:00
Johannes Zellner
d0d6725df5 Remove obsolete comments 2016-01-18 14:19:38 +01:00
Johannes Zellner
61f4fea9c3 Check for emails sent in users tests 2016-01-18 14:19:20 +01:00
Johannes Zellner
66d59c1d6c Do not require the invitor in the invite mail 2016-01-18 14:18:57 +01:00
Johannes Zellner
f9725965e2 Add test hooks to check if mails are queued 2016-01-18 14:00:35 +01:00
Johannes Zellner
4629739a14 Fix ldap tests 2016-01-18 13:56:59 +01:00
Johannes Zellner
e9b3a1e99c Fixup user tests 2016-01-18 13:50:54 +01:00
Johannes Zellner
8ac27b9dc7 Adjust api to set a flag if invitation should be sent on user creation 2016-01-18 13:48:10 +01:00
Johannes Zellner
2edd434474 Support more notification types 2016-01-18 13:39:27 +01:00
Johannes Zellner
bebf480321 Use appdb.exists() instead of a apps.get() 2016-01-17 16:05:47 +01:00
Johannes Zellner
10c09d9def Fix wrong conditional in appdb 2016-01-17 16:01:17 +01:00
Johannes Zellner
6ce6b96e5c Fix linter issues 2016-01-17 15:59:11 +01:00
Johannes Zellner
16a9cae80e Allow to specify the restore id 2016-01-17 15:50:20 +01:00
Johannes Zellner
e865e2ae6d Employ hack to ensure chrome and firefox do not autocomplete
http://stackoverflow.com/questions/12374442/chrome-browser-ignoring-autocomplete-off
2016-01-16 17:38:18 +01:00
Johannes Zellner
06363a43f9 Offset the status bar 2016-01-16 16:48:47 +01:00
girish@cloudron.io
09a88e6a1c Version 0.6.2 changes 2016-01-15 18:12:20 -08:00
girish@cloudron.io
28baef8929 Go back to using docker exec in cloudron exec
The main issue is that multiple cloudron exec sessions do not
share the same rootfs, which makes it annoying to debug.

We also have some nginx timeout which drops you out of exec
now and then resulting in loss of all state.
2016-01-15 15:24:46 -08:00
girish@cloudron.io
9b061a4c7c make the command work 2016-01-15 14:50:13 -08:00
girish@cloudron.io
0b542dfbdf Pause app container in developmentMode
This allows us to share the network namespace with the app container
2016-01-15 14:34:15 -08:00
girish@cloudron.io
d3b039ebd8 support developmentMode flag
- disables readonly rootfs
- disables memory limit
2016-01-15 11:28:43 -08:00
Johannes Zellner
c22924eed7 Use user.js instead of userdb.js in mailer 2016-01-15 16:33:13 +01:00
Johannes Zellner
033ccb121f Add tests for user.getAllAdmins() 2016-01-15 16:33:13 +01:00
Johannes Zellner
ecd91e8f2a Add user.getAllAdmins() 2016-01-15 16:33:13 +01:00
girish@cloudron.io
1cdb954967 0.6.1 changes 2016-01-14 13:30:08 -08:00
girish@cloudron.io
f309f87f55 use no-reply for naked domain apps 2016-01-14 12:56:35 -08:00
girish@cloudron.io
989ab3094d Set initial progress so that tools can wait on it 2016-01-14 11:34:49 -08:00
girish@cloudron.io
70ac18d139 add internal route to update the cloudron
need a way to trigger updates of cloudron using the caas tool
2016-01-14 11:13:02 -08:00
girish@cloudron.io
8f43236e2e adjust update mail text 2016-01-14 11:02:05 -08:00
girish@cloudron.io
38416d46a6 insist on node 4.1.1
this is probably FUD but I think we get npm rebuild issues when
trying to rebuild packages built for a newer node with an older node.
2016-01-14 10:55:53 -08:00
girish@cloudron.io
fb96b00922 just keep rebuilding
Jan 14 07:03:27 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:03:27 GMT installer:installer provision (stdout):
Jan 14 07:03:27 rudolf.cloudron.me server.js[541]: > bson@0.2.22 install
/tmp/box-src-a4tklr/node_modules/db-migrate/node_modules/mongodb/node_modules/bson

Jan 14 07:03:27 rudolf.cloudron.me server.js[541]: > (node-gyp rebuild
2> builderror.log) || (exit 0)
Jan 14 07:03:31 rudolf.cloudron.me ntpdate[1344]: step time server
91.189.89.199 offset 0.000661 sec
Jan 14 07:03:31 rudolf.cloudron.me systemd[1]: Time has been changed
Jan 14 07:03:44 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=79.174.70.237 DST=178.62.202.80 LEN=40 TOS=0x00 PREC=0x00 TTL=248
ID=54321 PROTO=TCP SPT=49152 DPT=22 WINDOW=65535 RES=0x00 SYN URGP=0
Jan 14 07:04:02 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=124.6.36.197 DST=178.62.202.80 LEN=638 TOS=0x00 PREC=0x00 TTL=59
ID=61522 DF PROTO=TCP SPT=443 DPT=58535 WINDOW=95 RES=0x00 ACK PSH URGP=0
Jan 14 07:04:08 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:04:08 GMT installer:installer provision (stdout):
Jan 14 07:04:08 rudolf.cloudron.me server.js[541]: > kerberos@0.0.11
install
/tmp/box-src-a4tklr/node_modules/db-migrate/node_modules/mongodb/node_modules/kerberos

Jan 14 07:04:08 rudolf.cloudron.me server.js[541]: > (node-gyp rebuild
2> builderror.log) || (exit 0)
Jan 14 07:04:47 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=58.218.205.83 DST=178.62.202.80 LEN=40 TOS=0x00 PREC=0x00 TTL=112
ID=256 PROTO=TCP SPT=49127 DPT=5555 WINDOW=512 RES=0x00 SYN URGP=0
Jan 14 07:05:18 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 91.207.136.55:123 (2.ubuntu.pool.ntp.org).
Jan 14 07:05:28 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 194.190.168.1:123 (2.ubuntu.pool.ntp.org).
Jan 14 07:05:49 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=218.77.79.38 DST=178.62.202.80 LEN=40 TOS=0x00 PREC=0x00 TTL=240
ID=54321 PROTO=TCP SPT=44094 DPT=3306 WINDOW=65535 RES=0x00 SYN URGP=0
Jan 14 07:06:02 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=124.6.36.197 DST=178.62.202.80 LEN=638 TOS=0x00 PREC=0x00 TTL=59
ID=61523 DF PROTO=TCP SPT=443 DPT=58535 WINDOW=95 RES=0x00 ACK PSH URGP=0
Jan 14 07:06:21 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:06:21 GMT installer:installer provision (stdout):
Jan 14 07:06:21 rudolf.cloudron.me server.js[541]: > sqlite3@3.1.1
install /tmp/box-src-a4tklr/node_modules/db-migrate/node_modules/sqlite3
Jan 14 07:06:21 rudolf.cloudron.me server.js[541]: > node-pre-gyp
install --fallback-to-build
Jan 14 07:06:21 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:06:21 GMT installer:installer provision (stdout): [sqlite3] Success:
"/tmp/box-src-a4tklr/node_modules/db-migrate/node_modules/sqlite3/lib/binding/node-v46-linux-x64/node_sqlite3.node"
already installed
Jan 14 07:06:21 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:06:21 GMT installer:installer provision (stdout): Pass
--update-binary to reinstall or --build-from-source to recompile
Jan 14 07:06:23 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:06:23 GMT installer:installer provision (stdout):
Jan 14 07:06:23 rudolf.cloudron.me server.js[541]: >
dtrace-provider@0.2.8 install
/tmp/box-src-a4tklr/node_modules/ldapjs/node_modules/dtrace-provider
Jan 14 07:06:23 rudolf.cloudron.me server.js[541]: > node-gyp rebuild
Jan 14 07:07:47 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 91.206.16.3:123 (2.ubuntu.pool.ntp.org).
Jan 14 07:07:57 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 46.254.216.12:123 (2.ubuntu.pool.ntp.org).
Jan 14 07:08:02 rudolf.cloudron.me kernel: IPTables Packet Dropped:
IN=eth0 OUT= MAC=04:01:9a:dd:a9:01:84:b5:9c:fa:08:30:08:00
SRC=124.6.36.197 DST=178.62.202.80 LEN=638 TOS=0x00 PREC=0x00 TTL=59
ID=61524 DF PROTO=TCP SPT=443 DPT=58535 WINDOW=95 RES=0x00 ACK PSH URGP=0
Jan 14 07:08:08 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 85.255.214.66:123 (3.ubuntu.pool.ntp.org).
Jan 14 07:08:18 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 83.98.201.134:123 (3.ubuntu.pool.ntp.org).
Jan 14 07:08:28 rudolf.cloudron.me systemd-timesyncd[448]: Timed out
waiting for reply from 194.171.167.130:123 (3.ubuntu.pool.ntp.org).
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): gyp
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): WARN
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): install
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):  got an error,
rolling back install

<...>

07:08:31 GMT installer:installer provision (stderr):  Error: connect
ETIMEDOUT 104.20.23.46:443
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): gyp
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): ERR!
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): stack
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):      at
Object.exports._errnoException (util.js:837:11)
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): gyp
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): ERR!
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): stack
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):      at
exports._exceptionWithHostPort (util.js:860:20)
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): gyp
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): ERR!
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): stack
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):      at
TCPConnectWrap.afterConnect [as oncomplete] (net.js:1060:14)
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): gyp
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr): ERR!
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
07:08:31 GMT installer:installer provision (stderr):
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: Thu, 14 Jan 2016
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: installer:installer provision (stderr):
gyp ERR! System Linux 3.19.0-31-generic
gyp ERR! command "/usr/local/node-4.1.1/bin/node" "/usr/local/node-4.1.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /tmp/box-src-a4tklr/node_modules/ldapjs/node_modules/dtrace-provider
gyp ERR! node -v v4.1.1
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok
npm ERR! Linux 3.19.0-31-generic
npm ERR! argv "/usr/local/node-4.1.1/bin/node" "/usr/bin/npm" "rebuild"
npm ERR! node v4.1.1
npm ERR! npm  v2.14.4
npm ERR! code ELIFECYCLE
npm ERR! dtrace-provider@0.2.8 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! Failed at the dtrace-provider@0.2.8 install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the dtrace-provider package, not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-gyp rebuild
npm ERR! You can get their info via:
npm ERR!     npm owner ls dtrace-provider
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR!     /tmp/box-src-a4tklr/npm-debug.log
Jan 14 07:08:31 rudolf.cloudron.me sudo[1284]: pam_unix(sudo:session): session closed for user root
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: installer:installer provision: child process exited. code: 1 signal: 0
Jan 14 07:08:31 rudolf.cloudron.me server.js[541]: [Error: Exited with code 1]
2016-01-14 10:51:21 -08:00
Johannes Zellner
0601ea2f39 Show actionable notification in case we miss aws credentials
Fixes #553
2016-01-14 16:01:10 +01:00
Johannes Zellner
a92d4f2af7 add some comments on how to use notifications 2016-01-14 15:56:23 +01:00
Johannes Zellner
658305c969 Tweak the notification appearance 2016-01-14 15:54:32 +01:00
Johannes Zellner
8aed2be19b Support action scopes 2016-01-14 15:54:14 +01:00
Johannes Zellner
263f6e49d8 Use custom notification template 2016-01-14 15:53:47 +01:00
Johannes Zellner
d822e38016 Ensure templates are available 2016-01-14 15:53:32 +01:00
Johannes Zellner
6cf78f19bb Add custom notification template to make actionable 2016-01-14 15:53:16 +01:00
Johannes Zellner
4a3319406c Add Client api to show notifications 2016-01-14 15:08:27 +01:00
Johannes Zellner
cb4418b973 Set default values for the notifications 2016-01-14 15:07:41 +01:00
Johannes Zellner
5a797124b3 Update the angular notification library 2016-01-14 15:07:10 +01:00
Johannes Zellner
63430fbce6 Adjust update email text to let the user know about auto updates
So far the emails make no sense if the user only clicks the update
link after the Cloudron has already auto-updated. We might want to
adjust the text once the user can disable auto updates or set a
preferred update time.
2016-01-14 13:55:02 +01:00
girish@cloudron.io
bc2085139e ensure no trailing slash in redis password
Fixes #539
2016-01-13 19:42:28 -08:00
girish@cloudron.io
f98c710f5b use password generator module 2016-01-13 19:02:15 -08:00
girish@cloudron.io
eb7101deff setupToken is property of wizard
fixes #559
2016-01-13 18:41:08 -08:00
girish@cloudron.io
826f50da7e expose wizard in SetupController 2016-01-13 18:04:07 -08:00
girish@cloudron.io
4e94c8ea56 updateContact gets 202 and not 200 2016-01-13 16:46:01 -08:00
girish@cloudron.io
3120eca721 status api should set provider 2016-01-13 16:09:36 -08:00
girish@cloudron.io
26c9bcbc28 fix this and that 2016-01-13 15:00:33 -08:00
girish@cloudron.io
7a2e73a5d6 acme: update account with owner email
fixes #544
2016-01-13 14:21:59 -08:00
girish@cloudron.io
cd35ab5932 acme: update contact information before getting a cert
part of #544

there were two approaches considered:
1. pipe through the owner email from the appstore. this requires saving
   the value in settingsdb and remembering to update it in case the user
   changes the email. another issue is that the selfhost installer
   tooling would need to require this new value.

2. simply update the owner email each time. this is the chosen approach.
2016-01-13 14:06:31 -08:00
girish@cloudron.io
efaacdb534 Add getOwner 2016-01-13 12:37:56 -08:00
girish@cloudron.io
5eb3c208f1 allow email to be configured 2016-01-13 12:15:27 -08:00
Johannes Zellner
8347b62c1b Mention lets encrypt in the ssl settings 2016-01-13 16:24:15 +01:00
Johannes Zellner
48c3c7b4dc Also hide the webadmin domain cert ui 2016-01-13 16:17:46 +01:00
Johannes Zellner
faefe078af Make links in app description go to a new page 2016-01-13 16:16:40 +01:00
Johannes Zellner
66eb0481b5 Hide app per app certificate upload fields
Fixes #555
2016-01-13 16:00:55 +01:00
Johannes Zellner
0a0fc130d4 Fix linter errors 2016-01-13 16:00:55 +01:00
Johannes Zellner
44afd7b657 No need to check for version equality for the update badge 2016-01-13 14:50:05 +01:00
Johannes Zellner
165b572a5f Fixup the app grid icon action layout 2016-01-13 14:49:00 +01:00
Johannes Zellner
1a30e622cc Only register app updates for apps where the available version is actually bigger
Fixes #533
2016-01-13 14:48:52 +01:00
Johannes Zellner
aec3238e42 Add changelog for 0.5.0 2016-01-13 11:51:21 +01:00
Johannes Zellner
249868dba7 Add CHANGES file 2016-01-13 11:51:13 +01:00
Johannes Zellner
f82e714b3c Remove executable flag for scripts not intended to be called directly 2016-01-13 10:58:20 +01:00
Johannes Zellner
baa7eae77c Move images script to appstore since it is specific to caas 2016-01-13 10:45:23 +01:00
Johannes Zellner
c69542a34d cleanup outdated README.md 2016-01-13 10:42:45 +01:00
Johannes Zellner
9d45892603 Move baseimage relevant scripts to baseimage/ 2016-01-13 10:30:01 +01:00
Johannes Zellner
e65f247b99 Fix path to initializeBaseUbuntuImage.sh in installer.sh 2016-01-12 16:53:36 +01:00
Johannes Zellner
600f061e47 The box tarball is now public, no need for a signed url 2016-01-12 16:50:34 +01:00
Johannes Zellner
8ac0e9e751 Provide sourceTarballUrl with update info 2016-01-12 16:50:14 +01:00
Johannes Zellner
772787fc22 Do not ignore scripts/ for export 2016-01-12 16:15:21 +01:00
Johannes Zellner
985b33b65b Add missing dev modules for images tool 2016-01-12 12:55:10 +01:00
Johannes Zellner
ef7c5c2d2b Remove vultr caas support 2016-01-12 12:55:10 +01:00
Johannes Zellner
0d49aafb54 Move installer/images scripts to scripts/ 2016-01-12 12:55:10 +01:00
Johannes Zellner
98aae5ddc6 Correctly copy installer files 2016-01-11 15:37:40 +01:00
Johannes Zellner
070e8606fa Use json from box/ 2016-01-11 15:20:29 +01:00
Johannes Zellner
511b2848c3 Fixup the paths in the createImage script 2016-01-11 15:14:07 +01:00
Johannes Zellner
7dd96d9a75 Fixup create box tarball script paths 2016-01-11 14:53:18 +01:00
Johannes Zellner
7d80f69ee8 Use the unified tarball instead of separate installer 2016-01-11 14:49:49 +01:00
Johannes Zellner
85491cb7b5 Merge branch 'selfhost' of ../installer into selfhost 2016-01-11 14:45:33 +01:00
Johannes Zellner
3c0b88a1ee Move to subfolder installer/ 2016-01-11 14:42:20 +01:00
Johannes Zellner
33c072f544 Support box and installer in one tarball 2016-01-11 14:25:04 +01:00
Johannes Zellner
a8d08bca3f Merge branch 'master' of ssh://gitlab.smartserver.io:6000/yellowtent/cloudron-installer 2016-01-11 12:53:52 +01:00
Johannes Zellner
d7ddc56ab3 Not applicable for ubuntu 15.10 2016-01-11 11:15:04 +01:00
Johannes Zellner
b562cd5c73 Fix typo when specifying the provisionEnv environment var 2016-01-08 13:15:07 +01:00
Johannes Zellner
2549a41eb3 Add a valid hostname entry in /etc/hosts if not found
Using 127.0.1.1 as per http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_the_hostname_resolution
2016-01-06 16:52:02 +01:00
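The commit above follows the Debian reference by mapping the machine's hostname to 127.0.1.1 when no valid entry exists. A minimal sketch of the resulting /etc/hosts (the hostname `rudolf.cloudron.me` from the logs above is used purely as an illustration):

```
# /etc/hosts — illustrative, not the actual file written by the installer
127.0.0.1   localhost
127.0.1.1   rudolf.cloudron.me rudolf
```

Using 127.0.1.1 rather than 127.0.0.1 for the hostname keeps `localhost` and the machine name resolvable as distinct addresses, per the Debian convention.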
Johannes Zellner
464f0fc231 add some unit tests for ensureVersion 2016-01-06 16:02:42 +01:00
Johannes Zellner
5914fd9fb7 Return the whole object, not just the string 2016-01-06 14:52:21 +01:00
Johannes Zellner
2e03217551 Make sysinfo provider selection explicit 2016-01-06 14:34:23 +01:00
Johannes Zellner
e0dc974f87 Add provider argument 2016-01-06 10:52:35 +01:00
Girish Ramakrishnan
fdf1ed829d Revert "use 15.10 as base image"
This reverts commit 50807f3046fdf715cb3bf2afc08436f64995f36a.

15.10 requires more work
2016-01-05 20:32:09 -08:00
Girish Ramakrishnan
ba90490ad9 Simply remove the old sudoers file that we installed
This is alternate fix to 743b8e757b
2016-01-05 20:24:05 -08:00
Girish Ramakrishnan
910be97f54 return calling callback 2016-01-05 16:16:25 -08:00
Girish Ramakrishnan
e162582045 Add collectd hack 2016-01-05 15:38:56 -08:00
Girish Ramakrishnan
cfd197d26c region is not required 2016-01-05 14:06:16 -08:00
Girish Ramakrishnan
aa0486bc2b use 15.10 as base image 2016-01-05 13:27:18 -08:00
Johannes Zellner
97a1fc62ae This rule is obsolete
It should protect the metadata from apps, but that is already
covered by the FORWARD dropping rule below
2016-01-05 21:13:45 +01:00
Johannes Zellner
fd9dcd065a Bring back the sleep 10 to wait for docker's iptable rules
See comment in code for further details
2016-01-05 21:11:43 +01:00
Johannes Zellner
59997560eb Do not remove all files from /etc/sudoers.d/
On DO with caas, there are no other files initially, but on the
ec2 ubuntu images the files contain rules allowing the ubuntu user
to sudo without a password, which we want to keep
2016-01-05 17:06:01 +01:00
Johannes Zellner
026e71cc6e Default provider is caas 2016-01-05 15:58:16 +01:00
Johannes Zellner
98ecc24425 Allow metadata access for selfhosters for now 2016-01-05 15:57:50 +01:00
Johannes Zellner
3886006343 Implement ec2 sysinfo backend 2016-01-05 14:33:33 +01:00
Johannes Zellner
6d539c9203 Fix sysinfo api usage 2016-01-05 13:18:56 +01:00
Johannes Zellner
5f778e61dd Use new getIP() api in certificates.js 2016-01-05 12:23:07 +01:00
Johannes Zellner
9860489f05 Use new getIP() api in cloudron.js 2016-01-05 12:16:48 +01:00
Johannes Zellner
21ca8ac883 Use new getIP() api in apptask 2016-01-05 12:16:39 +01:00
Johannes Zellner
ec93becb17 Add missing asserts 2016-01-05 12:14:39 +01:00
Johannes Zellner
0319445888 Make sysinfo based on provider 2016-01-05 12:10:25 +01:00
Johannes Zellner
7cba9f50c8 Docker startup is fixed with new service file, no need to wait 2016-01-05 10:07:58 +01:00
Johannes Zellner
9483c3afbc Also support /dev/xvda1 with collectd, needed for ec2 2016-01-04 17:26:02 +01:00
Johannes Zellner
e518976534 Also support /dev/xvda for box-setup.sh which is used in ec2 2016-01-04 15:57:52 +01:00
Johannes Zellner
3626cc2394 Skip some code during tests for now 2016-01-02 16:58:27 +01:00
Johannes Zellner
3a61fc7181 We want all strings in the array as separate strings 2016-01-02 16:52:25 +01:00
Johannes Zellner
4a95fa5e87 Cleanup the installer script 2016-01-02 16:37:28 +01:00
Johannes Zellner
8e3d1422f3 Fix typo, linter could do some work ;-) 2016-01-02 15:39:48 +01:00
Johannes Zellner
640a0b2627 Try to get the latest box release if no sourceTarballUrl is specified in the provisioning data 2016-01-02 15:30:01 +01:00
Johannes Zellner
30e0cb6515 Upload box tarballs with public acl 2016-01-02 15:29:20 +01:00
Johannes Zellner
e3253aacdb Add semver 2016-01-02 15:28:49 +01:00
Johannes Zellner
32f49d2122 Use 10GB for system, since it now includes docker images 2016-01-01 16:29:16 +01:00
Johannes Zellner
b4bef44135 Do not put docker images into the btrfs volume 2016-01-01 16:10:16 +01:00
Johannes Zellner
ce38742caf Fix cloudron tests 2015-12-31 11:55:01 +01:00
Johannes Zellner
a7d39cc8d4 Merge branch 'selfhost' 2015-12-31 11:41:46 +01:00
Johannes Zellner
cb7eb660b9 Do not fail to list apps if we are in developer mode and do not have an appstore token 2015-12-31 10:42:02 +01:00
Johannes Zellner
bad28c60ae Do not rely on the VPS name but just get the memory from the system 2015-12-31 10:30:42 +01:00
Johannes Zellner
ab4c04085c Use image init script from within the installer tar ball 2015-12-31 10:01:08 +01:00
Johannes Zellner
761002f39d Include the image scripts in the installer tar 2015-12-31 09:37:55 +01:00
Johannes Zellner
996f9c7f5d Set letsencrypt as tls config 2015-12-31 09:31:50 +01:00
Johannes Zellner
a6eca44a0d Do not attempt to purchase an app if we dont have an appstore token 2015-12-31 09:15:27 +01:00
Girish Ramakrishnan
1820751801 show all fields 2015-12-30 19:48:10 -08:00
Johannes Zellner
030faaa5d1 Remove unused information within backup listing 2015-12-30 20:31:00 +01:00
Johannes Zellner
95e1947352 Merge branch 'selfhost' 2015-12-30 18:54:33 +01:00
Johannes Zellner
7deb11a0a6 Support s3 and route53 configs 2015-12-30 18:45:19 +01:00
Johannes Zellner
41c597801f Sort backups by creationTime 2015-12-30 18:31:44 +01:00
Johannes Zellner
128a138e74 Remove the backup prefix from the key 2015-12-30 18:30:41 +01:00
Johannes Zellner
ca74b5740a Partly implement backup listing for s3 backend 2015-12-30 18:21:38 +01:00
Johannes Zellner
9afcbe1565 Fix the email require detection in setup 2015-12-30 14:10:20 +01:00
Johannes Zellner
9e531a05e1 Do not ask for aws credentials for selfhosting 2015-12-30 13:28:14 +01:00
Johannes Zellner
7c3562cea2 Ensure focus and form validation is correct in the setup wizard 2015-12-29 22:10:10 +01:00
Johannes Zellner
1edddf79d2 token is not required for provisioning anymore 2015-12-29 19:47:13 +01:00
Johannes Zellner
114f03e434 'null' does not survive the shell script hopping well 2015-12-29 18:51:49 +01:00
Johannes Zellner
3ee1487985 We might support more than just caas and selfhosted 2015-12-29 17:57:11 +01:00
Johannes Zellner
a9eda2176e Only send heartbeats and fetch cloudron details if we have a token 2015-12-29 17:43:54 +01:00
Johannes Zellner
bb90bafb62 Fix crash when the appstore server does not respond correctly on setup 2015-12-29 16:07:04 +01:00
Johannes Zellner
cea8783fec Skip calling back home on activation in non caas case 2015-12-29 15:56:37 +01:00
Johannes Zellner
789d1fef84 Remove commented dom elements 2015-12-29 15:46:13 +01:00
Johannes Zellner
d83939b165 Fix setup wizard dependency on setupToken 2015-12-29 15:27:46 +01:00
Johannes Zellner
3a4ec5c86a The var is named 2015-12-29 12:25:22 +01:00
Johannes Zellner
013c14530b Provide provider in cloudron.conf 2015-12-29 11:30:03 +01:00
Johannes Zellner
ebf1cfc113 Read provider field from cloudron.conf 2015-12-29 11:29:08 +01:00
Johannes Zellner
ec4d04c338 Skip setupToken auth for non caas cloudrons 2015-12-29 11:24:45 +01:00
Johannes Zellner
584b7790e4 Get the cloudron vps provider from config 2015-12-29 11:24:34 +01:00
Johannes Zellner
ef25f66107 Ask for the user's email if not provided 2015-12-29 11:17:08 +01:00
Johannes Zellner
3060b34bdd customDomain during setup is a special caas case 2015-12-29 11:04:42 +01:00
Johannes Zellner
3fb5c682f8 Remove Client.isServerFirstTime() 2015-12-29 11:00:32 +01:00
Johannes Zellner
bb5dfa13ee Add Client.getStatus() 2015-12-29 10:58:26 +01:00
Johannes Zellner
9e391941c5 Redirect the browser to /setup.html in non caas mode 2015-12-29 10:54:00 +01:00
Johannes Zellner
8b1d3e5fba Send provider field with cloudron config 2015-12-29 10:53:22 +01:00
Girish Ramakrishnan
07e322df96 default targetBoxVersion to the maximum possible version ever
Apps that do not provide a targetBoxVersion are assumed to be
capable of running everywhere.
2015-12-27 16:43:50 -08:00
Girish Ramakrishnan
a8959cbf26 fix path to cpu metrics 2015-12-24 12:46:42 -08:00
Johannes Zellner
d793e5bae5 Remove obsolete generate_certificate.sh 2015-12-24 10:01:00 +01:00
Johannes Zellner
29954fa9e8 Generate certs inline 2015-12-24 10:00:45 +01:00
Girish Ramakrishnan
0384fa9a51 fix debugs 2015-12-23 15:22:36 -08:00
Girish Ramakrishnan
75b19d3883 scheduler: remove scheduler.json
don't bother saving state across restarts. needlessly complicated.
2015-12-23 14:27:26 -08:00
Girish Ramakrishnan
c15f84da08 scheduler: do not bother tracking containerIds 2015-12-23 13:29:00 -08:00
Girish Ramakrishnan
8539d4caf1 scheduler: delete containers by name
scheduler.json gets nuked during updates. When the box code restarts,
the scheduler is unable to remove the old container because the state
file scheduler.json is now gone. It proceeds to create a new container,
but that fails because of a name conflict.

Fixes #531
2015-12-23 13:23:49 -08:00
Johannes Zellner
3eb1fe5e4b Ensure we reload the systemd daemon to pickup the new service files 2015-12-23 13:58:35 +01:00
Johannes Zellner
16b88b697e Ensure docker is running correctly 2015-12-23 13:54:49 +01:00
Johannes Zellner
08ba6ac831 Docker does not have a -d option anymore
This was deprecated in 1.8 and is now gone
https://github.com/docker/docker/blob/master/CHANGELOG.md#cli
2015-12-23 13:32:37 +01:00
Johannes Zellner
49710618ff Docker does not have a -d option anymore
This was deprecated in 1.8 and is now gone
https://github.com/docker/docker/blob/master/CHANGELOG.md#cli
2015-12-23 13:28:15 +01:00
Johannes Zellner
b4ba001617 Use multipart upload for s3 by reducing the chunk size
This avoids file upload issues for larger files
2015-12-23 13:27:54 +01:00
Johannes Zellner
87e0876cce Only pull infra images if we have an INFRA_VERSION file 2015-12-23 13:27:33 +01:00
Girish Ramakrishnan
87f5e3f102 workaround journalctl logging bug 2015-12-22 13:05:00 -08:00
Johannes Zellner
e921d0db6e do not rely on INFRA_VERSION to be present 2015-12-22 06:25:13 +01:00
Girish Ramakrishnan
b7a85580fa why is the linter not finding this again? 2015-12-21 16:14:30 -08:00
Johannes Zellner
2e93bc2e1d Generate self signed certs for now 2015-12-21 20:44:37 +01:00
Johannes Zellner
5ac15c8c49 Remove trailing slash 2015-12-21 15:06:44 +01:00
Johannes Zellner
98c05f3614 Remove comma 2015-12-21 15:06:44 +01:00
Johannes Zellner
05af56defc Fix typo 2015-12-20 14:58:05 +01:00
Johannes Zellner
ce48a2fc12 Some small fixes for selfhost 2015-12-20 12:01:18 +01:00
Johannes Zellner
01e910af79 Prepare provisioning data for installer 2015-12-20 11:12:59 +01:00
Johannes Zellner
3e2ce9e94c make cloudron.conf file path a 'const' 2015-12-20 10:25:12 +01:00
Johannes Zellner
9db602b274 Decide if we run in caas or selfhost mode and fetch user data accordingly 2015-12-20 10:23:25 +01:00
Johannes Zellner
a2d0ac7ee3 Run installer with selfhost flag 2015-12-20 10:22:55 +01:00
Girish Ramakrishnan
24cbd1a345 if i wrote a linter, these are the bugs it would catch 2015-12-19 13:48:14 -08:00
Girish Ramakrishnan
8b3e6742d5 better debugs 2015-12-19 13:47:48 -08:00
Johannes Zellner
62bd3f6e83 Add installer.sh 2015-12-19 21:56:33 +01:00
Johannes Zellner
20ac2ff6e7 Do not move ssh port in selfhosting case 2015-12-19 21:56:00 +01:00
Johannes Zellner
aa7c9e06a4 Initial commit 2015-12-19 18:47:24 +01:00
Johannes Zellner
0c2fb7c0d9 Use multipart upload for s3 by reducing the chunk size
This avoids file upload issues for larger files
2015-12-19 17:50:54 +01:00
Girish Ramakrishnan
7ec2b1da8c fix function name in debug 2015-12-17 20:30:30 -08:00
Girish Ramakrishnan
190c2b2756 firefox is unhappy with incorrect chain 2015-12-17 19:42:49 -08:00
Girish Ramakrishnan
7c975384cd better error messages 2015-12-17 19:35:52 -08:00
Girish Ramakrishnan
fe042891a3 Add acme.getCertificate 2015-12-17 13:31:28 -08:00
Girish Ramakrishnan
a9b594373d do not pass accountKeyPem everywhere 2015-12-17 13:27:10 -08:00
Girish Ramakrishnan
5edc3cde2a set prod option based on provider 2015-12-17 13:17:46 -08:00
Girish Ramakrishnan
a636731764 allow configuring prod/staging of LE url 2015-12-17 13:12:54 -08:00
Girish Ramakrishnan
b4433af9b5 remove unused require 2015-12-17 12:55:47 -08:00
Girish Ramakrishnan
72cc318607 install docker 1.9.1
We hit this error:
https://github.com/docker/docker/issues/18283
https://github.com/docker/docker/issues/17083
2015-12-15 17:17:28 -08:00
Girish Ramakrishnan
5ae45381e2 fix metrics path
See 03da5cc6b382f6f7aad69395d9f8a9d29d18ec26 in installer

We now use the cgroupfs driver instead of systemd cgroup driver
2015-12-15 15:54:19 -08:00
Girish Ramakrishnan
b533d325a4 Creating containers fails sporadically
HTTP code is 500 which indicates error: server error - Cannot start container redis-9d0ae0eb-a08f-4d0d-a980-ac6fa15d1a3d: [8] System error: write /sys/fs/cgroup/memory/system.slice/docker-fa6d6f3fce88f15844710e6ce4a8ac4d3a42e329437501416991b4c55ea3d078.scope/memory.memsw.limit_in_bytes: invalid argument

https://github.com/docker/docker/issues/16256
https://github.com/docker/docker/pull/17704
https://github.com/docker/docker/issues/17653
2015-12-15 15:02:45 -08:00
Girish Ramakrishnan
9dad7ff563 Fix sed 2015-12-15 14:43:01 -08:00
Girish Ramakrishnan
1ae2e07883 leave note on 429 error code 2015-12-15 14:25:23 -08:00
Girish Ramakrishnan
aa34850d4e fix typo 2015-12-15 12:52:41 -08:00
Girish Ramakrishnan
9f524da642 use admin@cloudron.io for email
registrations are failing because the LE server is doing an MX check.
we don't have a proper email to provide here since the box is not
activated yet. we should "update" the email at some point with
the owner information.
2015-12-15 10:39:03 -08:00
Girish Ramakrishnan
8b707e23ca update shrinkwrap 2015-12-15 10:04:45 -08:00
Girish Ramakrishnan
a4ea693c3c update superagent
the latest superagent changed the meaning of 'error'. Previously,
error implied a network error. With the latest superagent, error means
a REST api error, i.e. 4xx and 5xx are flagged as errors.

error && !error.response means a network error
2015-12-15 09:53:37 -08:00
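The superagent change described in the commit above means callers can no longer treat every `error` as a network failure. A minimal sketch of the distinction the commit message relies on — the error shapes here are hypothetical plain objects standing in for real superagent errors:

```javascript
// Classify an error per the convention above:
// no error -> ok; error without a response -> network failure;
// error with a response attached -> HTTP-level (4xx/5xx) failure.
function classifyError(err) {
    if (!err) return 'ok';
    if (!err.response) return 'network';
    return 'http';
}

// Illustrative shapes (assumptions, not real superagent objects)
console.log(classifyError(null));                          // 'ok'
console.log(classifyError({ code: 'ECONNREFUSED' }));      // 'network'
console.log(classifyError({ response: { status: 404 } })); // 'http'
```

With this convention, `error && !error.response` singles out network failures, while HTTP-level failures still carry the response for inspection.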
Girish Ramakrishnan
aca443a909 update redis 2015-12-15 08:36:47 -08:00
Girish Ramakrishnan
2ae5223da9 update password-generator, validator and nock 2015-12-15 08:34:13 -08:00
Girish Ramakrishnan
b5b67f2e6a define CA_ORIGIN 2015-12-15 00:49:00 -08:00
Girish Ramakrishnan
fe723f5a53 remove trailing slash in url 2015-12-15 00:42:18 -08:00
Girish Ramakrishnan
c55e1ff6b7 debug output the error 2015-12-15 00:23:57 -08:00
Girish Ramakrishnan
4bd88e1220 create acme data dir 2015-12-15 00:21:29 -08:00
Girish Ramakrishnan
f46af93528 do not installAdminCertificate for upgrades 2015-12-14 23:37:52 -08:00
Girish Ramakrishnan
8ead0e662a detect apptask crashes 2015-12-14 19:11:26 -08:00
Girish Ramakrishnan
365ee01f96 show error dialog 2015-12-14 18:56:15 -08:00
Girish Ramakrishnan
fca6de3997 add function to show the error dialog 2015-12-14 17:57:58 -08:00
Girish Ramakrishnan
dceb265742 add error dialog 2015-12-14 17:52:56 -08:00
Girish Ramakrishnan
409096cbff Use production LE 2015-12-14 17:31:41 -08:00
Girish Ramakrishnan
e5a40faf82 simply use fallback certs if LE fails
currently, it fails if we cannot get a cert.

This means that we need to provide some option to simply use fallback
cert. This requires UI changes that I want to avoid :-)
2015-12-14 17:13:54 -08:00
Girish Ramakrishnan
859c78c785 fix fallback cert message 2015-12-14 16:42:34 -08:00
Girish Ramakrishnan
89bff16053 fix crash 2015-12-14 14:08:45 -08:00
Girish Ramakrishnan
a89476c538 fix renewal check 2015-12-14 13:52:54 -08:00
Girish Ramakrishnan
f51b61e407 do not dump the csr 2015-12-14 13:41:30 -08:00
Girish Ramakrishnan
177103bccd update safetydance for readdirSync 2015-12-14 13:10:04 -08:00
Girish Ramakrishnan
f31d63aabd implement cert auto-renewal 2015-12-14 12:40:39 -08:00
Girish Ramakrishnan
fd20246e8b ensureCertificate: check if cert needs renewal 2015-12-14 12:38:19 -08:00
Girish Ramakrishnan
0c1ea39a02 add getApi 2015-12-14 12:28:00 -08:00
Girish Ramakrishnan
a409dd026d use url file to download cert if present 2015-12-14 12:22:57 -08:00
Girish Ramakrishnan
4731f8e5a7 move key creation into the acme flow 2015-12-14 12:21:41 -08:00
Girish Ramakrishnan
7e05259b0e save url for renewal in .url files 2015-12-14 12:17:57 -08:00
Girish Ramakrishnan
14ab85dc4f do not pass outdir 2015-12-14 11:42:59 -08:00
Girish Ramakrishnan
0651bfc4b8 provide cert and key file in callback 2015-12-14 09:29:48 -08:00
Girish Ramakrishnan
21b94b2655 fix debug message 2015-12-14 08:52:43 -08:00
Girish Ramakrishnan
4e40c2341a code now uses backend 2015-12-14 08:50:57 -08:00
Girish Ramakrishnan
d9a83eacd2 explicitly prune out second argument 2015-12-13 20:35:23 -08:00
Girish Ramakrishnan
7b40674c0d add a backend for caas 2015-12-13 19:09:57 -08:00
Girish Ramakrishnan
936c1989f1 refactor code a bit for renewal 2015-12-13 12:26:31 -08:00
Girish Ramakrishnan
cfe336c37c fix path to acme key 2015-12-13 11:54:17 -08:00
Girish Ramakrishnan
d8a1e4aab0 more debug messages 2015-12-12 20:39:24 -08:00
Girish Ramakrishnan
be4d2afff3 fix path to cert 2015-12-12 20:30:50 -08:00
Girish Ramakrishnan
c2a4ef5f93 maybe this gets the certificate 2015-12-12 20:30:50 -08:00
Girish Ramakrishnan
b389d30728 max-time is per retry. it cannot take more than 3 mins to download the tarball 2015-12-12 17:36:34 -08:00
Girish Ramakrishnan
22634b4ceb tlsConfig is part of the database 2015-12-12 15:43:42 -08:00
Girish Ramakrishnan
abc4975b3d add tls configuration to database 2015-12-12 15:40:33 -08:00
Girish Ramakrishnan
36d81ff8d1 do not write tls config in this version 2015-12-12 14:21:50 -08:00
Girish Ramakrishnan
fe94190c2f do not save certs in database 2015-12-12 13:29:10 -08:00
Girish Ramakrishnan
f32027e15b Try alternative configuration for systemd restart rate limit 2015-12-12 13:15:41 -08:00
Girish Ramakrishnan
4b6a92955b configure to get only 1 email every 10 minutes 2015-12-12 11:47:32 -08:00
Girish Ramakrishnan
35a2da744c fix typo 2015-12-11 23:29:07 -08:00
Girish Ramakrishnan
9d91340223 add settings.setTlsConfig 2015-12-11 22:39:13 -08:00
Girish Ramakrishnan
e0a56f75c3 typo 2015-12-11 22:27:00 -08:00
Girish Ramakrishnan
4cfd30f9e8 use tlsConfig to determine acme or not 2015-12-11 22:25:57 -08:00
Girish Ramakrishnan
3fbcbf0e5d store tls config in database 2015-12-11 22:14:56 -08:00
Girish Ramakrishnan
8b7833e8b1 fix debug namespacing 2015-12-11 21:49:24 -08:00
Girish Ramakrishnan
66441f133d fix typo 2015-12-11 20:09:16 -08:00
Girish Ramakrishnan
8a12d6019a assert assert everywhere, hope none fires! 2015-12-11 14:50:30 -08:00
Girish Ramakrishnan
39c626dc75 more moving of nginx code 2015-12-11 14:48:39 -08:00
Girish Ramakrishnan
a7480c3f29 implement installation of admin certificate via acme 2015-12-11 14:37:55 -08:00
Girish Ramakrishnan
8af682acf1 add attempt 2015-12-11 14:20:37 -08:00
Girish Ramakrishnan
95eba1db81 Add certificates.ensureCertificate which gets cert via acme 2015-12-11 14:15:44 -08:00
Girish Ramakrishnan
0b8fde7d8d rename app.setAppCertificate 2015-12-11 14:13:29 -08:00
Girish Ramakrishnan
2f7517152a rename certificates.initialize 2015-12-11 14:02:58 -08:00
Girish Ramakrishnan
3e2ea0e087 refactor certificate settings 2015-12-11 13:58:43 -08:00
Girish Ramakrishnan
723556d6a2 Add CertificatesError 2015-12-11 13:43:33 -08:00
Girish Ramakrishnan
1f53d76cef wait forever by default 2015-12-11 13:41:17 -08:00
Girish Ramakrishnan
d15488431b add waitfordns.js (refactored from appstore) 2015-12-11 13:14:27 -08:00
Girish Ramakrishnan
cf80fd7dc5 rename certificatemanager 2015-12-11 12:24:52 -08:00
Girish Ramakrishnan
73d891b98e move validateCertificate to certificateManager 2015-12-10 20:38:49 -08:00
Girish Ramakrishnan
875ec1028d remove backward compat code now that we have migrated 2015-12-10 16:31:22 -08:00
Girish Ramakrishnan
fd985c2011 configure nginx as the last step
this allows us to wait for the certificate (in the case of LE)
2015-12-10 15:26:36 -08:00
Girish Ramakrishnan
47981004c9 split port reserving to separate function
this allows us to move nginx configuration to the bottom of apptask
(required for tls cert download support)
2015-12-10 15:25:15 -08:00
Girish Ramakrishnan
e3f7c8f63d use fqdn to save admin certs as well 2015-12-10 14:29:54 -08:00
Girish Ramakrishnan
853db53f82 rename admin.cert/.key to {admin_fqdn}.cert/.key 2015-12-10 14:05:44 -08:00
Girish Ramakrishnan
5992c0534a remove dead comment 2015-12-10 13:56:00 -08:00
Girish Ramakrishnan
1874c93c5c no need to template main nginx config 2015-12-10 13:54:53 -08:00
Girish Ramakrishnan
3c4adb1aed fix config path 2015-12-10 13:36:44 -08:00
Girish Ramakrishnan
66db918273 add certificate manager stub 2015-12-10 13:35:02 -08:00
Girish Ramakrishnan
69845d5ddd add config.adminFqdn() 2015-12-10 13:14:13 -08:00
Girish Ramakrishnan
42181d597b keep the requires sorted 2015-12-10 13:08:38 -08:00
Girish Ramakrishnan
b56e9ca745 do not log the token 2015-12-10 12:50:54 -08:00
Girish Ramakrishnan
5fc4788269 remove test code 2015-12-10 11:09:37 -08:00
Girish Ramakrishnan
d0f8293b73 treat acme as a cert backend 2015-12-10 11:08:22 -08:00
Girish Ramakrishnan
44582bcd4b download the certificate as binary 2015-12-10 11:07:10 -08:00
Girish Ramakrishnan
5c73aed953 remove unused require 2015-12-10 09:54:21 -08:00
Girish Ramakrishnan
e1ec48530e acme: create cert file with the chain 2015-12-10 09:11:08 -08:00
Girish Ramakrishnan
54c4053728 add LE cross signed
https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem.txt
2015-12-10 09:06:36 -08:00
Girish Ramakrishnan
79ffb0df5c acme: openssl does not play well with buffers. use files instead 2015-12-10 08:57:53 -08:00
Girish Ramakrishnan
c510952c88 s/privateKeyPem/accountKeyPem 2015-12-09 19:23:19 -08:00
Girish Ramakrishnan
6109da531d acme: use safe 2015-12-09 19:22:53 -08:00
Girish Ramakrishnan
56877332db pull in urlBase64Encode 2015-12-09 18:34:27 -08:00
Girish Ramakrishnan
6fc972d160 set default response type to text/plain 2015-12-09 18:34:13 -08:00
Girish Ramakrishnan
5346153d9b add ursa 2015-12-09 18:33:35 -08:00
Girish Ramakrishnan
aaf266d272 convert cert to pem 2015-12-08 20:05:14 -08:00
Girish Ramakrishnan
0750db9aae rename function 2015-12-08 19:54:37 -08:00
Girish Ramakrishnan
316976d295 generate the acme account key on first run 2015-12-08 19:42:33 -08:00
Girish Ramakrishnan
593b5d945b use this fake email as the account owner for now 2015-12-08 19:15:17 -08:00
Girish Ramakrishnan
88f0240757 serve acme directory from nginx 2015-12-08 19:04:48 -08:00
Girish Ramakrishnan
f5c2f8849d Add LE staging url for testing 2015-12-08 18:25:45 -08:00
Girish Ramakrishnan
5c4a8f7803 add acme support
this is not used anywhere since we want to wait for rate limits to be
fixed.

The current limits are :

    Rate limit on registrations per IP is currently 10 per 3 hours
    Rate limit on certificates per Domain is currently 5 per 7 days

The domains are counted based on https://publicsuffix.org/list/ (not TLDs). For example, appspot.com and herokuapp.com, while not TLDs, are public suffixes. This list lets browser authors limit how cookies can be manipulated across subdomains of those domains, e.g. app1.appspot.com cannot go and change things on app2.appspot.com.

This means
a) we cannot use LE for cloudron.me, cloudron.us (or we have to get on that list)

b) even for custom domains we get only 5 certs every 7 days. And one of them is taken for my.xx domain.

https://community.letsencrypt.org/t/public-beta-rate-limits/4772/38
2015-12-08 15:52:30 -08:00
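The registrable-domain counting described above can be sketched as follows. This is a minimal illustration only: a three-entry hardcoded set stands in for the real publicsuffix.org list, and the function is not the actual LE or Cloudron implementation.

```javascript
'use strict';

// Toy excerpt standing in for https://publicsuffix.org/list/ -- the real
// list has thousands of entries; these are just for illustration.
const PUBLIC_SUFFIXES = new Set(['com', 'appspot.com', 'herokuapp.com']);

// Return the "registrable domain": one label below the longest matching
// public suffix. Certificate rate limits are counted per registrable domain.
function registrableDomain(fqdn) {
    const labels = fqdn.split('.');
    for (let i = 1; i < labels.length; i++) {
        const suffix = labels.slice(i).join('.');
        if (PUBLIC_SUFFIXES.has(suffix)) return labels.slice(i - 1).join('.');
    }
    return null;
}

console.log(registrableDomain('app1.appspot.com')); // app1.appspot.com
console.log(registrableDomain('my.example.com'));   // example.com
```

Because appspot.com is itself a public suffix, each app1.appspot.com-style subdomain counts as its own registrable domain, whereas every subdomain of example.com shares the example.com quota.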
Girish Ramakrishnan
5b8fdad5cb Revert "remove targetBoxVersion checks since all apps are now ported"
This reverts commit d104f2a077.

gitlab is not ported :-(
2015-12-05 02:29:06 -08:00
Girish Ramakrishnan
fe819f95ec always return logs regardless of state 2015-12-04 13:13:54 -08:00
Girish Ramakrishnan
be6728f8cb send support an email for app crashes 2015-12-02 16:50:00 -08:00
Girish Ramakrishnan
24d3a81bc8 remove targetBoxVersion checks since all apps are now ported 2015-12-02 15:02:16 -08:00
Girish Ramakrishnan
268c7b5bcf always create an isolated network ns 2015-12-01 13:59:45 -08:00
Girish Ramakrishnan
64716a2de5 cloudron exec: disable links for subcontainers
Dec 01 08:36:53 girish.cloudron.us node[5431]: Error: HTTP code is 409 which indicates error: undefined - Conflicting options: --net=container can't be used with links. This would result in undefined behavior
2015-12-01 00:51:41 -08:00
Girish Ramakrishnan
d2c8457ab1 reset health when app is stopped 2015-11-30 15:41:56 -08:00
Johannes Zellner
667cb84af7 Protect from crash on shutdown 2015-11-27 10:05:57 +01:00
Girish Ramakrishnan
df8653cdd5 Do not set Hostname for subcontainers 2015-11-26 19:26:29 -08:00
Girish Ramakrishnan
32f677ca0d make app subcontainers share network namespace with app
pid namespace sharing is coming in https://github.com/docker/docker/issues/10163
2015-11-26 19:18:31 -08:00
Johannes Zellner
6f5408f0d6 Make all json blobs in db TEXT fields 2015-11-26 12:17:02 +01:00
Johannes Zellner
23c04fb10b Use console.error() to report update errors 2015-11-26 12:04:39 +01:00
Johannes Zellner
0c5d6b1045 Set app backup progress only after we check the error 2015-11-26 12:00:44 +01:00
Johannes Zellner
33f30decd1 Support redirectURIs which already contain query params 2015-11-25 17:50:39 +01:00
Johannes Zellner
9595b63939 Correctly encode the redirectURI in oauth callback 2015-11-25 17:45:18 +01:00
Johannes Zellner
b9695b09cd Fix crash due to wrong AppsError usage 2015-11-25 13:49:20 +01:00
Girish Ramakrishnan
606885b23c fix typo 2015-11-23 13:51:14 -08:00
Girish Ramakrishnan
bc7b8aadc4 vultr: fix waitForSnapshot call 2015-11-23 13:39:02 -08:00
Girish Ramakrishnan
d136b2065f ignore vultr transfer image call 2015-11-23 13:35:36 -08:00
Girish Ramakrishnan
3b2683463d localize transfer logic for DO 2015-11-23 13:35:05 -08:00
Girish Ramakrishnan
989730d402 wait for snapshot 2015-11-23 13:19:23 -08:00
Girish Ramakrishnan
50f7209ba2 print the provider 2015-11-23 12:46:08 -08:00
Girish Ramakrishnan
44b728c660 remove get_image_id api 2015-11-23 12:45:09 -08:00
Girish Ramakrishnan
9abc5bbf96 better error handling 2015-11-23 12:37:30 -08:00
Girish Ramakrishnan
56dd936e9c create systemd log dir if needed 2015-11-23 12:33:45 -08:00
Girish Ramakrishnan
e982281cd4 install acl 2015-11-23 11:32:05 -08:00
Girish Ramakrishnan
a6b7b5fa94 complete vultr backend 2015-11-23 11:30:24 -08:00
Girish Ramakrishnan
ef00114aab rename arg box to name 2015-11-23 11:20:21 -08:00
Girish Ramakrishnan
ba4edc5c0e implement some vultr api 2015-11-23 11:01:52 -08:00
Girish Ramakrishnan
dae2d81764 remove image_region as well 2015-11-23 10:49:09 -08:00
Girish Ramakrishnan
cee9cd14c0 hardcode the box size to smallest 2015-11-23 10:46:16 -08:00
Girish Ramakrishnan
f1ec110673 vultr: getSshKeyId 2015-11-23 10:27:27 -08:00
Girish Ramakrishnan
7104a3b738 use debug to put messages in stderr 2015-11-23 10:14:40 -08:00
Girish Ramakrishnan
114951b18c add get_image_id command 2015-11-23 09:31:42 -08:00
Girish Ramakrishnan
3c85a602a4 add vultr backend 2015-11-23 09:22:43 -08:00
Girish Ramakrishnan
a6415b8689 remove droplet from command names 2015-11-23 09:13:30 -08:00
Girish Ramakrishnan
b37670de84 make DO backend a binary 2015-11-23 08:59:45 -08:00
Girish Ramakrishnan
c9053bb0bc rename image creation script 2015-11-23 08:39:21 -08:00
Girish Ramakrishnan
5362102be6 add --provider 2015-11-23 08:38:56 -08:00
Girish Ramakrishnan
bf4601470b remove functions not part of vps api 2015-11-23 08:33:57 -08:00
Girish Ramakrishnan
bd6274282b s/droplet/server 2015-11-23 08:32:54 -08:00
Girish Ramakrishnan
5a0f7df377 handle scheduler error 2015-11-22 21:17:17 -08:00
Girish Ramakrishnan
2e54be3df8 Revert "fix crash in scheduler"
This reverts commit 3b5e30f922.
2015-11-22 21:13:05 -08:00
Girish Ramakrishnan
6625610aca fix crash in scheduler 2015-11-22 17:22:06 -08:00
Girish Ramakrishnan
5c9abfe97a debug output the changeIds 2015-11-19 17:49:30 -08:00
Johannes Zellner
e06f3d4180 Docker bridge default ip has changed 2015-11-19 16:32:03 +01:00
Girish Ramakrishnan
331b4d8524 use docker 1.9.0 2015-11-18 18:28:28 -08:00
Girish Ramakrishnan
e3cc12da4f new addon images based on docker 1.9.0 2015-11-18 17:53:58 -08:00
Girish Ramakrishnan
1e19f68cb5 Install docker binaries instead of apt
The apt binaries lxc-* are obsolete and replaced with 'docker-engine'
packages. The new repos however do not allow pinning to a specific version.

so brain dead.

https://docs.docker.com/engine/installation/binaries/#get-the-linux-binary
2015-11-18 13:02:51 -08:00
Girish Ramakrishnan
c286b491d6 Fix debug output 2015-11-16 16:43:53 -08:00
Girish Ramakrishnan
c7acdbf20d add empty certs dir 2015-11-16 15:21:42 -08:00
Girish Ramakrishnan
3cd0cc01c4 Add test certs
This is simply a self-signed cert
2015-11-16 15:20:08 -08:00
Girish Ramakrishnan
87d109727a fix path to secrets 2015-11-16 15:08:26 -08:00
Girish Ramakrishnan
47b6819ec8 scp does not require this option 2015-11-16 14:48:05 -08:00
Girish Ramakrishnan
00ee89a693 fix paths 2015-11-16 14:31:39 -08:00
Girish Ramakrishnan
ac14b08af4 we have to use 4.1.1 2015-11-16 14:31:39 -08:00
Girish Ramakrishnan
db97d7e836 Fix options usage 2015-11-16 13:15:12 -08:00
Girish Ramakrishnan
5a0c80611e better error message 2015-11-16 12:15:44 -08:00
Girish Ramakrishnan
4e872865a3 use different keys for different env 2015-11-16 12:15:15 -08:00
Girish Ramakrishnan
aea39a83b6 change yellowtent key to caas 2015-11-16 12:12:07 -08:00
Johannes Zellner
3d80821203 Give correct feedback if an app cannot be found in the appstore 2015-11-13 10:35:29 +01:00
Johannes Zellner
d9bfcc7c8a Change manifestJson column from VARCHAR to TEXT 2015-11-13 10:21:03 +01:00
Johannes Zellner
8bd9a6c109 Do not serve up the status page for 500 upstream errors 2015-11-13 09:39:33 +01:00
Johannes Zellner
d89db24bfc Fix indentation 2015-11-13 09:30:33 +01:00
Johannes Zellner
352b5ca736 Update supererror 2015-11-13 09:23:32 +01:00
Girish Ramakrishnan
6bd9173a9d this docker registry keeps going down 2015-11-12 16:22:53 -08:00
Girish Ramakrishnan
0cef3e1090 do not trust the health state blindly 2015-11-12 16:16:05 -08:00
Girish Ramakrishnan
6bd68961d1 typo 2015-11-12 16:13:15 -08:00
Girish Ramakrishnan
7f8ad917d9 filter out non-healthy apps 2015-11-12 16:04:33 -08:00
Girish Ramakrishnan
7cd89accaf better pullImage debug output 2015-11-12 15:58:39 -08:00
Girish Ramakrishnan
ffee084d2b new format of provisioning info 2015-11-12 14:22:43 -08:00
Girish Ramakrishnan
2bb657a733 rename variable for clarity 2015-11-12 12:40:41 -08:00
Girish Ramakrishnan
bc48171626 use fallback cert from backup if it exists 2015-11-12 12:37:43 -08:00
Girish Ramakrishnan
50924b0cd3 use admin.cert and admin.key if present in backup dir 2015-11-12 12:33:52 -08:00
Girish Ramakrishnan
3d86950cc9 fix indentation 2015-11-12 12:28:05 -08:00
Girish Ramakrishnan
db9ddf9969 backup fallback cert 2015-11-12 12:27:25 -08:00
Girish Ramakrishnan
1b507370dc Cannot use >= node 4.1.2
https://github.com/nodejs/node/issues/3803
2015-11-12 12:19:13 -08:00
Girish Ramakrishnan
b9a3c508c9 Fix target path 2015-11-12 06:58:01 -08:00
Girish Ramakrishnan
9ae49e7169 link npm 2015-11-11 22:04:58 -08:00
Girish Ramakrishnan
7a1cdd62a4 install node 4.2.2 2015-11-11 16:02:42 -08:00
Girish Ramakrishnan
a242881101 change engine requirements 2015-11-11 15:56:14 -08:00
Girish Ramakrishnan
3c5e221c39 change engine requirements 2015-11-11 15:55:59 -08:00
Girish Ramakrishnan
9c37f35d5a new shrinkwrap for 4.2.2 2015-11-11 15:55:24 -08:00
Girish Ramakrishnan
44ca59ac70 update shrinkwrap 2015-11-11 15:49:38 -08:00
Girish Ramakrishnan
398dfce698 update packages 2015-11-11 15:48:32 -08:00
Girish Ramakrishnan
0ebe6bde3d remove async and superagent from dev deps 2015-11-11 15:46:15 -08:00
Girish Ramakrishnan
4044070d76 Add -app prefix for all app sources
so that this doesn't conflict with some user.
2015-11-11 13:27:45 -08:00
Girish Ramakrishnan
8f05917d97 delete container on network error 2015-11-10 21:56:17 -08:00
Girish Ramakrishnan
3766d67daa create new container from cloudron exec 2015-11-10 21:36:20 -08:00
Johannes Zellner
b1290c073e log lines should be newline separated 2015-11-10 11:31:07 +01:00
Girish Ramakrishnan
15f686fc69 reboot automatically on panic after 5 seconds 2015-11-10 01:53:09 -08:00
Girish Ramakrishnan
36daf86ea2 send mail even if no related app was found (for addons) 2015-11-10 01:39:02 -08:00
Girish Ramakrishnan
4fb07a6ab3 make crashnotifier send mails again
mailer module waits for dns syncing. crashnotifier has no time for all that.
neither does it initialize the database. it simply wants to send mail.
(the crash itself could have happened because of some db issue)

maybe it should simply use a shell script at some point.
2015-11-10 00:25:47 -08:00
Girish Ramakrishnan
8f2119272b print all missing images 2015-11-09 23:31:04 -08:00
Girish Ramakrishnan
ee5bd456e0 set bucket and prefix to make migrate test pass 2015-11-09 22:45:07 -08:00
Girish Ramakrishnan
9c549ed4d8 wait for old apptask to finish before starting new one
kill() is async and takes no callback :/
2015-11-09 22:10:10 -08:00
Girish Ramakrishnan
61fc8b7968 better taskmanager debugs 2015-11-09 21:58:34 -08:00
Girish Ramakrishnan
6b30d65e05 add backupConfig in test 2015-11-09 21:08:23 -08:00
Girish Ramakrishnan
10c876ac75 make migrate test work 2015-11-09 20:34:27 -08:00
Girish Ramakrishnan
0966bd0bb1 use debug instead of console.error 2015-11-09 19:10:33 -08:00
Girish Ramakrishnan
294d1bfca4 remove unused require 2015-11-09 18:57:57 -08:00
Girish Ramakrishnan
af1d1236ea expose api server origin to determine if the box is dev/staging/prod 2015-11-09 17:50:09 -08:00
Girish Ramakrishnan
eaf9febdfd do not save aws and backupKey
it is now part of backupConfig
2015-11-09 08:39:01 -08:00
Johannes Zellner
8748226ef3 The controller already ensures we don't show the views here 2015-11-09 09:56:59 +01:00
Johannes Zellner
73568777c0 Only show dns and cert pages for custom domain cloudrons 2015-11-09 09:56:08 +01:00
Girish Ramakrishnan
c64697dde7 cannot read property provider of null 2015-11-09 00:42:25 -08:00
Girish Ramakrishnan
0701e38a04 do not set dnsConfig for caas on activation 2015-11-09 00:20:16 -08:00
Girish Ramakrishnan
2a27d96e08 pass dnsConfig in update 2015-11-08 23:57:42 -08:00
Girish Ramakrishnan
ba42611701 token is not a function 2015-11-08 23:54:47 -08:00
Girish Ramakrishnan
54486138f0 pass dnsConfig to backend api 2015-11-08 23:21:55 -08:00
Girish Ramakrishnan
13d3f506b0 always add dns config in tests 2015-11-08 23:05:55 -08:00
Girish Ramakrishnan
32ca686e1f read dnsConfig from settings to choose api backend 2015-11-08 22:55:31 -08:00
Girish Ramakrishnan
a5ef9ff372 Add getAllPaged to storage api 2015-11-08 22:41:13 -08:00
Girish Ramakrishnan
738bfa7601 remove unused variable 2015-11-08 11:04:39 -08:00
Girish Ramakrishnan
40cdd270b1 ensure correct token is used in tests 2015-11-07 22:19:23 -08:00
Girish Ramakrishnan
53a2a8015e set the backupConfig in backups test 2015-11-07 22:18:39 -08:00
Girish Ramakrishnan
15aaa440a2 Add test for settings.backupConfig 2015-11-07 22:17:08 -08:00
Girish Ramakrishnan
d8a4014eff remove bad reference to config.aws 2015-11-07 22:12:03 -08:00
Girish Ramakrishnan
d25d423ccd remove legacy update params
these are now part of settings
2015-11-07 22:07:25 -08:00
Girish Ramakrishnan
49b0fde18b remove config.aws and config.backupKey 2015-11-07 22:06:56 -08:00
Girish Ramakrishnan
8df7f17186 load backup config from settingsdb 2015-11-07 22:06:09 -08:00
Girish Ramakrishnan
adc395f888 save backupConfig in db 2015-11-07 21:45:38 -08:00
Girish Ramakrishnan
e770664365 Add backup config to settings 2015-11-07 18:02:45 -08:00
Girish Ramakrishnan
05d4ad3b5d read new format of restore keys 2015-11-07 17:53:54 -08:00
Girish Ramakrishnan
cc6f726f71 change backup provider to caas 2015-11-07 09:17:58 -08:00
Girish Ramakrishnan
a4923f894c prepare for new backupConfig 2015-11-07 00:26:12 -08:00
Girish Ramakrishnan
12200f2e0d split aws code into storage backends 2015-11-06 18:22:29 -08:00
Girish Ramakrishnan
a853afc407 backups: add api call 2015-11-06 18:14:59 -08:00
Girish Ramakrishnan
de471b0012 return BackupError and not SubdomainError 2015-11-06 18:08:25 -08:00
Girish Ramakrishnan
b6f1ad75b8 merge SubdomainError into subdomains.js like other error classes 2015-11-06 17:58:01 -08:00
Girish Ramakrishnan
e6840f352d remove spurious debugs 2015-11-05 11:59:11 -08:00
Johannes Zellner
6456874f97 Avoid ui glitch in setup if no custom domain
This moves the wizard flow logic to next() instead of the views individually
2015-11-05 20:32:30 +01:00
Johannes Zellner
66b4a4b02a Remove unused favicon middleware 2015-11-05 19:29:08 +01:00
Girish Ramakrishnan
7e36b3f8e5 mailer: set dnsReady flag 2015-11-05 10:28:41 -08:00
Girish Ramakrishnan
12061cc707 mailDnsRecordIds is never used 2015-11-05 10:22:25 -08:00
Girish Ramakrishnan
afcc62ecf6 mailServer is never used 2015-11-05 10:22:25 -08:00
Johannes Zellner
bec6850c98 Specify icon for oauth views 2015-11-05 19:19:28 +01:00
Girish Ramakrishnan
d253a06bab Revert "Remove unused very old yellowtent favicon"
This reverts commit c15a200d4a.

This is used by the favicon middleware
2015-11-05 10:14:59 -08:00
Johannes Zellner
857c5c69b1 force reload the webadmin on cert upload 2015-11-05 18:38:32 +01:00
Girish Ramakrishnan
766fc49f39 setup dkim records for custom domain 2015-11-05 09:30:23 -08:00
Johannes Zellner
941e09ca9f Update the cloudron icon 2015-11-05 18:05:19 +01:00
Johannes Zellner
2466a97fb8 Remove more unused image assets 2015-11-05 18:04:54 +01:00
Johannes Zellner
81f92f5182 Remove another unused favicon 2015-11-05 17:49:25 +01:00
Johannes Zellner
91e1d442ff Update the avatar 2015-11-05 17:48:18 +01:00
Johannes Zellner
a1d6ae2296 Remove unused very old yellowtent favicon 2015-11-05 17:46:17 +01:00
Girish Ramakrishnan
b529fd3bea we expect the server cert to be first like the rfc says 2015-11-04 19:22:37 -08:00
Girish Ramakrishnan
bf319cf593 more verbose certificate message 2015-11-04 19:15:23 -08:00
Girish Ramakrishnan
15eedd2a84 pass on certificate errors 2015-11-04 19:13:16 -08:00
Girish Ramakrishnan
d0cd3d1c32 Put dns first since it says dns & certs 2015-11-04 18:23:51 -08:00
Girish Ramakrishnan
747786d0c8 fix avatar url for custom domains 2015-11-04 18:21:46 -08:00
Girish Ramakrishnan
b232255170 move dns and certs to own view 2015-11-04 18:21:43 -08:00
Girish Ramakrishnan
bd2982ea69 move backups after about 2015-11-04 16:41:33 -08:00
Girish Ramakrishnan
1c948cc83c route53: Do not use weight and setIdentifier 2015-11-04 15:15:39 -08:00
Girish Ramakrishnan
ccde1e51ad debug any failure 2015-11-04 14:28:26 -08:00
Girish Ramakrishnan
03ec940352 Add space in spf record 2015-11-04 14:22:56 -08:00
Girish Ramakrishnan
bd5b15e279 insert any dns config into settings 2015-11-04 08:45:12 -08:00
Girish Ramakrishnan
b6897a4577 Revert "Add isConfigured fallback for caas cloudrons"
This reverts commit 338f68a0f3.

We will send dns config from appstore instead
2015-11-04 08:28:21 -08:00
Johannes Zellner
f7225523ec Add isConfigured fallback for caas cloudrons 2015-11-04 12:53:16 +01:00
Girish Ramakrishnan
6a0b8e0722 remove unused backup route 2015-11-03 16:52:38 -08:00
Girish Ramakrishnan
9d9509525c listen on timezone key only when configured 2015-11-03 16:11:24 -08:00
Girish Ramakrishnan
b1dbb3570b Add configured event
Cloudron code paths like cron/mailer/taskmanager now wait for configuration
to be complete before doing anything.

This is useful when a cloudron is moved from a non-custom domain to a custom domain.
In that case, we require route53 configs.
2015-11-03 16:06:38 -08:00
Girish Ramakrishnan
c075160e5d Remove event listener 2015-11-03 15:22:02 -08:00
Girish Ramakrishnan
612ceba98a unsubscribe from events 2015-11-03 15:19:06 -08:00
Girish Ramakrishnan
7d5e0040bc debug only if error 2015-11-03 15:15:37 -08:00
Girish Ramakrishnan
d6e19d2000 resume tasks only if cloudron is activated 2015-11-03 15:14:59 -08:00
Girish Ramakrishnan
a01d5db2a0 minor refactor 2015-11-02 17:45:38 -08:00
Girish Ramakrishnan
5de3baffd4 send monotonic timestamp as well 2015-11-02 14:26:15 -08:00
Girish Ramakrishnan
63c10e8f02 fix typo 2015-11-02 14:23:02 -08:00
Girish Ramakrishnan
a99e7c2783 disable logstream testing (since it requires journald) 2015-11-02 14:08:34 -08:00
Girish Ramakrishnan
88b1cc553f Use journalctl to get app logs 2015-11-02 14:08:34 -08:00
Girish Ramakrishnan
316e8dedd3 name is a query parameter 2015-11-02 14:08:34 -08:00
Girish Ramakrishnan
bb040eb5a8 give yellowtent user access to cloudron logs 2015-11-02 13:20:43 -08:00
Johannes Zellner
f106a76cd5 Fix the avatar and brand label links in navbar 2015-11-02 21:27:52 +01:00
Girish Ramakrishnan
95b2bea828 Give containers a name 2015-11-02 09:34:31 -08:00
Girish Ramakrishnan
51a0ad70aa log to journald instead 2015-11-02 08:54:10 -08:00
Girish Ramakrishnan
71faa5f89e Move ssh to port 202 2015-11-01 13:16:32 -08:00
Girish Ramakrishnan
58d6166592 fix indexOf matching in addDnsRecords 2015-10-30 18:12:24 -07:00
Girish Ramakrishnan
d42f66bfed Fix casing of zone id 2015-10-30 18:05:08 -07:00
Girish Ramakrishnan
6cf0554e23 do not resolve to a tag 2015-10-30 17:40:17 -07:00
Girish Ramakrishnan
5bd8579e73 add null check 2015-10-30 16:24:56 -07:00
Girish Ramakrishnan
01cd0b6b87 fix indexOf matching 2015-10-30 16:12:39 -07:00
Girish Ramakrishnan
b4aec552fc txtRecords is a single level array 2015-10-30 16:04:09 -07:00
Girish Ramakrishnan
93ab606d94 dns resolve using the authoritative nameserver 2015-10-30 15:57:15 -07:00
Girish Ramakrishnan
94e94f136d sendHeartbeat on init 2015-10-30 14:58:48 -07:00
Girish Ramakrishnan
1b57128ef6 ListResourceRecordSets returns items based on lexical sorting 2015-10-30 14:41:34 -07:00
Girish Ramakrishnan
219a2b0798 rename function 2015-10-30 13:53:12 -07:00
Girish Ramakrishnan
b37d5b0fda enable back spf 2015-10-30 13:48:46 -07:00
Girish Ramakrishnan
0e9aac14eb leave a note on subdomains.update 2015-10-30 13:47:10 -07:00
Girish Ramakrishnan
cf81ab0306 subdomains.update now takes array 2015-10-30 13:45:10 -07:00
Girish Ramakrishnan
00d8148e46 fix get call 2015-10-30 13:45:10 -07:00
Johannes Zellner
0b59281dbb Remove leftover console.log() 2015-10-30 21:30:57 +01:00
Johannes Zellner
e0c845ca16 Fix cloudron tests when run together 2015-10-30 21:30:57 +01:00
Girish Ramakrishnan
d6bff57c7d subdomains.del now takes array values 2015-10-30 13:30:19 -07:00
Girish Ramakrishnan
5c4b4d764e implement subdomains.get for route53 2015-10-30 13:23:43 -07:00
Girish Ramakrishnan
bf13b5b931 subdomains.add takes array values 2015-10-30 13:23:43 -07:00
Johannes Zellner
afade0a5ac Ensure the next buttons are properly disabled when the fields are invalid 2015-10-30 21:10:01 +01:00
Johannes Zellner
40da8736d4 Do not use special wizard page controller for dns view 2015-10-30 21:10:01 +01:00
Johannes Zellner
a55675b440 Fixup the enter focus flow 2015-10-30 21:10:01 +01:00
Girish Ramakrishnan
6ce71c7506 prepare dns backends to accept arrays of values 2015-10-30 13:04:43 -07:00
Girish Ramakrishnan
0dda91078d remove listeners on uninitialize 2015-10-30 12:52:33 -07:00
Girish Ramakrishnan
93632f5c76 disable spf for testing 2015-10-30 12:50:47 -07:00
Girish Ramakrishnan
cb4cd10013 settings changed callback provides the changed setting as first argument 2015-10-30 12:50:47 -07:00
Johannes Zellner
62bcf09ab4 Fix error message in set dns config 2015-10-30 20:32:58 +01:00
Girish Ramakrishnan
b466dc1970 remove unused require 2015-10-30 11:43:40 -07:00
Girish Ramakrishnan
0a10eb66cc caas: add subdomains.get 2015-10-29 16:41:04 -07:00
Girish Ramakrishnan
c6322c00aa remove unused subdomains.addMany 2015-10-29 16:39:07 -07:00
Girish Ramakrishnan
b549a4bddf minor rename of variable 2015-10-29 16:38:18 -07:00
Girish Ramakrishnan
3fa50f2a1a handle getSubdomain error 2015-10-29 16:27:01 -07:00
Girish Ramakrishnan
ddded0ebfb emit change event 2015-10-29 16:21:41 -07:00
Girish Ramakrishnan
71c0945607 add updateSubdomain in route53 backend 2015-10-29 15:37:51 -07:00
Girish Ramakrishnan
f0295c5dc5 debug message for update already in progress 2015-10-29 15:34:30 -07:00
Girish Ramakrishnan
4e1286a8cf addDnsRecords on restarts after activation 2015-10-29 15:00:53 -07:00
Girish Ramakrishnan
d69cead362 remove unused variable 2015-10-29 14:57:51 -07:00
Girish Ramakrishnan
7699cffa26 implement dns updates for custom domains 2015-10-29 14:33:34 -07:00
Girish Ramakrishnan
1021fc566f Add subdomains.update 2015-10-29 14:16:09 -07:00
Girish Ramakrishnan
1fb3b2c373 Add get and update subdomain to caas 2015-10-29 14:15:54 -07:00
Girish Ramakrishnan
2428000262 Add square bracket for empty string/no apps 2015-10-29 13:07:52 -07:00
Johannes Zellner
3d5b4f3191 Ensure we only show cert related ui parts for custom domain cloudrons 2015-10-29 21:01:24 +01:00
Girish Ramakrishnan
eb6a217f4a set non-custom domain provider as caas 2015-10-29 12:41:31 -07:00
Girish Ramakrishnan
06aaf98716 dnsConfig provider can be caas 2015-10-29 12:33:10 -07:00
Girish Ramakrishnan
26fc1fd7a6 use debug again 2015-10-29 12:28:50 -07:00
Girish Ramakrishnan
a9aa3c4fd8 use debug instead of console.error 2015-10-29 12:26:58 -07:00
Girish Ramakrishnan
61d4509a8e do not emit fake activation event
cloudron is simply the first thing to be initialized
2015-10-29 12:18:25 -07:00
Johannes Zellner
8cff4f4ff1 Only print an app alive digest 2015-10-29 19:54:20 +01:00
Girish Ramakrishnan
5dc30e02c4 mailer: do not start until activated 2015-10-29 10:12:44 -07:00
Girish Ramakrishnan
55f070e12c ensure cloudron.js is initialized first 2015-10-29 10:11:05 -07:00
Girish Ramakrishnan
0afb8f51c3 Fix typo 2015-10-29 09:00:31 -07:00
Girish Ramakrishnan
42f2637078 setup dns records on activation
do not wait for the records to sync either. the appstore does all the
waiting now (or the user, in the selfhosted case)
2015-10-29 08:43:25 -07:00
Girish Ramakrishnan
bbec7c6610 send heartbeat regardless of activation 2015-10-29 08:40:52 -07:00
Johannes Zellner
76fc257661 Add a link to support page 2015-10-29 14:48:02 +01:00
Johannes Zellner
58ce50571a Adjust text 2015-10-29 14:41:59 +01:00
Johannes Zellner
14205d2810 Add missing = 2015-10-29 14:36:25 +01:00
Johannes Zellner
d798fc4b3f Show current dns config in settings ui 2015-10-29 14:34:43 +01:00
Johannes Zellner
d29d07cb2d Add Client.getDnsConfig() 2015-10-29 14:34:25 +01:00
Johannes Zellner
07a0b360f6 Add form logic for dns credentials 2015-10-29 13:53:48 +01:00
Johannes Zellner
8b253a8a61 Rename cert header 2015-10-29 12:56:12 +01:00
Johannes Zellner
fddbf96c9c Add form feedback for cert forms in settings 2015-10-29 12:55:32 +01:00
Johannes Zellner
d1d01ae4b8 Do not double submit the forms 2015-10-29 12:37:59 +01:00
Johannes Zellner
51706afc46 Fix typo in field name 2015-10-29 12:37:30 +01:00
Johannes Zellner
d4ea23b1ac Add client.setAdminCertificate() 2015-10-29 12:29:25 +01:00
Johannes Zellner
0460beccf0 Add route to set the admin certificate
This route is separate until we can treat the webadmin just
like any other app
2015-10-29 12:27:57 +01:00
Johannes Zellner
aa5ed17dfa streamline cert upload forms in settings 2015-10-29 11:52:53 +01:00
Girish Ramakrishnan
32173b19c9 Do not subscribe to activation event if already activated 2015-10-28 17:07:13 -07:00
Girish Ramakrishnan
1a8fd7dd92 remove gInitialized pattern
not sure why we initialize anything more than once
2015-10-28 17:05:28 -07:00
Girish Ramakrishnan
f0047bc1aa console.error -> debug 2015-10-28 17:05:16 -07:00
Girish Ramakrishnan
917832e0ae Change DKIM selector to cloudron 2015-10-28 16:16:15 -07:00
Girish Ramakrishnan
cf8948ac69 console.error to debug 2015-10-28 16:08:12 -07:00
Girish Ramakrishnan
b2df639155 Move dns backends to separate directory 2015-10-28 16:04:49 -07:00
Girish Ramakrishnan
70ace09ff5 remove unused digitalocean.js 2015-10-28 15:25:22 -07:00
Girish Ramakrishnan
35a69f595a remove unused require 2015-10-28 15:22:10 -07:00
Girish Ramakrishnan
f4c4a931d2 Fix debug message 2015-10-28 15:21:47 -07:00
Girish Ramakrishnan
7caced2fe8 Do not send email if SPF record is not setup correctly 2015-10-28 14:45:51 -07:00
Johannes Zellner
846e5deb36 Add cert form error reporting to app install form 2015-10-28 22:21:20 +01:00
Johannes Zellner
eca328b247 Add cert and key to app install route 2015-10-28 22:09:19 +01:00
Johannes Zellner
c0e9091e4b Preselect current user for singleuser apps 2015-10-28 22:07:41 +01:00
Johannes Zellner
6b6e417435 Add user id to profile object 2015-10-28 22:07:30 +01:00
Johannes Zellner
954bb7039c Make cert forms appear as a group 2015-10-28 21:20:59 +01:00
Johannes Zellner
ae01f517c7 Mark cert fields as optional 2015-10-28 20:51:11 +01:00
Johannes Zellner
385bfe07e2 Add cert handling to install form 2015-10-28 20:50:55 +01:00
Johannes Zellner
25aff6a53b Remove unused scope vars 2015-10-28 20:38:22 +01:00
Johannes Zellner
edcbf79b85 Add form feedback for certs 2015-10-28 20:30:35 +01:00
Girish Ramakrishnan
2591b8e10c minor rewording 2015-10-28 10:17:46 -07:00
Johannes Zellner
9df9d1667f Test certs are now simply embedded 2015-10-28 16:34:09 +01:00
Johannes Zellner
7798111af1 Ensure the test certs match domain and the folder is created 2015-10-28 16:33:45 +01:00
Johannes Zellner
12351113a9 Fixup the tests for wildcard cert 2015-10-28 16:00:51 +01:00
Johannes Zellner
d9256f99af Make sure the dns sync file is removed 2015-10-28 15:32:49 +01:00
Johannes Zellner
cf021066ed Move cert validation to settings and use it for wildcard cert 2015-10-28 14:35:39 +01:00
Johannes Zellner
04eb2a982f Add proper cert validator 2015-10-28 14:20:25 +01:00
Johannes Zellner
22dcc787b5 Add x509 node module 2015-10-28 14:20:03 +01:00
Johannes Zellner
5d4d0c0a86 Add missing fs. 2015-10-28 12:56:09 +01:00
Johannes Zellner
e81db9728a Set the cert and key dynamically when rendering nginx appconfig 2015-10-28 12:42:04 +01:00
Johannes Zellner
db305af8c9 We name certs with .cert extension 2015-10-28 12:33:27 +01:00
Johannes Zellner
4b3aca7773 certs should be stored with app fqdn 2015-10-28 12:28:57 +01:00
Johannes Zellner
5b5abe99e7 Save the uploaded certs to app cert directory 2015-10-28 12:24:59 +01:00
Johannes Zellner
8f670eb755 Add per app cert dir 2015-10-28 12:23:16 +01:00
Johannes Zellner
21a604814c Add tests for app cert upload 2015-10-28 12:13:37 +01:00
Johannes Zellner
7eeb835d96 Adjust the settings view to upload certs as json body 2015-10-28 12:12:54 +01:00
Johannes Zellner
57de915133 Make settings certificate upload route also just using the json body 2015-10-28 12:12:06 +01:00
Johannes Zellner
a892de5c2d Ensure cert and key are strings 2015-10-28 11:50:50 +01:00
Johannes Zellner
69cd01955b No more dns view 2015-10-28 10:36:55 +01:00
Girish Ramakrishnan
f39809c941 EE API is synchronous 2015-10-27 22:18:02 -07:00
Girish Ramakrishnan
09c4bfeb51 Add DNS records for non-custom domains before activation 2015-10-27 21:10:00 -07:00
Girish Ramakrishnan
615789a9ad fix unregisterSubdomain loop 2015-10-27 18:53:06 -07:00
Girish Ramakrishnan
bec5eaf3c9 send heartbeat immediately on startup 2015-10-27 17:05:56 -07:00
Girish Ramakrishnan
4f13ef9cea heartbeat does not rely on dns sync 2015-10-27 16:42:24 -07:00
Girish Ramakrishnan
873de48beb Do not add DNS records for custom domain 2015-10-27 16:23:08 -07:00
Girish Ramakrishnan
87e70b86d3 sendHeartbeat on activation event 2015-10-27 16:20:14 -07:00
Girish Ramakrishnan
140aa85223 Add cloudron.isActivatedSync 2015-10-27 16:12:05 -07:00
Girish Ramakrishnan
3ac3207497 send heartbeats regardless of activation 2015-10-27 16:05:19 -07:00
Girish Ramakrishnan
e36a0b9a30 create cron jobs only on activation 2015-10-27 16:04:29 -07:00
Girish Ramakrishnan
0b1aac7687 add null check for all jobs 2015-10-27 16:02:42 -07:00
Girish Ramakrishnan
e008cde2ff Add dns records on activation 2015-10-27 16:00:31 -07:00
Girish Ramakrishnan
d1e46be8ad Do not set dns config if null 2015-10-27 12:41:13 -07:00
Girish Ramakrishnan
dc18a18248 remove unused variables 2015-10-27 12:39:06 -07:00
Girish Ramakrishnan
b9a0ad73ab $location is not defined 2015-10-27 12:37:01 -07:00
Girish Ramakrishnan
e2c3fb309c Add custom domain setup step 2015-10-27 12:04:27 -07:00
Girish Ramakrishnan
d5255b8cf4 Add Client.setDnsConfig 2015-10-27 12:02:47 -07:00
Girish Ramakrishnan
42e70e870b deploymentConfig is never used 2015-10-27 11:38:05 -07:00
Johannes Zellner
8ffd7b0197 Adjust the webadmin code for cert upload 2015-10-27 18:38:46 +01:00
Johannes Zellner
01ead194d8 Move cert upload route to /settings 2015-10-27 18:38:46 +01:00
Girish Ramakrishnan
80b9d4be50 awscredentials route is not called anymore 2015-10-27 10:24:42 -07:00
Girish Ramakrishnan
ef06836804 make apps-test partially work 2015-10-27 10:14:51 -07:00
Johannes Zellner
916870b546 Send cert and key with configure 2015-10-27 18:11:48 +01:00
Girish Ramakrishnan
2da7216be6 make apptask-test work 2015-10-27 10:02:43 -07:00
Girish Ramakrishnan
54215cff7a Use the aws backend for tests 2015-10-27 10:02:43 -07:00
Girish Ramakrishnan
166257bbdc Allow endpoint to be configured (for the tests) 2015-10-27 10:02:43 -07:00
Girish Ramakrishnan
d502e04cbd use aws backend for custom domains 2015-10-27 10:02:43 -07:00
Girish Ramakrishnan
1fca680a67 read dns config from settings 2015-10-27 10:02:43 -07:00
Johannes Zellner
4ea3238391 Pass certs down to apps.configure 2015-10-27 16:36:09 +01:00
Johannes Zellner
fa12e7bd97 Add cert and key to app configure route 2015-10-27 15:44:47 +01:00
Johannes Zellner
6118535c4a Add test helper script to generate a selfsigned cert 2015-10-27 15:06:53 +01:00
Johannes Zellner
920f04aab3 Use test-app image 10.0.0 2015-10-27 14:20:19 +01:00
Johannes Zellner
ed13f2d6ef Add basic form elements for certificate in app configure 2015-10-27 12:26:55 +01:00
Johannes Zellner
dff27fe7b3 Remove unused dns views 2015-10-27 10:40:05 +01:00
Johannes Zellner
5d589e7330 Move certificate upload form from dns to settings 2015-10-27 10:39:02 +01:00
Johannes Zellner
01ec16f472 Remove useless console.log() 2015-10-27 10:38:11 +01:00
Girish Ramakrishnan
f510d4bc10 add route for setting/getting dns settings 2015-10-26 16:52:59 -07:00
Girish Ramakrishnan
2db2eb13af add settings.get/setDnsConfig 2015-10-26 16:35:50 -07:00
Girish Ramakrishnan
82e1c07722 separate out dns and backup credentials 2015-10-26 16:23:41 -07:00
Girish Ramakrishnan
23ba078a17 Fix redis hostname 2015-10-23 19:24:22 -07:00
Girish Ramakrishnan
b5358e7565 recreate docker containers for hostname change 2015-10-23 16:30:17 -07:00
Girish Ramakrishnan
697699bd5f test the new env vars APP_* 2015-10-23 16:27:40 -07:00
Girish Ramakrishnan
dd2a806ab8 Do not set hostname of app container
Some apps like pasteboard try to curl the public app url from inside
the container. This fails because we set the hostname and the hostname
maps to the internal docker IP.

To fix this, simply export two environment variables providing the
app's domain and origin. The hostname is set to the app location instead
of the FQDN for debugging.

Fixes #521
2015-10-23 16:17:35 -07:00
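The fix described above (export the app's domain and origin as env vars instead of setting the container hostname) could look roughly like this sketch; the exact variable names are assumptions based on the `APP_*` prefix mentioned in the neighboring test commit:

```javascript
// Sketch: build the env entries for an app's public URL instead of
// relying on the container hostname (which maps to the internal
// docker IP). APP_DOMAIN/APP_ORIGIN names are assumptions.
function appEnv(app) {
    var fqdn = app.location + '.' + app.domain; // e.g. blog.example.com
    return [
        'APP_DOMAIN=' + fqdn,
        'APP_ORIGIN=https://' + fqdn
    ];
}

console.log(appEnv({ location: 'blog', domain: 'example.com' }));
```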
Girish Ramakrishnan
84d96cebee linter fixes 2015-10-23 16:06:55 -07:00
Johannes Zellner
10658606d7 Bring back 'Cloudron' in the login header 2015-10-23 20:21:31 +02:00
Johannes Zellner
f72d89fa76 Replace the ugly oauth proxy checkbox 2015-10-22 13:18:58 +02:00
Johannes Zellner
f9f4a8e7ad Check memory availability to decide whether an app can be installed 2015-10-22 11:16:55 +02:00
Johannes Zellner
fd58e83da9 Provide the memory byte count with the cloudron config route 2015-10-22 11:16:55 +02:00
Johannes Zellner
bfcedfdb2a Add node module bytes 2015-10-22 11:16:55 +02:00
Johannes Zellner
d11e030150 Add resource constraint view on app installation attempt 2015-10-22 11:16:55 +02:00
Johannes Zellner
6103640b53 Remove leftover console.log() 2015-10-22 11:16:55 +02:00
Girish Ramakrishnan
259199897b update test image 2015-10-21 09:16:04 -07:00
Johannes Zellner
ee498b9e2b A readable stream does not have .end() 2015-10-21 17:25:14 +02:00
Johannes Zellner
18a464b1d2 Make data/ directory writeable by yellowtent user 2015-10-21 17:18:45 +02:00
Johannes Zellner
d1c8e34540 dns in sync file should be under data/ 2015-10-21 17:18:39 +02:00
Johannes Zellner
a151846f1c Use config.(set)dnsInSync()
Fixes #520
2015-10-21 16:44:03 +02:00
Johannes Zellner
9f19b0bc9e Use a persistent file for dns sync flag 2015-10-21 16:42:17 +02:00
Johannes Zellner
289fe76adc Avoid network request for access token verification in oauth proxy 2015-10-21 16:23:15 +02:00
Johannes Zellner
1eb1c44926 Clear oauthproxy session in case the access token is invalid 2015-10-21 15:57:18 +02:00
Girish Ramakrishnan
bc09e4204b use debug instead of console.error 2015-10-20 19:03:34 -07:00
Girish Ramakrishnan
1a2948df85 VolumesFrom is part of HostConfig 2015-10-20 17:34:47 -07:00
Girish Ramakrishnan
16df15cf55 containerId does not mean it is running 2015-10-20 16:56:57 -07:00
Girish Ramakrishnan
0566bad6d9 bump infra version 2015-10-20 15:07:35 -07:00
Girish Ramakrishnan
edc90ccc00 bump test image 2015-10-20 14:40:27 -07:00
Girish Ramakrishnan
3688602d16 test the scheduler 2015-10-20 14:30:50 -07:00
Girish Ramakrishnan
0deadc5cf2 autodetect image id 2015-10-20 13:07:25 -07:00
Girish Ramakrishnan
10ac435d53 addons is mandatory 2015-10-20 12:57:00 -07:00
Girish Ramakrishnan
16f025181f ensure boolean 2015-10-20 12:49:02 -07:00
Girish Ramakrishnan
3808f60e69 appState can be null 2015-10-20 12:32:50 -07:00
Girish Ramakrishnan
a00615bd4e manifest always has addons 2015-10-20 12:27:23 -07:00
Girish Ramakrishnan
14bc2c7232 rename isSubcontainer -> isAppContainer 2015-10-20 10:55:06 -07:00
Girish Ramakrishnan
76d286703c ignore portBindings and exportPorts for subcontainers 2015-10-20 10:42:35 -07:00
Girish Ramakrishnan
c80a5b59ab do not dump containerOptions 2015-10-20 10:27:53 -07:00
Girish Ramakrishnan
db6882e9f5 do not kill containers on restart 2015-10-20 10:22:42 -07:00
Girish Ramakrishnan
3fd9d9622b schedulerConfig cannot be null 2015-10-20 09:44:46 -07:00
Girish Ramakrishnan
5ae4c891de scheduler: sync more often to catch bugs sooner 2015-10-20 09:36:55 -07:00
Girish Ramakrishnan
fb2e7cb009 scheduler: crash fixes 2015-10-20 09:36:30 -07:00
Johannes Zellner
8124f0ac7f Remove cloudron name from the setup wizard 2015-10-20 13:36:25 +02:00
Johannes Zellner
446f571bec The activate route does not take a cloudron name anymore 2015-10-20 13:12:37 +02:00
Johannes Zellner
142ae76542 Finally remove the cloudron name from the api wrapper and index page 2015-10-20 12:59:57 +02:00
Johannes Zellner
ed1873f47e Remove cloudron name related UI in the settings view 2015-10-20 12:57:51 +02:00
Johannes Zellner
0ee04e6ef3 Remove cloudron name usage in error.html 2015-10-20 12:55:30 +02:00
Johannes Zellner
1e4475b275 Remove cloudron name usage in naked domain page 2015-10-20 12:46:45 +02:00
Johannes Zellner
9dd9743943 Do not use cloudron name in appstatus 2015-10-20 12:41:52 +02:00
Johannes Zellner
5fbcebf80b Stop using the cloudron name in the oauth views 2015-10-20 12:31:16 +02:00
Girish Ramakrishnan
852b016389 scheduler: do not save cronjob object in state
the cronjob object holds non-serializable js internals, so JSON.stringify fails
2015-10-20 01:31:11 -07:00
Girish Ramakrishnan
73f28d7653 put back request 2015-10-20 00:20:21 -07:00
Girish Ramakrishnan
1f28678c27 scheduler: make it work 2015-10-20 00:05:19 -07:00
Girish Ramakrishnan
daba68265c stop all containers of an app 2015-10-20 00:05:19 -07:00
Girish Ramakrishnan
6d04481c27 fix debug tag 2015-10-19 23:38:55 -07:00
Girish Ramakrishnan
ed5d6f73bb scheduler: fix require 2015-10-19 22:42:13 -07:00
Girish Ramakrishnan
d0360e9e68 scheduler: load/save state 2015-10-19 22:41:42 -07:00
Girish Ramakrishnan
32ddda404c explicitly specify all to 0 (this is the default) 2015-10-19 22:09:38 -07:00
Girish Ramakrishnan
41de667e3d do not set container name (we use labels instead) 2015-10-19 22:09:38 -07:00
Girish Ramakrishnan
8530e70af6 delete all containers of an app 2015-10-19 22:09:34 -07:00
Girish Ramakrishnan
7a840ad15f scheduler: make stopJobs async 2015-10-19 21:36:55 -07:00
Girish Ramakrishnan
682c2721d2 scheduler: kill existing tasks if they are still running 2015-10-19 21:36:23 -07:00
Girish Ramakrishnan
fb56795cbd merge start options into hostconfig 2015-10-19 21:35:02 -07:00
Girish Ramakrishnan
15aa4ecc5d Add docker.createSubcontainer 2015-10-19 21:33:53 -07:00
Girish Ramakrishnan
351d7d22fb rename tasks to tasksConfig 2015-10-19 16:29:28 -07:00
Girish Ramakrishnan
79999887a9 job -> cronJob 2015-10-19 16:27:03 -07:00
Girish Ramakrishnan
25d74ed649 createContainer takes optional command 2015-10-19 16:22:35 -07:00
Girish Ramakrishnan
9346666b3e add labels to container 2015-10-19 16:01:04 -07:00
Girish Ramakrishnan
13453552b5 createContainer only takes app object 2015-10-19 16:00:40 -07:00
Girish Ramakrishnan
ef38074b55 add asserts 2015-10-19 15:51:02 -07:00
Girish Ramakrishnan
e5e8eea7ac make it work without app object 2015-10-19 15:45:43 -07:00
Girish Ramakrishnan
9be2efc4f2 downloadImage only requires manifest now 2015-10-19 15:37:57 -07:00
Girish Ramakrishnan
990b7a2d20 implement scheduler
- scan for apps every 10 minutes and schedules tasks
- uses docker.exec
    - there is no way to control exec container. docker developers
      feel exec is for debugging purposes primarily
- future version will be based on docker run instead

part of #519
2015-10-19 14:53:34 -07:00
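The sync step the scheduler commit describes (scan apps, schedule tasks) can be sketched as a pure diff between desired and currently scheduled jobs; names like `tasksConfig` follow the renames in nearby commits, and the real scheduler additionally persists state and drives `docker exec`:

```javascript
// Sketch: decide which cron jobs to start and stop on each scan.
// Job ids are "appId/taskName"; tasksConfig comes from the manifest.
function computeChanges(apps, scheduledJobIds) {
    var wanted = {};
    apps.forEach(function (app) {
        var tasks = (app.manifest && app.manifest.tasksConfig) || {};
        Object.keys(tasks).forEach(function (taskName) {
            wanted[app.id + '/' + taskName] = tasks[taskName];
        });
    });
    return {
        start: Object.keys(wanted).filter(function (id) { return scheduledJobIds.indexOf(id) === -1; }),
        stop: scheduledJobIds.filter(function (id) { return !(id in wanted); })
    };
}
```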
Girish Ramakrishnan
8d6dd62ef4 refactor container code into docker.js 2015-10-19 14:44:01 -07:00
Girish Ramakrishnan
69d09e8133 use docker.connection 2015-10-19 14:09:20 -07:00
Girish Ramakrishnan
6671b211e0 export a connection property from docker.js 2015-10-19 11:24:21 -07:00
Girish Ramakrishnan
307e815e97 remove unused require 2015-10-19 11:18:50 -07:00
Girish Ramakrishnan
d8e2bd6ff5 Refactor docker.js to not have mac stuff 2015-10-19 11:14:11 -07:00
Girish Ramakrishnan
e74c2f686b remove unused require 2015-10-19 11:05:31 -07:00
Girish Ramakrishnan
c7d5115a56 Remove vbox.js
... and all related mac code. It's totally untested at this point and
most likely doesn't work
2015-10-19 10:54:36 -07:00
Girish Ramakrishnan
774ba11a92 Move HostConfig to createContainer
Newer Docker has deprecated passing HostConfig to the start container call
2015-10-19 10:38:46 -07:00
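With the newer Docker remote API, HostConfig is supplied at container creation rather than to start(). A sketch of the resulting create options; Image, Labels, HostConfig, Binds, and RestartPolicy are standard remote-API fields, while the paths and app shape are invented for illustration:

```javascript
// Sketch: createContainer options with HostConfig nested inside,
// instead of passing it to startContainer() (deprecated in newer APIs).
function containerOptions(app) {
    return {
        Image: app.manifest.dockerImage,
        Labels: { appId: app.id },       // labels instead of a container name
        HostConfig: {                    // previously given to startContainer()
            Binds: ['/home/yellowtent/data/' + app.id + ':/app/data:rw'],
            RestartPolicy: { Name: 'always', MaximumRetryCount: 0 }
        }
    };
}
```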
Girish Ramakrishnan
322edbdc20 getByAppIdAndType 2015-10-19 08:58:07 -07:00
Johannes Zellner
c1ba551e07 Cleanup some of the html form elements 2015-10-19 10:31:19 +02:00
Johannes Zellner
9917412329 Indicate during app installation and configuration if the app is a single user app 2015-10-19 10:29:51 +02:00
Girish Ramakrishnan
2f4adb4d5f keep addon listing alphabetical 2015-10-18 20:06:26 -07:00
Girish Ramakrishnan
b61b864094 make callback noop 2015-10-17 13:57:19 -07:00
Johannes Zellner
fa193276c9 Require exactly one user in accessRestriction for singleUser app installations 2015-10-16 20:01:45 +02:00
Johannes Zellner
0ca09c384a Hide client secret field for simple auth 2015-10-16 19:41:50 +02:00
Johannes Zellner
a6a39cc4e6 Adapt clients.getAllWithDetailsByUserId() to new client types 2015-10-16 19:36:12 +02:00
Johannes Zellner
c9f84f6259 Show user selection for singleUser apps 2015-10-16 18:06:49 +02:00
Johannes Zellner
07063ca4f0 Adjust the webadmin to the accessRestriction changes 2015-10-16 16:14:23 +02:00
Johannes Zellner
b5cfdcf875 Fixup the unit tests for accessRestriction format change 2015-10-16 16:06:13 +02:00
Johannes Zellner
373db25077 Make accessRestriction a JSON format to prepare for group access control 2015-10-16 15:32:19 +02:00
Johannes Zellner
f8c2ebe61a Take accessRestriction and oauthProxy into account for an update through the cli 2015-10-16 14:50:00 +02:00
Johannes Zellner
ae23fade1e Show oauthProxy and accessRestriction values at app installation and configuration 2015-10-16 14:50:00 +02:00
Johannes Zellner
5386c05c0d Give developer tokens the correct scopes 2015-10-16 14:50:00 +02:00
Johannes Zellner
aed94c8aaf roleDeveloper is no more 2015-10-16 14:50:00 +02:00
Johannes Zellner
37185fc4d5 Only allow simple auth clients through simple auth 2015-10-16 14:49:51 +02:00
Johannes Zellner
cc64c6c9f7 Test using simple auth credentials in oauth 2015-10-16 11:48:12 +02:00
Johannes Zellner
0c0782ccd7 Fixup oauth to not allow simple auth clients 2015-10-16 11:27:42 +02:00
Johannes Zellner
5bc9f9e995 use clientdb types in authorization endpoint 2015-10-16 11:22:16 +02:00
Johannes Zellner
22402d1741 Remove legacy test auth client type 2015-10-16 10:05:58 +02:00
Johannes Zellner
8f203b07a1 Fix indentation 2015-10-16 09:19:05 +02:00
Girish Ramakrishnan
9c157246b7 add type field to clients table 2015-10-15 17:35:47 -07:00
Girish Ramakrishnan
d0dfe1ef7f remove unused variable 2015-10-15 17:35:47 -07:00
Girish Ramakrishnan
a9ccc7e2aa remove updating clients
clients are immutable
2015-10-15 16:08:17 -07:00
Girish Ramakrishnan
63edbae1be minor rename 2015-10-15 15:51:51 -07:00
Girish Ramakrishnan
8afe537497 fix typo 2015-10-15 15:32:14 -07:00
Girish Ramakrishnan
f33844d8f1 fix debug tag 2015-10-15 15:19:28 -07:00
Girish Ramakrishnan
c750d00355 ignore any tmp cleanup errors 2015-10-15 14:47:43 -07:00
Girish Ramakrishnan
bb9b39e3c0 callback can be null 2015-10-15 14:25:38 -07:00
Girish Ramakrishnan
057b89ab8e Check error code of image removal 2015-10-15 14:06:05 -07:00
Girish Ramakrishnan
23fc4bec36 callback can be null 2015-10-15 12:06:38 -07:00
Girish Ramakrishnan
6b82fb9ddb Remove old addon images on infra update
Fixes #329
2015-10-15 12:01:31 -07:00
Girish Ramakrishnan
a3ca5a36e8 update test image 2015-10-15 11:11:54 -07:00
Girish Ramakrishnan
f57c91847d addons do not write to /var/log anymore 2015-10-15 11:00:51 -07:00
Johannes Zellner
eda4dc83a3 Do not fail in container.sh when trying to remove non-existing directories 2015-10-15 18:06:57 +02:00
Johannes Zellner
5a0bf8071e Handle the various appId types we have by now 2015-10-15 17:57:07 +02:00
Johannes Zellner
09dfc6a34b Get the oauth2 debug()s in shape 2015-10-15 16:55:48 +02:00
Johannes Zellner
3b8ebe9a59 Fixup the oauth tests with accessRestriction support 2015-10-15 16:50:05 +02:00
Johannes Zellner
2ba1092809 Adhere to accessRestriction for oauth authorization endpoint 2015-10-15 16:49:13 +02:00
Johannes Zellner
7c97ab5408 Revert "Since we got fully rid of the decision dialog, no need to serialize the client anymore"
This is now again required, due to the accessRestriction check

This reverts commit 2c9ff1ee3b.
2015-10-15 16:33:05 +02:00
Johannes Zellner
ac1991f8d1 Fix typo in oauth test 2015-10-15 15:37:56 +02:00
Johannes Zellner
2a573f6ac5 Fixup the simpleauth tests 2015-10-15 15:19:01 +02:00
Johannes Zellner
9833d0cce6 Adhere to accessRestriction in simple auth 2015-10-15 15:18:40 +02:00
Johannes Zellner
fbc3ed0213 Add apps.hasAccessTo() 2015-10-15 15:06:34 +02:00
Johannes Zellner
c916a76e6b Prepare simpleauth test for accessRestriction 2015-10-15 13:29:44 +02:00
Johannes Zellner
ae1bfaf0c8 roleUser is gone as well 2015-10-15 12:50:48 +02:00
Johannes Zellner
0aedff4fec roleAdmin is gone 2015-10-15 12:37:42 +02:00
Johannes Zellner
73d88a3491 Rewrite accessRestriction validator 2015-10-15 12:37:42 +02:00
Girish Ramakrishnan
5d389337cd make /var/log readonly
Expect apps to redirect logs to stdout/stderr

Part of #503
2015-10-15 00:46:50 -07:00
Girish Ramakrishnan
a977597217 cleanup tmpdir in janitor 2015-10-14 23:21:03 -07:00
Girish Ramakrishnan
b3b4106b99 Add janitor tests 2015-10-14 22:50:07 -07:00
Girish Ramakrishnan
7f29eed326 fold janitor into main box code cron job
the volume cleaner will now also be folded into the janitor
2015-10-14 22:39:34 -07:00
Girish Ramakrishnan
ec895a4f31 do not use -f to logrotate
Normally, logrotate is run as a daily cron job. It will not modify a log
multiple times in one day unless the criterion for that log is based on
the log's size and logrotate is being run multiple times each day, or
unless the -f or --force option is used.
2015-10-14 15:10:53 -07:00
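As the note above says, a daily-cron logrotate only rotates a log more than once per day when its criterion is size-based (or -f forces it). A minimal size-based stanza of that kind, with a hypothetical path (20M matches the per-app log cap mentioned in a later commit):

```
/var/log/myapp/*.log {
    size 20M
    rotate 3
    copytruncate
    missingok
}
```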
Girish Ramakrishnan
fe0c1745e1 make it explicit that logrotate is run via cron in base system 2015-10-14 15:08:38 -07:00
Girish Ramakrishnan
3fc0a96bb0 Add docker volumes janitor
This cleans up tmp and logrotates /var/log every 12 hours.

Note that this janitor is separate from the box janitor because they
run as different users.

Fixes #503
2015-10-14 14:18:36 -07:00
Girish Ramakrishnan
c154f342c2 show restore button if we have a lastBackupId
This is the only way to roll back even if you have a functioning app.
Use cases include:
1. You updated and something doesn't work
2. The app is in 'starting...' state (so it's installed) but no data yet
2015-10-14 11:36:12 -07:00
Johannes Zellner
8f1666dcca Consolidate the oauth comments 2015-10-14 16:31:55 +02:00
Johannes Zellner
9aa4750f55 Since we got fully rid of the decision dialog, no need to serialize the client anymore 2015-10-14 16:22:50 +02:00
Johannes Zellner
c52d985d45 Properly skip decision dialog 2015-10-14 16:16:37 +02:00
Johannes Zellner
376d8d9a38 Cleanup the client serialization 2015-10-14 16:15:51 +02:00
Johannes Zellner
08de0a4e79 Add token exchange tests 2015-10-14 16:15:32 +02:00
Johannes Zellner
11d327edcf Remove unused session error route 2015-10-14 15:51:55 +02:00
Johannes Zellner
d2f7b83ea7 Add oauth callback tests 2015-10-14 15:50:00 +02:00
Johannes Zellner
72ca1b39e8 Add oauth session logout test 2015-10-14 15:38:40 +02:00
Johannes Zellner
69bd234abc Test for unknown client_id 2015-10-14 15:30:10 +02:00
Johannes Zellner
94e6978abf Add test for grant type requests 2015-10-14 15:08:04 +02:00
Johannes Zellner
b5272cbf4d roleAdmin is not part of scopes anymore 2015-10-14 14:59:54 +02:00
Johannes Zellner
edb213089c Add oauth2 test when user is already logged in with his session 2015-10-14 14:46:25 +02:00
Johannes Zellner
b772cf3e5a Add tester tag 2015-10-14 14:46:03 +02:00
Johannes Zellner
e86d043794 The oauth callback does not need a header and footer 2015-10-14 14:36:41 +02:00
Johannes Zellner
4727187071 Also test loginForm submit with email 2015-10-14 14:31:10 +02:00
Johannes Zellner
d8b8f5424c add loginForm submit tests 2015-10-14 14:30:53 +02:00
Johannes Zellner
8425c99a4e Also test oauth clients with oauth proxy 2015-10-14 13:38:37 +02:00
Johannes Zellner
c023dbbc1c Do not handle addon-simpleauth in oauth 2015-10-14 13:35:33 +02:00
Johannes Zellner
af516f42b4 Add oauth login form tests 2015-10-14 13:34:20 +02:00
Johannes Zellner
dbd8e6a08d Add more oauth tests for the authorization endpoint 2015-10-14 12:03:04 +02:00
Johannes Zellner
c24bec9bc6 Remove unused contentType middleware 2015-10-14 11:48:57 +02:00
Johannes Zellner
9854598648 Fix typo to repair oauth and simple auth login
Second time breakage, time for a test ;-)
2015-10-13 21:55:02 +02:00
Girish Ramakrishnan
2cdf73adab Use 20m of logs per app 2015-10-13 11:59:28 -07:00
Johannes Zellner
1e7e2e5e97 Remove decision dialog related route 2015-10-13 20:39:08 +02:00
Johannes Zellner
081e496878 Remove unused oauth decision dialog 2015-10-13 20:32:27 +02:00
Johannes Zellner
aaff7f463a Cleanup the authorization endpoint 2015-10-13 18:23:32 +02:00
Girish Ramakrishnan
55f937bf51 SIMPLE_AUTH_URL -> SIMPLE_AUTH_ORIGIN 2015-10-13 08:40:41 -07:00
Johannes Zellner
d5d1d061bb We also allow non admins to use the webadmin 2015-10-13 15:13:36 +02:00
Johannes Zellner
bc6f602891 Remove unused angular filter for accessRestrictionLabel 2015-10-13 15:11:30 +02:00
Johannes Zellner
ca461057e7 Also update the test image id 2015-10-13 14:24:53 +02:00
Johannes Zellner
b1c5c2468a Fix test to support docker api up to 1.19 and v1.20 2015-10-13 14:24:41 +02:00
Johannes Zellner
562ce3192f Print error when apptask.pullImage() failed 2015-10-13 13:25:43 +02:00
Johannes Zellner
3787dd98b4 Do not crash if a boxVersionsUrl is not set
This prevents test failures when the cron job runs
2015-10-13 13:22:23 +02:00
Johannes Zellner
6c667e4325 Remove console.log 2015-10-13 13:06:50 +02:00
Johannes Zellner
0eec693a85 Update TEST_IMAGE_TAG 2015-10-13 12:30:02 +02:00
Johannes Zellner
c3bf672c2a Ensure we deal with booleans 2015-10-13 12:29:40 +02:00
Johannes Zellner
c3a3b6412f Support oauthProxy in webadmin 2015-10-13 11:49:50 +02:00
Johannes Zellner
44291b842a Fix apps-test.js 2015-10-13 10:41:57 +02:00
Johannes Zellner
36cf502b56 Addons take longer to start up 2015-10-13 10:41:57 +02:00
Johannes Zellner
2df77cf280 Fix settings-test.js 2015-10-13 10:41:57 +02:00
Johannes Zellner
a453e49c27 Fix backups-test.js 2015-10-13 10:41:57 +02:00
Johannes Zellner
e34c34de46 Fixup the apptask-test.js 2015-10-13 10:41:57 +02:00
Johannes Zellner
8dc5bf96e3 Fix apps-test.js 2015-10-13 10:41:57 +02:00
Johannes Zellner
d2c3e1d1ae Fix database tests 2015-10-13 10:41:57 +02:00
Johannes Zellner
4eab101b78 use app.oauthProxy instead of app.accessRestriction 2015-10-13 10:41:57 +02:00
Johannes Zellner
e460d6d15b Add apps.oauthProxy 2015-10-13 10:41:57 +02:00
Girish Ramakrishnan
3012f68a56 pullImage: handle stream error 2015-10-12 21:56:34 -07:00
Girish Ramakrishnan
1909050be2 remove redundant log 2015-10-12 21:54:25 -07:00
Girish Ramakrishnan
d4c62c7295 check for 200 instead of 201 2015-10-12 21:54:18 -07:00
Girish Ramakrishnan
4eb3d1b918 login must return 200 2015-10-12 20:21:27 -07:00
Girish Ramakrishnan
fb6bf50e48 signal redis to backup using SAVE 2015-10-12 13:30:58 -07:00
Girish Ramakrishnan
31125dedc8 add deps of image tooling 2015-10-12 11:36:46 -07:00
Girish Ramakrishnan
a2e2d70660 remove deps of the release tool 2015-10-12 11:34:25 -07:00
Johannes Zellner
d8213f99b1 Ensure we only set the visibility in the progress bar on app restore to not break the layout 2015-10-12 20:32:43 +02:00
Girish Ramakrishnan
7fe1b02ccd moved to release/ repo 2015-10-12 11:30:04 -07:00
Johannes Zellner
7d7b759930 Add navbar with avatar and name to oauth views 2015-10-12 19:56:04 +02:00
Girish Ramakrishnan
04e27496bd remove tmpreaper (will use systemd for this) 2015-10-12 10:52:55 -07:00
Johannes Zellner
51e2e5ec9c Use a temporary file for the release tarball copy
The pipe failed for me a couple of times in a row
2015-10-12 19:37:33 +02:00
Johannes Zellner
6f2bc555e0 Make application name and cloudron name more apparent in oauth login 2015-10-12 17:26:02 +02:00
Johannes Zellner
a8c43ddf4a Show app icon instead of cloudron avatar in oauth login 2015-10-12 16:50:49 +02:00
Johannes Zellner
3eabc27877 Make app icon url public to be used in oauth login screen 2015-10-12 16:49:55 +02:00
Johannes Zellner
cd9602e641 changes 0.0.69 2015-10-12 16:47:25 +02:00
Johannes Zellner
ad379bd766 Support the new oauth client id prefix 2015-10-12 15:18:51 +02:00
Johannes Zellner
8303991217 Revert "install specific version of python"
The specific version does not create /usr/bin/python, only the
versioned binary /usr/bin/python2.7.

The python meta package is the only one that creates that link, and
according to https://wiki.ubuntu.com/Python/3, /usr/bin/python will
not point to version 3 anytime soon.

This reverts commit be128bbecfd2afe9ef2bdca603a3b26e8ccced7b.
2015-10-12 15:02:05 +02:00
Johannes Zellner
c1047535d4 Update to new manifestformat 2015-10-12 13:22:56 +02:00
Girish Ramakrishnan
12eae2c002 remove 0.0.69 2015-10-11 16:03:08 -07:00
Girish Ramakrishnan
10142cc00b make a note of appid format 2015-10-11 14:19:38 -07:00
Girish Ramakrishnan
5e1487d12a appId format has changed in clientdb 2015-10-11 14:16:38 -07:00
Girish Ramakrishnan
39e0c13701 apptest: remove mail addon 2015-10-11 13:53:50 -07:00
Girish Ramakrishnan
c80d984ee6 start the mail addon 2015-10-11 13:48:23 -07:00
Girish Ramakrishnan
3e474767d1 print the values otherwise it gets very confusing 2015-10-11 13:45:02 -07:00
Girish Ramakrishnan
e2b954439c ensure redis container is stopped before removing it
this is required for the configure/update case where the redis container
might be holding some data in memory.

Sending redis SIGTERM will make it commit its data to disk.
2015-10-11 12:08:35 -07:00
Girish Ramakrishnan
950d1eb5c3 remove associated volumes
note that this does not remove host mounts
2015-10-11 11:37:23 -07:00
Girish Ramakrishnan
f48a2520c3 remove RSTATE_ERROR
if startContainer failed, the task would still report success because
it returned the result of the db update instead
2015-10-11 11:18:30 -07:00
Girish Ramakrishnan
f366d41034 regenerate shrinkwrap with --no-optional for ldapjs 2015-10-11 10:39:41 -07:00
Girish Ramakrishnan
b3c40e1ba7 test image is now 5.0.0 2015-10-11 10:03:42 -07:00
Girish Ramakrishnan
b686d6e011 use latest docker images 2015-10-11 10:03:42 -07:00
Girish Ramakrishnan
99489e5e77 install specific version of python 2015-10-11 10:02:44 -07:00
Johannes Zellner
5c9ff468cc Add python to the base image 2015-10-11 18:48:24 +02:00
Johannes Zellner
bf18307168 0.0.69 changes 2015-10-11 18:19:51 +02:00
Johannes Zellner
5663198bfb Do not log with morgan during test runs for simple auth server 2015-10-11 17:52:22 +02:00
Johannes Zellner
bad50fd78b Merge branch 'simpleauth' 2015-10-11 17:40:10 +02:00
Johannes Zellner
9bd43e3f74 Update to new manifestformat version 1.9.0 2015-10-11 17:19:39 +02:00
Johannes Zellner
f0fefab8ad Prefix morgan logs with source 2015-10-11 17:19:39 +02:00
Johannes Zellner
449231c791 Do not show http request logs during tests 2015-10-11 17:19:39 +02:00
Johannes Zellner
bd161ec677 Remove serving up webadmin/ in test case 2015-10-11 17:19:39 +02:00
Johannes Zellner
8040d4ac2d Add simpleauth logging middleware 2015-10-11 17:19:39 +02:00
Johannes Zellner
06d7820566 read ldap port from config.js 2015-10-11 17:19:39 +02:00
Johannes Zellner
4818a3feee Specify addon env vars for simple auth 2015-10-11 17:19:39 +02:00
Johannes Zellner
9fcb2c0733 Fix the check install to keep up with the base image version 2015-10-11 17:19:39 +02:00
Johannes Zellner
6906b4159a Revert "Attach accessTokens to req for further use"
This reverts commit 895812df1e9226415640b74a001c1f8c1affab01.
2015-10-11 17:19:39 +02:00
Johannes Zellner
763b9309f6 Fixup the simple auth unit tests 2015-10-11 17:19:39 +02:00
Johannes Zellner
2bb4d1c22b Remove the simpleauth route handler 2015-10-11 17:19:39 +02:00
Johannes Zellner
23303363ee Move simple auth to separate express server 2015-10-11 17:19:39 +02:00
Johannes Zellner
79c17abad2 Add simpleAuthPort to config.js 2015-10-11 17:19:39 +02:00
Johannes Zellner
3234e0e3f0 Fixup the simple auth logout route and add tests 2015-10-11 17:19:39 +02:00
Johannes Zellner
982cd1e1f3 Attach accessTokens to req for further use
This helps with extracting the token, which can come
from various places like headers, body or query
2015-10-11 17:19:38 +02:00
Johannes Zellner
df39fc86a4 add simple auth login unit tests 2015-10-11 17:19:38 +02:00
Johannes Zellner
870e0c4144 Fixup the simple login routes for unknown clients 2015-10-11 17:19:38 +02:00
Johannes Zellner
57704b706e Handle 404 in case subdomain does not exist on delete attempt 2015-10-11 17:19:38 +02:00
Johannes Zellner
223e0dfd1f Add SIMPLE_AUTH_ORIGIN 2015-10-11 17:19:38 +02:00
Johannes Zellner
51c438b0b6 Return correct error codes 2015-10-11 17:19:38 +02:00
Girish Ramakrishnan
b3fa76f8c5 0.0.68 changes 2015-10-10 14:39:53 -07:00
Girish Ramakrishnan
93d210a754 Bump the graphite image 2015-10-10 09:57:07 -07:00
Girish Ramakrishnan
8a77242072 clear version field in rerelease 2015-10-09 13:11:53 -07:00
Girish Ramakrishnan
c453df55d6 More 0.0.67 changes 2015-10-09 12:05:03 -07:00
Girish Ramakrishnan
75d69050d5 0.0.67 changes 2015-10-09 10:06:11 -07:00
Johannes Zellner
494bcc1711 prefix oauth client id and app ids 2015-10-09 11:45:53 +02:00
Johannes Zellner
7e071d9f23 add simpleauth addon hooks 2015-10-09 11:44:32 +02:00
Johannes Zellner
6f821222db Add simple auth routes 2015-10-09 11:37:46 +02:00
Johannes Zellner
6e464dbc81 Add simple auth route handlers 2015-10-09 11:37:39 +02:00
Johannes Zellner
be8ef370c6 Add simple auth logic for login/logout 2015-10-09 11:37:17 +02:00
Johannes Zellner
39a05665b0 Update node modules to support v4.1.1
The sqlite3 versions we had used so far did not work with
new node versions
2015-10-09 10:15:10 +02:00
Girish Ramakrishnan
390285d9e5 version 0.0.66 changes 2015-10-08 13:16:23 -07:00
Girish Ramakrishnan
8ef15df7c0 0.0.65 changes 2015-10-07 18:49:53 -07:00
Girish Ramakrishnan
109f9567ea 0.0.64 changes 2015-09-29 13:21:33 -07:00
Girish Ramakrishnan
748eadd225 stop apps and installer when retiring cloudron
we cannot put this in stop.sh because that is called during update.
2015-09-29 11:58:14 -07:00
Girish Ramakrishnan
b8e115ddf6 move images script 2015-09-28 23:58:00 -07:00
Girish Ramakrishnan
11d4df4f7d fix loop (by actually using nextPage link) 2015-09-28 23:55:31 -07:00
Girish Ramakrishnan
0c285f21c1 rework images script 2015-09-28 23:47:13 -07:00
Girish Ramakrishnan
c9bf017637 0.0.63 changes 2015-09-28 17:00:50 -07:00
Johannes Zellner
0d78150f10 Log forwarding is no more 2015-09-28 14:33:06 +02:00
Johannes Zellner
f36946a8aa Forward the backup trigger status code and error message 2015-09-28 14:20:10 +02:00
Girish Ramakrishnan
5c51619798 Version 0.0.62 changes 2015-09-26 00:04:52 -07:00
Girish Ramakrishnan
a022bdb30d set default version to null to override commander built-in 2015-09-22 22:58:27 -07:00
Girish Ramakrishnan
2cfb91d0ce allow version to be specified in various commands 2015-09-22 22:55:55 -07:00
Girish Ramakrishnan
5885d76b89 Version 0.0.61 changes 2015-09-22 16:15:43 -07:00
Girish Ramakrishnan
5cb1a2d120 0.0.60 changes 2015-09-22 13:04:45 -07:00
Girish Ramakrishnan
53fa339363 0.0.59 changes 2015-09-21 21:56:16 -07:00
Girish Ramakrishnan
5f0bb0c6ce 0.0.58 changes 2015-09-21 16:26:03 -07:00
Girish Ramakrishnan
3dec6ac9f1 0.0.57 changes 2015-09-21 11:12:09 -07:00
Girish Ramakrishnan
4c9ec582dc 0.0.56 changes 2015-09-18 14:47:57 -07:00
Girish Ramakrishnan
28b000c820 admin tool is now merged into caas tool 2015-09-17 21:25:07 -07:00
Girish Ramakrishnan
30320e0ac6 Wait for backup to complete
Fixes #351
2015-09-17 16:44:44 -07:00
Girish Ramakrishnan
88b682a317 take ip as first argument instead of --ip 2015-09-17 16:40:06 -07:00
Girish Ramakrishnan
6c5a5c0882 remove tail-stream 2015-09-17 16:21:18 -07:00
Girish Ramakrishnan
1d27fffe44 remove unused requires 2015-09-17 16:21:03 -07:00
Girish Ramakrishnan
a3383b1f98 remove logs route
most of this stuff doesn't work anyways since we moved to systemd
2015-09-17 15:44:05 -07:00
Girish Ramakrishnan
f9c2b0acd1 remove cloudronLogin
alternative: admin/admin ssh <ip>
2015-09-17 15:34:17 -07:00
Girish Ramakrishnan
a9444ed879 rename login to ssh 2015-09-17 15:34:04 -07:00
Girish Ramakrishnan
8d5a3ecd69 admin: remove the cloudron chooser
This needlessly ties down this tool to digitalocean
2015-09-17 15:29:44 -07:00
Girish Ramakrishnan
44ff676eef store dates as iso strings 2015-09-17 13:48:20 -07:00
Girish Ramakrishnan
4bb017b740 verify next version exists 2015-09-17 12:11:14 -07:00
Girish Ramakrishnan
0f2435c308 Version 0.0.55 changes 2015-09-16 17:02:22 -07:00
Girish Ramakrishnan
0cd56f4d4c configure cloudron to use UTC
local timezone should be tracked by the webadmin/box code
2015-09-16 13:17:32 -07:00
Girish Ramakrishnan
e328ec2382 0.0.54 changes 2015-09-16 10:35:59 -07:00
Girish Ramakrishnan
4b5ac67993 Revert "Disable rsyslog"
This reverts commit 3c5db59de2b6c2ef8891ecba54496335b5e2d55f.

Don't revert this. Maybe some system services use this.
2015-09-16 09:48:47 -07:00
Girish Ramakrishnan
cb73218dfe Disable rsyslog 2015-09-15 14:32:47 -07:00
Girish Ramakrishnan
5523c2d34a Disable forwarding to syslog 2015-09-15 14:29:16 -07:00
Girish Ramakrishnan
01889c45a2 Fix typo 2015-09-15 14:04:30 -07:00
Girish Ramakrishnan
e89b4a151e 0.0.52 is folded into 0.0.53 2015-09-15 11:57:50 -07:00
Girish Ramakrishnan
ec235eafe8 fix staging with stripped releases 2015-09-15 11:25:03 -07:00
Girish Ramakrishnan
d99720258a gray out unreachable releases 2015-09-15 11:17:59 -07:00
Girish Ramakrishnan
3f064322e4 strip unreachable releases when processing 2015-09-15 11:11:56 -07:00
Girish Ramakrishnan
11592279e2 fix regexp for non-dev 2015-09-15 10:41:36 -07:00
Girish Ramakrishnan
31b4923eb2 better output 2015-09-15 10:33:36 -07:00
Girish Ramakrishnan
cfdfb9a907 release: add edit command 2015-09-15 09:48:05 -07:00
Girish Ramakrishnan
e7a21c821e 0.0.53 changes 2015-09-14 22:20:13 -07:00
Girish Ramakrishnan
255422d5be 0.0.52 changes 2015-09-14 17:28:31 -07:00
Girish Ramakrishnan
02b4990cb1 Version 0.0.51 changes 2015-09-14 12:24:22 -07:00
Johannes Zellner
11bf39374c 0.0.50 changes 2015-09-14 12:37:39 +02:00
Girish Ramakrishnan
04cf382de5 systemctl stop 2015-09-10 20:45:53 -07:00
Girish Ramakrishnan
38884bc0e6 0.0.49 changes 2015-09-10 11:40:58 -07:00
Girish Ramakrishnan
a4d0394d1a Revert "Move ssh to port 919"
This reverts commit 4e4890810f3a22e7ec990cc44381a2c243044d99.

This change is not done yet
2015-09-10 11:40:27 -07:00
Girish Ramakrishnan
8292e78ef2 Move ssh to port 919 2015-09-10 10:32:59 -07:00
Johannes Zellner
6243404d1d 0.0.48 changes 2015-09-10 14:39:30 +02:00
Girish Ramakrishnan
6fe67c93fe 0.0.47 changes 2015-09-09 17:03:13 -07:00
Girish Ramakrishnan
422b65d934 0.0.46 changes 2015-09-09 01:00:12 -07:00
Girish Ramakrishnan
2fa3a3c47e 0.0.45 changes 2015-09-08 12:58:06 -07:00
Girish Ramakrishnan
27e4810239 0.0.44 changes 2015-09-08 10:31:02 -07:00
Girish Ramakrishnan
cbdae3547b 0.0.43 changes 2015-09-07 23:13:14 -07:00
Girish Ramakrishnan
a5d122c0b3 Leave a note on singleshot After= behavior
"Actually oneshot is also a bit special and that is where RemainAfterExit comes in. For oneshot, systemd waits for the process to exit before it starts any follow-up units (and with multiple ExecStarts I assume it waits for all of them). So that automatically leads to the scheme in berbae's last post. However, with RemainAfterExit, the unit remains active even though the process has exited, so this makes it look more like a "normal" service."
2015-09-07 22:37:23 -07:00
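The quoted behavior can be illustrated with a minimal unit file; the unit name and command are placeholders, not from this repo:

```
# Sketch: a oneshot unit that stays "active" after its process exits,
# so follow-up units order correctly against it.
[Unit]
Description=One-time provisioning step

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true

[Install]
WantedBy=multi-user.target
```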
Girish Ramakrishnan
47b662be09 Remove unnecessary alias 2015-09-07 20:53:26 -07:00
Girish Ramakrishnan
1ee09825a0 stop systemd target instead of supervisor 2015-09-07 20:11:19 -07:00
Girish Ramakrishnan
0a679da968 Type belongs to service 2015-09-07 14:10:34 -07:00
Girish Ramakrishnan
59d174004e box code has moved to systemd 2015-09-07 11:19:16 -07:00
Girish Ramakrishnan
d0d0d95475 0.0.42 changes 2015-09-05 09:21:59 -07:00
Johannes Zellner
b08a6840f5 changes for 0.0.41 2015-08-31 21:49:37 -07:00
Girish Ramakrishnan
77ada9c151 Copy upgrade flag 2015-08-31 19:23:43 -07:00
Girish Ramakrishnan
222e6b6611 0.0.40 changes 2015-08-31 09:32:12 -07:00
Girish Ramakrishnan
70c93c7be7 Provision only once 2015-08-30 21:10:57 -07:00
Girish Ramakrishnan
b73fc70ecf limit systemd journal size 2015-08-30 21:04:39 -07:00
Johannes Zellner
eab33150ad 0.0.39 changes 2015-08-30 17:23:47 -07:00
Girish Ramakrishnan
21c16d2009 0.0.38 changes 2015-08-28 11:21:43 -07:00
Girish Ramakrishnan
56413ecce6 Account for ext4 reserved space
2G - ram swap
1G - backup swap
2G - reserved

root@yellowtent:/# du -hcs bin
13M bin
13M total
root@yellowtent:/# du -hcs boot
70M boot
70M total
root@yellowtent:/# du -hcs etc
6.3M    etc
6.3M    total
root@yellowtent:/# du -hcs lib
530M    lib
530M    total
root@yellowtent:/# du -hcs var
634M    var
634M    total
root@yellowtent:/# du -hcs root
33G root
33G total

Filesystem      Size  Used Avail Use% Mounted on
udev            990M     0  990M   0% /dev
tmpfs           201M  960K  200M   1% /run
/dev/vda1        40G   38G     0 100% /
tmpfs          1001M     0 1001M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs          1001M     0 1001M   0% /sys/fs/cgroup
/dev/loop0       33G  7.9G   24G  25% /home/yellowtent/data
tmpfs           201M     0  201M   0% /run/user/0
2015-08-28 10:46:24 -07:00
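The budget described in this commit message can be sketched as simple arithmetic (a hedged sketch; the 40G disk total is an assumption taken from the df output above):

```shell
#!/bin/bash
# Sketch of the partition budget from the commit message (figures in KiB;
# the 40G disk total is an assumption based on the df output above).
disk_kb=$((40 * 1024 * 1024))       # total disk
ram_swap_kb=$((2 * 1024 * 1024))    # 2G - ram swap
backup_swap_kb=$((1 * 1024 * 1024)) # 1G - backup swap
reserved_kb=$((2 * 1024 * 1024))    # 2G - ext4 reserved blocks
data_kb=$(( disk_kb - ram_swap_kb - backup_swap_kb - reserved_kb ))
echo "${data_kb}"   # space left for the data volume
```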
Girish Ramakrishnan
e94f2a95de Remove unused checkData 2015-08-27 18:51:02 -07:00
Girish Ramakrishnan
c2a43b69a9 Just pass the req.body through
The metadata has things like restoreUrl and restoreKey which should not be
passed through anyway
2015-08-27 18:49:54 -07:00
Girish Ramakrishnan
c90e0fd21e do-resize resizes the disk it seems 2015-08-26 23:38:01 -07:00
Girish Ramakrishnan
6744621415 Print the detected ram and disk size 2015-08-26 22:54:23 -07:00
Girish Ramakrishnan
a6680a775f Start with 8gb instead 2015-08-26 16:02:51 -07:00
Girish Ramakrishnan
5a67be2292 Change ownership only of installer/ 2015-08-26 16:01:45 -07:00
Girish Ramakrishnan
c8b2b34138 Always resize the data volume 2015-08-26 15:40:48 -07:00
Girish Ramakrishnan
af8f4676ba systemd does not use /etc/default/docker 2015-08-26 15:32:40 -07:00
Girish Ramakrishnan
b51cb9d84a Remove redundant variable 2015-08-26 15:22:55 -07:00
Girish Ramakrishnan
ec7a61021f create single btrfs partition
apps and data consume space from the same btrfs partition now
2015-08-26 15:19:57 -07:00
Girish Ramakrishnan
8d0d19132e Add missing variable 2015-08-26 13:51:32 -07:00
Girish Ramakrishnan
d2bde5f0b1 Leave 5G for the system 2015-08-26 13:17:42 -07:00
Girish Ramakrishnan
3f9ae5d6bf refactor size calculation 2015-08-26 12:22:32 -07:00
Girish Ramakrishnan
9b97e26b58 Use fallocate everywhere 2015-08-26 12:15:47 -07:00
Girish Ramakrishnan
219032bbbb Dynamically size the home and docker partitions 2015-08-26 11:31:58 -07:00
Johannes Zellner
f0fd4ea45c Fixup tests after removing provisioning routes 2015-08-26 11:04:02 -07:00
Johannes Zellner
23a5a275f8 Reread box config from args.data 2015-08-26 11:04:02 -07:00
Girish Ramakrishnan
3a1bfa91d1 Create app and data partitions dynamically 2015-08-26 09:59:45 -07:00
Girish Ramakrishnan
a4f77dfcd0 Create systemd service to allocate swap
This can be used to make the swap creation dynamic based on the
ram in the cloudron
2015-08-26 09:11:05 -07:00
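A hedged sketch of what such a dynamic swap service might compute. The sample input stands in for `/proc/meminfo`, and the one-to-one RAM:swap policy is an assumption for illustration, not this repo's actual policy:

```shell
#!/bin/bash
# Hypothetical swap-sizing sketch; meminfo_sample stands in for /proc/meminfo,
# and the one-to-one RAM:swap policy is an illustrative assumption.
meminfo_sample="MemTotal:        2048000 kB"
ram_kb=$(awk '/^MemTotal:/ {print $2}' <<< "${meminfo_sample}")
swap_kb=$(( ram_kb ))   # example policy: swap equal to RAM
echo "would allocate ${swap_kb} kB of swap"
# a real unit would then run (as root): fallocate, mkswap, swapon
```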
Girish Ramakrishnan
795ba3e365 Not sure why this is here 2015-08-26 00:07:15 -07:00
Girish Ramakrishnan
83ef4234bc Fix typo 2015-08-25 23:43:42 -07:00
Girish Ramakrishnan
b12b464462 Set RemainAfterExit
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750683
2015-08-25 23:43:04 -07:00
Girish Ramakrishnan
04726ba697 Restore iptables before docker 2015-08-25 23:27:44 -07:00
Girish Ramakrishnan
4d607ada9d Make installer a systemd service 2015-08-25 23:25:42 -07:00
Johannes Zellner
2c7cf9faa1 0.0.37 changes 2015-08-25 20:50:57 -07:00
Girish Ramakrishnan
7efa2fd072 Make update route work 2015-08-25 15:54:15 -07:00
Girish Ramakrishnan
9adf5167c9 Remove apiServerOrigin 2015-08-25 15:25:03 -07:00
Girish Ramakrishnan
4978984d75 Remove provision, restore, update routes 2015-08-25 15:09:35 -07:00
Girish Ramakrishnan
3b7ef4615a installer does not announce anymore 2015-08-25 15:06:39 -07:00
Girish Ramakrishnan
af8f4b64c0 Use userData from metadata API 2015-08-25 15:05:16 -07:00
Girish Ramakrishnan
93042d862d Fix CHANGES 2015-08-25 11:27:49 -07:00
Girish Ramakrishnan
ba5424c250 0.0.37 changes 2015-08-25 11:09:09 -07:00
Girish Ramakrishnan
afdde9b032 Disable forwarding from containers to metadata IP 2015-08-25 10:46:23 -07:00
Girish Ramakrishnan
a033480500 More 0.0.36 changes 2015-08-24 23:31:28 -07:00
Girish Ramakrishnan
14333e2910 Enable memory accounting 2015-08-24 22:34:51 -07:00
Girish Ramakrishnan
20df96b6ba Add v0.0.36 changes 2015-08-24 09:37:20 -07:00
Girish Ramakrishnan
a7729e1597 Add v0.0.35 changelog 2015-08-18 23:46:25 -07:00
Girish Ramakrishnan
29c3233375 0.0.34 changes 2015-08-17 10:12:09 -07:00
Girish Ramakrishnan
d0eab70974 0.0.33 changes 2015-08-13 16:06:52 -07:00
Girish Ramakrishnan
f3cbb91527 fetch the latest codes to generate the change log 2015-08-12 21:54:47 -07:00
Girish Ramakrishnan
9772cfe1f2 0.0.32 changes 2015-08-12 20:33:36 -07:00
Girish Ramakrishnan
3557e8a125 Use constants from the box repo 2015-08-12 19:57:01 -07:00
Girish Ramakrishnan
9e0bb6ca34 Use latest images 2015-08-12 19:04:37 -07:00
Johannes Zellner
bb74eef601 more changes for 0.0.31 2015-08-12 17:45:13 +02:00
Girish Ramakrishnan
d1db38ba8e Add 0.0.31 changes 2015-08-11 17:01:11 -07:00
Girish Ramakrishnan
e9af9fb16b 0.0.30 changes 2015-08-10 21:38:22 -07:00
Girish Ramakrishnan
fd74be8848 Update docker to 1.7.0 2015-08-10 19:17:21 -07:00
Girish Ramakrishnan
926fafd7f6 Add 0.0.29 changelog 2015-08-10 18:24:36 -07:00
Girish Ramakrishnan
c51c715cee Use systemctl 2015-08-10 17:57:24 -07:00
Girish Ramakrishnan
aa80210075 Update base image to 15.04 2015-08-10 17:57:24 -07:00
Johannes Zellner
768654ae63 Version 0.0.28 changelog 2015-08-10 15:33:47 +02:00
Girish Ramakrishnan
6efb291449 0.0.27 changes 2015-08-08 19:13:13 -07:00
Girish Ramakrishnan
7a431b9b83 0.0.26 changes 2015-08-06 14:07:07 -07:00
Johannes Zellner
c78d09df66 Do not install optional dependencies for production
This was needed due to the dtrace-provider failures as optional
deps for ldapjs and bunyan
2015-08-05 17:37:09 +02:00
Johannes Zellner
7d30d9e867 Use shrinkwrap instead of package.json for node module cache 2015-08-05 17:36:51 +02:00
Girish Ramakrishnan
d3fb244cef list ldap as 0.0.25 change 2015-08-04 16:29:49 -07:00
188 changed files with 12998 additions and 4386 deletions

7
.gitattributes vendored

@@ -1,6 +1,7 @@
# Skip files when using git archive
# following files are skipped when exporting using git archive
/release export-ignore
/admin export-ignore
test export-ignore
.gitattributes export-ignore
.gitignore export-ignore
/scripts export-ignore
test export-ignore

2
.gitignore vendored

@@ -3,7 +3,9 @@ coverage/
docs/
webadmin/dist/
setup/splash/website/
installer/src/certs/server.key
# vim swap files
*.swp

402
CHANGES Normal file

@@ -0,0 +1,402 @@
[0.0.1]
- Hot Chocolate
[0.0.2]
- Hotfix appstore ui in webadmin
[0.0.3]
- Tall Pike
[0.0.4]
- This will be 0.0.4 changes
[0.0.5]
- App install/configure route fixes
[0.0.6]
- Not sure what happened here
[0.0.7]
- resetToken is now sent as part of create user
- Same as 0.0.7 which got released by mistake
[0.0.8]
- Manifest changes
[0.0.9]
- Fix app restore
- Fix backup issues
[0.0.10]
- Unknown orchestra
[0.0.11]
- Add ldap addon
[0.0.12]
- Support OAuth2 state
[0.0.13]
- Use docker image from cloudron repository
[0.0.14]
- Improve setup flow
[0.0.15]
- Improved Appstore view
[0.0.16]
- Improved Backup approach
[0.0.17]
- Upgrade testing
- App auto updates
- Usage graphs
[0.0.18]
- Rework backups and updates
[0.0.19]
- Graphite fixes
- Avatar and Cloudron name support
[0.0.20]
- Apptask fixes
- Chrome related fixes
[0.0.21]
- Increase nginx hostname size to 64
[0.0.22]
- Testing the e2e tests
[0.0.23]
- Better error status page
- Fix updater and backup progress reporting
- New avatar set
- Improved setup wizard
[0.0.24]
- Hotfix the ldap support
[0.0.25]
- Add support page
- Really fix ldap issues
[0.0.26]
- Add configurePath support
[0.0.27]
- Improved log collector
[0.0.28]
- Improve app feedback
- Restyle login page
[0.0.29]
- Update to ubuntu 15.04
[0.0.30]
- Move to docker 1.7
[0.0.31]
- WARNING: This update restarts your containers
- System processes are prioritized over apps
- Add ldap group support
[0.0.32]
- MySQL addon update
[0.0.33]
- Fix graphs
- Fix MySQL 5.6 memory usage
[0.0.34]
- Correctly mark apps pending for approval
[0.0.35]
- Fix ldap admin group username
[0.0.36]
- Fix restore without backup
- Optimize image deletion during updates
- Add memory accounting
- Restrict access to metadata from containers
[0.0.37]
- Prepare for selfhosting, part 1
- Use userData instead of provisioning calls
[0.0.38]
- Account for Ext4 reserved block when partitioning disk
[0.0.39]
- Move subdomain management to the cloudron
[0.0.40]
- Add journal limit
- Fix reprovisioning on reboot
- Fix subdomain management during startup
[0.0.41]
- Finally bring things to a sane state
[0.0.42]
- Parallel apptask
[0.0.43]
- Move to systemd
[0.0.44]
- Fix apptask concurrency bug
[0.0.45]
- Retry subdomain registration
[0.0.46]
- Fix app update email notification
[0.0.47]
- Ensure box code quits within 5 seconds
[0.0.48]
- Styling fixes
- Improved session handling
[0.0.49]
- Fix app autoupdate logic
[0.0.50]
- Use domain management via CaaS
[0.0.51]
- Fix memory management
[0.0.52]
- Restrict addons memory
- Get notification about container OOMs
[0.0.53]
- Restrict addons memory
- Get notification about container OOMs
- Add retry to subdomain logic
[0.0.54]
- OAuth Proxy now uses internal port forwarding
[0.0.55]
- Setup cloudron timezone based on droplet region
[0.0.56]
- Use correct timezone in updater
[0.0.57]
- Fix systemd logging issues
[0.0.58]
- Ensure backups of failed apps are retained across archival cycles
[0.0.59]
- Installer API fixes
[0.0.60]
- Do full box backup on updates
[0.0.61]
- Track update notifications to inform admin only once
[0.0.62]
- Export bind dn and password from LDAP addon
[0.0.63]
- Fix creation of TXT records
[0.0.64]
- Stop apps in a retired cloudron
- Retry downloading application on failure
[0.0.65]
- Do not send crash mails for apps in development
[0.0.66]
- Readonly application and addon containers
[0.0.67]
- Fix email notifications
- Fix bug when restoring from certain backups
[0.0.68]
- Update graphite image
- Add simpleauth addon support
[0.0.69]
- Support newer manifest format
- Fix app listing rendering in chrome
- Fix redis backup across upgrades
[0.0.70]
- Retry app download on error
[0.0.71]
- Fix oauth and simple auth login
[0.0.72]
- Cleanup application volumes periodically
- New application logging design
[0.0.73]
- Update SSL certificate
[0.0.74]
- Support singleUser apps
[0.0.75]
- scheduler addon
[0.0.76]
- DNS Sync fixes
- Show warning to user when memory limit reached
[0.0.77]
- Do not set hostname in app containers
[0.0.78]
- Support custom domains
[0.0.79]
- Move SSH Port
[0.0.80]
- Use journalctl for container logs
[0.1.0]
- Wait for configuration changes before starting Cloudron
[0.1.1]
- Ensure dns config for all cloudrons
[0.1.2]
- Make email work again
- Add DKIM keys for custom domains
[0.1.3]
- Storage backend
[0.1.4]
- CaaS Backup configuration fix
[0.1.5]
- Use correct tokens for DNS backend
[0.1.6]
- Add hook to determine the api server of the box
- Fix crash notification
[0.2.0]
- New cloudron exec implementation
[0.2.1]
- Update to node 4.1.1
- Fix certificate installation with custom domains
[0.2.2]
- Better debug output
- Retry more times if docker registry goes down
[0.3.0]
- Update SSH keys
- Allow bigger manifest files
[0.4.0]
- Update to docker 1.9.0
[0.4.1]
- Fix scheduler crash
- Crucial OAuth fixes
[0.4.2]
- Fix crash when reporting backup error
- Allow larger manifests
[0.4.3]
- Fix cloudron exec
[0.4.4]
- Initial Lets Encrypt integration
[0.4.5]
- Fixup nginx configuration to allow dynamic certificates
[0.4.6]
- LetsEncrypt integration for custom domains
- Rate limit crash emails
[0.5.0]
- Enable staging Lets Encrypt Integration
[0.5.1]
- Display error dialog for app installation errors
- Enable prod Lets Encrypt Integration
- Handle apptask crashes correctly
[0.5.2]
- Fix apphealthtask crash
- Use cgroup fs driver instead of systemd cgroup driver in docker
[0.5.3]
- Changes for e2e testing
[0.5.4]
- Fix bug in LE server selection
[0.5.5]
- Scheduler redesign
- Fix journalctl logging
[0.5.6]
- Prepare for selfhosting option
[0.5.7]
- Move app images off the btrfs subvolume
[0.6.0]
- Consolidate code repositories
[0.6.1]
- Use no-reply as email from address for apps in naked domains
- Update Lets Encrypt account with owner email when available
- Fix email templates to indicate auto update
- Add notification UI
[0.6.2]
- Fix `cloudron exec` container to have same namespaces as app
- Add developmentMode to manifest
[0.6.3]
- Make sending invite for new users optional
[0.6.4]
- Add support for display names
- Send invite links to admins for user setup
- Enforce stronger passwords
[0.6.5]
- Finalize stronger password requirement
[0.7.0]
- Upgrade to 15.10
- Do not remove docker images when in use by another container
- Fix sporadic error when reconfiguring apps
- Handle journald crashes gracefully
[0.7.1]
- Allow admins to edit users
- Fix graphs
- Support more LDAP cases
- Allow appstore deep linking
[0.7.2]
- Fix 5xx errors when password does not meet requirements
- Improved box update management using prereleases
- Less aggressive disk space checks


@@ -1,10 +1,17 @@
The Box
=======
Cloudron a Smart Server
=======================
Development setup
-----------------
* sudo useradd -m yellowtent
** Add admin-localhost as 127.0.0.1 in /etc/hosts
** All apps will be installed as hyphenated subdomains of localhost. You should add
the hyphenated subdomains of your apps to /etc/hosts
Selfhost Instructions
---------------------
The smart server currently relies on an AWS account with access to Route53 and S3 and is tested on DigitalOcean and EC2.
First create a virtual private server with Ubuntu 15.04 and run the following commands in an ssh session to initialize the base image:
```
curl https://s3.amazonaws.com/prod-cloudron-releases/installer.sh -o installer.sh
chmod +x installer.sh
./installer.sh <domain> <aws access key> <aws access secret> <backup bucket> <provider> <release sha1>
```

Binary file not shown (image changed: 5.4 KiB → 5.5 KiB)

Binary file not shown (image: 1.1 KiB)

186
baseimage/createImage Executable file

@@ -0,0 +1,186 @@
#!/bin/bash
set -eu -o pipefail
assertNotEmpty() {
: "${!1:? "$1 is not set."}"
}
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"
provider="digitalocean"
installer_revision=$(git rev-parse HEAD)
box_name=""
server_id=""
server_ip=""
destroy_server="yes"
deploy_env="dev"
# Only GNU getopt supports long options. OS X comes bundled with the BSD getopt
# brew install gnu-getopt to get the GNU getopt on OS X
[[ $(uname -s) == "Darwin" ]] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
args=$(${GNU_GETOPT} -o "" -l "provider:,revision:,regions:,size:,name:,no-destroy,env:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--env) deploy_env="$2"; shift 2;;
--revision) installer_revision="$2"; shift 2;;
--provider) provider="$2"; shift 2;;
--name) box_name="$2"; destroy_server="no"; shift 2;;
--no-destroy) destroy_server="no"; shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
echo "Creating image using ${provider}"
if [[ "${provider}" == "digitalocean" ]]; then
if [[ "${deploy_env}" == "staging" ]]; then
assertNotEmpty DIGITAL_OCEAN_TOKEN_STAGING
export DIGITAL_OCEAN_TOKEN="${DIGITAL_OCEAN_TOKEN_STAGING}"
elif [[ "${deploy_env}" == "dev" ]]; then
assertNotEmpty DIGITAL_OCEAN_TOKEN_DEV
export DIGITAL_OCEAN_TOKEN="${DIGITAL_OCEAN_TOKEN_DEV}"
elif [[ "${deploy_env}" == "prod" ]]; then
assertNotEmpty DIGITAL_OCEAN_TOKEN_PROD
export DIGITAL_OCEAN_TOKEN="${DIGITAL_OCEAN_TOKEN_PROD}"
else
echo "No such env ${deploy_env}."
exit 1
fi
vps="/bin/bash ${SCRIPT_DIR}/digitalocean.sh"
else
echo "Unknown provider : ${provider}"
exit 1
fi
readonly ssh_keys="${HOME}/.ssh/id_rsa_caas_${deploy_env}"
readonly scp202="scp -P 202 -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly scp22="scp -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly ssh202="ssh -p 202 -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly ssh22="ssh -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
if [[ ! -f "${ssh_keys}" ]]; then
echo "caas ssh key is missing at ${ssh_keys} (pick it up from secrets repo)"
exit 1
fi
function get_pretty_revision() {
local git_rev="$1"
local sha1=$(git rev-parse --short "${git_rev}" 2>/dev/null)
echo "${sha1}"
}
now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${installer_revision}")
if [[ -z "${box_name}" ]]; then
# if you change this, change the regexp in appstore/janitor.js
box_name="box-${deploy_env}-${pretty_revision}-${now}" # remove slashes
# create a new server if no name given
if ! caas_ssh_key_id=$($vps get_ssh_key_id "caas"); then
echo "Could not query caas ssh key"
exit 1
fi
echo "Detected caas ssh key id: ${caas_ssh_key_id}"
echo "Creating Server with name [${box_name}]"
if ! server_id=$($vps create ${caas_ssh_key_id} ${box_name}); then
echo "Failed to create server"
exit 1
fi
echo "Created server with id: ${server_id}"
# If we run scripts overenthusiastically without the wait, the setup script randomly fails
echo -n "Waiting 120 seconds for server creation"
for i in $(seq 1 24); do
echo -n "."
sleep 5
done
echo ""
else
if ! server_id=$($vps get_id "${box_name}"); then
echo "Could not determine id from name"
exit 1
fi
echo "Reusing server with id: ${server_id}"
$vps power_on "${server_id}"
fi
# Query until we get an IP
while true; do
echo "Trying to get the server IP"
if server_ip=$($vps get_ip "${server_id}"); then
echo "Server IP : [${server_ip}]"
break
fi
echo "Timed out, trying again in 10 seconds"
sleep 10
done
while true; do
echo "Trying to copy init script to server"
if $scp22 "${SCRIPT_DIR}/initializeBaseUbuntuImage.sh" root@${server_ip}:.; then
break
fi
echo "Timed out, trying again in 30 seconds"
sleep 30
done
echo "Copying INFRA_VERSION"
$scp22 "${SCRIPT_DIR}/../setup/INFRA_VERSION" root@${server_ip}:.
echo "Copying box source"
cd "${SOURCE_DIR}"
git archive --format=tar HEAD | $ssh22 "root@${server_ip}" "cat - > /tmp/box.tar.gz"
echo "Executing init script"
if ! $ssh22 "root@${server_ip}" "/bin/bash /root/initializeBaseUbuntuImage.sh ${installer_revision}"; then
echo "Init script failed"
exit 1
fi
echo "Shutting down server with id : ${server_id}"
$ssh202 "root@${server_ip}" "shutdown -f now" || true # shutdown sometimes terminates ssh connection immediately making this command fail
# wait 30 secs for actual shutdown
echo "Waiting 30 seconds for server to shutdown"
sleep 30
echo "Powering off server"
if ! $vps power_off "${server_id}"; then
echo "Could not power off server"
exit 1
fi
snapshot_name="box-${deploy_env}-${pretty_revision}-${now}"
echo "Snapshotting as ${snapshot_name}"
if ! image_id=$($vps snapshot "${server_id}" "${snapshot_name}"); then
echo "Could not snapshot and get image id"
exit 1
fi
if [[ "${destroy_server}" == "yes" ]]; then
echo "Destroying server"
if ! $vps destroy "${server_id}"; then
echo "Could not destroy server"
exit 1
fi
else
echo "Skipping server destroy"
fi
echo "Transferring image ${image_id} to other regions"
$vps transfer_image_to_all_regions "${image_id}"
echo "Done."

240
baseimage/digitalocean.sh Normal file

@@ -0,0 +1,240 @@
#!/bin/bash
if [[ -z "${DIGITAL_OCEAN_TOKEN}" ]]; then
echo "Script requires DIGITAL_OCEAN_TOKEN env to be set"
exit 1
fi
if [[ -z "${JSON}" ]]; then
echo "Script requires JSON env to be set to path of JSON binary"
exit 1
fi
readonly CURL="curl -s -u ${DIGITAL_OCEAN_TOKEN}:"
function debug() {
echo "$@" >&2
}
function get_ssh_key_id() {
id=$($CURL "https://api.digitalocean.com/v2/account/keys" \
| $JSON ssh_keys \
| $JSON -c "this.name === \"$1\"" \
| $JSON 0.id)
[[ -z "$id" ]] && exit 1
echo "$id"
}
function create_droplet() {
local ssh_key_id="$1"
local box_name="$2"
local image_region="sfo1"
local ubuntu_image_slug="ubuntu-15-10-x64"
local box_size="512mb"
local data="{\"name\":\"${box_name}\",\"size\":\"${box_size}\",\"region\":\"${image_region}\",\"image\":\"${ubuntu_image_slug}\",\"ssh_keys\":[ \"${ssh_key_id}\" ],\"backups\":false}"
id=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets" | $JSON droplet.id)
[[ -z "$id" ]] && exit 1
echo "$id"
}
function get_droplet_ip() {
local droplet_id="$1"
ip=$($CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}" | $JSON "droplet.networks.v4[0].ip_address")
[[ -z "$ip" ]] && exit 1
echo "$ip"
}
function get_droplet_id() {
local droplet_name="$1"
id=$($CURL "https://api.digitalocean.com/v2/droplets?per_page=100" | $JSON "droplets" | $JSON -c "this.name === '${droplet_name}'" | $JSON "[0].id")
[[ -z "$id" ]] && exit 1
echo "$id"
}
function power_off_droplet() {
local droplet_id="$1"
local data='{"type":"power_off"}'
local response=$($CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions")
local event_id=`echo "${response}" | $JSON action.id`
if [[ -z "${event_id}" ]]; then
debug "Got no event id, assuming already powered off."
debug "Response: ${response}"
return
fi
debug "Powered off droplet. Event id: ${event_id}"
debug -n "Waiting for droplet to power off"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function power_on_droplet() {
local droplet_id="$1"
local data='{"type":"power_on"}'
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`
debug "Powered on droplet. Event id: ${event_id}"
if [[ -z "${event_id}" ]]; then
debug "Got no event id, assuming already powered on"
return
fi
debug -n "Waiting for droplet to power on"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function get_image_id() {
local snapshot_name="$1"
local image_id=""
image_id=$($CURL "https://api.digitalocean.com/v2/images?per_page=100" \
| $JSON images \
| $JSON -c "this.name === \"${snapshot_name}\"" 0.id)
if [[ -n "${image_id}" ]]; then
echo "${image_id}"
fi
}
function snapshot_droplet() {
local droplet_id="$1"
local snapshot_name="$2"
local data="{\"type\":\"snapshot\",\"name\":\"${snapshot_name}\"}"
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions" | $JSON action.id`
debug "Droplet snapshotted as ${snapshot_name}. Event id: ${event_id}"
debug -n "Waiting for snapshot to complete"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/droplets/${droplet_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
get_image_id "${snapshot_name}"
}
function destroy_droplet() {
local droplet_id="$1"
# TODO: check for 204 status
$CURL -X DELETE "https://api.digitalocean.com/v2/droplets/${droplet_id}"
debug "Droplet destroyed"
debug ""
}
function transfer_image() {
local image_id="$1"
local region_slug="$2"
local data="{\"type\":\"transfer\",\"region\":\"${region_slug}\"}"
local event_id=`$CURL -X POST -H 'Content-Type: application/json' -d "${data}" "https://api.digitalocean.com/v2/images/${image_id}/actions" | $JSON action.id`
echo "${event_id}"
}
function wait_for_image_event() {
local image_id="$1"
local event_id="$2"
debug -n "Waiting for ${event_id}"
while true; do
local event_status=`$CURL "https://api.digitalocean.com/v2/images/${image_id}/actions/${event_id}" | $JSON action.status`
if [[ "${event_status}" == "completed" ]]; then
break
fi
debug -n "."
sleep 10
done
debug ""
}
function transfer_image_to_all_regions() {
local image_id="$1"
xfer_events=()
image_regions=(ams3) ## sfo1 is where the image is created
for image_region in ${image_regions[@]}; do
xfer_event=$(transfer_image ${image_id} ${image_region})
echo "Image transfer to ${image_region} initiated. Event id: ${xfer_event}"
xfer_events+=("${xfer_event}")
sleep 1
done
echo "Image transfers initiated, but they will take some time to complete."
for xfer_event in ${xfer_events[@]}; do
wait_for_image_event "${image_id}" "${xfer_event}" # call the local function; $vps is not defined inside this script
done
}
if [[ $# -lt 1 ]]; then
debug "<command> <params...>"
exit 1
fi
case $1 in
get_ssh_key_id)
get_ssh_key_id "${@:2}"
;;
create)
create_droplet "${@:2}"
;;
get_id)
get_droplet_id "${@:2}"
;;
get_ip)
get_droplet_ip "${@:2}"
;;
power_on)
power_on_droplet "${@:2}"
;;
power_off)
power_off_droplet "${@:2}"
;;
snapshot)
snapshot_droplet "${@:2}"
;;
destroy)
destroy_droplet "${@:2}"
;;
transfer_image_to_all_regions)
transfer_image_to_all_regions "${@:2}"
;;
*)
echo "Unknown command $1"
exit 1
esac
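The command dispatcher above follows a common bash pattern: `$1` selects a function and `"${@:2}"` forwards the remaining arguments verbatim. A self-contained sketch (the greet/add functions are illustrative, not part of digitalocean.sh):

```shell
#!/bin/bash
# Minimal stand-alone sketch of the case/"${@:2}" dispatch pattern.
greet() { echo "hello $1"; }
add()   { echo $(( $1 + $2 )); }
dispatch() {
    case "$1" in
        greet) greet "${@:2}" ;;   # forward all args after the command name
        add)   add "${@:2}" ;;
        *)     echo "Unknown command $1" >&2; return 1 ;;
    esac
}
dispatch add 2 3
```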


@@ -0,0 +1,325 @@
#!/bin/bash
set -euv -o pipefail
readonly USER=yellowtent
readonly USER_HOME="/home/${USER}"
readonly INSTALLER_SOURCE_DIR="${USER_HOME}/installer"
readonly INSTALLER_REVISION="$1"
readonly SELFHOSTED=$(( $# > 1 ? 1 : 0 ))
readonly USER_DATA_FILE="/root/user_data.img"
readonly USER_DATA_DIR="/home/yellowtent/data"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
function die {
echo "$1"
exit 1
}
[[ "$(systemd --version 2>&1)" == *"systemd 225"* ]] || die "Expecting systemd to be 225"
if [ -f "${SOURCE_DIR}/INFRA_VERSION" ]; then
source "${SOURCE_DIR}/INFRA_VERSION"
else
echo "No INFRA_VERSION found, skipping docker image pull"
fi
if [ ${SELFHOSTED} == 0 ]; then
echo "!! Initializing Ubuntu image for CaaS"
else
echo "!! Initializing Ubuntu image for Selfhosting"
fi
echo "==== Create User ${USER} ===="
if ! id "${USER}"; then
useradd "${USER}" -m
fi
echo "=== Yellowtent base image preparation (installer revision - ${INSTALLER_REVISION}) ==="
echo "=== Prepare installer source ==="
rm -rf "${INSTALLER_SOURCE_DIR}" && mkdir -p "${INSTALLER_SOURCE_DIR}"
rm -rf /tmp/box && mkdir -p /tmp/box
tar xvf /tmp/box.tar.gz -C /tmp/box && rm /tmp/box.tar.gz
cp -rf /tmp/box/installer/* "${INSTALLER_SOURCE_DIR}"
echo "${INSTALLER_REVISION}" > "${INSTALLER_SOURCE_DIR}/REVISION"
export DEBIAN_FRONTEND=noninteractive
echo "=== Upgrade ==="
apt-get update
apt-get upgrade -y
apt-get install -y curl
# Setup firewall before everything. docker creates its own chain and the -X below will remove it
# Do NOT use iptables-persistent because its startup ordering conflicts with docker
echo "=== Setting up firewall ==="
# clear tables and set default policy
iptables -F # flush all chains
iptables -X # delete all chains
# default policy for filter table
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT # TODO: disable icc and change this to REJECT
iptables -P OUTPUT ACCEPT
# NOTE: keep these in sync with src/apps.js validatePortBindings
# allow ssh, http, https, ping, dns
iptables -I INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
if [ ${SELFHOSTED} == 0 ]; then
iptables -A INPUT -p tcp -m tcp -m multiport --dports 80,202,443,886 -j ACCEPT
else
iptables -A INPUT -p tcp -m tcp -m multiport --dports 80,22,443,886 -j ACCEPT
fi
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
iptables -A INPUT -p udp --sport 53 -j ACCEPT
iptables -A INPUT -s 172.17.0.0/16 -j ACCEPT # required to accept any connections from apps to our IP:<public port>
# loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# prevent DoS
# iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
# log dropped incoming. keep this at the end of all the rules
iptables -N LOGGING # new chain
iptables -A INPUT -j LOGGING # last rule in INPUT chain
iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
iptables -A LOGGING -j DROP
echo "==== Install btrfs tools ==="
apt-get -y install btrfs-tools
echo "==== Install docker ===="
# install docker from binary to pin it to a specific version. the current debian repo does not allow pinning
curl https://get.docker.com/builds/Linux/x86_64/docker-1.9.1 > /usr/bin/docker
chmod +x /usr/bin/docker
groupadd docker
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API
PartOf=docker.service
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
After=network.target docker.socket
Requires=docker.socket
[Service]
ExecStart=/usr/bin/docker daemon -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
echo "=== Setup btrfs data ==="
fallocate -l "8192m" "${USER_DATA_FILE}" # 8gb start (this will get resized dynamically by box-setup.service)
mkfs.btrfs -L UserHome "${USER_DATA_FILE}"
echo "${USER_DATA_FILE} ${USER_DATA_DIR} btrfs loop,nosuid 0 0" >> /etc/fstab
mkdir -p "${USER_DATA_DIR}" && mount "${USER_DATA_FILE}"
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# give docker some time to start up and create iptables rules
# those rules come in after docker has started, and we want to wait for them to be sure iptables-save has all of them
sleep 10
# Disable forwarding to metadata route from containers
iptables -I FORWARD -d 169.254.169.254 -j DROP
# ubuntu will restore iptables from this file automatically. this is here so that docker's chain is saved to this file
mkdir /etc/iptables && iptables-save > /etc/iptables/rules.v4
echo "=== Enable memory accounting =="
sed -e 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
update-grub
# now add the user to the docker group
usermod "${USER}" -a -G docker
if [ -z "${INFRA_VERSION:-}" ]; then
echo "Skip pulling base docker images"
else
echo "=== Pulling base docker images ==="
docker pull "${BASE_IMAGE}"
echo "=== Pulling mysql addon image ==="
docker pull "${MYSQL_IMAGE}"
echo "=== Pulling postgresql addon image ==="
docker pull "${POSTGRESQL_IMAGE}"
echo "=== Pulling redis addon image ==="
docker pull "${REDIS_IMAGE}"
echo "=== Pulling mongodb addon image ==="
docker pull "${MONGODB_IMAGE}"
echo "=== Pulling graphite docker images ==="
docker pull "${GRAPHITE_IMAGE}"
echo "=== Pulling mail relay ==="
docker pull "${MAIL_IMAGE}"
fi
echo "==== Install nginx ===="
apt-get -y install nginx-full
[[ "$(nginx -v 2>&1)" == *"nginx/1.9."* ]] || die "Expecting nginx version to be 1.9.x"
echo "==== Install build-essential ===="
apt-get -y install build-essential rcconf
echo "==== Install mysql ===="
debconf-set-selections <<< 'mysql-server mysql-server/root_password password password'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password password'
apt-get -y install mysql-server
[[ "$(mysqld --version 2>&1)" == *"5.6."* ]] || die "Expecting mysql version to be 5.6.x"
echo "==== Install pwgen ===="
apt-get -y install pwgen
echo "==== Install collectd ==="
if ! apt-get install -y collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
fi
update-rc.d -f collectd remove
# this simply makes it explicit that we run logrotate via cron. it's already part of base ubuntu
echo "==== Install logrotate ===="
apt-get install -y cron logrotate
systemctl enable cron
echo "==== Install nodejs ===="
# Cannot use anything above 4.1.1 - https://github.com/nodejs/node/issues/3803
mkdir -p /usr/local/node-4.1.1
curl -sL https://nodejs.org/dist/v4.1.1/node-v4.1.1-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-4.1.1
ln -s /usr/local/node-4.1.1/bin/node /usr/bin/node
ln -s /usr/local/node-4.1.1/bin/npm /usr/bin/npm
apt-get install -y python # Install python which is required for npm rebuild
[[ "$(python --version 2>&1)" == "Python 2.7."* ]] || die "Expecting python version to be 2.7.x"
echo "=== Rebuilding npm packages ==="
cd "${INSTALLER_SOURCE_DIR}" && npm install --production
chown "${USER}:${USER}" -R "${INSTALLER_SOURCE_DIR}"
echo "==== Install installer systemd script ===="
provisionEnv="PROVISION=digitalocean"
if [ ${SELFHOSTED} == 1 ]; then
provisionEnv="PROVISION=local"
fi
cat > /etc/systemd/system/cloudron-installer.service <<EOF
[Unit]
Description=Cloudron Installer
; journald crashes result in an EPIPE in node. Cannot ignore it, as that results in loss of logs.
BindsTo=systemd-journald.service
[Service]
Type=idle
ExecStart="${INSTALLER_SOURCE_DIR}/src/server.js"
Environment="DEBUG=installer*,connect-lastmile" ${provisionEnv}
; kill any child (installer.sh) as well
KillMode=control-group
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Restore iptables before docker
echo "==== Install iptables-restore systemd script ===="
cat > /etc/systemd/system/iptables-restore.service <<EOF
[Unit]
Description=IPTables Restore
Before=docker.service
[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
# Allocate swap files
# https://bbs.archlinux.org/viewtopic.php?id=194792 ensures this runs after do-resize.service
echo "==== Install box-setup systemd script ===="
cat > /etc/systemd/system/box-setup.service <<EOF
[Unit]
Description=Box Setup
Before=docker.service umount.target collectd.service
After=do-resize.service
[Service]
Type=oneshot
ExecStart="${INSTALLER_SOURCE_DIR}/systemd/box-setup.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable cloudron-installer
systemctl enable iptables-restore
systemctl enable box-setup
# Configure systemd
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
-i /etc/systemd/journald.conf
# When rotating logs, systemd kills journald too soon sometimes
# See https://github.com/systemd/systemd/issues/1353 (this is upstream default)
sed -e "s/^WatchdogSec=.*$/WatchdogSec=3min/" \
-i /lib/systemd/system/systemd-journald.service
sync
# Configure time
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
timedatectl set-ntp 1
timedatectl set-timezone UTC
# Give user access to system logs
apt-get -y install acl
usermod -a -G systemd-journal ${USER}
mkdir -p /var/log/journal # in some images, this directory is not created, making systemd log to /run/systemd instead
chown root:systemd-journal /var/log/journal
systemctl restart systemd-journald
setfacl -n -m u:${USER}:r /var/log/journal/*/system.journal
if [ ${SELFHOSTED} == 0 ]; then
echo "==== Install ssh ===="
apt-get -y install openssh-server
# https://stackoverflow.com/questions/4348166/using-with-sed on why ? must be escaped
sed -e 's/^#\?Port .*/Port 202/g' \
-e 's/^#\?PermitRootLogin .*/PermitRootLogin without-password/g' \
-e 's/^#\?PermitEmptyPasswords .*/PermitEmptyPasswords no/g' \
-e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/g' \
-i /etc/ssh/sshd_config
# required so we can connect to this machine since port 22 is blocked by iptables by now
systemctl reload sshd
fi
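The `#\?` in the sshd expressions above makes the leading comment marker optional (GNU sed syntax, per the linked Stack Overflow answer), so each directive is rewritten whether or not Ubuntu shipped it commented out. A quick illustration, assuming GNU sed:

```shell
#!/bin/bash
# Both the commented and the uncommented form collapse to the new value;
# unrelated directives pass through untouched.
printf '#Port 22\nPort 22\n#PermitRootLogin prohibit-password\n' \
    | sed -e 's/^#\?Port .*/Port 202/g' \
          -e 's/^#\?PermitRootLogin .*/PermitRootLogin without-password/g'
```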

box.js

@@ -15,7 +15,8 @@ var appHealthMonitor = require('./src/apphealthmonitor.js'),
config = require('./src/config.js'),
ldap = require('./src/ldap.js'),
oauthproxy = require('./src/oauthproxy.js'),
server = require('./src/server.js');
server = require('./src/server.js'),
simpleauth = require('./src/simpleauth.js');
console.log();
console.log('==========================================');
@@ -25,7 +26,6 @@ console.log();
console.log(' Environment: ', config.CLOUDRON ? 'CLOUDRON' : 'TEST');
console.log(' Version: ', config.version());
console.log(' Admin Origin: ', config.adminOrigin());
console.log(' Appstore token: ', config.token());
console.log(' Appstore API server origin: ', config.apiServerOrigin());
console.log(' Appstore Web server origin: ', config.webServerOrigin());
console.log();
@@ -35,6 +35,7 @@ console.log();
async.series([
server.start,
ldap.start,
simpleauth.start,
appHealthMonitor.start,
oauthproxy.start
], function (error) {
@@ -49,6 +50,7 @@ var NOOP_CALLBACK = function () { };
process.on('SIGINT', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
simpleauth.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
@@ -56,6 +58,7 @@ process.on('SIGINT', function () {
process.on('SIGTERM', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
simpleauth.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});


@@ -36,12 +36,7 @@ function main() {
var processName = process.argv[2];
console.log('Started crash notifier for', processName);
mailer.initialize(function (error) {
if (error) return console.error(error);
sendCrashNotification(processName);
});
sendCrashNotification(processName);
}
main();


@@ -39,7 +39,7 @@ gulp.task('3rdparty', function () {
// JavaScript
// --------------
gulp.task('js', ['js-index', 'js-setup', 'js-update', 'js-error'], function () {});
gulp.task('js', ['js-index', 'js-setup', 'js-update'], function () {});
var oauth = {
clientId: argv.clientId || 'cid-webadmin',
@@ -80,14 +80,6 @@ gulp.task('js-setup', function () {
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-error', function () {
gulp.src(['webadmin/src/js/error.js'])
.pipe(sourcemaps.init())
.pipe(uglify())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-update', function () {
gulp.src(['webadmin/src/js/update.js'])
.pipe(sourcemaps.init())
@@ -102,7 +94,7 @@ gulp.task('js-update', function () {
// HTML
// --------------
gulp.task('html', ['html-views', 'html-update'], function () {
gulp.task('html', ['html-views', 'html-update', 'html-templates'], function () {
return gulp.src('webadmin/src/*.html').pipe(gulp.dest('webadmin/dist'));
});
@@ -114,6 +106,10 @@ gulp.task('html-views', function () {
return gulp.src('webadmin/src/views/**/*.html').pipe(gulp.dest('webadmin/dist/views'));
});
gulp.task('html-templates', function () {
return gulp.src('webadmin/src/templates/**/*.html').pipe(gulp.dest('webadmin/dist/templates'));
});
// --------------
// CSS
// --------------
@@ -143,8 +139,8 @@ gulp.task('watch', ['default'], function () {
gulp.watch(['webadmin/src/img/*'], ['images']);
gulp.watch(['webadmin/src/**/*.html'], ['html']);
gulp.watch(['webadmin/src/views/*.html'], ['html-views']);
gulp.watch(['webadmin/src/templates/*.html'], ['html-templates']);
gulp.watch(['webadmin/src/js/update.js'], ['js-update']);
gulp.watch(['webadmin/src/js/error.js'], ['js-error']);
gulp.watch(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'], ['js-setup']);
gulp.watch(['webadmin/src/js/index.js', 'webadmin/src/js/client.js', 'webadmin/src/js/appstore.js', 'webadmin/src/js/main.js', 'webadmin/src/views/*.js'], ['js-index']);
gulp.watch(['webadmin/src/3rdparty/**/*'], ['3rdparty']);

installer.sh Executable file

@@ -0,0 +1,159 @@
#!/bin/bash
set -eu -o pipefail
echo ""
echo "======== Cloudron Installer ========"
echo ""
if [ $# -lt 6 ]; then
echo "Usage: ./installer.sh <fqdn> <aws key id> <aws key secret> <bucket> <provider> <revision>"
exit 1
fi
# commandline arguments
readonly fqdn="${1}"
readonly aws_access_key_id="${2}"
readonly aws_access_key_secret="${3}"
readonly aws_backup_bucket="${4}"
readonly provider="${5}"
readonly revision="${6}"
# environment specific urls
readonly api_server_origin="https://api.dev.cloudron.io"
readonly web_server_origin="https://dev.cloudron.io"
readonly release_bucket_url="https://s3.amazonaws.com/dev-cloudron-releases"
readonly versions_url="https://s3.amazonaws.com/dev-cloudron-releases/versions.json"
readonly installer_code_url="${release_bucket_url}/box-${revision}.tar.gz"
# runtime consts
readonly installer_code_file="/tmp/box.tar.gz"
readonly installer_tmp_dir="/tmp/box"
readonly cert_folder="/tmp/certificates"
# check for fqdn in /etc/hosts
echo "[INFO] checking for hostname entry"
readonly hostentry_found=$(grep "${fqdn}" /etc/hosts || true)
if [[ -z $hostentry_found ]]; then
echo "[WARNING] No entry for ${fqdn} found in /etc/hosts"
echo "Adding an entry ..."
cat >> /etc/hosts <<EOF
# The following line was added by the Cloudron installer script
127.0.1.1 ${fqdn} ${fqdn}
EOF
else
echo "Valid hostname entry found in /etc/hosts"
fi
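The grep-or-append logic above is the usual idempotent pattern: probe first, append only when the entry is missing, so re-running the installer never duplicates the line. A self-contained sketch against a scratch file (the helper name and the file parameter are ours, for illustration only):

```shell
#!/bin/bash
ensure_host_entry() {
    local fqdn="$1" hosts_file="$2"
    if ! grep -q "${fqdn}" "${hosts_file}"; then
        echo "127.0.1.1 ${fqdn}" >> "${hosts_file}"
    fi
}

hosts=$(mktemp)
ensure_host_entry my.example.com "${hosts}"
ensure_host_entry my.example.com "${hosts}"   # no-op on the second call
grep -c my.example.com "${hosts}"             # prints 1, not 2
```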
echo ""
echo "[INFO] ensure minimal dependencies ..."
apt-get update
apt-get install -y curl
echo ""
echo "[INFO] Generating certificates ..."
rm -rf "${cert_folder}"
mkdir -p "${cert_folder}"
cat > "${cert_folder}/CONFIG" <<EOF
[ req ]
default_bits = 1024
default_keyfile = keyfile.pem
distinguished_name = req_distinguished_name
prompt = no
req_extensions = v3_req
[ req_distinguished_name ]
C = DE
ST = Berlin
L = Berlin
O = Cloudron UG
OU = Cloudron
CN = ${fqdn}
emailAddress = cert@cloudron.io
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${fqdn}
DNS.2 = *.${fqdn}
EOF
# generate cert files
openssl genrsa 2048 > "${cert_folder}/host.key"
openssl req -new -out "${cert_folder}/host.csr" -key "${cert_folder}/host.key" -config "${cert_folder}/CONFIG"
openssl x509 -req -days 3650 -in "${cert_folder}/host.csr" -signkey "${cert_folder}/host.key" -out "${cert_folder}/host.cert" -extensions v3_req -extfile "${cert_folder}/CONFIG"
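The wildcard SAN (`*.${fqdn}`) is what lets a single self-signed certificate cover both the admin origin and every app subdomain. To confirm the extension actually lands in the signed certificate, a condensed one-shot variant of the same OpenSSL config can be inspected afterwards (assumes OpenSSL 1.1.1+ for the `-ext` option; names here are ours):

```shell
#!/bin/bash
dir=$(mktemp -d)
cat > "${dir}/CONFIG" <<EOF
[ req ]
default_bits       = 2048
distinguished_name = dn
x509_extensions    = v3_req
prompt             = no
[ dn ]
CN = example.test
[ v3_req ]
subjectAltName = DNS:example.test,DNS:*.example.test
EOF

# one-shot self-signed certificate with the SAN extension applied
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "${dir}/host.key" -out "${dir}/host.cert" -config "${dir}/CONFIG"

# print the Subject Alternative Name block to confirm the wildcard entry
openssl x509 -in "${dir}/host.cert" -noout -ext subjectAltName
```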
# make them JSON compatible by collapsing each to one line
tls_cert=$(sed ':a;N;$!ba;s/\n/\\n/g' "${cert_folder}/host.cert")
tls_key=$(sed ':a;N;$!ba;s/\n/\\n/g' "${cert_folder}/host.key")
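The `:a;N;$!ba` loop slurps the whole file into sed's pattern space, so the final substitution can replace every real newline with a literal `\n`, which is what embedding PEM material inside the JSON below requires. In isolation:

```shell
#!/bin/bash
# collapse multi-line input into one line with literal \n separators
collapsed=$(printf -- '-----BEGIN-----\nMIIB\n-----END-----\n' | sed ':a;N;$!ba;s/\n/\\n/g')
echo "${collapsed}"
# -----BEGIN-----\nMIIB\n-----END-----
```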
echo ""
echo "[INFO] Fetching installer code ..."
curl "${installer_code_url}" -o "${installer_code_file}"
echo ""
echo "[INFO] Extracting installer code to ${installer_tmp_dir} ..."
rm -rf "${installer_tmp_dir}" && mkdir -p "${installer_tmp_dir}"
tar xvf "${installer_code_file}" -C "${installer_tmp_dir}"
echo ""
echo "Creating initial provisioning config ..."
cat > /root/provision.json <<EOF
{
"sourceTarballUrl": "",
"data": {
"apiServerOrigin": "${api_server_origin}",
"webServerOrigin": "${web_server_origin}",
"fqdn": "${fqdn}",
"token": "",
"isCustomDomain": true,
"boxVersionsUrl": "${versions_url}",
"version": "",
"tlsCert": "${tls_cert}",
"tlsKey": "${tls_key}",
"provider": "${provider}",
"backupConfig": {
"provider": "s3",
"accessKeyId": "${aws_access_key_id}",
"secretAccessKey": "${aws_access_key_secret}",
"bucket": "${aws_backup_bucket}",
"prefix": "backups"
},
"dnsConfig": {
"provider": "route53",
"accessKeyId": "${aws_access_key_id}",
"secretAccessKey": "${aws_access_key_secret}"
},
"tlsConfig": {
"provider": "letsencrypt-dev"
}
}
}
EOF
echo "[INFO] Running Ubuntu initialization script ..."
/bin/bash "${installer_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh" "${revision}" selfhosting
echo ""
echo "[INFO] Reloading systemd daemon ..."
systemctl daemon-reload
echo ""
echo "[INFO] Restart docker ..."
systemctl restart docker
echo ""
echo "[FINISHED] Now starting Cloudron init jobs ..."
systemctl start box-setup
# TODO: this is only for convenience; we should probably just let the user do a restart
sleep 5 && sync
systemctl start cloudron-installer
journalctl -u cloudron-installer.service -f

installer/npm-shrinkwrap.json generated Normal file

@@ -0,0 +1,516 @@
{
"name": "installer",
"version": "0.0.1",
"dependencies": {
"async": {
"version": "1.5.0",
"from": "async@>=1.5.0 <2.0.0",
"resolved": "https://registry.npmjs.org/async/-/async-1.5.0.tgz"
},
"body-parser": {
"version": "1.14.1",
"from": "body-parser@>=1.12.0 <2.0.0",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.14.1.tgz",
"dependencies": {
"bytes": {
"version": "2.1.0",
"from": "bytes@2.1.0",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-2.1.0.tgz"
},
"content-type": {
"version": "1.0.1",
"from": "content-type@>=1.0.1 <1.1.0",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz"
},
"depd": {
"version": "1.1.0",
"from": "depd@>=1.1.0 <1.2.0",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.0.tgz"
},
"http-errors": {
"version": "1.3.1",
"from": "http-errors@>=1.3.1 <1.4.0",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"dependencies": {
"inherits": {
"version": "2.0.1",
"from": "inherits@>=2.0.1 <2.1.0",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
},
"statuses": {
"version": "1.2.1",
"from": "statuses@>=1.0.0 <2.0.0",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz"
}
}
},
"iconv-lite": {
"version": "0.4.12",
"from": "iconv-lite@0.4.12",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.12.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "on-finished@>=2.3.0 <2.4.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "ee-first@1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"qs": {
"version": "5.1.0",
"from": "qs@5.1.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-5.1.0.tgz"
},
"raw-body": {
"version": "2.1.4",
"from": "raw-body@>=2.1.4 <2.2.0",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.1.4.tgz",
"dependencies": {
"unpipe": {
"version": "1.0.0",
"from": "unpipe@1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz"
}
}
},
"type-is": {
"version": "1.6.9",
"from": "type-is@>=1.6.9 <1.7.0",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"dependencies": {
"media-typer": {
"version": "0.3.0",
"from": "media-typer@0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz"
},
"mime-types": {
"version": "2.1.7",
"from": "mime-types@>=2.1.7 <2.2.0",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "mime-db@>=1.19.0 <1.20.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
}
}
}
}
},
"connect-lastmile": {
"version": "0.0.13",
"from": "connect-lastmile@0.0.13",
"resolved": "https://registry.npmjs.org/connect-lastmile/-/connect-lastmile-0.0.13.tgz",
"dependencies": {
"debug": {
"version": "2.1.3",
"from": "debug@>=2.1.0 <2.2.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.1.3.tgz",
"dependencies": {
"ms": {
"version": "0.7.0",
"from": "ms@0.7.0",
"resolved": "http://registry.npmjs.org/ms/-/ms-0.7.0.tgz"
}
}
}
}
},
"debug": {
"version": "2.2.0",
"from": "debug@>=2.1.1 <3.0.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.2.0.tgz",
"dependencies": {
"ms": {
"version": "0.7.1",
"from": "ms@0.7.1",
"resolved": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz"
}
}
},
"express": {
"version": "4.13.3",
"from": "express@>=4.11.2 <5.0.0",
"resolved": "https://registry.npmjs.org/express/-/express-4.13.3.tgz",
"dependencies": {
"accepts": {
"version": "1.2.13",
"from": "accepts@>=1.2.12 <1.3.0",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.2.13.tgz",
"dependencies": {
"mime-types": {
"version": "2.1.7",
"from": "mime-types@>=2.1.6 <2.2.0",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "mime-db@>=1.19.0 <1.20.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
},
"negotiator": {
"version": "0.5.3",
"from": "negotiator@0.5.3",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.5.3.tgz"
}
}
},
"array-flatten": {
"version": "1.1.1",
"from": "array-flatten@1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz"
},
"content-disposition": {
"version": "0.5.0",
"from": "content-disposition@0.5.0",
"resolved": "http://registry.npmjs.org/content-disposition/-/content-disposition-0.5.0.tgz"
},
"content-type": {
"version": "1.0.1",
"from": "content-type@>=1.0.1 <1.1.0",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz"
},
"cookie": {
"version": "0.1.3",
"from": "cookie@0.1.3",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.1.3.tgz"
},
"cookie-signature": {
"version": "1.0.6",
"from": "cookie-signature@1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz"
},
"depd": {
"version": "1.0.1",
"from": "depd@>=1.0.1 <1.1.0",
"resolved": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz"
},
"escape-html": {
"version": "1.0.2",
"from": "escape-html@1.0.2",
"resolved": "http://registry.npmjs.org/escape-html/-/escape-html-1.0.2.tgz"
},
"etag": {
"version": "1.7.0",
"from": "etag@>=1.7.0 <1.8.0",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.7.0.tgz"
},
"finalhandler": {
"version": "0.4.0",
"from": "finalhandler@0.4.0",
"resolved": "http://registry.npmjs.org/finalhandler/-/finalhandler-0.4.0.tgz",
"dependencies": {
"unpipe": {
"version": "1.0.0",
"from": "unpipe@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz"
}
}
},
"fresh": {
"version": "0.3.0",
"from": "fresh@0.3.0",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.3.0.tgz"
},
"merge-descriptors": {
"version": "1.0.0",
"from": "merge-descriptors@1.0.0",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.0.tgz"
},
"methods": {
"version": "1.1.1",
"from": "methods@>=1.1.1 <1.2.0",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.1.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "on-finished@>=2.3.0 <2.4.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "ee-first@1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"parseurl": {
"version": "1.3.0",
"from": "parseurl@>=1.3.0 <1.4.0",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.0.tgz"
},
"path-to-regexp": {
"version": "0.1.7",
"from": "path-to-regexp@0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz"
},
"proxy-addr": {
"version": "1.0.8",
"from": "proxy-addr@>=1.0.8 <1.1.0",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-1.0.8.tgz",
"dependencies": {
"forwarded": {
"version": "0.1.0",
"from": "forwarded@>=0.1.0 <0.2.0",
"resolved": "http://registry.npmjs.org/forwarded/-/forwarded-0.1.0.tgz"
},
"ipaddr.js": {
"version": "1.0.1",
"from": "ipaddr.js@1.0.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.0.1.tgz"
}
}
},
"qs": {
"version": "4.0.0",
"from": "qs@4.0.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-4.0.0.tgz"
},
"range-parser": {
"version": "1.0.3",
"from": "range-parser@>=1.0.2 <1.1.0",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.0.3.tgz"
},
"send": {
"version": "0.13.0",
"from": "send@0.13.0",
"resolved": "http://registry.npmjs.org/send/-/send-0.13.0.tgz",
"dependencies": {
"destroy": {
"version": "1.0.3",
"from": "destroy@1.0.3",
"resolved": "http://registry.npmjs.org/destroy/-/destroy-1.0.3.tgz"
},
"http-errors": {
"version": "1.3.1",
"from": "http-errors@>=1.3.1 <1.4.0",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"dependencies": {
"inherits": {
"version": "2.0.1",
"from": "inherits@>=2.0.1 <2.1.0",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
}
}
},
"mime": {
"version": "1.3.4",
"from": "mime@1.3.4",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.3.4.tgz"
},
"ms": {
"version": "0.7.1",
"from": "ms@0.7.1",
"resolved": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz"
},
"statuses": {
"version": "1.2.1",
"from": "statuses@>=1.2.1 <1.3.0",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz"
}
}
},
"serve-static": {
"version": "1.10.0",
"from": "serve-static@>=1.10.0 <1.11.0",
"resolved": "http://registry.npmjs.org/serve-static/-/serve-static-1.10.0.tgz"
},
"type-is": {
"version": "1.6.9",
"from": "type-is@>=1.6.9 <1.7.0",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"dependencies": {
"media-typer": {
"version": "0.3.0",
"from": "media-typer@0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz"
},
"mime-types": {
"version": "2.1.7",
"from": "mime-types@>=2.1.6 <2.2.0",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "mime-db@>=1.19.0 <1.20.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
}
}
},
"utils-merge": {
"version": "1.0.0",
"from": "utils-merge@1.0.0",
"resolved": "http://registry.npmjs.org/utils-merge/-/utils-merge-1.0.0.tgz"
},
"vary": {
"version": "1.0.1",
"from": "vary@>=1.0.1 <1.1.0",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.0.1.tgz"
}
}
},
"json": {
"version": "9.0.3",
"from": "json@>=9.0.3 <10.0.0",
"resolved": "https://registry.npmjs.org/json/-/json-9.0.3.tgz"
},
"morgan": {
"version": "1.6.1",
"from": "morgan@>=1.5.1 <2.0.0",
"resolved": "https://registry.npmjs.org/morgan/-/morgan-1.6.1.tgz",
"dependencies": {
"basic-auth": {
"version": "1.0.3",
"from": "basic-auth@>=1.0.3 <1.1.0",
"resolved": "https://registry.npmjs.org/basic-auth/-/basic-auth-1.0.3.tgz"
},
"depd": {
"version": "1.0.1",
"from": "depd@>=1.0.1 <1.1.0",
"resolved": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "on-finished@>=2.3.0 <2.4.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "ee-first@1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"on-headers": {
"version": "1.0.1",
"from": "on-headers@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.1.tgz"
}
}
},
"proxy-middleware": {
"version": "0.15.0",
"from": "proxy-middleware@>=0.15.0 <0.16.0",
"resolved": "https://registry.npmjs.org/proxy-middleware/-/proxy-middleware-0.15.0.tgz"
},
"safetydance": {
"version": "0.0.19",
"from": "safetydance@0.0.19",
"resolved": "https://registry.npmjs.org/safetydance/-/safetydance-0.0.19.tgz"
},
"semver": {
"version": "5.1.0",
"from": "semver@>=5.1.0 <6.0.0",
"resolved": "https://registry.npmjs.org/semver/-/semver-5.1.0.tgz"
},
"superagent": {
"version": "0.21.0",
"from": "superagent@>=0.21.0 <0.22.0",
"resolved": "https://registry.npmjs.org/superagent/-/superagent-0.21.0.tgz",
"dependencies": {
"component-emitter": {
"version": "1.1.2",
"from": "component-emitter@1.1.2",
"resolved": "http://registry.npmjs.org/component-emitter/-/component-emitter-1.1.2.tgz"
},
"cookiejar": {
"version": "2.0.1",
"from": "cookiejar@2.0.1",
"resolved": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.0.1.tgz"
},
"extend": {
"version": "1.2.1",
"from": "extend@>=1.2.1 <1.3.0",
"resolved": "https://registry.npmjs.org/extend/-/extend-1.2.1.tgz"
},
"form-data": {
"version": "0.1.3",
"from": "form-data@0.1.3",
"resolved": "http://registry.npmjs.org/form-data/-/form-data-0.1.3.tgz",
"dependencies": {
"async": {
"version": "0.9.2",
"from": "async@>=0.9.0 <0.10.0",
"resolved": "https://registry.npmjs.org/async/-/async-0.9.2.tgz"
},
"combined-stream": {
"version": "0.0.7",
"from": "combined-stream@>=0.0.4 <0.1.0",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-0.0.7.tgz",
"dependencies": {
"delayed-stream": {
"version": "0.0.5",
"from": "delayed-stream@0.0.5",
"resolved": "http://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz"
}
}
}
}
},
"formidable": {
"version": "1.0.14",
"from": "formidable@1.0.14",
"resolved": "https://registry.npmjs.org/formidable/-/formidable-1.0.14.tgz"
},
"methods": {
"version": "1.0.1",
"from": "methods@1.0.1",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.0.1.tgz"
},
"mime": {
"version": "1.2.11",
"from": "mime@1.2.11",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.2.11.tgz"
},
"qs": {
"version": "1.2.0",
"from": "qs@1.2.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-1.2.0.tgz"
},
"readable-stream": {
"version": "1.0.27-1",
"from": "readable-stream@1.0.27-1",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.27-1.tgz",
"dependencies": {
"core-util-is": {
"version": "1.0.1",
"from": "core-util-is@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz"
},
"inherits": {
"version": "2.0.1",
"from": "inherits@>=2.0.1 <2.1.0",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
},
"isarray": {
"version": "0.0.1",
"from": "isarray@0.0.1",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-0.0.1.tgz"
},
"string_decoder": {
"version": "0.10.31",
"from": "string_decoder@>=0.10.0 <0.11.0",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz"
}
}
},
"reduce-component": {
"version": "1.0.1",
"from": "reduce-component@1.0.1",
"resolved": "http://registry.npmjs.org/reduce-component/-/reduce-component-1.0.1.tgz"
}
}
}
}
}

installer/package.json Normal file

@@ -0,0 +1,47 @@
{
"name": "installer",
"description": "Cloudron Installer",
"version": "0.0.1",
"private": "true",
"author": {
"name": "Cloudron authors"
},
"repository": {
"type": "git"
},
"engines": [
"node >=4.0.0 <=4.1.1"
],
"dependencies": {
"async": "^1.5.0",
"body-parser": "^1.12.0",
"connect-lastmile": "0.0.13",
"debug": "^2.1.1",
"express": "^4.11.2",
"json": "^9.0.3",
"morgan": "^1.5.1",
"proxy-middleware": "^0.15.0",
"safetydance": "0.0.19",
"semver": "^5.1.0",
"superagent": "^0.21.0"
},
"devDependencies": {
"colors": "^1.1.2",
"commander": "^2.8.1",
"expect.js": "^0.3.1",
"istanbul": "^0.3.5",
"lodash": "^3.2.0",
"mocha": "^2.1.0",
"nock": "^0.59.1",
"sleep": "^3.0.0",
"superagent-sync": "^0.2.0",
"supererror": "^0.7.0",
"yesno": "0.0.1"
},
"scripts": {
"test": "NODE_ENV=test ./node_modules/istanbul/lib/cli.js test $1 ./node_modules/mocha/bin/_mocha -- -R spec ./src/test",
"precommit": "/bin/true",
"prepush": "npm test",
"postmerge": "/bin/true"
}
}

installer/src/certs/.gitignore vendored Normal file

installer/src/installer.js Normal file

@@ -0,0 +1,112 @@
/* jslint node: true */
'use strict';
var assert = require('assert'),
child_process = require('child_process'),
debug = require('debug')('installer:installer'),
path = require('path'),
safe = require('safetydance'),
semver = require('semver'),
superagent = require('superagent'),
util = require('util');
exports = module.exports = {
InstallerError: InstallerError,
provision: provision,
_ensureVersion: ensureVersion
};
var INSTALLER_CMD = path.join(__dirname, 'scripts/installer.sh'),
SUDO = '/usr/bin/sudo';
function InstallerError(reason, info) {
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
this.message = !info ? reason : (typeof info === 'object' ? JSON.stringify(info) : info);
}
util.inherits(InstallerError, Error);
InstallerError.INTERNAL_ERROR = 1;
InstallerError.ALREADY_PROVISIONED = 2;
// systemd unit file has KillMode=control-group to bring down child processes
function spawn(tag, cmd, args, callback) {
assert.strictEqual(typeof tag, 'string');
assert.strictEqual(typeof cmd, 'string');
assert(util.isArray(args));
assert.strictEqual(typeof callback, 'function');
var cp = child_process.spawn(cmd, args, { timeout: 0 });
cp.stdout.setEncoding('utf8');
cp.stdout.on('data', function (data) { debug('%s (stdout): %s', tag, data); });
cp.stderr.setEncoding('utf8');
cp.stderr.on('data', function (data) { debug('%s (stderr): %s', tag, data); });
cp.on('error', function (error) {
debug('%s : child process errored %s', tag, error.message);
callback(error);
});
cp.on('exit', function (code, signal) {
debug('%s : child process exited. code: %d signal: %s', tag, code, signal);
if (signal) return callback(new Error('Exited with signal ' + signal));
if (code !== 0) return callback(new Error('Exited with code ' + code));
callback(null);
});
}
function ensureVersion(args, callback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof callback, 'function');
if (!args.data || !args.data.boxVersionsUrl) return callback(new Error('No boxVersionsUrl specified'));
if (args.sourceTarballUrl) return callback(null, args);
superagent.get(args.data.boxVersionsUrl).end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new Error(util.format('Bad status: %s %s', result.statusCode, result.text)));
var versions = safe.JSON.parse(result.text);
if (!versions || typeof versions !== 'object') return callback(new Error('versions is not in valid format:' + safe.error));
var latestVersion = Object.keys(versions).sort(semver.compare).pop();
debug('ensureVersion: Latest version is %s etag:%s', latestVersion, result.header['etag']);
if (!versions[latestVersion]) return callback(new Error('No version available'));
if (!versions[latestVersion].sourceTarballUrl) return callback(new Error('No sourceTarballUrl specified'));
args.sourceTarballUrl = versions[latestVersion].sourceTarballUrl;
args.data.version = latestVersion;
callback(null, args);
});
}
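ensureVersion picks the newest key of versions.json by semver-sorting the keys and popping the last one. The same selection can be reproduced from a shell with GNU `sort -V` (a rough equivalent only: `-V` is generic version sort, not strict semver, so prerelease tags order differently):

```shell
#!/bin/bash
# latest of a set of plain x.y.z versions
printf '0.9.0\n0.10.1\n0.2.3\n' | sort -V | tail -n 1
# 0.10.1  (note: plain lexical sort would wrongly put 0.9.0 last)
```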
function provision(args, callback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof callback, 'function');
if (process.env.NODE_ENV === 'test') return callback(null);
ensureVersion(args, function (error, result) {
if (error) return callback(error);
var pargs = [ INSTALLER_CMD ];
pargs.push('--sourcetarballurl', result.sourceTarballUrl);
pargs.push('--data', JSON.stringify(result.data));
debug('provision: calling with args %j', pargs);
// sudo is required for update()
spawn('provision', SUDO, pargs, callback);
});
}


@@ -0,0 +1,67 @@
#!/bin/bash
set -eu -o pipefail
readonly BOX_SRC_DIR=/home/yellowtent/box
readonly DATA_DIR=/home/yellowtent/data
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly json="${script_dir}/../../node_modules/.bin/json"
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 180"
readonly is_update=$([[ -d "${BOX_SRC_DIR}" ]] && echo "yes" || echo "no")
# create a provision file for testing. %q escapes args. %q is reused as much as necessary to satisfy $@
(echo -e "#!/bin/bash\n"; printf "%q " "${script_dir}/installer.sh" "$@") > /home/yellowtent/provision.sh
chmod +x /home/yellowtent/provision.sh
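`printf "%q "` shell-quotes every argument, so the generated provision.sh replays the exact argv, including the JSON blob passed via `--data`, without word splitting or quote breakage. The round-trip property in isolation:

```shell
#!/bin/bash
data='{"fqdn":"my cloudron","isCustomDomain":true}'
quoted=$(printf '%q' "${data}")
# eval-ing the quoted form is, in effect, what running the generated script does
eval "replayed=${quoted}"
[ "${replayed}" = "${data}" ] && echo "round trip ok"
```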
arg_source_tarball_url=""
arg_data=""
args=$(getopt -o "" -l "sourcetarballurl:,data:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--sourcetarballurl) arg_source_tarball_url="$2";;
--data) arg_data="$2";;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
shift 2
done
box_src_tmp_dir=$(mktemp -dt box-src-XXXXXX)
echo "Downloading box code from ${arg_source_tarball_url} to ${box_src_tmp_dir}"
while true; do
if $curl -L "${arg_source_tarball_url}" | tar -zxf - -C "${box_src_tmp_dir}"; then break; fi
echo "Failed to download source tarball, trying again"
sleep 5
done
while true; do
# for reasons unknown, the dtrace package fails to build the first time, but rebuilding a second time works
if cd "${box_src_tmp_dir}" && npm rebuild; then break; fi
echo "Failed to rebuild, trying again"
sleep 5
done
if [[ "${is_update}" == "yes" ]]; then
echo "Setting up update splash screen"
"${box_src_tmp_dir}/setup/splashpage.sh" --data "${arg_data}" # show splash from new code
${BOX_SRC_DIR}/setup/stop.sh # stop the old code
fi
# switch over to the new code
rm -rf "${BOX_SRC_DIR}"
mv "${box_src_tmp_dir}" "${BOX_SRC_DIR}"
chown -R yellowtent.yellowtent "${BOX_SRC_DIR}"
# create a start file for testing. %q escapes args
(echo -e "#!/bin/bash\n"; printf "%q " "${BOX_SRC_DIR}/setup/start.sh" --data "${arg_data}") > /home/yellowtent/setup_start.sh
chmod +x /home/yellowtent/setup_start.sh
echo "Calling box setup script"
"${BOX_SRC_DIR}/setup/start.sh" --data "${arg_data}"

installer/src/server.js (new executable file, 144 lines)

@@ -0,0 +1,144 @@
#!/usr/bin/env node
/* jslint node: true */
'use strict';
var assert = require('assert'),
async = require('async'),
debug = require('debug')('installer:server'),
express = require('express'),
fs = require('fs'),
http = require('http'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess,
installer = require('./installer.js'),
json = require('body-parser').json,
lastMile = require('connect-lastmile'),
morgan = require('morgan'),
superagent = require('superagent');
exports = module.exports = {
start: start,
stop: stop
};
var PROVISION_CONFIG_FILE = '/root/provision.json';
var CLOUDRON_CONFIG_FILE = '/home/yellowtent/configs/cloudron.conf';
var gHttpServer = null; // internal server handling update requests
function provisionDigitalOcean(callback) {
if (fs.existsSync(CLOUDRON_CONFIG_FILE)) return callback(null); // already provisioned
superagent.get('http://169.254.169.254/metadata/v1.json').end(function (error, result) {
if (error || result.statusCode !== 200) {
console.error('Error getting metadata', error);
return callback(new Error('Error getting metadata'));
}
var userData = JSON.parse(result.body.user_data);
installer.provision(userData, callback);
});
}
function provisionLocal(callback) {
if (fs.existsSync(CLOUDRON_CONFIG_FILE)) return callback(null); // already provisioned
if (!fs.existsSync(PROVISION_CONFIG_FILE)) {
console.error('No provisioning data found at %s', PROVISION_CONFIG_FILE);
return callback(new Error('No provisioning data found'));
}
var userData = require(PROVISION_CONFIG_FILE);
installer.provision(userData, callback);
}
function update(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.sourceTarballUrl || typeof req.body.sourceTarballUrl !== 'string') return next(new HttpError(400, 'No sourceTarballUrl provided'));
if (!req.body.data || typeof req.body.data !== 'object') return next(new HttpError(400, 'No data provided'));
debug('provision: received from box %j', req.body);
installer.provision(req.body, function (error) {
if (error) console.error(error);
});
next(new HttpSuccess(202, { }));
}
function startUpdateServer(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Starting update server');
var app = express();
var router = new express.Router();
if (process.env.NODE_ENV !== 'test') app.use(morgan('dev', { immediate: false }));
app.use(json({ strict: true }))
.use(router)
.use(lastMile());
router.post('/api/v1/installer/update', update);
gHttpServer = http.createServer(app);
gHttpServer.on('error', console.error);
gHttpServer.listen(2020, '127.0.0.1', callback);
}
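Since the update server binds to 127.0.0.1:2020 and `update()` requires a string `sourceTarballUrl` plus an object `data`, a request against it would look like the following sketch (the tarball and versions URLs are placeholders; the curl line is left commented out because it needs the server running):

```shell
#!/bin/bash
# Hypothetical payload for the loopback-only update endpoint; the field names
# match what update() validates: sourceTarballUrl (string) and data (object).
endpoint="http://127.0.0.1:2020/api/v1/installer/update"
payload='{"sourceTarballUrl":"https://example.com/box.tar.gz","data":{"boxVersionsUrl":"https://example.com/versions.json"}}'
# curl --fail -X POST -H 'Content-Type: application/json' -d "${payload}" "${endpoint}"
echo "POST ${endpoint}"
```

A well-formed request returns 202 immediately; the provisioning itself runs asynchronously after the response is sent.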
function stopUpdateServer(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Stopping update server');
if (!gHttpServer) return callback(null);
gHttpServer.close(callback);
gHttpServer = null;
}
function start(callback) {
assert.strictEqual(typeof callback, 'function');
var actions;
if (process.env.PROVISION === 'local') {
debug('Starting Installer in selfhost mode');
actions = [
startUpdateServer,
provisionLocal
];
} else { // current fallback, should be 'digitalocean' eventually, see initializeBaseUbuntuImage.sh
debug('Starting Installer in managed mode');
actions = [
startUpdateServer,
provisionDigitalOcean
];
}
async.series(actions, callback);
}
function stop(callback) {
assert.strictEqual(typeof callback, 'function');
async.series([
stopUpdateServer
], callback);
}
if (require.main === module) {
start(function (error) {
if (error) console.error(error);
});
}


@@ -0,0 +1,179 @@
/* jslint node:true */
/* global it:false */
/* global describe:false */
/* global before:false */
/* global after:false */
'use strict';
var expect = require('expect.js'),
fs = require('fs'),
path = require('path'),
nock = require('nock'),
os = require('os'),
request = require('superagent'),
server = require('../server.js'),
installer = require('../installer.js'),
_ = require('lodash');
var EXTERNAL_SERVER_URL = 'https://localhost:4443';
var INTERNAL_SERVER_URL = 'http://localhost:2020';
var APPSERVER_ORIGIN = 'http://appserver';
var FQDN = os.hostname();
describe('Server', function () {
this.timeout(5000);
before(function (done) {
var user_data = JSON.stringify({ apiServerOrigin: APPSERVER_ORIGIN }); // user_data is a string
var scope = nock('http://169.254.169.254')
.persist()
.get('/metadata/v1.json')
.reply(200, JSON.stringify({ user_data: user_data }), { 'Content-Type': 'application/json' });
done();
});
after(function (done) {
nock.cleanAll();
done();
});
describe('start and stop', function () {
it('starts', function (done) {
server.start(done);
});
it('stops', function (done) {
server.stop(done);
});
});
describe('update (internal server)', function () {
before(function (done) {
server.start(done);
});
after(function (done) {
server.stop(done);
});
it('does not respond to provision', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/provision').send({ }).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(404);
done();
});
});
it('does not respond to restore', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/restore').send({ }).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(404);
done();
});
});
var data = {
sourceTarballUrl: "https://foo.tar.gz",
data: {
token: 'sometoken',
apiServerOrigin: APPSERVER_ORIGIN,
webServerOrigin: 'https://somethingelse.com',
fqdn: 'www.something.com',
tlsKey: 'key',
tlsCert: 'cert',
boxVersionsUrl: 'https://versions.json',
version: '0.1'
}
};
Object.keys(data).forEach(function (key) {
it('fails due to missing ' + key, function (done) {
var dataCopy = _.merge({ }, data);
delete dataCopy[key];
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/update').send(dataCopy).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(400);
done();
});
});
});
it('succeeds', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/update').send(data).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(202);
done();
});
});
});
describe('ensureVersion', function () {
before(function () {
process.env.NODE_ENV = undefined;
});
after(function () {
process.env.NODE_ENV = 'test';
});
it ('fails without data', function (done) {
installer._ensureVersion({}, function (error) {
expect(error).to.be.an(Error);
done();
});
});
it ('fails without boxVersionsUrl', function (done) {
installer._ensureVersion({ data: {}}, function (error) {
expect(error).to.be.an(Error);
done();
});
});
it ('succeeds with sourceTarballUrl', function (done) {
var data = {
sourceTarballUrl: 'sometarballurl',
data: {
boxVersionsUrl: 'http://foobar/versions.json'
}
};
installer._ensureVersion(data, function (error, result) {
expect(error).to.equal(null);
expect(result).to.eql(data);
done();
});
});
it ('succeeds without sourceTarballUrl', function (done) {
var versions = {
'0.1.0': {
sourceTarballUrl: 'sometarballurl1'
},
'0.2.0': {
sourceTarballUrl: 'sometarballurl2'
}
};
var scope = nock('http://foobar')
.get('/versions.json')
.reply(200, JSON.stringify(versions), { 'Content-Type': 'application/json' });
var data = {
data: {
boxVersionsUrl: 'http://foobar/versions.json'
}
};
installer._ensureVersion(data, function (error, result) {
expect(error).to.equal(null);
expect(result.sourceTarballUrl).to.equal(versions['0.2.0'].sourceTarballUrl);
expect(result.data.boxVersionsUrl).to.equal(data.data.boxVersionsUrl);
done();
});
});
});
});

installer/systemd/box-setup.sh (new executable file, 65 lines)

@@ -0,0 +1,65 @@
#!/bin/bash
set -eu -o pipefail
readonly USER_HOME="/home/yellowtent"
readonly APPS_SWAP_FILE="/apps.swap"
readonly BACKUP_SWAP_FILE="/backup.swap" # used when doing app backups
readonly USER_DATA_FILE="/root/user_data.img"
readonly USER_DATA_DIR="/home/yellowtent/data"
# detect device
if [[ -b "/dev/vda1" ]]; then
disk_device="/dev/vda1"
fi
if [[ -b "/dev/xvda1" ]]; then
disk_device="/dev/xvda1"
fi
# all sizes are in mb
readonly physical_memory=$(free -m | awk '/Mem:/ { print $2 }')
readonly swap_size="${physical_memory}"
readonly app_count=$((${physical_memory} / 200)) # estimated app count
readonly disk_size_gb=$(fdisk -l ${disk_device} | grep "Disk ${disk_device}" | awk '{ print $3 }')
readonly disk_size=$((disk_size_gb * 1024))
readonly backup_swap_size=1024
# readonly system_size=5120 # 5 gigs for system libs, installer, box code and tmp
readonly system_size=10240 # 10 gigs for system libs, apps images, installer, box code and tmp
readonly ext4_reserved=$((disk_size * 5 / 100)) # this can be changed using tune2fs -m <percent> /dev/vda1
echo "Disk device: ${disk_device}"
echo "Physical memory: ${physical_memory}"
echo "Estimated app count: ${app_count}"
echo "Disk size: ${disk_size}"
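The sizing rules above can be re-derived for a hypothetical 2 GB RAM / 40 GB disk machine: swap mirrors physical memory, one app is budgeted per ~200 MB of RAM, and 5% of the disk is assumed reserved by ext4 (the tune2fs default):

```shell
#!/bin/bash
# Worked example of box-setup.sh's sizing arithmetic (all sizes in MB).
physical_memory=2048                 # hypothetical 2 GB droplet
disk_size=$((40 * 1024))             # hypothetical 40 GB disk
swap_size="${physical_memory}"       # swap mirrors RAM
app_count=$((physical_memory / 200)) # ~200 MB budgeted per app
ext4_reserved=$((disk_size * 5 / 100))
echo "swap=${swap_size}M apps=${app_count} reserved=${ext4_reserved}M"
```

With these inputs the script would allocate a 2048M apps swap file, estimate room for 10 apps, and set aside 2048M for the ext4 reserve.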
# Allocate two sets of swap files - one for general app usage and another for backup
# The backup swap is setup for swap on the fly by the backup scripts
if [[ ! -f "${APPS_SWAP_FILE}" ]]; then
echo "Creating Apps swap file of size ${swap_size}M"
fallocate -l "${swap_size}m" "${APPS_SWAP_FILE}"
chmod 600 "${APPS_SWAP_FILE}"
mkswap "${APPS_SWAP_FILE}"
swapon "${APPS_SWAP_FILE}"
echo "${APPS_SWAP_FILE} none swap sw 0 0" >> /etc/fstab
else
echo "Apps Swap file already exists"
fi
if [[ ! -f "${BACKUP_SWAP_FILE}" ]]; then
echo "Creating Backup swap file of size ${backup_swap_size}M"
fallocate -l "${backup_swap_size}m" "${BACKUP_SWAP_FILE}"
chmod 600 "${BACKUP_SWAP_FILE}"
mkswap "${BACKUP_SWAP_FILE}"
else
echo "Backups Swap file already exists"
fi
echo "Resizing data volume"
home_data_size=$((disk_size - system_size - swap_size - backup_swap_size - ext4_reserved))
echo "Growing btrfs user data volume to ${home_data_size}M"
umount "${USER_DATA_DIR}"
fallocate -l "${home_data_size}m" "${USER_DATA_FILE}" # does not overwrite existing data
mount "${USER_DATA_FILE}"
btrfs filesystem resize max "${USER_DATA_DIR}"


@@ -1,74 +0,0 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
require('debug').formatArgs = function formatArgs() {
arguments[0] = this.namespace + ' ' + arguments[0];
return arguments;
};
var assert = require('assert'),
debug = require('debug')('box:janitor'),
async = require('async'),
tokendb = require('./src/tokendb.js'),
authcodedb = require('./src/authcodedb.js'),
database = require('./src/database.js');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
async.series([
database.initialize
], callback);
}
function cleanupExpiredTokens(callback) {
assert.strictEqual(typeof callback, 'function');
tokendb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired tokens.', result);
callback(null);
});
}
function cleanupExpiredAuthCodes(callback) {
assert.strictEqual(typeof callback, 'function');
authcodedb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired authcodes.', result);
callback(null);
});
}
function run() {
cleanupExpiredTokens(function (error) {
if (error) console.error(error);
cleanupExpiredAuthCodes(function (error) {
if (error) console.error(error);
process.exit(0);
});
});
}
if (require.main === module) {
initialize(function (error) {
if (error) {
console.error('janitor task exiting with error', error);
process.exit(1);
}
run();
});
}


@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN oauthProxy BOOLEAN DEFAULT 0', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN oauthProxy', function (error) {
if (error) console.error(error);
callback(error);
});
};


@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'DELETE FROM clients'),
db.runSql.bind(db, 'ALTER TABLE clients ADD COLUMN type VARCHAR(16) NOT NULL'),
], callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE clients DROP COLUMN type', function (error) {
if (error) console.error(error);
callback(error);
});
};


@@ -0,0 +1,17 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE accessRestriction accessRestrictionJson VARCHAR(2048)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE accessRestrictionJson accessRestriction VARCHAR(2048)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};


@@ -0,0 +1,16 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY manifestJson TEXT', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps MODIFY manifestJson VARCHAR(2048)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};


@@ -0,0 +1,19 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps MODIFY accessRestrictionJson TEXT'),
db.runSql.bind(db, 'ALTER TABLE apps MODIFY lastBackupConfigJson TEXT'),
db.runSql.bind(db, 'ALTER TABLE apps MODIFY oldConfigJson TEXT')
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE apps MODIFY accessRestrictionJson VARCHAR(2048)'),
db.runSql.bind(db, 'ALTER TABLE apps MODIFY lastBackupConfigJson VARCHAR(2048)'),
db.runSql.bind(db, 'ALTER TABLE apps MODIFY oldConfigJson VARCHAR(2048)')
], callback);
};


@@ -0,0 +1,15 @@
dbm = dbm || require('db-migrate');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN displayName VARCHAR(512) DEFAULT ""', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN displayName', function (error) {
if (error) console.error(error);
callback(error);
});
};


@@ -18,6 +18,7 @@ CREATE TABLE IF NOT EXISTS users(
createdAt VARCHAR(512) NOT NULL,
modifiedAt VARCHAR(512) NOT NULL,
admin INTEGER NOT NULL,
displayName VARCHAR(512) DEFAULT '',
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS tokens(
@@ -29,8 +30,9 @@ CREATE TABLE IF NOT EXISTS tokens(
PRIMARY KEY(accessToken));
CREATE TABLE IF NOT EXISTS clients(
id VARCHAR(128) NOT NULL UNIQUE,
id VARCHAR(128) NOT NULL UNIQUE, // prefixed with cid- to identify token easily in auth routes
appId VARCHAR(128) NOT NULL,
type VARCHAR(16) NOT NULL,
clientSecret VARCHAR(512) NOT NULL,
redirectURI VARCHAR(512) NOT NULL,
scope VARCHAR(512) NOT NULL,
@@ -44,17 +46,18 @@ CREATE TABLE IF NOT EXISTS apps(
runState VARCHAR(512),
health VARCHAR(128),
containerId VARCHAR(128),
manifestJson VARCHAR(2048),
manifestJson TEXT,
httpPort INTEGER, // this is the nginx proxy port and not manifest.httpPort
location VARCHAR(128) NOT NULL UNIQUE,
dnsRecordId VARCHAR(512),
accessRestriction VARCHAR(512),
accessRestrictionJson TEXT,
oauthProxy BOOLEAN DEFAULT 0,
createdAt TIMESTAMP(2) NOT NULL DEFAULT CURRENT_TIMESTAMP,
lastBackupId VARCHAR(128),
lastBackupConfigJson VARCHAR(2048), // used for appstore and non-appstore installs. it's here so it's easy to do REST validation
lastBackupConfigJson TEXT, // used for appstore and non-appstore installs. it's here so it's easy to do REST validation
oldConfigJson VARCHAR(2048), // used to pass old config for apptask
oldConfigJson TEXT, // used to pass old config for apptask
PRIMARY KEY(id));

npm-shrinkwrap.json (generated, 1585 lines)

File diff suppressed because it is too large.


@@ -10,13 +10,15 @@
"type": "git"
},
"engines": [
"node >= 0.12.0"
"node >=4.0.0 <=4.1.1"
],
"dependencies": {
"async": "^1.2.1",
"attempt": "^1.0.1",
"aws-sdk": "^2.1.46",
"body-parser": "^1.13.1",
"cloudron-manifestformat": "^1.7.0",
"bytes": "^2.1.0",
"cloudron-manifestformat": "^2.2.0",
"connect-ensure-login": "^0.1.1",
"connect-lastmile": "0.0.13",
"connect-timeout": "^1.5.0",
@@ -40,6 +42,7 @@
"multiparty": "^4.1.2",
"mysql": "^2.7.0",
"native-dns": "^0.7.0",
"node-df": "^0.1.1",
"node-uuid": "^1.4.3",
"nodemailer": "^1.3.0",
"nodemailer-smtp-transport": "^1.0.3",
@@ -50,22 +53,25 @@
"passport-http-bearer": "^1.0.1",
"passport-local": "^1.0.0",
"passport-oauth2-client-password": "^0.1.2",
"password-generator": "^1.0.0",
"password-generator": "^2.0.2",
"proxy-middleware": "^0.13.0",
"safetydance": "0.0.19",
"safetydance": "^0.1.0",
"semver": "^4.3.6",
"serve-favicon": "^2.2.0",
"split": "^1.0.0",
"superagent": "~0.21.0",
"supererror": "^0.7.0",
"superagent": "^1.5.0",
"supererror": "^0.7.1",
"tail-stream": "https://registry.npmjs.org/tail-stream/-/tail-stream-0.2.1.tgz",
"underscore": "^1.7.0",
"ursa": "^0.9.1",
"valid-url": "^1.0.9",
"validator": "^3.30.0"
"validator": "^4.4.0",
"x509": "^0.2.2"
},
"devDependencies": {
"apidoc": "*",
"bootstrap-sass": "^3.3.3",
"deep-extend": "^0.4.1",
"del": "^1.1.1",
"expect.js": "*",
"gulp": "^3.8.11",
@@ -81,9 +87,10 @@
"istanbul": "*",
"js2xmlparser": "^1.0.0",
"mocha": "*",
"nock": "^2.6.0",
"nock": "^3.4.0",
"node-sass": "^3.0.0-alpha.0",
"redis": "^0.12.1",
"redis": "^2.4.2",
"request": "^2.65.0",
"sinon": "^1.12.2",
"yargs": "^3.15.0"
},

scripts/createReleaseTarball (new executable file, 123 lines)

@@ -0,0 +1,123 @@
#!/bin/bash
set -eu
assertNotEmpty() {
: "${!1:? "$1 is not set."}"
}
# Only GNU getopt supports long options. OS X comes bundled with the BSD getopt
# brew install gnu-getopt to get the GNU getopt on OS X
[[ $(uname -s) == "Darwin" ]] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
args=$(${GNU_GETOPT} -o "" -l "revision:,output:,publish,no-upload" -n "$0" -- "$@")
eval set -- "${args}"
readonly RELEASE_TOOL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../release" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
delete_bundle="yes"
commitish="HEAD"
publish="no"
upload="yes"
bundle_file=""
while true; do
case "$1" in
--revision) commitish="$2"; shift 2;;
--output) bundle_file="$2"; delete_bundle="no"; shift 2;;
--no-upload) upload="no"; shift;;
--publish) publish="yes"; shift;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
if [[ "${upload}" == "no" && "${publish}" == "yes" ]]; then
echo "Cannot publish without uploading"
exit 1
fi
readonly TMPDIR=${TMPDIR:-/tmp} # TMPDIR is not always set (e.g. on Linux Mint)
assertNotEmpty AWS_DEV_ACCESS_KEY
assertNotEmpty AWS_DEV_SECRET_KEY
if ! $(cd "${SOURCE_DIR}" && git diff --exit-code >/dev/null); then
echo "You have local changes, stash or commit them to proceed"
exit 1
fi
if [[ "$(node --version)" != "v4.1.1" ]]; then
echo "This script requires node 4.1.1"
exit 1
fi
version=$(cd "${SOURCE_DIR}" && git rev-parse "${commitish}")
bundle_dir=$(mktemp -d -t box 2>/dev/null || mktemp -d box-XXXXXXXXXX --tmpdir=$TMPDIR)
[[ -z "$bundle_file" ]] && bundle_file="${TMPDIR}/box-${version}.tar.gz"
chmod "o+rx,g+rx" "${bundle_dir}" # otherwise the extracted tarball directory won't be readable by others/group
echo "Checking out code [${version}] into ${bundle_dir}"
(cd "${SOURCE_DIR}" && git archive --format=tar ${version} | (cd "${bundle_dir}" && tar xf -))
if diff "${TMPDIR}/boxtarball.cache/npm-shrinkwrap.json.all" "${bundle_dir}/npm-shrinkwrap.json" >/dev/null 2>&1; then
echo "Reusing dev modules from cache"
cp -r "${TMPDIR}/boxtarball.cache/node_modules-all/." "${bundle_dir}/node_modules"
else
echo "Installing modules with dev dependencies"
(cd "${bundle_dir}" && npm install)
echo "Caching dev dependencies"
mkdir -p "${TMPDIR}/boxtarball.cache/node_modules-all"
rsync -a --delete "${bundle_dir}/node_modules/" "${TMPDIR}/boxtarball.cache/node_modules-all/"
cp "${bundle_dir}/npm-shrinkwrap.json" "${TMPDIR}/boxtarball.cache/npm-shrinkwrap.json.all"
fi
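The caching scheme above keys the `node_modules` cache on the exact contents of `npm-shrinkwrap.json`: the cached tree is reused only when the shrinkwrap files are byte-identical. A minimal, self-contained reproduction of that check (the directory layout is invented for the demo):

```shell
#!/bin/bash
# Minimal reproduction of the shrinkwrap-keyed cache check: node_modules
# would be reused only when the cached npm-shrinkwrap.json matches exactly.
workdir=$(mktemp -d)
cache="${workdir}/cache"; bundle="${workdir}/bundle"
mkdir -p "${cache}" "${bundle}"
echo '{"v":1}' > "${bundle}/npm-shrinkwrap.json"
echo '{"v":1}' > "${cache}/npm-shrinkwrap.json"   # cache is up to date
if diff "${cache}/npm-shrinkwrap.json" "${bundle}/npm-shrinkwrap.json" >/dev/null 2>&1; then
    verdict="reuse cache"
else
    verdict="reinstall"
fi
echo "${verdict}"
rm -rf "${workdir}"
```

Because `diff` also fails when the cache file is simply absent, the first run naturally falls through to a full `npm install` and then populates the cache.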
echo "Building webadmin assets"
(cd "${bundle_dir}" && gulp)
echo "Remove intermediate files required at build-time only"
rm -rf "${bundle_dir}/node_modules/"
rm -rf "${bundle_dir}/webadmin/src"
rm -rf "${bundle_dir}/gulpfile.js"
if diff "${TMPDIR}/boxtarball.cache/npm-shrinkwrap.json.prod" "${bundle_dir}/npm-shrinkwrap.json" >/dev/null 2>&1; then
echo "Reusing prod modules from cache"
cp -r "${TMPDIR}/boxtarball.cache/node_modules-prod/." "${bundle_dir}/node_modules"
else
echo "Installing modules for production"
(cd "${bundle_dir}" && npm install --production --no-optional)
echo "Caching prod dependencies"
mkdir -p "${TMPDIR}/boxtarball.cache/node_modules-prod"
rsync -a --delete "${bundle_dir}/node_modules/" "${TMPDIR}/boxtarball.cache/node_modules-prod/"
cp "${bundle_dir}/npm-shrinkwrap.json" "${TMPDIR}/boxtarball.cache/npm-shrinkwrap.json.prod"
fi
echo "Create final tarball"
(cd "${bundle_dir}" && tar czf "${bundle_file}" .)
echo "Cleaning up ${bundle_dir}"
rm -rf "${bundle_dir}"
if [[ "${upload}" == "yes" ]]; then
echo "Uploading bundle to S3"
# That special header is needed to allow access with signed URLs created with AWS credentials different from the ones the file was uploaded with
s3cmd --multipart-chunk-size-mb=5 --ssl --acl-public --access_key="${AWS_DEV_ACCESS_KEY}" --secret_key="${AWS_DEV_SECRET_KEY}" --no-mime-magic put "${bundle_file}" "s3://dev-cloudron-releases/box-${version}.tar.gz"
versions_file_url="https://dev-cloudron-releases.s3.amazonaws.com/box-${version}.tar.gz"
echo "The URL for the versions file is: ${versions_file_url}"
if [[ "${publish}" == "yes" ]]; then
echo "Publishing to dev"
${RELEASE_TOOL_DIR}/release create --env dev --code "${versions_file_url}"
fi
fi
if [[ "${delete_bundle}" == "no" ]]; then
echo "Tarball preserved at ${bundle_file}"
else
rm "${bundle_file}"
fi


@@ -3,15 +3,21 @@
# If you change the infra version, be sure to put a warning
# in the change log
INFRA_VERSION=14
INFRA_VERSION=21
# WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
# These constants are used in the installer script as well
BASE_IMAGE=cloudron/base:0.5.1
MYSQL_IMAGE=cloudron/mysql:0.5.0
POSTGRESQL_IMAGE=cloudron/postgresql:0.5.0
MONGODB_IMAGE=cloudron/mongodb:0.5.0
REDIS_IMAGE=cloudron/redis:0.5.0 # if you change this, fix src/addons.js as well
MAIL_IMAGE=cloudron/mail:0.5.0
GRAPHITE_IMAGE=cloudron/graphite:0.4.0
BASE_IMAGE=cloudron/base:0.8.0
MYSQL_IMAGE=cloudron/mysql:0.8.0
POSTGRESQL_IMAGE=cloudron/postgresql:0.8.0
MONGODB_IMAGE=cloudron/mongodb:0.8.0
REDIS_IMAGE=cloudron/redis:0.8.0 # if you change this, fix src/addons.js as well
MAIL_IMAGE=cloudron/mail:0.9.0
GRAPHITE_IMAGE=cloudron/graphite:0.8.0
MYSQL_REPO=cloudron/mysql
POSTGRESQL_REPO=cloudron/postgresql
MONGODB_REPO=cloudron/mongodb
REDIS_REPO=cloudron/redis # if you change this, fix src/addons.js as well
MAIL_REPO=cloudron/mail
GRAPHITE_REPO=cloudron/graphite


@@ -11,13 +11,16 @@ arg_is_custom_domain="false"
arg_restore_key=""
arg_restore_url=""
arg_retire="false"
arg_tls_config=""
arg_tls_cert=""
arg_tls_key=""
arg_token=""
arg_version=""
arg_web_server_origin=""
arg_backup_key=""
arg_aws=""
arg_backup_config=""
arg_dns_config=""
arg_update_config=""
arg_provider=""
args=$(getopt -o "" -l "data:,retire" -n "$0" -- "$@")
eval set -- "${args}"
@@ -30,24 +33,32 @@ while true; do
;;
--data)
# only read mandatory non-empty parameters here
read -r arg_api_server_origin arg_web_server_origin arg_fqdn arg_token arg_is_custom_domain arg_box_versions_url arg_version <<EOF
$(echo "$2" | $json apiServerOrigin webServerOrigin fqdn token isCustomDomain boxVersionsUrl version | tr '\n' ' ')
read -r arg_api_server_origin arg_web_server_origin arg_fqdn arg_is_custom_domain arg_box_versions_url arg_version <<EOF
$(echo "$2" | $json apiServerOrigin webServerOrigin fqdn isCustomDomain boxVersionsUrl version | tr '\n' ' ')
EOF
# read possibly empty parameters here
arg_tls_cert=$(echo "$2" | $json tlsCert)
arg_tls_key=$(echo "$2" | $json tlsKey)
arg_token=$(echo "$2" | $json token)
arg_provider=$(echo "$2" | $json provider)
arg_restore_url=$(echo "$2" | $json restoreUrl)
arg_tls_config=$(echo "$2" | $json tlsConfig)
[[ "${arg_tls_config}" == "null" ]] && arg_tls_config=""
arg_restore_url=$(echo "$2" | $json restore.url)
[[ "${arg_restore_url}" == "null" ]] && arg_restore_url=""
arg_restore_key=$(echo "$2" | $json restoreKey)
arg_restore_key=$(echo "$2" | $json restore.key)
[[ "${arg_restore_key}" == "null" ]] && arg_restore_key=""
arg_backup_key=$(echo "$2" | $json backupKey)
[[ "${arg_backup_key}" == "null" ]] && arg_backup_key=""
arg_backup_config=$(echo "$2" | $json backupConfig)
[[ "${arg_backup_config}" == "null" ]] && arg_backup_config=""
arg_aws=$(echo "$2" | $json aws)
[[ "${arg_aws}" == "null" ]] && arg_aws=""
arg_dns_config=$(echo "$2" | $json dnsConfig)
[[ "${arg_dns_config}" == "null" ]] && arg_dns_config=""
arg_update_config=$(echo "$2" | $json updateConfig)
[[ "${arg_update_config}" == "null" ]] && arg_update_config=""
shift 2
;;
@@ -66,5 +77,7 @@ echo "restore url: ${arg_restore_url}"
echo "tls cert: ${arg_tls_cert}"
echo "tls key: ${arg_tls_key}"
echo "token: ${arg_token}"
echo "tlsConfig: ${arg_tls_config}"
echo "version: ${arg_version}"
echo "web server: ${arg_web_server_origin}"
echo "provider: ${arg_provider}"


@@ -14,12 +14,13 @@ rm -rf "${CONFIG_DIR}"
sudo -u yellowtent mkdir "${CONFIG_DIR}"
########## systemd
rm -f /etc/systemd/system/janitor.*
cp -r "${container_files}/systemd/." /etc/systemd/system/
systemctl daemon-reload
systemctl enable cloudron.target
########## sudoers
rm /etc/sudoers.d/*
rm -f /etc/sudoers.d/yellowtent
cp "${container_files}/sudoers" /etc/sudoers.d/yellowtent
########## collectd


@@ -1,3 +1,6 @@
# sudo logging breaks journalctl output with very long urls (systemd bug)
Defaults !syslog
Defaults!/home/yellowtent/box/src/scripts/createappdir.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/createappdir.sh
@@ -27,3 +30,7 @@ yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/backupswap.sh
Defaults!/home/yellowtent/box/src/scripts/collectlogs.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/collectlogs.sh
Defaults!/home/yellowtent/box/src/scripts/retire.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/retire.sh


@@ -2,6 +2,8 @@
Description=Cloudron Admin
OnFailure=crashnotifier@%n.service
StopWhenUnneeded=true
; journald crashes result in an EPIPE in node. Cannot ignore it, as that results in loss of logs.
BindsTo=systemd-journald.service
[Service]
Type=idle
@@ -9,9 +11,12 @@ WorkingDirectory=/home/yellowtent/box
Restart=always
ExecStart=/usr/bin/node --max_old_space_size=150 /home/yellowtent/box/box.js
Environment="HOME=/home/yellowtent" "USER=yellowtent" "DEBUG=box*,connect-lastmile" "BOX_ENV=cloudron" "NODE_ENV=production"
KillMode=process
; kill apptask processes as well
KillMode=control-group
User=yellowtent
Group=yellowtent
MemoryLimit=200M
TimeoutStopSec=5s
StartLimitInterval=1
StartLimitBurst=60


@@ -2,8 +2,8 @@
Description=Cloudron Smart Cloud
Documentation=https://cloudron.io/documentation.html
StopWhenUnneeded=true
Requires=box.service janitor.timer
After=box.service janitor.timer
Requires=box.service
After=box.service
# AllowIsolate=yes
[Install]


@@ -1,15 +0,0 @@
[Unit]
Description=Cloudron Janitor
OnFailure=crashnotifier@%n.service
[Service]
Type=simple
WorkingDirectory=/home/yellowtent/box
Restart=no
ExecStart=/usr/bin/node /home/yellowtent/box/janitor.js
Environment="HOME=/home/yellowtent" "USER=yellowtent" "DEBUG=box*,connect-lastmile" "BOX_ENV=cloudron" "NODE_ENV=production"
KillMode=process
User=yellowtent
Group=yellowtent
MemoryLimit=50M
WatchdogSec=30


@@ -1,10 +0,0 @@
[Unit]
Description=Cloudron Janitor
StopWhenUnneeded=true
[Timer]
# this activates it immediately
OnBootSec=0
OnUnitActiveSec=30min
Unit=janitor.service


@@ -29,10 +29,10 @@ infra_version="none"
if [[ "${arg_retire}" == "true" || "${infra_version}" != "${INFRA_VERSION}" ]]; then
rm -f ${DATA_DIR}/nginx/applications/*
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
else
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
fi
echo '{ "update": { "percent": "10", "message": "Updating cloudron software" }, "backup": null }' > "${SETUP_WEBSITE_DIR}/progress.json"


@@ -38,7 +38,9 @@ set_progress "10" "Ensuring directories"
# keep these in sync with paths.js
[[ "${is_update}" == "false" ]] && btrfs subvolume create "${DATA_DIR}/box"
mkdir -p "${DATA_DIR}/box/appicons"
mkdir -p "${DATA_DIR}/box/certs"
mkdir -p "${DATA_DIR}/box/mail"
mkdir -p "${DATA_DIR}/box/acme" # acme keys
mkdir -p "${DATA_DIR}/graphite"
mkdir -p "${DATA_DIR}/mysql"
@@ -47,6 +49,7 @@ mkdir -p "${DATA_DIR}/mongodb"
mkdir -p "${DATA_DIR}/snapshots"
mkdir -p "${DATA_DIR}/addons"
mkdir -p "${DATA_DIR}/collectd/collectd.conf.d"
mkdir -p "${DATA_DIR}/acme" # acme challenges
# bookkeep the version as part of data
echo "{ \"version\": \"${arg_version}\", \"boxVersionsUrl\": \"${arg_box_versions_url}\" }" > "${DATA_DIR}/box/version"
@@ -89,31 +92,35 @@ EOF
set_progress "28" "Setup collectd"
cp "${script_dir}/start/collectd.conf" "${DATA_DIR}/collectd/collectd.conf"
# collectd 5.4.1 has some bug where we simply cannot get it to create df-vda1
mkdir -p "${DATA_DIR}/graphite/whisper/collectd/localhost/"
vda1_id=$(blkid -s UUID -o value /dev/vda1)
ln -sfF "df-disk_by-uuid_${vda1_id}" "${DATA_DIR}/graphite/whisper/collectd/localhost/df-vda1"
service collectd restart
set_progress "30" "Setup nginx"
# setup naked domain to use admin by default. app restoration will overwrite this config
mkdir -p "${DATA_DIR}/nginx/applications"
cp "${script_dir}/start/nginx/nginx.conf" "${DATA_DIR}/nginx/nginx.conf"
cp "${script_dir}/start/nginx/mime.types" "${DATA_DIR}/nginx/mime.types"
# generate the main nginx config file
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/nginx.ejs" \
-O "{ \"sourceDir\": \"${BOX_SRC_DIR}\" }" > "${DATA_DIR}/nginx/nginx.conf"
# generate these for update code paths as well to overwrite splash
admin_cert_file="${DATA_DIR}/nginx/cert/host.cert"
admin_key_file="${DATA_DIR}/nginx/cert/host.key"
if [[ -f "${DATA_DIR}/box/certs/${admin_fqdn}.cert" && -f "${DATA_DIR}/box/certs/${admin_fqdn}.key" ]]; then
admin_cert_file="${DATA_DIR}/box/certs/${admin_fqdn}.cert"
admin_key_file="${DATA_DIR}/box/certs/${admin_fqdn}.key"
fi
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"admin\", \"sourceDir\": \"${BOX_SRC_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"admin\", \"sourceDir\": \"${BOX_SRC_DIR}\", \"certFilePath\": \"${admin_cert_file}\", \"keyFilePath\": \"${admin_key_file}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
mkdir -p "${DATA_DIR}/nginx/cert"
echo "${arg_tls_cert}" > ${DATA_DIR}/nginx/cert/host.cert
echo "${arg_tls_key}" > ${DATA_DIR}/nginx/cert/host.key
if [[ -f "${DATA_DIR}/box/certs/host.cert" && -f "${DATA_DIR}/box/certs/host.key" ]]; then
cp "${DATA_DIR}/box/certs/host.cert" "${DATA_DIR}/nginx/cert/host.cert"
cp "${DATA_DIR}/box/certs/host.key" "${DATA_DIR}/nginx/cert/host.key"
else
echo "${arg_tls_cert}" > "${DATA_DIR}/nginx/cert/host.cert"
echo "${arg_tls_key}" > "${DATA_DIR}/nginx/cert/host.key"
fi
set_progress "33" "Changing ownership"
chown "${USER}:${USER}" -R "${DATA_DIR}/box" "${DATA_DIR}/nginx" "${DATA_DIR}/collectd" "${DATA_DIR}/addons"
chown "${USER}:${USER}" -R "${DATA_DIR}/box" "${DATA_DIR}/nginx" "${DATA_DIR}/collectd" "${DATA_DIR}/addons" "${DATA_DIR}/acme"
chown "${USER}:${USER}" "${DATA_DIR}"
set_progress "40" "Setting up infra"
${script_dir}/start/setup_infra.sh "${arg_fqdn}"
@@ -122,7 +129,6 @@ set_progress "65" "Creating cloudron.conf"
sudo -u yellowtent -H bash <<EOF
set -eu
echo "Creating cloudron.conf"
# note that arg_aws is a javascript object and intentionally unquoted below
cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
{
"version": "${arg_version}",
@@ -133,15 +139,14 @@ cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
"isCustomDomain": ${arg_is_custom_domain},
"boxVersionsUrl": "${arg_box_versions_url}",
"adminEmail": "admin@${arg_fqdn}",
"provider": "${arg_provider}",
"database": {
"hostname": "localhost",
"username": "root",
"password": "${mysql_root_password}",
"port": 3306,
"name": "box"
},
"backupKey": "${arg_backup_key}",
"aws": ${arg_aws}
}
}
CONF_END
@@ -153,18 +158,50 @@ cat > "${BOX_SRC_DIR}/webadmin/dist/config.json" <<CONF_END
CONF_END
EOF
# Add Backup Configuration
if [[ ! -z "${arg_backup_config}" ]]; then
echo "Add Backup Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"backup_config\", '$arg_backup_config')" box
fi
# Add DNS Configuration
if [[ ! -z "${arg_dns_config}" ]]; then
echo "Add DNS Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"dns_config\", '$arg_dns_config')" box
fi
# Add Update Configuration
if [[ ! -z "${arg_update_config}" ]]; then
echo "Add Update Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"update_config\", '$arg_update_config')" box
fi
# Add TLS Configuration
if [[ ! -z "${arg_tls_config}" ]]; then
echo "Add TLS Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"tls_config\", '$arg_tls_config')" box
fi
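The four guarded settings writes above all follow one pattern: skip when the argument is empty, otherwise upsert via `REPLACE INTO`. A minimal sketch with a hypothetical `add_setting` helper that prints the SQL instead of invoking `mysql` (the real script runs `mysql -u root -p... -e "..."` inline):

```shell
#!/bin/bash
# Sketch of the guarded "store setting only if provided" pattern.
add_setting() {
    local name="$1" value="$2"
    # skip unset configs, like the [[ ! -z ... ]] guards above
    [[ -z "${value}" ]] && return 0
    # REPLACE INTO acts as an upsert, so re-running setup is safe
    echo "REPLACE INTO settings (name, value) VALUES (\"${name}\", '${value}')"
}
add_setting "backup_config" 'abc'
add_setting "dns_config" ""   # prints nothing
```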
# Add webadmin oauth client
# The domain might have changed, therefore we have to update the record
# !!! This needs to be in sync with the webadmin, specifically login_callback.js
echo "Add webadmin oauth cient"
ADMIN_SCOPES="root,developer,profile,users,apps,settings,roleUser"
ADMIN_SCOPES="root,developer,profile,users,apps,settings"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (\"cid-webadmin\", \"webadmin\", \"secret-webadmin\", \"${admin_origin}\", \"${ADMIN_SCOPES}\")" box
-e "REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (\"cid-webadmin\", \"webadmin\", \"admin\", \"secret-webadmin\", \"${admin_origin}\", \"${ADMIN_SCOPES}\")" box
echo "Add localhost test oauth cient"
ADMIN_SCOPES="root,developer,profile,users,apps,settings,roleUser"
echo "Add localhost test oauth client"
ADMIN_SCOPES="root,developer,profile,users,apps,settings"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (\"cid-test\", \"test\", \"secret-test\", \"http://127.0.0.1:5000\", \"${ADMIN_SCOPES}\")" box
-e "REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (\"cid-test\", \"test\", \"test\", \"secret-test\", \"http://127.0.0.1:5000\", \"${ADMIN_SCOPES}\")" box
set_progress "80" "Starting Cloudron"
systemctl start cloudron.target

View File

@@ -133,10 +133,10 @@ LoadPlugin nginx
# Globals true
#</LoadPlugin>
#LoadPlugin pinba
LoadPlugin ping
#LoadPlugin ping
#LoadPlugin postgresql
#LoadPlugin powerdns
LoadPlugin processes
#LoadPlugin processes
#LoadPlugin protocols
#<LoadPlugin python>
# Globals true
@@ -161,7 +161,7 @@ LoadPlugin tail
#LoadPlugin users
#LoadPlugin uuid
#LoadPlugin varnish
LoadPlugin vmem
#LoadPlugin vmem
#LoadPlugin vserver
#LoadPlugin wireless
LoadPlugin write_graphite
@@ -193,11 +193,11 @@ LoadPlugin write_graphite
</Plugin>
<Plugin df>
FSType "tmpfs"
MountPoint "/dev"
FSType "ext4"
FSType "btrfs"
ReportByDevice true
IgnoreSelected true
IgnoreSelected false
ValuesAbsolute true
ValuesPercentage true
@@ -212,17 +212,6 @@ LoadPlugin write_graphite
URL "http://127.0.0.1/nginx_status"
</Plugin>
<Plugin ping>
Host "google.com"
Interval 1.0
Timeout 0.9
TTL 255
</Plugin>
<Plugin processes>
ProcessMatch "app" "node box.js"
</Plugin>
<Plugin swap>
ReportByDevice false
ReportBytes true
@@ -255,10 +244,6 @@ LoadPlugin write_graphite
</File>
</Plugin>
<Plugin vmem>
Verbose false
</Plugin>
<Plugin write_graphite>
<Node "graphing">
Host "localhost"

View File

@@ -10,8 +10,8 @@ server {
ssl on;
# paths are relative to prefix and not to this file
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
ssl_certificate <%= certFilePath %>;
ssl_certificate_key <%= keyFilePath %>;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
@@ -37,7 +37,8 @@ server {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
error_page 500 502 503 504 @appstatus;
# only serve up the status page if we get proxy gateway errors
error_page 502 503 504 @appstatus;
location @appstatus {
return 307 <%= adminOrigin %>/appstatus.html?referrer=https://$host$request_uri;
}
@@ -57,12 +58,17 @@ server {
client_max_body_size 1m;
}
# graphite paths
location ~ ^/(graphite|content|metrics|dashboard|render|browser|composer)/ {
proxy_pass http://127.0.0.1:8000;
client_max_body_size 1m;
location ~ ^/api/v1/apps/.*/exec$ {
proxy_pass http://127.0.0.1:3000;
proxy_read_timeout 30m;
}
# graphite paths
# location ~ ^/(graphite|content|metrics|dashboard|render|browser|composer)/ {
# proxy_pass http://127.0.0.1:8000;
# client_max_body_size 1m;
# }
location / {
root <%= sourceDir %>/webadmin/dist;
index index.html index.htm;

View File

@@ -38,6 +38,12 @@ http {
deny all;
}
# acme challenges
location /.well-known/acme-challenge/ {
default_type text/plain;
alias /home/yellowtent/data/acme/;
}
location / {
# redirect everything to HTTPS
return 301 https://$host$request_uri;
@@ -52,14 +58,31 @@ http {
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
# increase the proxy buffer sizes to not run into buffer issues (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers)
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Disable check to allow unlimited body sizes
client_max_body_size 0;
error_page 404 = @fallback;
location @fallback {
internal;
root <%= sourceDir %>/webadmin/dist;
root /home/yellowtent/box/webadmin/dist;
rewrite ^/$ /nakeddomain.html break;
}
return 404;
location / {
internal;
root /home/yellowtent/box/webadmin/dist;
rewrite ^/$ /nakeddomain.html break;
}
location /api/ {
proxy_pass http://127.0.0.1:3000;
client_max_body_size 1m;
}
}
include applications/*.conf;

View File

@@ -34,9 +34,12 @@ graphite_container_id=$(docker run --restart=always -d --name="graphite" \
-p 127.0.0.1:2004:2004 \
-p 127.0.0.1:8000:8000 \
-v "${DATA_DIR}/graphite:/app/data" \
--read-only -v /tmp -v /run -v /var/log \
--read-only -v /tmp -v /run \
"${GRAPHITE_IMAGE}")
echo "Graphite container id: ${graphite_container_id}"
if docker images "${GRAPHITE_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${GRAPHITE_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old graphite images"
fi
# mail (MAIL_SMTP_PORT is 2500 in addons.js. used in mailer.js as well)
mail_container_id=$(docker run --restart=always -d --name="mail" \
@@ -45,9 +48,12 @@ mail_container_id=$(docker run --restart=always -d --name="mail" \
-h "${arg_fqdn}" \
-e "DOMAIN_NAME=${arg_fqdn}" \
-v "${DATA_DIR}/box/mail:/app/data" \
--read-only -v /tmp -v /run -v /var/log \
--read-only -v /tmp -v /run \
"${MAIL_IMAGE}")
echo "Mail container id: ${mail_container_id}"
if docker images "${MAIL_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${MAIL_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old mail images"
fi
# mysql
mysql_addon_root_password=$(pwgen -1 -s)
@@ -62,9 +68,12 @@ mysql_container_id=$(docker run --restart=always -d --name="mysql" \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/mysql:/var/lib/mysql" \
-v "${DATA_DIR}/addons/mysql_vars.sh:/etc/mysql/mysql_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
--read-only -v /tmp -v /run \
"${MYSQL_IMAGE}")
echo "MySQL container id: ${mysql_container_id}"
if docker images "${MYSQL_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${MYSQL_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old mysql images"
fi
# postgresql
postgresql_addon_root_password=$(pwgen -1 -s)
@@ -77,9 +86,12 @@ postgresql_container_id=$(docker run --restart=always -d --name="postgresql" \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/postgresql:/var/lib/postgresql" \
-v "${DATA_DIR}/addons/postgresql_vars.sh:/etc/postgresql/postgresql_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
--read-only -v /tmp -v /run \
"${POSTGRESQL_IMAGE}")
echo "PostgreSQL container id: ${postgresql_container_id}"
if docker images "${POSTGRESQL_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${POSTGRESQL_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old postgresql images"
fi
# mongodb
mongodb_addon_root_password=$(pwgen -1 -s)
@@ -92,9 +104,17 @@ mongodb_container_id=$(docker run --restart=always -d --name="mongodb" \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/mongodb:/var/lib/mongodb" \
-v "${DATA_DIR}/addons/mongodb_vars.sh:/etc/mongodb_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
--read-only -v /tmp -v /run \
"${MONGODB_IMAGE}")
echo "Mongodb container id: ${mongodb_container_id}"
if docker images "${MONGODB_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${MONGODB_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old mongodb images"
fi
# redis
if docker images "${REDIS_REPO}" | tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${REDIS_IMAGE}" | xargs --no-run-if-empty docker rmi; then
echo "Removed old redis images"
fi
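Each service above prunes stale images with the same pipeline. A sketch fed a canned `docker images`-style listing instead of the live daemon, so the filtering logic is visible on its own (`prune_candidates` is an illustrative name; the real script pipes the result into `xargs --no-run-if-empty docker rmi`):

```shell
#!/bin/bash
# Sketch of the old-image cleanup pipeline.
prune_candidates() {
    # drop the REPOSITORY/TAG header row, join repo and tag as repo:tag,
    # then filter out the image that is currently in use
    local current="$1"
    tail -n +2 | awk '{ print $1 ":" $2 }' | grep -v "${current}"
}
printf 'REPOSITORY TAG IMAGE_ID\ncloudron/redis 0.5.0 aaa\ncloudron/redis 0.8.0 bbb\n' \
    | prune_candidates "cloudron/redis:0.8.0"
```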
# only touch apps in installed state. any other state is just resumed by the taskmanager
if [[ "${infra_version}" == "none" ]]; then
@@ -107,4 +127,3 @@ else
fi
echo -n "${INFRA_VERSION}" > "${DATA_DIR}/INFRA_VERSION"

View File

@@ -9,6 +9,7 @@ exports = module.exports = {
getEnvironment: getEnvironment,
getLinksSync: getLinksSync,
getBindsSync: getBindsSync,
getContainerNamesSync: getContainerNamesSync,
// exported for testing
_setupOauth: setupOauth,
@@ -23,56 +24,36 @@ var appdb = require('./appdb.js'),
config = require('./config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:addons'),
docker = require('./docker.js'),
docker = require('./docker.js').connection,
fs = require('fs'),
generatePassword = require('password-generator'),
hat = require('hat'),
MemoryStream = require('memorystream'),
once = require('once'),
os = require('os'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
shell = require('./shell.js'),
spawn = child_process.spawn,
util = require('util'),
uuid = require('node-uuid'),
vbox = require('./vbox.js');
uuid = require('node-uuid');
var NOOP = function (app, options, callback) { return callback(); };
// setup can be called multiple times for the same app (configure crash restart) and existing data must not be lost
// teardown is destructive. app data stored with the addon is lost
var KNOWN_ADDONS = {
oauth: {
setup: setupOauth,
teardown: teardownOauth,
backup: NOOP,
restore: setupOauth
},
ldap: {
setup: setupLdap,
teardown: teardownLdap,
backup: NOOP,
restore: setupLdap
},
sendmail: {
setup: setupSendMail,
teardown: teardownSendMail,
backup: NOOP,
restore: setupSendMail
},
mysql: {
setup: setupMySql,
teardown: teardownMySql,
backup: backupMySql,
restore: restoreMySql,
},
postgresql: {
setup: setupPostgreSql,
teardown: teardownPostgreSql,
backup: backupPostgreSql,
restore: restorePostgreSql
localstorage: {
setup: NOOP, // docker creates the directory for us
teardown: NOOP,
backup: NOOP, // no backup because it's already inside app data
restore: NOOP
},
mongodb: {
setup: setupMongoDb,
@@ -80,18 +61,48 @@ var KNOWN_ADDONS = {
backup: backupMongoDb,
restore: restoreMongoDb
},
mysql: {
setup: setupMySql,
teardown: teardownMySql,
backup: backupMySql,
restore: restoreMySql,
},
oauth: {
setup: setupOauth,
teardown: teardownOauth,
backup: NOOP,
restore: setupOauth
},
postgresql: {
setup: setupPostgreSql,
teardown: teardownPostgreSql,
backup: backupPostgreSql,
restore: restorePostgreSql
},
redis: {
setup: setupRedis,
teardown: teardownRedis,
backup: NOOP, // no backup because we store redis as part of app's volume
backup: backupRedis,
restore: setupRedis // same thing
},
localstorage: {
setup: NOOP, // docker creates the directory for us
sendmail: {
setup: setupSendMail,
teardown: teardownSendMail,
backup: NOOP,
restore: setupSendMail
},
scheduler: {
setup: NOOP,
teardown: NOOP,
backup: NOOP, // no backup because it's already inside app data
backup: NOOP,
restore: NOOP
},
simpleauth: {
setup: setupSimpleAuth,
teardown: teardownSimpleAuth,
backup: NOOP,
restore: setupSimpleAuth
},
_docker: {
setup: NOOP,
teardown: NOOP,
@@ -229,23 +240,44 @@ function getBindsSync(app, addons) {
return binds;
}
function getContainerNamesSync(app, addons) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
var names = [ ];
if (!addons) return names;
for (var addon in addons) {
switch (addon) {
case 'scheduler':
// names here depend on how scheduler.js creates containers
names = names.concat(Object.keys(addons.scheduler).map(function (taskName) { return app.id + '-' + taskName; }));
break;
default: break;
}
}
return names;
}
function setupOauth(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var appId = app.id;
var id = 'cid-addon-' + uuid.v4();
var id = 'cid-' + uuid.v4();
var clientSecret = hat(256);
var redirectURI = 'https://' + config.appFqdn(app.location);
var scope = 'profile,roleUser';
var scope = 'profile';
debugApp(app, 'setupOauth: id:%s clientSecret:%s', id, clientSecret);
clientdb.delByAppId('addon-' + appId, function (error) { // remove existing creds
clientdb.delByAppIdAndType(appId, clientdb.TYPE_OAUTH, function (error) { // remove existing creds
if (error && error.reason !== DatabaseError.NOT_FOUND) return callback(error);
clientdb.add(id, 'addon-' + appId, clientSecret, redirectURI, scope, function (error) {
clientdb.add(id, appId, clientdb.TYPE_OAUTH, clientSecret, redirectURI, scope, function (error) {
if (error) return callback(error);
var env = [
@@ -268,22 +300,68 @@ function teardownOauth(app, options, callback) {
debugApp(app, 'teardownOauth');
clientdb.delByAppId('addon-' + app.id, function (error) {
clientdb.delByAppIdAndType(app.id, clientdb.TYPE_OAUTH, function (error) {
if (error && error.reason !== DatabaseError.NOT_FOUND) console.error(error);
appdb.unsetAddonConfig(app.id, 'oauth', callback);
});
}
function setupSimpleAuth(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var appId = app.id;
var id = 'cid-' + uuid.v4();
var scope = 'profile';
debugApp(app, 'setupSimpleAuth: id:%s', id);
clientdb.delByAppIdAndType(app.id, clientdb.TYPE_SIMPLE_AUTH, function (error) { // remove existing creds
if (error && error.reason !== DatabaseError.NOT_FOUND) return callback(error);
clientdb.add(id, appId, clientdb.TYPE_SIMPLE_AUTH, '', '', scope, function (error) {
if (error) return callback(error);
var env = [
'SIMPLE_AUTH_SERVER=172.17.0.1',
'SIMPLE_AUTH_PORT=' + config.get('simpleAuthPort'),
'SIMPLE_AUTH_URL=http://172.17.0.1:' + config.get('simpleAuthPort'), // obsolete, remove
'SIMPLE_AUTH_ORIGIN=http://172.17.0.1:' + config.get('simpleAuthPort'),
'SIMPLE_AUTH_CLIENT_ID=' + id
];
debugApp(app, 'Setting simple auth addon config to %j', env);
appdb.setAddonConfig(appId, 'simpleauth', env, callback);
});
});
}
function teardownSimpleAuth(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'teardownSimpleAuth');
clientdb.delByAppIdAndType(app.id, clientdb.TYPE_SIMPLE_AUTH, function (error) {
if (error && error.reason !== DatabaseError.NOT_FOUND) console.error(error);
appdb.unsetAddonConfig(app.id, 'simpleauth', callback);
});
}
function setupLdap(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var env = [
'LDAP_SERVER=172.17.42.1',
'LDAP_PORT=3002',
'LDAP_URL=ldap://172.17.42.1:3002',
'LDAP_SERVER=172.17.0.1',
'LDAP_PORT=' + config.get('ldapPort'),
'LDAP_URL=ldap://172.17.0.1:' + config.get('ldapPort'),
'LDAP_USERS_BASE_DN=ou=users,dc=cloudron',
'LDAP_GROUPS_BASE_DN=ou=groups,dc=cloudron',
'LDAP_BIND_DN=cn='+ app.id + ',ou=apps,dc=cloudron',
@@ -310,10 +388,12 @@ function setupSendMail(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var username = app.location ? app.location + '-app' : 'no-reply'; // use no-reply for bare domains
var env = [
'MAIL_SMTP_SERVER=mail',
'MAIL_SMTP_PORT=2500', // if you change this, change the mail container
'MAIL_SMTP_USERNAME=' + (app.location || app.id), // use app.id for bare domains
'MAIL_SMTP_USERNAME=' + username,
'MAIL_DOMAIN=' + config.fqdn()
];
@@ -658,19 +738,35 @@ function forwardRedisPort(appId, callback) {
var redisPort = parseInt(safe.query(data, 'NetworkSettings.Ports.6379/tcp[0].HostPort'), 10);
if (!Number.isInteger(redisPort)) return callback(new Error('Unable to get container port mapping'));
vbox.forwardFromHostToVirtualBox('redis-' + appId, redisPort);
return callback(null);
});
}
function stopAndRemoveRedis(container, callback) {
function ignoreError(func) {
return function (callback) {
func(function (error) {
if (error) debug('stopAndRemoveRedis: Ignored error:', error);
callback();
});
};
}
// stopping redis with SIGTERM makes it commit the database to disk
async.series([
ignoreError(container.stop.bind(container, { t: 10 })),
ignoreError(container.wait.bind(container)),
ignoreError(container.remove.bind(container, { force: true, v: true }))
], callback);
}
// Ensures that app's addon redis container is running. Can be called when named container already exists/running
function setupRedis(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var redisPassword = generatePassword(64, false /* memorable */);
var redisPassword = generatePassword(64, false /* memorable */, /[\w\d_]/); // ensure no / in password for being sed friendly (and be uri friendly)
var redisVarsFile = path.join(paths.ADDON_CONFIG_DIR, 'redis-' + app.id + '_vars.sh');
var redisDataDir = path.join(paths.DATA_DIR, app.id + '/redis');
@@ -682,36 +778,30 @@ function setupRedis(app, options, callback) {
var createOptions = {
name: 'redis-' + app.id,
Hostname: config.appFqdn(app.location),
Hostname: 'redis-' + app.location,
Tty: true,
Image: 'cloudron/redis:0.5.0', // if you change this, fix setup/INFRA_VERSION as well
Image: 'cloudron/redis:0.8.0', // if you change this, fix setup/INFRA_VERSION as well
Cmd: null,
Volumes: {
'/tmp': {},
'/run': {},
'/var/log': {}
'/run': {}
},
VolumesFrom: []
};
var isMac = os.platform() === 'darwin';
var startOptions = {
Binds: [
redisVarsFile + ':/etc/redis/redis_vars.sh:ro',
redisDataDir + ':/var/lib/redis:rw'
],
Memory: 1024 * 1024 * 75, // 100mb
MemorySwap: 1024 * 1024 * 75 * 2, // 150mb
// On Mac (boot2docker), we have to export the port to external world for port forwarding from Mac to work
// On linux, export to localhost only for testing purposes and not for the app itself
PortBindings: {
'6379/tcp': [{ HostPort: '0', HostIp: isMac ? '0.0.0.0' : '127.0.0.1' }]
},
ReadonlyRootfs: true,
RestartPolicy: {
'Name': 'always',
'MaximumRetryCount': 0
VolumesFrom: [],
HostConfig: {
Binds: [
redisVarsFile + ':/etc/redis/redis_vars.sh:ro',
redisDataDir + ':/var/lib/redis:rw'
],
Memory: 1024 * 1024 * 75, // 75mb
MemorySwap: 1024 * 1024 * 75 * 2, // 150mb
PortBindings: {
'6379/tcp': [{ HostPort: '0', HostIp: '127.0.0.1' }]
},
ReadonlyRootfs: true,
RestartPolicy: {
'Name': 'always',
'MaximumRetryCount': 0
}
}
};
@@ -723,11 +813,11 @@ function setupRedis(app, options, callback) {
];
var redisContainer = docker.getContainer(createOptions.name);
redisContainer.remove({ force: true, v: false }, function (ignoredError) {
stopAndRemoveRedis(redisContainer, function () {
docker.createContainer(createOptions, function (error) {
if (error && error.statusCode !== 409) return callback(error); // if not already created
redisContainer.start(startOptions, function (error) {
redisContainer.start(function (error) {
if (error && error.statusCode !== 304) return callback(error); // if not already running
appdb.setAddonConfig(app.id, 'redis', env, function (error) {
@@ -749,14 +839,12 @@ function teardownRedis(app, options, callback) {
var removeOptions = {
force: true, // kill container if it's running
v: false // removes volumes associated with the container
v: true // removes volumes associated with the container
};
container.remove(removeOptions, function (error) {
if (error && error.statusCode !== 404) return callback(new Error('Error removing container:' + error));
vbox.unforwardFromHostToVirtualBox('redis-' + app.id);
safe.fs.unlinkSync(paths.ADDON_CONFIG_DIR, 'redis-' + app.id + '_vars.sh');
shell.sudo('teardownRedis', [ RMAPPDIR_CMD, app.id + '/redis' ], function (error, stdout, stderr) {
@@ -766,3 +854,19 @@ function teardownRedis(app, options, callback) {
});
});
}
function backupRedis(app, options, callback) {
debugApp(app, 'Backing up redis');
callback = once(callback); // ChildProcess exit may or may not be called after error
var cp = spawn('/usr/bin/docker', [ 'exec', 'redis-' + app.id, '/addons/redis/service.sh', 'backup' ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'backupRedis: done. code:%s signal:%s', code, signal);
if (!callback.called) callback(code ? 'backupRedis failed with status ' + code : null);
});
cp.stdout.pipe(process.stdout);
cp.stderr.pipe(process.stderr);
}
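The `setupRedis` change above constrains the generated password to `[\w\d_]` so it stays sed- and URI-friendly. A shell sketch of the same idea, restricting the alphabet so the secret never needs escaping (`gen_password` is a hypothetical helper; the real code uses the node `password-generator` module):

```shell
#!/bin/bash
# Sketch: generate a secret from [A-Za-z0-9_] only, so it can be substituted
# with sed and embedded in URIs without escaping.
gen_password() {
    local len="${1:-64}"
    LC_ALL=C tr -dc 'A-Za-z0-9_' < /dev/urandom | head -c "${len}"
}
pw="$(gen_password 64)"
echo "${#pw}"
```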

View File

@@ -40,7 +40,6 @@ exports = module.exports = {
RSTATE_PENDING_START: 'pending_start',
RSTATE_PENDING_STOP: 'pending_stop',
RSTATE_STOPPED: 'stopped', // app stopped by user
RSTATE_ERROR: 'error',
// run codes (keep in sync with UI)
HEALTH_HEALTHY: 'healthy',
@@ -58,13 +57,9 @@ var assert = require('assert'),
safe = require('safetydance'),
util = require('util');
var APPS_FIELDS = [ 'id', 'appStoreId', 'installationState', 'installationProgress', 'runState',
'health', 'containerId', 'manifestJson', 'httpPort', 'location', 'dnsRecordId',
'accessRestriction', 'lastBackupId', 'lastBackupConfigJson', 'oldConfigJson' ].join(',');
var APPS_FIELDS_PREFIXED = [ 'apps.id', 'apps.appStoreId', 'apps.installationState', 'apps.installationProgress', 'apps.runState',
'apps.health', 'apps.containerId', 'apps.manifestJson', 'apps.httpPort', 'apps.location', 'apps.dnsRecordId',
'apps.accessRestriction', 'apps.lastBackupId', 'apps.lastBackupConfigJson', 'apps.oldConfigJson' ].join(',');
'apps.accessRestrictionJson', 'apps.lastBackupId', 'apps.lastBackupConfigJson', 'apps.oldConfigJson', 'apps.oauthProxy' ].join(',');
var PORT_BINDINGS_FIELDS = [ 'hostPort', 'environmentVariable', 'appId' ].join(',');
@@ -96,6 +91,13 @@ function postProcess(result) {
for (var i = 0; i < environmentVariables.length; i++) {
result.portBindings[environmentVariables[i]] = parseInt(hostPorts[i], 10);
}
result.oauthProxy = !!result.oauthProxy;
assert(result.accessRestrictionJson === null || typeof result.accessRestrictionJson === 'string');
result.accessRestriction = safe.JSON.parse(result.accessRestrictionJson);
if (result.accessRestriction && !result.accessRestriction.users) result.accessRestriction.users = [];
delete result.accessRestrictionJson;
}
function get(id, callback) {
@@ -177,24 +179,26 @@ function getAll(callback) {
});
}
function add(id, appStoreId, manifest, location, portBindings, accessRestriction, callback) {
function add(id, appStoreId, manifest, location, portBindings, accessRestriction, oauthProxy, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof appStoreId, 'string');
assert(manifest && typeof manifest === 'object');
assert.strictEqual(typeof manifest.version, 'string');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert.strictEqual(typeof accessRestriction, 'object');
assert.strictEqual(typeof oauthProxy, 'boolean');
assert.strictEqual(typeof callback, 'function');
portBindings = portBindings || { };
var manifestJson = JSON.stringify(manifest);
var accessRestrictionJson = JSON.stringify(accessRestriction);
var queries = [ ];
queries.push({
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestriction) VALUES (?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, exports.ISTATE_PENDING_INSTALL, location, accessRestriction ]
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestrictionJson, oauthProxy) VALUES (?, ?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, exports.ISTATE_PENDING_INSTALL, location, accessRestrictionJson, oauthProxy ]
});
Object.keys(portBindings).forEach(function (env) {
@@ -303,6 +307,9 @@ function updateWithConstraints(id, app, constraints, callback) {
} else if (p === 'oldConfig') {
fields.push('oldConfigJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'accessRestriction') {
fields.push('accessRestrictionJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p !== 'portBindings') {
fields.push(p + ' = ?');
values.push(app[p]);
@@ -361,7 +368,7 @@ function setInstallationCommand(appId, installationState, values, callback) {
updateWithConstraints(appId, values, '', callback);
} else if (installationState === exports.ISTATE_PENDING_RESTORE) {
updateWithConstraints(appId, values, 'AND (installationState = "installed" OR installationState = "error")', callback);
} else if (installationState === exports.ISTATE_PENDING_UPDATE || exports.ISTATE_PENDING_CONFIGURE || installationState == exports.ISTATE_PENDING_BACKUP) {
} else if (installationState === exports.ISTATE_PENDING_UPDATE || installationState === exports.ISTATE_PENDING_CONFIGURE || installationState === exports.ISTATE_PENDING_BACKUP) {
updateWithConstraints(appId, values, 'AND installationState = "installed"', callback);
} else {
callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, 'invalid installationState'));

View File

@@ -5,7 +5,7 @@ var appdb = require('./appdb.js'),
async = require('async'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:apphealthmonitor'),
docker = require('./docker.js'),
docker = require('./docker.js').connection,
mailer = require('./mailer.js'),
superagent = require('superagent'),
util = require('util');
@@ -92,12 +92,13 @@ function checkAppHealth(app, callback) {
.redirects(0)
.timeout(HEALTHCHECK_INTERVAL)
.end(function (error, res) {
if (error || res.status >= 400) { // 2xx and 3xx are ok
if (error && !error.response) {
debugApp(app, 'not alive (network error): %s', error.message);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else if (res.statusCode >= 400) { // 2xx and 3xx are ok
debugApp(app, 'not alive : %s', error || res.status);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else {
debugApp(app, 'alive');
setHealth(app, appdb.HEALTH_HEALTHY, callback);
}
});
@@ -110,6 +111,13 @@ function processApps(callback) {
async.each(apps, checkAppHealth, function (error) {
if (error) console.error(error);
var alive = apps
.filter(function (a) { return a.installationState === appdb.ISTATE_INSTALLED && a.runState === appdb.RSTATE_RUNNING && a.health === appdb.HEALTH_HEALTHY; })
.map(function (a) { return a.location; }).join(', ');
debug('apps alive: [%s]', alive);
callback(null);
});
});
@@ -149,7 +157,7 @@ function processDockerEvents() {
debug('OOM Context: %s', context);
// do not send mails for dev apps
if (app.appStoreId !== '') mailer.sendCrashNotification(program, context); // app can be null if it's an addon crash
if (error || app.appStoreId !== '') mailer.sendCrashNotification(program, context); // app can be null if it's an addon crash
});
});
@@ -159,9 +167,8 @@ function processDockerEvents() {
});
stream.on('end', function () {
console.error('Docke event stream ended');
console.error('Docker event stream ended');
gDockerEventStream = null; // TODO: reconnect?
stream.end();
});
});
}
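The `checkAppHealth` change above splits the old combined check into two cases: a network error (no response at all) and an HTTP status of 400 or more are unhealthy, while 2xx and 3xx are healthy. A sketch of that classification with a hypothetical `classify` helper:

```shell
#!/bin/bash
# Sketch of the health classification in checkAppHealth.
classify() {
    local error="$1" status="$2"
    if [[ -n "${error}" ]]; then
        echo unhealthy          # network error, no response received
    elif (( status >= 400 )); then
        echo unhealthy          # 4xx/5xx responses
    else
        echo healthy            # 2xx and 3xx are ok
    fi
}
classify "" 302
```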

View File

@@ -5,6 +5,8 @@
exports = module.exports = {
AppsError: AppsError,
hasAccessTo: hasAccessTo,
get: get,
getBySubdomain: getBySubdomain,
getAll: getAll,
@@ -20,8 +22,8 @@ exports = module.exports = {
backup: backup,
backupApp: backupApp,
listBackups: listBackups,
getLogStream: getLogStream,
getLogs: getLogs,
start: start,
@@ -37,7 +39,8 @@ exports = module.exports = {
// exported for testing
_validateHostname: validateHostname,
_validatePortBindings: validatePortBindings
_validatePortBindings: validatePortBindings,
_validateAccessRestriction: validateAccessRestriction
};
var addons = require('./addons.js'),
@@ -46,6 +49,7 @@ var addons = require('./addons.js'),
async = require('async'),
backups = require('./backups.js'),
BackupsError = require('./backups.js').BackupsError,
certificates = require('./certificates.js'),
config = require('./config.js'),
constants = require('./constants.js'),
DatabaseError = require('./databaseerror.js'),
@@ -58,6 +62,7 @@ var addons = require('./addons.js'),
safe = require('safetydance'),
semver = require('semver'),
shell = require('./shell.js'),
spawn = require('child_process').spawn,
split = require('split'),
superagent = require('superagent'),
taskmanager = require('./taskmanager.js'),
@@ -114,6 +119,9 @@ AppsError.BAD_STATE = 'Bad State';
AppsError.PORT_RESERVED = 'Port Reserved';
AppsError.PORT_CONFLICT = 'Port Conflict';
AppsError.BILLING_REQUIRED = 'Billing Required';
AppsError.ACCESS_DENIED = 'Access denied';
AppsError.USER_REQUIRED = 'User required';
AppsError.BAD_CERTIFICATE = 'Invalid certificate';
// Hostname validation comes from RFC 1123 (section 2.1)
// Domain name validation comes from RFC 2181 (Name syntax)
@@ -152,6 +160,7 @@ function validatePortBindings(portBindings, tcpPorts) {
config.get('internalPort'), /* internal app server (lo) */
config.get('ldapPort'), /* ldap server (lo) */
config.get('oauthProxyPort'), /* oauth proxy server (lo) */
config.get('simpleAuthPort'), /* simple auth server (lo) */
3306, /* mysql (lo) */
8000 /* graphite (lo) */
];
@@ -165,7 +174,7 @@ function validatePortBindings(portBindings, tcpPorts) {
if (!Number.isInteger(portBindings[env])) return new Error(portBindings[env] + ' is not an integer');
if (portBindings[env] <= 0 || portBindings[env] > 65535) return new Error(portBindings[env] + ' is out of range');
if (RESERVED_PORTS.indexOf(portBindings[env]) !== -1) return new AppsError(AppsError.PORT_RESERVED, + portBindings[env]);
if (RESERVED_PORTS.indexOf(portBindings[env]) !== -1) return new AppsError(AppsError.PORT_RESERVED, String(portBindings[env]));
}
// it is OK if there is no 1-1 mapping between values in manifest.tcpPorts and portBindings. missing values implies
@@ -178,6 +187,18 @@ function validatePortBindings(portBindings, tcpPorts) {
return null;
}
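The per-port checks above (integer, 1–65535 range, not reserved) can be condensed into a standalone sketch. Note the `RESERVED_PORTS` list here is a hypothetical stand-in; in the diff it is built from config values (`internalPort`, `ldapPort`, `oauthProxyPort`, `simpleAuthPort`) plus 3306 and 8000:

```javascript
// Condensed sketch of the per-port validation. RESERVED_PORTS is a
// placeholder; the real list is derived from config at runtime.
var RESERVED_PORTS = [ 3306, 8000 ];

function validatePort(port) {
    if (!Number.isInteger(port)) return new Error(port + ' is not an integer');
    if (port <= 0 || port > 65535) return new Error(port + ' is out of range');
    if (RESERVED_PORTS.indexOf(port) !== -1) return new Error('Port reserved: ' + port);
    return null;
}

console.log(validatePort(8080));                  // null (valid)
console.log(validatePort(3306) instanceof Error); // true (reserved)
```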
function validateAccessRestriction(accessRestriction) {
assert.strictEqual(typeof accessRestriction, 'object');
if (accessRestriction === null) return null;
if (!accessRestriction.users || !Array.isArray(accessRestriction.users)) return new Error('users array property required');
if (accessRestriction.users.length === 0) return new Error('users array cannot be empty');
if (!accessRestriction.users.every(function (e) { return typeof e === 'string'; })) return new Error('All users have to be strings');
return null;
}
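The new object-based `validateAccessRestriction` (replacing the old string-enum version removed further down) can be exercised in isolation; this sketch mirrors the diff, minus the `assert` type check, with `null` meaning "no restriction" and an `Error` return signalling a bad field:

```javascript
// Mirrors the new validation from the diff: null is unrestricted,
// otherwise a non-empty array of user-id strings is required.
function validateAccessRestriction(accessRestriction) {
    if (accessRestriction === null) return null;
    if (!accessRestriction.users || !Array.isArray(accessRestriction.users)) return new Error('users array property required');
    if (accessRestriction.users.length === 0) return new Error('users array cannot be empty');
    if (!accessRestriction.users.every(function (e) { return typeof e === 'string'; })) return new Error('All users have to be strings');
    return null;
}

console.log(validateAccessRestriction(null));                    // null (unrestricted)
console.log(validateAccessRestriction({ users: [ 'uid-1' ] }));  // null (valid)
console.log(validateAccessRestriction({ users: [] }) instanceof Error); // true
```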
function getDuplicateErrorDetails(location, portBindings, error) {
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
@@ -205,6 +226,14 @@ function getIconUrlSync(app) {
return fs.existsSync(iconPath) ? '/api/v1/apps/' + app.id + '/icon' : null;
}
function hasAccessTo(app, user) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof user, 'object');
if (app.accessRestriction === null) return true;
return app.accessRestriction.users.some(function (e) { return e === user.id; });
}
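The new `hasAccessTo` check pairs with the validation above; a minimal standalone version (the `app` and `user` shapes are as in the diff):

```javascript
// Per-user access check: a null accessRestriction means the app is
// open to every user; otherwise the user id must be listed.
function hasAccessTo(app, user) {
    if (app.accessRestriction === null) return true;
    return app.accessRestriction.users.some(function (e) { return e === user.id; });
}

var app = { accessRestriction: { users: [ 'uid-1' ] } };
console.log(hasAccessTo(app, { id: 'uid-1' })); // true
console.log(hasAccessTo(app, { id: 'uid-2' })); // false
console.log(hasAccessTo({ accessRestriction: null }, { id: 'uid-2' })); // true
```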
function get(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -250,18 +279,6 @@ function getAll(callback) {
});
}
function validateAccessRestriction(accessRestriction) {
// TODO: make the values below enumerations in the oauth code
switch (accessRestriction) {
case '':
case 'roleUser':
case 'roleAdmin':
return null;
default:
return new Error('Invalid accessRestriction');
}
}
function purchase(appStoreId, callback) {
assert.strictEqual(typeof appStoreId, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -269,25 +286,32 @@ function purchase(appStoreId, callback) {
// Skip purchase if appStoreId is empty
if (appStoreId === '') return callback(null);
// Skip if we don't have an appstore token
if (config.token() === '') return callback(null);
var url = config.apiServerOrigin() + '/api/v1/apps/' + appStoreId + '/purchase';
superagent.post(url).query({ token: config.token() }).end(function (error, res) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (res.status === 402) return callback(new AppsError(AppsError.BILLING_REQUIRED));
if (res.status !== 201 && res.status !== 200) return callback(new Error(util.format('App purchase failed. %s %j', res.status, res.body)));
if (error && !error.response) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error));
if (res.statusCode === 402) return callback(new AppsError(AppsError.BILLING_REQUIRED));
if (res.statusCode === 404) return callback(new AppsError(AppsError.NOT_FOUND));
if (res.statusCode !== 201 && res.statusCode !== 200) return callback(new Error(util.format('App purchase failed. %s %j', res.status, res.body)));
callback(null);
});
}
function install(appId, appStoreId, manifest, location, portBindings, accessRestriction, icon, callback) {
function install(appId, appStoreId, manifest, location, portBindings, accessRestriction, oauthProxy, icon, cert, key, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof appStoreId, 'string');
assert(manifest && typeof manifest === 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert.strictEqual(typeof accessRestriction, 'object');
assert.strictEqual(typeof oauthProxy, 'boolean');
assert(!icon || typeof icon === 'string');
assert(cert === null || typeof cert === 'string');
assert(key === null || typeof key === 'string');
assert.strictEqual(typeof callback, 'function');
var error = manifestFormat.parse(manifest);
@@ -305,6 +329,10 @@ function install(appId, appStoreId, manifest, location, portBindings, accessRest
error = validateAccessRestriction(accessRestriction);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
// singleUser mode requires accessRestriction to contain exactly one user
if (manifest.singleUser && accessRestriction === null) return callback(new AppsError(AppsError.USER_REQUIRED));
if (manifest.singleUser && accessRestriction.users.length !== 1) return callback(new AppsError(AppsError.USER_REQUIRED));
if (icon) {
if (!validator.isBase64(icon)) return callback(new AppsError(AppsError.BAD_FIELD, 'icon is not base64'));
@@ -313,15 +341,24 @@ function install(appId, appStoreId, manifest, location, portBindings, accessRest
}
}
error = certificates.validateCertificate(cert, key, config.appFqdn(location));
if (error) return callback(new AppsError(AppsError.BAD_CERTIFICATE, error.message));
debug('Will install app with id : ' + appId);
purchase(appStoreId, function (error) {
if (error) return callback(error);
appdb.add(appId, appStoreId, manifest, location.toLowerCase(), portBindings, accessRestriction, function (error) {
appdb.add(appId, appStoreId, manifest, location.toLowerCase(), portBindings, accessRestriction, oauthProxy, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location.toLowerCase(), portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
// save cert to data/box/certs
if (cert && key) {
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.cert'), cert)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving cert: ' + safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.key'), key)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving key: ' + safe.error.message));
}
taskmanager.restartAppTask(appId);
callback(null);
@@ -329,11 +366,14 @@ function install(appId, appStoreId, manifest, location, portBindings, accessRest
});
}
function configure(appId, location, portBindings, accessRestriction, callback) {
function configure(appId, location, portBindings, accessRestriction, oauthProxy, cert, key, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert.strictEqual(typeof accessRestriction, 'object');
assert.strictEqual(typeof oauthProxy, 'boolean');
assert(cert === null || typeof cert === 'string');
assert(key === null || typeof key === 'string');
assert.strictEqual(typeof callback, 'function');
var error = validateHostname(location, config.fqdn());
@@ -342,6 +382,9 @@ function configure(appId, location, portBindings, accessRestriction, callback) {
error = validateAccessRestriction(accessRestriction);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
error = certificates.validateCertificate(cert, key, config.appFqdn(location));
if (error) return callback(new AppsError(AppsError.BAD_CERTIFICATE, error.message));
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
@@ -349,15 +392,23 @@ function configure(appId, location, portBindings, accessRestriction, callback) {
error = validatePortBindings(portBindings, app.manifest.tcpPorts);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
// save cert to data/box/certs
if (cert && key) {
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.cert'), cert)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving cert: ' + safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.key'), key)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving key: ' + safe.error.message));
}
var values = {
location: location.toLowerCase(),
accessRestriction: accessRestriction,
oauthProxy: oauthProxy,
portBindings: portBindings,
oldConfig: {
location: app.location,
accessRestriction: app.accessRestriction,
portBindings: app.portBindings
portBindings: app.portBindings,
oauthProxy: app.oauthProxy
}
};
@@ -411,7 +462,9 @@ function update(appId, force, manifest, portBindings, icon, callback) {
portBindings: portBindings,
oldConfig: {
manifest: app.manifest,
portBindings: app.portBindings
portBindings: app.portBindings,
accessRestriction: app.accessRestriction,
oauthProxy: app.oauthProxy
}
};
@@ -427,58 +480,48 @@ function update(appId, force, manifest, portBindings, icon, callback) {
});
}
function getLogStream(appId, fromLine, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof fromLine, 'number'); // behaves like tail -n
assert.strictEqual(typeof callback, 'function');
function appLogFilter(app) {
var names = [ app.id ].concat(addons.getContainerNamesSync(app, app.manifest.addons));
debug('Getting logs for %s', appId);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (app.installationState !== appdb.ISTATE_INSTALLED) return callback(new AppsError(AppsError.BAD_STATE, util.format('App is in %s state.', app.installationState)));
var container = docker.getContainer(app.containerId);
var tail = fromLine < 0 ? -fromLine : 'all';
// note: cannot access docker file directly because it needs root access
container.logs({ stdout: true, stderr: true, follow: true, timestamps: true, tail: tail }, function (error, logStream) {
if (error && error.statusCode === 404) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var lineCount = 0;
var skipLinesStream = split(function mapper(line) {
if (++lineCount < fromLine) return undefined;
var timestamp = line.substr(0, line.indexOf(' ')); // sometimes this has square brackets around it
return JSON.stringify({ lineNumber: lineCount, timestamp: timestamp.replace(/[[\]]/g,''), log: line.substr(timestamp.length + 1) });
});
skipLinesStream.close = logStream.req.abort;
logStream.pipe(skipLinesStream);
return callback(null, skipLinesStream);
});
});
return names.map(function (name) { return 'CONTAINER_NAME=' + name; });
}
function getLogs(appId, callback) {
function getLogs(appId, lines, follow, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof lines, 'number');
assert.strictEqual(typeof follow, 'boolean');
assert.strictEqual(typeof callback, 'function');
debug('Getting logs for %s', appId);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (app.installationState !== appdb.ISTATE_INSTALLED) return callback(new AppsError(AppsError.BAD_STATE, util.format('App is in %s state.', app.installationState)));
var args = [ '--output=json', '--no-pager', '--lines=' + lines ];
if (follow) args.push('--follow');
args = args.concat(appLogFilter(app));
var container = docker.getContainer(app.containerId);
// note: cannot access docker file directly because it needs root access
container.logs({ stdout: true, stderr: true, follow: false, timestamps: true, tail: 'all' }, function (error, logStream) {
if (error && error.statusCode === 404) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var cp = spawn('/bin/journalctl', args);
return callback(null, logStream);
var transformStream = split(function mapper(line) {
var obj = safe.JSON.parse(line);
if (!obj) return undefined;
var source = obj.CONTAINER_NAME.slice(app.id.length + 1);
return JSON.stringify({
realtimeTimestamp: obj.__REALTIME_TIMESTAMP,
monotonicTimestamp: obj.__MONOTONIC_TIMESTAMP,
message: obj.MESSAGE,
source: source || 'main'
}) + '\n';
});
transformStream.close = cp.kill.bind(cp, 'SIGKILL'); // closing stream kills the child process
cp.stdout.pipe(transformStream);
return callback(null, transformStream);
});
}
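The interesting part of the new `getLogs` is the per-line transform applied to `journalctl --output=json` output. Extracted as a pure function it can be tested without spawning journalctl; the field names (`__REALTIME_TIMESTAMP`, `__MONOTONIC_TIMESTAMP`, `CONTAINER_NAME`, `MESSAGE`) are journald's, and the container-name convention (`appId` or `appId-<addon>`) follows the diff:

```javascript
// Sketch of the mapper passed to split() in the diff. Unparseable
// lines are dropped; the addon suffix of CONTAINER_NAME becomes the
// log source, defaulting to 'main' for the app container itself.
function mapJournalLine(line, appId) {
    var obj;
    try { obj = JSON.parse(line); } catch (e) { return undefined; }
    var source = obj.CONTAINER_NAME.slice(appId.length + 1);
    return JSON.stringify({
        realtimeTimestamp: obj.__REALTIME_TIMESTAMP,
        monotonicTimestamp: obj.__MONOTONIC_TIMESTAMP,
        message: obj.MESSAGE,
        source: source || 'main'
    }) + '\n';
}

var line = JSON.stringify({ __REALTIME_TIMESTAMP: '1', __MONOTONIC_TIMESTAMP: '2', CONTAINER_NAME: 'app1-redis', MESSAGE: 'ready' });
console.log(mapJournalLine(line, 'app1')); // source becomes 'redis'
```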
@@ -511,6 +554,7 @@ function restore(appId, callback) {
oldConfig: {
location: app.location,
accessRestriction: app.accessRestriction,
oauthProxy: app.oauthProxy,
portBindings: app.portBindings,
manifest: app.manifest
}
@@ -602,13 +646,13 @@ function exec(appId, options, callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var container = docker.getContainer(app.containerId);
var container = docker.connection.getContainer(app.containerId);
var execOptions = {
AttachStdin: true,
AttachStdout: true,
AttachStderr: true,
Tty: true,
Tty: options.tty,
Cmd: cmd
};
@@ -616,7 +660,7 @@ function exec(appId, options, callback) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var startOptions = {
Detach: false,
Tty: true,
Tty: options.tty,
stdin: true // this is a dockerode option that enables openStdin in the modem
};
exec.start(startOptions, function(error, stream) {
@@ -758,7 +802,8 @@ function backupApp(app, addonsToBackup, callback) {
manifest: app.manifest,
location: app.location,
portBindings: app.portBindings,
accessRestriction: app.accessRestriction
accessRestriction: app.accessRestriction,
oauthProxy: app.oauthProxy
};
backupFunction = createNewBackup.bind(null, app, addonsToBackup);
@@ -784,9 +829,9 @@ function backup(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
get(appId, function (error, app) {
if (error && error.reason === AppsError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
appdb.exists(appId, function (error, exists) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!exists) return callback(new AppsError(AppsError.NOT_FOUND));
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_BACKUP, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
@@ -799,13 +844,14 @@ function backup(appId, callback) {
});
}
function restoreApp(app, addonsToRestore, callback) {
function restoreApp(app, addonsToRestore, backupId, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof addonsToRestore, 'object');
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
assert(app.lastBackupId);
backups.getRestoreUrl(app.lastBackupId, function (error, result) {
backups.getRestoreUrl(backupId, function (error, result) {
if (error && error.reason == BackupsError.EXTERNAL_ERROR) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error.message));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
@@ -818,3 +864,31 @@ function restoreApp(app, addonsToRestore, callback) {
});
});
}
function listBackups(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
appdb.exists(appId, function (error, exists) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!exists) return callback(new AppsError(AppsError.NOT_FOUND));
// TODO pagination is not implemented in the backend yet
backups.getAllPaged(0, 1000, function (error, result) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var appBackups = [];
result.forEach(function (backup) {
appBackups = appBackups.concat(backup.dependsOn.filter(function (d) {
return d.indexOf('appbackup_' + appId) === 0;
}));
});
// alphabetic should be sufficient
appBackups.sort();
callback(null, appBackups);
});
});
}
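The filtering step in `listBackups` — picking an app's backup ids out of the `dependsOn` lists of box-level backups by prefix, then sorting lexicographically — can be sketched as a pure function (the sample ids below are hypothetical; the prefix convention `appbackup_<appId>` is from the diff):

```javascript
// Collect ids prefixed with 'appbackup_' + appId across all box
// backups; alphabetic sort is sufficient per the diff's comment.
function appBackupsFrom(allBackups, appId) {
    var appBackups = [];
    allBackups.forEach(function (backup) {
        appBackups = appBackups.concat(backup.dependsOn.filter(function (d) {
            return d.indexOf('appbackup_' + appId) === 0;
        }));
    });
    return appBackups.sort();
}

var backups = [
    { dependsOn: [ 'appbackup_app1_2016-01-02', 'appbackup_app2_2016-01-02' ] },
    { dependsOn: [ 'appbackup_app1_2016-01-01' ] }
];
console.log(appBackupsFrom(backups, 'app1'));
// [ 'appbackup_app1_2016-01-01', 'appbackup_app1_2016-01-02' ]
```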


@@ -9,7 +9,7 @@ exports = module.exports = {
startTask: startTask,
// exported for testing
_getFreePort: getFreePort,
_reserveHttpPort: reserveHttpPort,
_configureNginx: configureNginx,
_unconfigureNginx: unconfigureNginx,
_createVolume: createVolume,
@@ -19,7 +19,6 @@ exports = module.exports = {
_verifyManifest: verifyManifest,
_registerSubdomain: registerSubdomain,
_unregisterSubdomain: unregisterSubdomain,
_reloadNginx: reloadNginx,
_waitForDnsPropagation: waitForDnsPropagation
};
@@ -36,6 +35,7 @@ var addons = require('./addons.js'),
apps = require('./apps.js'),
assert = require('assert'),
async = require('async'),
certificates = require('./certificates.js'),
clientdb = require('./clientdb.js'),
config = require('./config.js'),
database = require('./database.js'),
@@ -47,249 +47,114 @@ var addons = require('./addons.js'),
hat = require('hat'),
manifestFormat = require('cloudron-manifestformat'),
net = require('net'),
os = require('os'),
nginx = require('./nginx.js'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
semver = require('semver'),
shell = require('./shell.js'),
SubdomainError = require('./subdomainerror.js'),
SubdomainError = require('./subdomains.js').SubdomainError,
subdomains = require('./subdomains.js'),
superagent = require('superagent'),
sysinfo = require('./sysinfo.js'),
util = require('util'),
uuid = require('node-uuid'),
vbox = require('./vbox.js'),
_ = require('underscore');
var NGINX_APPCONFIG_EJS = fs.readFileSync(__dirname + '/../setup/start/nginx/appconfig.ejs', { encoding: 'utf8' }),
COLLECTD_CONFIG_EJS = fs.readFileSync(__dirname + '/collectd.config.ejs', { encoding: 'utf8' }),
RELOAD_NGINX_CMD = path.join(__dirname, 'scripts/reloadnginx.sh'),
var COLLECTD_CONFIG_EJS = fs.readFileSync(__dirname + '/collectd.config.ejs', { encoding: 'utf8' }),
RELOAD_COLLECTD_CMD = path.join(__dirname, 'scripts/reloadcollectd.sh'),
RMAPPDIR_CMD = path.join(__dirname, 'scripts/rmappdir.sh'),
CREATEAPPDIR_CMD = path.join(__dirname, 'scripts/createappdir.sh');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
database.initialize(callback);
}
function debugApp(app, args) {
assert(!app || typeof app === 'object');
function debugApp(app) {
assert.strictEqual(typeof app, 'object');
var prefix = app ? (app.location || '(bare)') : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function targetBoxVersion(manifest) {
if ('targetBoxVersion' in manifest) return manifest.targetBoxVersion;
function reserveHttpPort(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
if ('minBoxVersion' in manifest) return manifest.minBoxVersion;
return '0.0.1';
}
// We expect conflicts to not happen despite closing the port (parallel app installs, app update does not reconfigure nginx etc)
// https://tools.ietf.org/html/rfc6056#section-3.5 says linux uses random ephemeral port allocation
function getFreePort(callback) {
var server = net.createServer();
server.listen(0, function () {
var port = server.address().port;
server.close(function () {
return callback(null, port);
updateApp(app, { httpPort: port }, function (error) {
if (error) {
server.close();
return callback(error);
}
server.close(callback);
});
});
}
function reloadNginx(callback) {
shell.sudo('reloadNginx', [ RELOAD_NGINX_CMD ], callback);
}
function configureNginx(app, callback) {
getFreePort(function (error, freePort) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
var vhost = config.appFqdn(app.location);
certificates.ensureCertificate(vhost, function (error, certFilePath, keyFilePath) {
if (error) return callback(error);
var sourceDir = path.resolve(__dirname, '..');
var endpoint = app.accessRestriction ? 'oauthproxy' : 'app';
var nginxConf = ejs.render(NGINX_APPCONFIG_EJS, { sourceDir: sourceDir, adminOrigin: config.adminOrigin(), vhost: config.appFqdn(app.location), port: freePort, endpoint: endpoint });
var nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, app.id + '.conf');
debugApp(app, 'writing config to %s', nginxConfigFilename);
if (!safe.fs.writeFileSync(nginxConfigFilename, nginxConf)) {
debugApp(app, 'Error creating nginx config : %s', safe.error.message);
return callback(safe.error);
}
async.series([
exports._reloadNginx,
updateApp.bind(null, app, { httpPort: freePort })
], callback);
vbox.forwardFromHostToVirtualBox(app.id + '-http', freePort);
nginx.configureApp(app, certFilePath, keyFilePath, callback);
});
}
function unconfigureNginx(app, callback) {
var nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, app.id + '.conf');
if (!safe.fs.unlinkSync(nginxConfigFilename)) {
debugApp(app, 'Error removing nginx configuration : %s', safe.error.message);
return callback(null);
}
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
exports._reloadNginx(callback);
vbox.unforwardFromHostToVirtualBox(app.id + '-http');
}
function pullImage(app, callback) {
docker.pull(app.manifest.dockerImage, function (err, stream) {
if (err) return callback(new Error('Error connecting to docker. statusCode: %s' + err.statusCode));
// https://github.com/dotcloud/docker/issues/1074 says each status message
// is emitted as a chunk
stream.on('data', function (chunk) {
var data = safe.JSON.parse(chunk) || { };
debugApp(app, 'downloadImage data: %j', data);
// The information here is useless because this is per layer as opposed to per image
if (data.status) {
debugApp(app, 'progress: %s', data.status); // progressDetail { current, total }
} else if (data.error) {
debugApp(app, 'error detail: %s', data.errorDetail.message);
}
});
stream.on('end', function () {
debugApp(app, 'download image successfully');
var image = docker.getImage(app.manifest.dockerImage);
image.inspect(function (err, data) {
if (err) return callback(new Error('Error inspecting image:' + err.message));
if (!data || !data.Config) return callback(new Error('Missing Config in image:' + JSON.stringify(data, null, 4)));
if (!data.Config.Entrypoint && !data.Config.Cmd) return callback(new Error('Only images with entry point are allowed'));
debugApp(app, 'This image exposes ports: %j', data.Config.ExposedPorts);
callback(null);
});
});
});
}
function downloadImage(app, callback) {
debugApp(app, 'downloadImage %s', app.manifest.dockerImage);
var attempt = 1;
async.retry({ times: 5, interval: 15000 }, function (retryCallback) {
debugApp(app, 'Downloading image. attempt: %s', attempt++);
pullImage(app, retryCallback);
}, callback);
// TODO: maybe revoke the cert
nginx.unconfigureApp(app, callback);
}
function createContainer(app, callback) {
appdb.getPortBindings(app.id, function (error, portBindings) {
if (error) return callback(error);
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
assert(!app.containerId); // otherwise, it will trigger volumeFrom
var manifest = app.manifest;
var exposedPorts = {};
var env = [];
debugApp(app, 'creating container');
// docker portBindings requires ports to be exposed
exposedPorts[manifest.httpPort + '/tcp'] = {};
docker.createContainer(app, function (error, container) {
if (error) return callback(new Error('Error creating container: ' + error));
for (var e in portBindings) {
var hostPort = portBindings[e];
var containerPort = manifest.tcpPorts[e].containerPort || hostPort;
exposedPorts[containerPort + '/tcp'] = {};
env.push(e + '=' + hostPort);
}
env.push('CLOUDRON=1');
env.push('WEBADMIN_ORIGIN' + '=' + config.adminOrigin());
env.push('API_ORIGIN' + '=' + config.adminOrigin());
addons.getEnvironment(app, function (error, addonEnv) {
if (error) return callback(new Error('Error getting addon env: ' + error));
var containerOptions = {
name: app.id,
Hostname: config.appFqdn(app.location),
Tty: true,
Image: app.manifest.dockerImage,
Cmd: null,
Env: env.concat(addonEnv),
ExposedPorts: exposedPorts,
Volumes: { // see also ReadonlyRootfs
'/tmp': {},
'/run': {},
'/var/log': {}
}
};
debugApp(app, 'Creating container for %s', app.manifest.dockerImage);
docker.createContainer(containerOptions, function (error, container) {
if (error) return callback(new Error('Error creating container: ' + error));
updateApp(app, { containerId: container.id }, callback);
});
});
updateApp(app, { containerId: container.id }, callback);
});
}
function deleteContainer(app, callback) {
if (app.containerId === null) return callback(null);
function deleteContainers(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
var container = docker.getContainer(app.containerId);
debugApp(app, 'deleting containers');
var removeOptions = {
force: true, // kill container if it's running
v: true // removes volumes associated with the container (but not host mounts)
};
docker.deleteContainers(app.id, function (error) {
if (error) return callback(new Error('Error deleting container: ' + error));
container.remove(removeOptions, function (error) {
if (error && error.statusCode === 404) return updateApp(app, { containerId: null }, callback);
if (error) debugApp(app, 'Error removing container', error);
callback(error);
});
}
function deleteImage(app, manifest, callback) {
var dockerImage = manifest ? manifest.dockerImage : null;
if (!dockerImage) return callback(null);
docker.getImage(dockerImage).inspect(function (error, result) {
if (error && error.statusCode === 404) return callback(null);
if (error) return callback(error);
var removeOptions = {
force: true,
noprune: false
};
// delete image by id because 'docker pull' pulls down all the tags and this is the only way to delete all tags
docker.getImage(result.Id).remove(removeOptions, function (error) {
if (error && error.statusCode === 404) return callback(null);
if (error && error.statusCode === 409) return callback(null); // another container using the image
if (error) debugApp(app, 'Error removing image', error);
callback(error);
});
updateApp(app, { containerId: null }, callback);
});
}
function createVolume(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
shell.sudo('createVolume', [ CREATEAPPDIR_CMD, app.id ], callback);
}
function deleteVolume(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
shell.sudo('deleteVolume', [ RMAPPDIR_CMD, app.id ], callback);
}
@@ -297,22 +162,21 @@ function allocateOAuthProxyCredentials(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
if (!app.accessRestriction) return callback(null);
if (!app.oauthProxy) return callback(null);
var appId = 'proxy-' + app.id;
var id = 'cid-proxy-' + uuid.v4();
var id = 'cid-' + uuid.v4();
var clientSecret = hat(256);
var redirectURI = 'https://' + config.appFqdn(app.location);
var scope = 'profile,' + app.accessRestriction;
var scope = 'profile';
clientdb.add(id, appId, clientSecret, redirectURI, scope, callback);
clientdb.add(id, app.id, clientdb.TYPE_PROXY, clientSecret, redirectURI, scope, callback);
}
function removeOAuthProxyCredentials(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
clientdb.delByAppId('proxy-' + app.id, function (error) {
clientdb.delByAppIdAndType(app.id, clientdb.TYPE_PROXY, function (error) {
if (error && error.reason !== DatabaseError.NOT_FOUND) {
debugApp(app, 'Error removing OAuth client id', error);
return callback(error);
@@ -323,6 +187,9 @@ function removeOAuthProxyCredentials(app, callback) {
}
function addCollectdProfile(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
var collectdConf = ejs.render(COLLECTD_CONFIG_EJS, { appId: app.id, containerId: app.containerId });
fs.writeFile(path.join(paths.COLLECTD_APPCONFIG_DIR, app.id + '.conf'), collectdConf, function (error) {
if (error) return callback(error);
@@ -331,94 +198,19 @@ function addCollectdProfile(app, callback) {
}
function removeCollectdProfile(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
fs.unlink(path.join(paths.COLLECTD_APPCONFIG_DIR, app.id + '.conf'), function (error) {
if (error && error.code !== 'ENOENT') debugApp(app, 'Error removing collectd profile', error);
shell.sudo('removeCollectdProfile', [ RELOAD_COLLECTD_CMD ], callback);
});
}
function startContainer(app, callback) {
appdb.getPortBindings(app.id, function (error, portBindings) {
if (error) return callback(error);
var manifest = app.manifest;
var dockerPortBindings = { };
var isMac = os.platform() === 'darwin';
// On Mac (boot2docker), we have to export the port to external world for port forwarding from Mac to work
dockerPortBindings[manifest.httpPort + '/tcp'] = [ { HostIp: isMac ? '0.0.0.0' : '127.0.0.1', HostPort: app.httpPort + '' } ];
for (var env in portBindings) {
var hostPort = portBindings[env];
var containerPort = manifest.tcpPorts[env].containerPort || hostPort;
dockerPortBindings[containerPort + '/tcp'] = [ { HostIp: '0.0.0.0', HostPort: hostPort + '' } ];
vbox.forwardFromHostToVirtualBox(app.id + '-tcp' + containerPort, hostPort);
}
var memoryLimit = manifest.memoryLimit || 1024 * 1024 * 200; // 200mb by default
var startOptions = {
Binds: addons.getBindsSync(app, app.manifest.addons),
Memory: memoryLimit / 2,
MemorySwap: memoryLimit, // Memory + Swap
PortBindings: dockerPortBindings,
PublishAllPorts: false,
ReadonlyRootfs: semver.gte(targetBoxVersion(app.manifest), '0.0.66'), // see also Volumes in startContainer
Links: addons.getLinksSync(app, app.manifest.addons),
RestartPolicy: {
"Name": "always",
"MaximumRetryCount": 0
},
CpuShares: 512, // relative to 1024 for system processes
SecurityOpt: config.CLOUDRON ? [ "apparmor:docker-cloudron-app" ] : null // profile available only on cloudron
};
var container = docker.getContainer(app.containerId);
debugApp(app, 'Starting container %s with options: %j', container.id, JSON.stringify(startOptions));
container.start(startOptions, function (error, data) {
if (error && error.statusCode !== 304) return callback(new Error('Error starting container:' + error));
return callback(null);
});
});
}
function stopContainer(app, callback) {
if (!app.containerId) {
debugApp(app, 'No previous container to stop');
return callback();
}
var container = docker.getContainer(app.containerId);
debugApp(app, 'Stopping container %s', container.id);
var options = {
t: 10 // wait for 10 seconds before killing it
};
container.stop(options, function (error) {
if (error && (error.statusCode !== 304 && error.statusCode !== 404)) return callback(new Error('Error stopping container:' + error));
var tcpPorts = safe.query(app, 'manifest.tcpPorts', { });
for (var containerPort in tcpPorts) {
vbox.unforwardFromHostToVirtualBox(app.id + '-tcp' + containerPort);
}
debugApp(app, 'Waiting for container ' + container.id);
container.wait(function (error, data) {
if (error && (error.statusCode !== 304 && error.statusCode !== 404)) return callback(new Error('Error waiting on container:' + error));
debugApp(app, 'Container stopped with status code [%s]', data ? String(data.StatusCode) : '');
return callback(null);
});
});
}
function verifyManifest(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Verifying manifest');
var manifest = app.manifest;
@@ -432,6 +224,9 @@ function verifyManifest(app, callback) {
}
function downloadIcon(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Downloading icon of %s@%s', app.appStoreId, app.manifest.version);
var iconUrl = config.apiServerOrigin() + '/api/v1/apps/' + app.appStoreId + '/versions/' + app.manifest.version + '/icon';
@@ -440,8 +235,8 @@ function downloadIcon(app, callback) {
.get(iconUrl)
.buffer(true)
.end(function (error, res) {
if (error) return callback(new Error('Error downloading icon:' + error.message));
if (res.status !== 200) return callback(null); // ignore error. this can also happen for apps installed with cloudron-cli
if (error && !error.response) return callback(new Error('Network error downloading icon:' + error.message));
if (res.statusCode !== 200) return callback(null); // ignore error. this can also happen for apps installed with cloudron-cli
if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, app.id + '.png'), res.body)) return callback(new Error('Error saving icon:' + safe.error.message));
@@ -450,50 +245,64 @@ function downloadIcon(app, callback) {
}
function registerSubdomain(app, callback) {
// even though the bare domain is already registered in the appstore, we still
// need to register it so that we have a dnsRecordId to wait for it to complete
var record = { subdomain: app.location, type: 'A', value: sysinfo.getIp() };
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
async.retry({ times: 200, interval: 5000 }, function (retryCallback) {
debugApp(app, 'Registering subdomain location [%s]', app.location);
sysinfo.getIp(function (error, ip) {
if (error) return callback(error);
subdomains.add(record, function (error, changeId) {
if (error && (error.reason === SubdomainError.STILL_BUSY || error.reason === SubdomainError.EXTERNAL_ERROR)) return retryCallback(error); // try again
// even though the bare domain is already registered in the appstore, we still
// need to register it so that we have a dnsRecordId to wait for it to complete
async.retry({ times: 200, interval: 5000 }, function (retryCallback) {
debugApp(app, 'Registering subdomain location [%s]', app.location);
retryCallback(null, error || changeId);
subdomains.add(app.location, 'A', [ ip ], function (error, changeId) {
if (error && (error.reason === SubdomainError.STILL_BUSY || error.reason === SubdomainError.EXTERNAL_ERROR)) return retryCallback(error); // try again
retryCallback(null, error || changeId);
});
}, function (error, result) {
if (error || result instanceof Error) return callback(error || result);
updateApp(app, { dnsRecordId: result }, callback);
});
}, function (error, result) {
if (error || result instanceof Error) return callback(error || result);
updateApp(app, { dnsRecordId: result }, callback);
});
}
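The retry loop in registerSubdomain above treats STILL_BUSY and EXTERNAL_ERROR as transient (passed to retryCallback's error slot, so async.retry tries again) while fatal errors are smuggled through the result slot to stop immediately. A minimal standalone sketch of that pattern, in plain Node with no async dependency (the retry helper and names here are illustrative, not Cloudron's):

```javascript
// Minimal retry: transient failures are retried up to `times` attempts;
// the final callback receives either the last error or the result.
function retry(times, task, callback) {
    task(function (error, result) {
        if (error && times > 1) return retry(times - 1, task, callback);
        callback(error, result);
    });
}

var attempts = 0;
retry(5, function (done) {
    ++attempts;
    // simulate a DNS backend that is busy twice, then succeeds
    if (attempts < 3) return done(new Error('STILL_BUSY'));
    done(null, 'change-id-42'); // hypothetical change id
}, function (error, changeId) {
    console.log(error ? 'failed' : 'ok:' + changeId + ':' + attempts);
});
```

The result-slot trick in the real code (`retryCallback(null, error || changeId)`) exists because async.retry only stops early on success; returning the fatal error as a "result" and checking `result instanceof Error` afterwards achieves an early abort.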
function unregisterSubdomain(app, location, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof callback, 'function');
// do not unregister bare domain because we show a error/cloudron info page there
if (location === '') {
debugApp(app, 'Skip unregister of empty subdomain');
return callback(null);
}
var record = { subdomain: location, type: 'A', value: sysinfo.getIp() };
sysinfo.getIp(function (error, ip) {
if (error) return callback(error);
async.retry({ times: 30, interval: 5000 }, function (retryCallback) {
debugApp(app, 'Unregistering subdomain: %s', location);
async.retry({ times: 30, interval: 5000 }, function (retryCallback) {
debugApp(app, 'Unregistering subdomain: %s', location);
subdomains.remove(record, function (error) {
if (error && (error.reason === SubdomainError.STILL_BUSY || error.reason === SubdomainError.EXTERNAL_ERROR))return retryCallback(error); // try again
subdomains.remove(location, 'A', [ ip ], function (error) {
if (error && (error.reason === SubdomainError.STILL_BUSY || error.reason === SubdomainError.EXTERNAL_ERROR)) return retryCallback(error); // try again
retryCallback(error);
retryCallback(null, error);
});
}, function (error, result) {
if (error || result instanceof Error) return callback(error || result);
updateApp(app, { dnsRecordId: null }, callback);
});
}, function (error) {
if (error) debugApp(app, 'Error unregistering subdomain: %s', error);
updateApp(app, { dnsRecordId: null }, callback);
});
}
function removeIcon(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
fs.unlink(path.join(paths.APPICONS_DIR, app.id + '.png'), function (error) {
if (error && error.code !== 'ENOENT') debugApp(app, 'cannot remove icon : %s', error);
callback(null);
@@ -501,6 +310,9 @@ function removeIcon(app, callback) {
}
function waitForDnsPropagation(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
if (!config.CLOUDRON) {
debugApp(app, 'Skipping dns propagation check for development');
return callback(null);
@@ -524,7 +336,11 @@ function waitForDnsPropagation(app, callback) {
// updates the app object and the database
function updateApp(app, values, callback) {
debugApp(app, 'installationState: %s progress: %s', app.installationState, app.installationProgress);
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof values, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'updating app with values: %j', values);
appdb.update(app.id, values, function (error) {
if (error) return callback(error);
@@ -548,23 +364,25 @@ function updateApp(app, values, callback) {
// - setup the container (requires image, volumes, addons)
// - setup collectd (requires container id)
function install(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
async.series([
verifyManifest.bind(null, app),
// teardown for re-installs
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
unconfigureNginx.bind(null, app),
removeCollectdProfile.bind(null, app),
stopApp.bind(null, app),
deleteContainer.bind(null, app),
deleteContainers.bind(null, app),
addons.teardownAddons.bind(null, app, app.manifest.addons),
deleteVolume.bind(null, app),
unregisterSubdomain.bind(null, app, app.location),
removeOAuthProxyCredentials.bind(null, app),
// removeIcon.bind(null, app), // do not remove icon for non-appstore installs
unconfigureNginx.bind(null, app),
updateApp.bind(null, app, { installationProgress: '15, Configure nginx' }),
configureNginx.bind(null, app),
reserveHttpPort.bind(null, app),
updateApp.bind(null, app, { installationProgress: '20, Downloading icon' }),
downloadIcon.bind(null, app),
@@ -576,7 +394,7 @@ function install(app, callback) {
registerSubdomain.bind(null, app),
updateApp.bind(null, app, { installationProgress: '40, Downloading image' }),
downloadImage.bind(null, app),
docker.downloadImage.bind(null, app.manifest),
updateApp.bind(null, app, { installationProgress: '50, Creating volume' }),
createVolume.bind(null, app),
@@ -595,6 +413,9 @@ function install(app, callback) {
updateApp.bind(null, app, { installationProgress: '90, Waiting for DNS propagation' }),
exports._waitForDnsPropagation.bind(null, app),
updateApp.bind(null, app, { installationProgress: '95, Configure nginx' }),
configureNginx.bind(null, app),
// done!
function (callback) {
debugApp(app, 'installed');
@@ -610,6 +431,9 @@ function install(app, callback) {
}
function backup(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
async.series([
updateApp.bind(null, app, { installationProgress: '10, Backing up' }),
apps.backupApp.bind(null, app, app.manifest.addons),
@@ -630,6 +454,9 @@ function backup(app, callback) {
// restore is also called for upgrades and infra updates. note that in those cases it is possible there is no backup
function restore(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
// we don't have a backup, same as re-install. this allows us to install from install failures (update failures always
// have a backupId)
if (!app.lastBackupId) {
@@ -637,25 +464,26 @@ function restore(app, callback) {
return install(app, callback);
}
var backupId = app.lastBackupId;
async.series([
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
unconfigureNginx.bind(null, app),
removeCollectdProfile.bind(null, app),
stopApp.bind(null, app),
deleteContainer.bind(null, app),
deleteContainers.bind(null, app),
// oldConfig can be null during upgrades
addons.teardownAddons.bind(null, app, app.oldConfig ? app.oldConfig.manifest.addons : null),
deleteVolume.bind(null, app),
function deleteImageIfChanged(done) {
if (!app.oldConfig || (app.oldConfig.manifest.dockerImage === app.manifest.dockerImage)) return done();
deleteImage(app, app.oldConfig.manifest, done);
docker.deleteImage(app.oldConfig.manifest, done);
},
removeOAuthProxyCredentials.bind(null, app),
removeIcon.bind(null, app),
unconfigureNginx.bind(null, app),
updateApp.bind(null, app, { installationProgress: '30, Configuring Nginx' }),
configureNginx.bind(null, app),
reserveHttpPort.bind(null, app),
updateApp.bind(null, app, { installationProgress: '40, Downloading icon' }),
downloadIcon.bind(null, app),
@@ -667,13 +495,13 @@ function restore(app, callback) {
registerSubdomain.bind(null, app),
updateApp.bind(null, app, { installationProgress: '60, Downloading image' }),
downloadImage.bind(null, app),
docker.downloadImage.bind(null, app.manifest),
updateApp.bind(null, app, { installationProgress: '65, Creating volume' }),
createVolume.bind(null, app),
updateApp.bind(null, app, { installationProgress: '70, Download backup and restore addons' }),
apps.restoreApp.bind(null, app, app.manifest.addons),
apps.restoreApp.bind(null, app, app.manifest.addons, backupId),
updateApp.bind(null, app, { installationProgress: '75, Creating container' }),
createContainer.bind(null, app),
@@ -686,6 +514,9 @@ function restore(app, callback) {
updateApp.bind(null, app, { installationProgress: '90, Waiting for DNS propagation' }),
exports._waitForDnsPropagation.bind(null, app),
updateApp.bind(null, app, { installationProgress: '95, Configuring Nginx' }),
configureNginx.bind(null, app),
// done!
function (callback) {
debugApp(app, 'restored');
@@ -703,21 +534,23 @@ function restore(app, callback) {
// note that configure is called after an infra update as well
function configure(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
async.series([
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
unconfigureNginx.bind(null, app),
removeCollectdProfile.bind(null, app),
stopApp.bind(null, app),
deleteContainer.bind(null, app),
deleteContainers.bind(null, app),
function (next) {
// oldConfig can be null during an infra update
if (!app.oldConfig || app.oldConfig.location === app.location) return next();
unregisterSubdomain(app, app.oldConfig.location, next);
},
removeOAuthProxyCredentials.bind(null, app),
unconfigureNginx.bind(null, app),
updateApp.bind(null, app, { installationProgress: '25, Configuring Nginx' }),
configureNginx.bind(null, app),
reserveHttpPort.bind(null, app),
updateApp.bind(null, app, { installationProgress: '30, Create OAuth proxy credentials' }),
allocateOAuthProxyCredentials.bind(null, app),
@@ -740,6 +573,9 @@ function configure(app, callback) {
updateApp.bind(null, app, { installationProgress: '80, Waiting for DNS propagation' }),
exports._waitForDnsPropagation.bind(null, app),
updateApp.bind(null, app, { installationProgress: '90, Configuring Nginx' }),
configureNginx.bind(null, app),
// done!
function (callback) {
debugApp(app, 'configured');
@@ -756,6 +592,9 @@ function configure(app, callback) {
// nginx configuration is skipped because app.httpPort is expected to be available
function update(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Updating to %s', safe.query(app, 'manifest.version'));
// app does not want these addons anymore
@@ -770,12 +609,12 @@ function update(app, callback) {
updateApp.bind(null, app, { installationProgress: '10, Cleaning up old install' }),
removeCollectdProfile.bind(null, app),
stopApp.bind(null, app),
deleteContainer.bind(null, app),
deleteContainers.bind(null, app),
addons.teardownAddons.bind(null, app, unusedAddons),
function deleteImageIfChanged(done) {
if (app.oldConfig.manifest.dockerImage === app.manifest.dockerImage) return done();
deleteImage(app, app.oldConfig.manifest, done);
docker.deleteImage(app.oldConfig.manifest, done);
},
// removeIcon.bind(null, app), // do not remove icon, otherwise the UI breaks for a short time...
@@ -792,7 +631,7 @@ function update(app, callback) {
downloadIcon.bind(null, app),
updateApp.bind(null, app, { installationProgress: '45, Downloading image' }),
downloadImage.bind(null, app),
docker.downloadImage.bind(null, app.manifest),
updateApp.bind(null, app, { installationProgress: '70, Updating addons' }),
addons.setupAddons.bind(null, app, app.manifest.addons),
@@ -820,6 +659,9 @@ function update(app, callback) {
}
function uninstall(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'uninstalling');
async.series([
@@ -830,7 +672,7 @@ function uninstall(app, callback) {
stopApp.bind(null, app),
updateApp.bind(null, app, { installationProgress: '20, Deleting container' }),
deleteContainer.bind(null, app),
deleteContainers.bind(null, app),
updateApp.bind(null, app, { installationProgress: '30, Teardown addons' }),
addons.teardownAddons.bind(null, app, app.manifest.addons),
@@ -839,7 +681,7 @@ function uninstall(app, callback) {
deleteVolume.bind(null, app),
updateApp.bind(null, app, { installationProgress: '50, Deleting image' }),
deleteImage.bind(null, app, app.manifest),
docker.deleteImage.bind(null, app.manifest),
updateApp.bind(null, app, { installationProgress: '60, Unregistering subdomain' }),
unregisterSubdomain.bind(null, app, app.location),
@@ -859,25 +701,31 @@ function uninstall(app, callback) {
}
function runApp(app, callback) {
startContainer(app, function (error) {
if (error) {
debugApp(app, 'Error starting container : %s', error);
return updateApp(app, { runState: appdb.RSTATE_ERROR }, callback);
}
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
docker.startContainer(app.containerId, function (error) {
if (error) return callback(error);
updateApp(app, { runState: appdb.RSTATE_RUNNING }, callback);
});
}
function stopApp(app, callback) {
stopContainer(app, function (error) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
docker.stopContainers(app.id, function (error) {
if (error) return callback(error);
updateApp(app, { runState: appdb.RSTATE_STOPPED }, callback);
updateApp(app, { runState: appdb.RSTATE_STOPPED, health: null }, callback);
});
}
function handleRunCommand(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
if (app.runState === appdb.RSTATE_PENDING_STOP) {
return stopApp(app, callback);
}
@@ -893,6 +741,9 @@ function handleRunCommand(app, callback) {
}
function startTask(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
// determine what to do
appdb.get(appId, function (error, app) {
if (error) return callback(error);
@@ -927,7 +778,7 @@ if (require.main === module) {
if (error) throw error;
startTask(process.argv[2], function (error) {
if (error) console.error(error);
if (error) debug('Apptask completed with error', error);
debug('Apptask completed for %s', process.argv[2]);
// https://nodejs.org/api/process.html are exit codes used by node. apps.js uses the value below
@@ -936,4 +787,3 @@ if (require.main === module) {
});
});
}


@@ -1,282 +0,0 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
getSignedUploadUrl: getSignedUploadUrl,
getSignedDownloadUrl: getSignedDownloadUrl,
addSubdomain: addSubdomain,
delSubdomain: delSubdomain,
getChangeStatus: getChangeStatus,
copyObject: copyObject
};
var assert = require('assert'),
AWS = require('aws-sdk'),
config = require('./config.js'),
debug = require('debug')('box:aws'),
SubdomainError = require('./subdomainerror.js'),
superagent = require('superagent');
function getAWSCredentials(callback) {
assert.strictEqual(typeof callback, 'function');
// CaaS
if (config.token()) {
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/awscredentials';
superagent.post(url).query({ token: config.token() }).end(function (error, result) {
if (error) return callback(error);
if (result.statusCode !== 201) return callback(new Error(result.text));
if (!result.body || !result.body.credentials) return callback(new Error('Unexpected response'));
var credentials = {
accessKeyId: result.body.credentials.AccessKeyId,
secretAccessKey: result.body.credentials.SecretAccessKey,
sessionToken: result.body.credentials.SessionToken,
region: 'us-east-1'
};
if (config.aws().endpoint) credentials.endpoint = new AWS.Endpoint(config.aws().endpoint);
callback(null, credentials);
});
} else {
if (!config.aws().accessKeyId || !config.aws().secretAccessKey) return callback(new SubdomainError(SubdomainError.MISSING_CREDENTIALS));
var credentials = {
accessKeyId: config.aws().accessKeyId,
secretAccessKey: config.aws().secretAccessKey,
region: 'us-east-1'
};
if (config.aws().endpoint) credentials.endpoint = new AWS.Endpoint(config.aws().endpoint);
callback(null, credentials);
}
}
function getSignedUploadUrl(filename, callback) {
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getSignedUploadUrl: %s', filename);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var s3 = new AWS.S3(credentials);
var params = {
Bucket: config.aws().backupBucket,
Key: config.aws().backupPrefix + '/' + filename,
Expires: 60 * 30 /* 30 minutes */
};
var url = s3.getSignedUrl('putObject', params);
callback(null, { url : url, sessionToken: credentials.sessionToken });
});
}
function getSignedDownloadUrl(filename, callback) {
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getSignedDownloadUrl: %s', filename);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var s3 = new AWS.S3(credentials);
var params = {
Bucket: config.aws().backupBucket,
Key: config.aws().backupPrefix + '/' + filename,
Expires: 60 * 30 /* 30 minutes */
};
var url = s3.getSignedUrl('getObject', params);
callback(null, { url: url, sessionToken: credentials.sessionToken });
});
}
function getZoneByName(zoneName, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getZoneByName: %s', zoneName);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.listHostedZones({}, function (error, result) {
if (error) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
var zone = result.HostedZones.filter(function (zone) {
return zone.Name.slice(0, -1) === zoneName; // aws zone name contains a '.' at the end
})[0];
if (!zone) return callback(new SubdomainError(SubdomainError.NOT_FOUND, 'no such zone'));
debug('getZoneByName: found zone', zone);
callback(null, zone);
});
});
}
function addSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('addSubdomain: ' + subdomain + ' for domain ' + zoneName + ' with value ' + value);
getZoneByName(zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var params = {
ChangeBatch: {
Changes: [{
Action: 'UPSERT',
ResourceRecordSet: {
Type: type,
Name: fqdn,
ResourceRecords: [{
Value: value
}],
Weight: 0,
SetIdentifier: fqdn,
TTL: 1
}
}]
},
HostedZoneId: zone.Id
};
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.code === 'PriorRequestNotComplete') {
return callback(new SubdomainError(SubdomainError.STILL_BUSY, error.message));
} else if (error) {
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
debug('addSubdomain: success. changeInfoId:%j', result);
callback(null, result.ChangeInfo.Id);
});
});
});
}
function delSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('delSubdomain: %s for domain %s.', subdomain, zoneName);
getZoneByName(zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var resourceRecordSet = {
Name: fqdn,
Type: type,
ResourceRecords: [{
Value: value
}],
Weight: 0,
SetIdentifier: fqdn,
TTL: 1
};
var params = {
ChangeBatch: {
Changes: [{
Action: 'DELETE',
ResourceRecordSet: resourceRecordSet
}]
},
HostedZoneId: zone.Id
};
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.message && error.message.indexOf('it was not found') !== -1) {
debug('delSubdomain: resource record set not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'NoSuchHostedZone') {
debug('delSubdomain: hosted zone not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'PriorRequestNotComplete') {
debug('delSubdomain: resource is still busy', error);
return callback(new SubdomainError(SubdomainError.STILL_BUSY, new Error(error)));
} else if (error && error.code === 'InvalidChangeBatch') {
debug('delSubdomain: invalid change batch. No such record to be deleted.');
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error) {
debug('delSubdomain: error', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
}
debug('delSubdomain: success');
callback(null);
});
});
});
}
function getChangeStatus(changeId, callback) {
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.getChange({ Id: changeId }, function (error, result) {
if (error) return callback(error);
callback(null, result.ChangeInfo.Status);
});
});
}
function copyObject(from, to, callback) {
assert.strictEqual(typeof from, 'string');
assert.strictEqual(typeof to, 'string');
assert.strictEqual(typeof callback, 'function');
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: config.aws().backupBucket, // target bucket
Key: config.aws().backupPrefix + '/' + to, // target file
CopySource: config.aws().backupBucket + '/' + config.aws().backupPrefix + '/' + from, // source
};
var s3 = new AWS.S3(credentials);
s3.copyObject(params, callback);
});
}
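getZoneByName above matches the requested zone against Route53's listing by stripping the trailing dot that AWS appends to hosted zone names. That matching step in isolation, as a pure function with illustrative zone data:

```javascript
// Route53 returns zone names with a trailing dot ('example.com.').
// This pure helper reproduces the filter used in getZoneByName above.
function findZone(hostedZones, zoneName) {
    return hostedZones.filter(function (zone) {
        return zone.Name.slice(0, -1) === zoneName;
    })[0] || null;
}

var zones = [ { Name: 'example.org.', Id: '/hostedzone/Z1' },
              { Name: 'example.com.', Id: '/hostedzone/Z2' } ];

var zone = findZone(zones, 'example.com');
console.log(zone ? zone.Id : 'not found'); // '/hostedzone/Z2'
```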


@@ -12,10 +12,11 @@ exports = module.exports = {
};
var assert = require('assert'),
aws = require('./aws.js'),
caas = require('./storage/caas.js'),
config = require('./config.js'),
debug = require('debug')('box:backups'),
superagent = require('superagent'),
s3 = require('./storage/s3.js'),
settings = require('./settings.js'),
util = require('util');
function BackupsError(reason, errorOrMessage) {
@@ -39,21 +40,30 @@ function BackupsError(reason, errorOrMessage) {
util.inherits(BackupsError, Error);
BackupsError.EXTERNAL_ERROR = 'external error';
BackupsError.INTERNAL_ERROR = 'internal error';
BackupsError.MISSING_CREDENTIALS = 'missing credentials';
// choose the storage backend; for test purposes we use s3
function api(provider) {
switch (provider) {
case 'caas': return caas;
case 's3': return s3;
default: return null;
}
}
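The api() switch above dispatches on the configured provider string and returns null for anything unrecognized, so callers fail fast on a bad backup config. The same dispatch as a table lookup, with stub backends standing in for ./storage/caas.js and ./storage/s3.js:

```javascript
// Provider dispatch as in api() above; the backend objects are stubs.
var backends = {
    caas: { name: 'caas' },
    s3: { name: 's3' }
};

function api(provider) {
    return backends[provider] || null;
}

console.log(api('s3').name); // 's3'
console.log(api('gcs'));     // null (unknown provider)
```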
function getAllPaged(page, perPage, callback) {
assert.strictEqual(typeof page, 'number');
assert.strictEqual(typeof perPage, 'number');
assert.strictEqual(typeof callback, 'function');
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/backups';
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
superagent.get(url).query({ token: config.token() }).end(function (error, result) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 200) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, result.text));
if (!result.body || !util.isArray(result.body.backups)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, 'Unexpected response'));
api(backupConfig.provider).getAllPaged(backupConfig, page, perPage, function (error, backups) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
// [ { creationTime, boxVersion, restoreKey, dependsOn: [ ] } ] sorted by time (latest first)
return callback(null, result.body.backups);
return callback(null, backups); // [ { creationTime, restoreKey } ] sorted by time (latest first)
});
});
}
@@ -68,19 +78,23 @@ function getBackupUrl(app, callback) {
filename = util.format('backup_%s-v%s.tar.gz', (new Date()).toISOString(), config.version());
}
aws.getSignedUploadUrl(filename, function (error, result) {
if (error) return callback(error);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
var obj = {
id: filename,
url: result.url,
sessionToken: result.sessionToken,
backupKey: config.backupKey()
};
api(backupConfig.provider).getSignedUploadUrl(backupConfig, filename, function (error, result) {
if (error) return callback(error);
debug('getBackupUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
var obj = {
id: filename,
url: result.url,
sessionToken: result.sessionToken,
backupKey: backupConfig.key
};
callback(null, obj);
debug('getBackupUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
callback(null, obj);
});
});
}
@@ -89,19 +103,23 @@ function getRestoreUrl(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
aws.getSignedDownloadUrl(backupId, function (error, result) {
if (error) return callback(error);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
var obj = {
id: backupId,
url: result.url,
sessionToken: result.sessionToken,
backupKey: config.backupKey()
};
api(backupConfig.provider).getSignedDownloadUrl(backupConfig, backupId, function (error, result) {
if (error) return callback(error);
debug('getRestoreUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
var obj = {
id: backupId,
url: result.url,
sessionToken: result.sessionToken,
backupKey: backupConfig.key
};
callback(null, obj);
debug('getRestoreUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
callback(null, obj);
});
});
}
@@ -111,9 +129,14 @@ function copyLastBackup(app, callback) {
assert.strictEqual(typeof callback, 'function');
var toFilename = util.format('appbackup_%s_%s-v%s.tar.gz', app.id, (new Date()).toISOString(), app.manifest.version);
aws.copyObject(app.lastBackupId, toFilename, function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
return callback(null, toFilename);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).copyObject(backupConfig, app.lastBackupId, toFilename, function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
return callback(null, toFilename);
});
});
}


@@ -1,90 +0,0 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
addSubdomain: addSubdomain,
delSubdomain: delSubdomain,
getChangeStatus: getChangeStatus
};
var assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:caas'),
SubdomainError = require('./subdomainerror.js'),
superagent = require('superagent'),
util = require('util');
function addSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
var fqdn = subdomain !== '' && type === 'TXT' ? subdomain + '.' + config.fqdn() : config.appFqdn(subdomain);
debug('addSubdomain: zoneName: %s subdomain: %s type: %s value: %s fqdn: %s', zoneName, subdomain, type, value, fqdn);
var data = {
type: type,
value: value
};
superagent
.post(config.apiServerOrigin() + '/api/v1/domains/' + fqdn)
.query({ token: config.token() })
.send(data)
.end(function (error, result) {
if (error) return callback(error);
if (result.status === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.status !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null, result.body.changeId);
});
}
function delSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('delSubdomain: %s for domain %s.', subdomain, zoneName);
var data = {
type: type,
value: value
};
superagent
.del(config.apiServerOrigin() + '/api/v1/domains/' + config.appFqdn(subdomain))
.query({ token: config.token() })
.send(data)
.end(function (error, result) {
if (error) return callback(error);
if (result.status === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.status !== 204) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null);
});
}
function getChangeStatus(changeId, callback) {
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
superagent
.get(config.apiServerOrigin() + '/api/v1/domains/' + config.fqdn() + '/status/' + changeId)
.query({ token: config.token() })
.end(function (error, result) {
if (error) return callback(error);
if (result.status !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null, result.body.status);
});
}
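A hypothetical helper, not part of the box code: it illustrates how a caller of getChangeStatus above might poll until the DNS backend reports the terminal 'INSYNC' state. `getStatus` and `waitForSync` are made-up names; `getStatus` stands in for `getChangeStatus.bind(null, changeId)`.

```javascript
// Sketch only: poll a status callback until it reports 'INSYNC'.
function waitForSync(getStatus, maxTries, callback) {
    var tries = 0;
    (function poll() {
        tries += 1;
        getStatus(function (error, status) {
            if (error) return callback(error);
            if (status === 'INSYNC') return callback(null, tries); // DNS change propagated
            if (tries >= maxTries) return callback(new Error('timeout'));
            poll(); // a real caller would wait between attempts (e.g. setTimeout)
        });
    })();
}
```

Used as, say, `waitForSync(getChangeStatus.bind(null, changeId), 10, done)`.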

src/cert/acme.js

@@ -0,0 +1,443 @@
/* jslint node:true */
'use strict';
var assert = require('assert'),
async = require('async'),
crypto = require('crypto'),
debug = require('debug')('box:cert/acme'),
fs = require('fs'),
path = require('path'),
paths = require('../paths.js'),
safe = require('safetydance'),
superagent = require('superagent'),
ursa = require('ursa'),
util = require('util'),
_ = require('underscore');
var CA_PROD = 'https://acme-v01.api.letsencrypt.org',
CA_STAGING = 'https://acme-staging.api.letsencrypt.org',
LE_AGREEMENT = 'https://letsencrypt.org/documents/LE-SA-v1.0.1-July-27-2015.pdf';
exports = module.exports = {
getCertificate: getCertificate
};
function AcmeError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
if (typeof errorOrMessage === 'undefined') {
this.message = reason;
} else if (typeof errorOrMessage === 'string') {
this.message = errorOrMessage;
} else {
this.message = 'Internal error';
this.nestedError = errorOrMessage;
}
}
util.inherits(AcmeError, Error);
AcmeError.INTERNAL_ERROR = 'Internal Error';
AcmeError.EXTERNAL_ERROR = 'External Error';
AcmeError.ALREADY_EXISTS = 'Already Exists';
AcmeError.NOT_COMPLETED = 'Not Completed';
AcmeError.FORBIDDEN = 'Forbidden';
// http://jose.readthedocs.org/en/latest/
// https://www.ietf.org/proceedings/92/slides/slides-92-acme-1.pdf
// https://community.letsencrypt.org/t/list-of-client-implementations/2103
function Acme(options) {
assert.strictEqual(typeof options, 'object');
this.caOrigin = options.prod ? CA_PROD : CA_STAGING;
this.accountKeyPem = null; // Buffer
this.email = options.email;
this.chainPem = options.prod ? safe.fs.readFileSync(__dirname + '/lets-encrypt-x1-cross-signed.pem.txt') : new Buffer('');
}
Acme.prototype.getNonce = function (callback) {
superagent.get(this.caOrigin + '/directory', function (error, response) {
if (error) return callback(error);
if (response.statusCode !== 200) return callback(new Error('Invalid response code when fetching nonce: ' + response.statusCode));
return callback(null, response.headers['Replay-Nonce'.toLowerCase()]);
});
};
// urlsafe base64 encoding (jose)
function urlBase64Encode(string) {
return string.replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '');
}
function b64(str) {
var buf = util.isBuffer(str) ? str : new Buffer(str);
return urlBase64Encode(buf.toString('base64'));
}
Acme.prototype.sendSignedRequest = function (url, payload, callback) {
assert.strictEqual(typeof url, 'string');
assert.strictEqual(typeof payload, 'string');
assert.strictEqual(typeof callback, 'function');
assert(util.isBuffer(this.accountKeyPem));
var privateKey = ursa.createPrivateKey(this.accountKeyPem);
var header = {
alg: 'RS256',
jwk: {
e: b64(privateKey.getExponent()),
kty: 'RSA',
n: b64(privateKey.getModulus())
}
};
var payload64 = b64(payload);
this.getNonce(function (error, nonce) {
if (error) return callback(error);
debug('sendSignedRequest: using nonce %s for url %s', nonce, url);
var protected64 = b64(JSON.stringify(_.extend({ }, header, { nonce: nonce })));
var signer = ursa.createSigner('sha256');
signer.update(protected64 + '.' + payload64, 'utf8');
var signature64 = urlBase64Encode(signer.sign(privateKey, 'base64'));
var data = {
header: header,
protected: protected64,
payload: payload64,
signature: signature64
};
superagent.post(url).set('Content-Type', 'application/x-www-form-urlencoded').send(JSON.stringify(data)).end(function (error, res) {
if (error && !error.response) return callback(error); // network errors
callback(null, res);
});
});
};
Acme.prototype.updateContact = function (registrationUri, callback) {
assert.strictEqual(typeof registrationUri, 'string');
assert.strictEqual(typeof callback, 'function');
debug('updateContact: %s %s', registrationUri, this.email);
// https://github.com/ietf-wg-acme/acme/issues/30
var payload = {
resource: 'reg',
contact: [ 'mailto:' + this.email ],
agreement: LE_AGREEMENT
};
var that = this;
this.sendSignedRequest(registrationUri, JSON.stringify(payload), function (error, result) {
if (error) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when registering user: ' + error.message));
if (result.statusCode !== 202) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to update contact. Expecting 202, got %s %s', result.statusCode, result.text)));
debug('updateContact: contact of user updated to %s', that.email);
callback();
});
};
Acme.prototype.registerUser = function (callback) {
assert.strictEqual(typeof callback, 'function');
var payload = {
resource: 'new-reg',
contact: [ 'mailto:' + this.email ],
agreement: LE_AGREEMENT
};
debug('registerUser: %s', this.email);
var that = this;
this.sendSignedRequest(this.caOrigin + '/acme/new-reg', JSON.stringify(payload), function (error, result) {
if (error) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when registering user: ' + error.message));
if (result.statusCode === 409) return that.updateContact(result.headers.location, callback); // already exists
if (result.statusCode !== 201) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to register user. Expecting 201, got %s %s', result.statusCode, result.text)));
debug('registerUser: registered user %s', that.email);
callback(null);
});
};
Acme.prototype.registerDomain = function (domain, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
var payload = {
resource: 'new-authz',
identifier: {
type: 'dns',
value: domain
}
};
debug('registerDomain: %s', domain);
this.sendSignedRequest(this.caOrigin + '/acme/new-authz', JSON.stringify(payload), function (error, result) {
if (error) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when registering domain: ' + error.message));
if (result.statusCode === 403) return callback(new AcmeError(AcmeError.FORBIDDEN, result.body.detail));
if (result.statusCode !== 201) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to register domain. Expecting 201, got %s %s', result.statusCode, result.text)));
debug('registerDomain: registered %s', domain);
callback(null, result.body);
});
};
Acme.prototype.prepareHttpChallenge = function (challenge, callback) {
assert.strictEqual(typeof challenge, 'object');
assert.strictEqual(typeof callback, 'function');
debug('prepareHttpChallenge: preparing for challenge %j', challenge);
var token = challenge.token;
assert(util.isBuffer(this.accountKeyPem));
var privateKey = ursa.createPrivateKey(this.accountKeyPem);
var jwk = {
e: b64(privateKey.getExponent()),
kty: 'RSA',
n: b64(privateKey.getModulus())
};
var shasum = crypto.createHash('sha256');
shasum.update(JSON.stringify(jwk));
var thumbprint = urlBase64Encode(shasum.digest('base64'));
var keyAuthorization = token + '.' + thumbprint;
debug('prepareHttpChallenge: writing %s to %s', keyAuthorization, path.join(paths.ACME_CHALLENGES_DIR, token));
fs.writeFile(path.join(paths.ACME_CHALLENGES_DIR, token), token + '.' + thumbprint, function (error) {
if (error) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, error));
callback();
});
};
Acme.prototype.notifyChallengeReady = function (challenge, callback) {
assert.strictEqual(typeof challenge, 'object');
assert.strictEqual(typeof callback, 'function');
debug('notifyChallengeReady: %s was met', challenge.uri);
var keyAuthorization = fs.readFileSync(path.join(paths.ACME_CHALLENGES_DIR, challenge.token), 'utf8');
var payload = {
resource: 'challenge',
keyAuthorization: keyAuthorization
};
this.sendSignedRequest(challenge.uri, JSON.stringify(payload), function (error, result) {
if (error) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when notifying challenge: ' + error.message));
if (result.statusCode !== 202) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to notify challenge. Expecting 202, got %s %s', result.statusCode, result.text)));
callback();
});
};
Acme.prototype.waitForChallenge = function (challenge, callback) {
assert.strictEqual(typeof challenge, 'object');
assert.strictEqual(typeof callback, 'function');
debug('waitForChallenge: %j', challenge);
async.retry({ times: 10, interval: 5000 }, function (retryCallback) {
debug('waitForChallenge: getting status');
superagent.get(challenge.uri).end(function (error, result) {
if (error && !error.response) {
debug('waitForChallenge: network error getting uri %s', challenge.uri);
return retryCallback(new AcmeError(AcmeError.EXTERNAL_ERROR, error.message)); // network error
}
if (result.statusCode !== 202) {
debug('waitForChallenge: invalid response code getting uri %s', result.statusCode);
return retryCallback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Bad response code: ' + result.statusCode));
}
debug('waitForChallenge: status is "%s"', result.body.status);
if (result.body.status === 'pending') return retryCallback(new AcmeError(AcmeError.NOT_COMPLETED));
else if (result.body.status === 'valid') return retryCallback();
else return retryCallback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Unexpected status: ' + result.body.status));
});
}, function retryFinished(error) {
// async.retry will pass 'undefined' as second arg making it unusable with async.waterfall()
callback(error);
});
};
// https://community.letsencrypt.org/t/public-beta-rate-limits/4772 for rate limits
Acme.prototype.signCertificate = function (domain, csrDer, callback) {
assert.strictEqual(typeof domain, 'string');
assert(util.isBuffer(csrDer));
assert.strictEqual(typeof callback, 'function');
var outdir = paths.APP_CERTS_DIR;
var payload = {
resource: 'new-cert',
csr: b64(csrDer)
};
debug('signCertificate: sending new-cert request');
this.sendSignedRequest(this.caOrigin + '/acme/new-cert', JSON.stringify(payload), function (error, result) {
if (error) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when signing certificate: ' + error.message));
// 429 means we reached the cert limit for this domain
if (result.statusCode !== 201) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to sign certificate. Expecting 201, got %s %s', result.statusCode, result.text)));
var certUrl = result.headers.location;
if (!certUrl) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Missing Location header in new-cert response'));
safe.fs.writeFileSync(path.join(outdir, domain + '.url'), certUrl, 'utf8'); // for renewal
return callback(null, result.headers.location);
});
};
Acme.prototype.createKeyAndCsr = function (domain, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
var outdir = paths.APP_CERTS_DIR;
var execSync = safe.child_process.execSync;
var privateKeyFile = path.join(outdir, domain + '.key');
var key = execSync('openssl genrsa 4096');
if (!key) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
if (!safe.fs.writeFileSync(privateKeyFile, key)) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
debug('createKeyAndCsr: key file saved at %s', privateKeyFile);
var csrDer = execSync(util.format('openssl req -new -key %s -outform DER -subj /CN=%s', privateKeyFile, domain));
if (!csrDer) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
var csrFile = path.join(outdir, domain + '.csr');
if (!safe.fs.writeFileSync(csrFile, csrDer)) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
debug('createKeyAndCsr: csr file (DER) saved at %s', csrFile);
callback(null, csrDer);
};
Acme.prototype.downloadCertificate = function (domain, certUrl, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof certUrl, 'string');
assert.strictEqual(typeof callback, 'function');
var outdir = paths.APP_CERTS_DIR;
var that = this;
superagent.get(certUrl).buffer().parse(function (res, done) {
var data = [ ];
res.on('data', function(chunk) { data.push(chunk); });
res.on('end', function () { res.text = Buffer.concat(data); done(); });
}).end(function (error, result) {
if (error && !error.response) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Network error when downloading certificate'));
if (result.statusCode === 202) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, 'Retry not implemented yet'));
if (result.statusCode !== 200) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to get cert. Expecting 200, got %s %s', result.statusCode, result.text)));
var certificateDer = result.text;
var execSync = safe.child_process.execSync;
safe.fs.writeFileSync(path.join(outdir, domain + '.der'), certificateDer);
debug('downloadCertificate: cert der file saved');
var certificatePem = execSync('openssl x509 -inform DER -outform PEM', { input: certificateDer }); // this is really just base64 encoding with header
if (!certificatePem) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
var certificateFile = path.join(outdir, domain + '.cert');
var fullChainPem = Buffer.concat([certificatePem, that.chainPem]);
if (!safe.fs.writeFileSync(certificateFile, fullChainPem)) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
debug('downloadCertificate: cert file saved at %s', certificateFile);
callback();
});
};
Acme.prototype.acmeFlow = function (domain, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
if (!fs.existsSync(paths.ACME_ACCOUNT_KEY_FILE)) {
debug('acmeFlow: generating acme account key on first run');
this.accountKeyPem = safe.child_process.execSync('openssl genrsa 4096');
if (!this.accountKeyPem) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
safe.fs.writeFileSync(paths.ACME_ACCOUNT_KEY_FILE, this.accountKeyPem);
} else {
debug('acmeFlow: using existing acme account key');
this.accountKeyPem = fs.readFileSync(paths.ACME_ACCOUNT_KEY_FILE);
}
var that = this;
this.registerUser(function (error) {
if (error) return callback(error);
that.registerDomain(domain, function (error, result) {
if (error) return callback(error);
debug('acmeFlow: challenges: %j', result);
var httpChallenges = result.challenges.filter(function(x) { return x.type === 'http-01'; });
if (httpChallenges.length === 0) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'no http challenges'));
var challenge = httpChallenges[0];
async.waterfall([
that.prepareHttpChallenge.bind(that, challenge),
that.notifyChallengeReady.bind(that, challenge),
that.waitForChallenge.bind(that, challenge),
that.createKeyAndCsr.bind(that, domain),
that.signCertificate.bind(that, domain),
that.downloadCertificate.bind(that, domain)
], callback);
});
});
};
Acme.prototype.getCertificate = function (domain, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
var outdir = paths.APP_CERTS_DIR;
var certUrl = safe.fs.readFileSync(path.join(outdir, domain + '.url'), 'utf8');
var certificateGetter;
if (certUrl) {
debug('getCertificate: renewing existing cert for %s from %s', domain, certUrl);
certificateGetter = this.downloadCertificate.bind(this, domain, certUrl);
} else {
debug('getCertificate: start acme flow for %s from %s', domain, this.caOrigin);
certificateGetter = this.acmeFlow.bind(this, domain);
}
certificateGetter(function (error) {
if (error) return callback(error);
callback(null, path.join(outdir, domain + '.cert'), path.join(outdir, domain + '.key'));
});
};
function getCertificate(domain, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var acme = new Acme(options || { });
acme.getCertificate(domain, callback);
}

src/cert/caas.js

@@ -0,0 +1,18 @@
'use strict';
exports = module.exports = {
getCertificate: getCertificate
};
var assert = require('assert'),
debug = require('debug')('box:cert/caas.js');
function getCertificate(domain, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debug('getCertificate: using fallback certificate for %s', domain);
return callback(null, 'cert/host.cert', 'cert/host.key');
}


@@ -0,0 +1,27 @@
-----BEGIN CERTIFICATE-----
MIIEqDCCA5CgAwIBAgIRAJgT9HUT5XULQ+dDHpceRL0wDQYJKoZIhvcNAQELBQAw
PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD
Ew5EU1QgUm9vdCBDQSBYMzAeFw0xNTEwMTkyMjMzMzZaFw0yMDEwMTkyMjMzMzZa
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAJzTDPBa5S5Ht3JdN4OzaGMw6tc1Jhkl4b2+NfFwki+3uEtB
BaupnjUIWOyxKsRohwuj43Xk5vOnYnG6eYFgH9eRmp/z0HhncchpDpWRz/7mmelg
PEjMfspNdxIknUcbWuu57B43ABycrHunBerOSuu9QeU2mLnL/W08lmjfIypCkAyG
dGfIf6WauFJhFBM/ZemCh8vb+g5W9oaJ84U/l4avsNwa72sNlRZ9xCugZbKZBDZ1
gGusSvMbkEl4L6KWTyogJSkExnTA0DHNjzE4lRa6qDO4Q/GxH8Mwf6J5MRM9LTb4
4/zyM2q5OTHFr8SNDR1kFjOq+oQpttQLwNh9w5MCAwEAAaOCAZIwggGOMBIGA1Ud
EwEB/wQIMAYBAf8CAQAwDgYDVR0PAQH/BAQDAgGGMH8GCCsGAQUFBwEBBHMwcTAy
BggrBgEFBQcwAYYmaHR0cDovL2lzcmcudHJ1c3RpZC5vY3NwLmlkZW50cnVzdC5j
b20wOwYIKwYBBQUHMAKGL2h0dHA6Ly9hcHBzLmlkZW50cnVzdC5jb20vcm9vdHMv
ZHN0cm9vdGNheDMucDdjMB8GA1UdIwQYMBaAFMSnsaR7LHH62+FLkHX/xBVghYkQ
MFQGA1UdIARNMEswCAYGZ4EMAQIBMD8GCysGAQQBgt8TAQEBMDAwLgYIKwYBBQUH
AgEWImh0dHA6Ly9jcHMucm9vdC14MS5sZXRzZW5jcnlwdC5vcmcwPAYDVR0fBDUw
MzAxoC+gLYYraHR0cDovL2NybC5pZGVudHJ1c3QuY29tL0RTVFJPT1RDQVgzQ1JM
LmNybDATBgNVHR4EDDAKoQgwBoIELm1pbDAdBgNVHQ4EFgQUqEpqYwR93brm0Tm3
pkVl7/Oo7KEwDQYJKoZIhvcNAQELBQADggEBANHIIkus7+MJiZZQsY14cCoBG1hd
v0J20/FyWo5ppnfjL78S2k4s2GLRJ7iD9ZDKErndvbNFGcsW+9kKK/TnY21hp4Dd
ITv8S9ZYQ7oaoqs7HwhEMY9sibED4aXw09xrJZTC9zK1uIfW6t5dHQjuOWv+HHoW
ZnupyxpsEUlEaFb+/SCI4KCSBdAsYxAcsHYI5xxEI4LutHp6s3OT2FuO90WfdsIk
6q78OMSdn875bNjdBYAqxUp2/LEIHfDBkLoQz0hFJmwAbYahqKaLn73PAAm1X2kj
f1w8DdnkabOLGeOVcj9LQ+s67vBykx4anTjURkbqZslUEUsn2k5xeua2zUk=
-----END CERTIFICATE-----

src/certificates.js

@@ -0,0 +1,259 @@
/* jslint node:true */
'use strict';
var acme = require('./cert/acme.js'),
assert = require('assert'),
async = require('async'),
caas = require('./cert/caas.js'),
cloudron = require('./cloudron.js'),
config = require('./config.js'),
constants = require('./constants.js'),
debug = require('debug')('box:src/certificates'),
fs = require('fs'),
nginx = require('./nginx.js'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
sysinfo = require('./sysinfo.js'),
user = require('./user.js'),
util = require('util'),
waitForDns = require('./waitfordns.js'),
x509 = require('x509');
exports = module.exports = {
installAdminCertificate: installAdminCertificate,
autoRenew: autoRenew,
setFallbackCertificate: setFallbackCertificate,
setAdminCertificate: setAdminCertificate,
CertificatesError: CertificatesError,
validateCertificate: validateCertificate,
ensureCertificate: ensureCertificate
};
var NOOP_CALLBACK = function (error) { if (error) debug(error); };
function CertificatesError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
if (typeof errorOrMessage === 'undefined') {
this.message = reason;
} else if (typeof errorOrMessage === 'string') {
this.message = errorOrMessage;
} else {
this.message = 'Internal error';
this.nestedError = errorOrMessage;
}
}
util.inherits(CertificatesError, Error);
CertificatesError.INTERNAL_ERROR = 'Internal Error';
CertificatesError.INVALID_CERT = 'Invalid certificate';
function getApi(callback) {
settings.getTlsConfig(function (error, tlsConfig) {
if (error) return callback(error);
var api = tlsConfig.provider === 'caas' ? caas : acme;
var options = { };
options.prod = tlsConfig.provider.match(/.*-prod/) !== null;
// registering user with an email requires A or MX record (https://github.com/letsencrypt/boulder/issues/1197)
// we cannot use admin@fqdn because the user might not have set it up.
// we simply update the account with the latest email we have each time when getting letsencrypt certs
// https://github.com/ietf-wg-acme/acme/issues/30
user.getOwner(function (error, owner) {
options.email = error ? 'admin@cloudron.io' : owner.email; // can error if not activated yet
callback(null, api, options);
});
});
}
function installAdminCertificate(callback) {
if (cloudron.isConfiguredSync()) return callback();
settings.getTlsConfig(function (error, tlsConfig) {
if (error) return callback(error);
if (tlsConfig.provider === 'caas') return callback();
sysinfo.getIp(function (error, ip) {
if (error) return callback(error);
waitForDns(config.adminFqdn(), ip, config.fqdn(), function (error) {
if (error) return callback(error); // this cannot happen because we retry forever
ensureCertificate(config.adminFqdn(), function (error, certFilePath, keyFilePath) {
if (error) { // currently, this can never happen
debug('Error obtaining certificate. Proceed anyway', error);
return callback();
}
nginx.configureAdmin(certFilePath, keyFilePath, callback);
});
});
});
});
}
function needsRenewalSync(certFilePath) {
var result = safe.child_process.execSync(util.format('openssl x509 -checkend %d -in %s', 60 * 60 * 24 * 5, certFilePath));
return result === null; // command errored
}
function autoRenew(callback) {
debug('autoRenew: Checking certificates for renewal');
callback = callback || NOOP_CALLBACK;
var filenames = safe.fs.readdirSync(paths.APP_CERTS_DIR);
if (!filenames) {
debug('autoRenew: Error getting filenames: %s', safe.error.message);
return;
}
var certs = filenames.filter(function (f) {
return f.match(/\.cert$/) !== null && needsRenewalSync(path.join(paths.APP_CERTS_DIR, f));
});
debug('autoRenew: %j needs to be renewed', certs);
getApi(function (error, api, apiOptions) {
if (error) return callback(error);
async.eachSeries(certs, function iterator(cert, iteratorCallback) {
var domain = cert.match(/^(.*)\.cert$/)[1];
if (domain === 'host') return iteratorCallback(); // cannot renew fallback cert
debug('autoRenew: renewing cert for %s with options %j', domain, apiOptions);
api.getCertificate(domain, apiOptions, function (error) {
if (error) debug('autoRenew: could not renew cert for %s', domain, error);
iteratorCallback(); // move on to next cert
});
});
});
}
// note: https://tools.ietf.org/html/rfc4346#section-7.4.2 (certificate_list) requires that the
// servers certificate appears first (and not the intermediate cert)
function validateCertificate(cert, key, fqdn) {
assert(cert === null || typeof cert === 'string');
assert(key === null || typeof key === 'string');
assert.strictEqual(typeof fqdn, 'string');
if (cert === null && key === null) return null;
if (!cert && key) return new Error('missing cert');
if (cert && !key) return new Error('missing key');
var content;
try {
content = x509.parseCert(cert);
} catch (e) {
return new Error('invalid cert: ' + e.message);
}
// check expiration
if (content.notAfter < new Date()) return new Error('cert expired');
function matchesDomain(domain) {
if (domain === fqdn) return true;
if (domain.indexOf('*') === 0 && domain.slice(2) === fqdn.slice(fqdn.indexOf('.') + 1)) return true;
return false;
}
// check domain
var domains = content.altNames.concat(content.subject.commonName);
if (!domains.some(matchesDomain)) return new Error(util.format('cert is not valid for this domain. Expecting %s in %j', fqdn, domains));
// http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#verify
var certModulus = safe.child_process.execSync('openssl x509 -noout -modulus', { encoding: 'utf8', input: cert });
var keyModulus = safe.child_process.execSync('openssl rsa -noout -modulus', { encoding: 'utf8', input: key });
if (certModulus !== keyModulus) return new Error('key does not match the cert');
return null;
}
function setFallbackCertificate(cert, key, callback) {
assert.strictEqual(typeof cert, 'string');
assert.strictEqual(typeof key, 'string');
assert.strictEqual(typeof callback, 'function');
var error = validateCertificate(cert, key, '*.' + config.fqdn());
if (error) return callback(new CertificatesError(CertificatesError.INVALID_CERT, error.message));
// backup the cert
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, 'host.cert'), cert)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, 'host.key'), key)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
// copy over fallback cert
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), cert)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), key)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
nginx.reload(function (error) {
if (error) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, error));
return callback(null);
});
}
function setAdminCertificate(cert, key, callback) {
assert.strictEqual(typeof cert, 'string');
assert.strictEqual(typeof key, 'string');
assert.strictEqual(typeof callback, 'function');
var vhost = config.appFqdn(constants.ADMIN_LOCATION);
var certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.key');
var error = validateCertificate(cert, key, vhost);
if (error) return callback(new CertificatesError(CertificatesError.INVALID_CERT, error.message));
// backup the cert
if (!safe.fs.writeFileSync(certFilePath, cert)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(keyFilePath, key)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
nginx.configureAdmin(certFilePath, keyFilePath, callback);
}
function ensureCertificate(domain, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
// check if user uploaded a specific cert. ideally, we should not mix user certs and automatic certs as we do here...
var userCertFilePath = path.join(paths.APP_CERTS_DIR, domain + '.cert');
var userKeyFilePath = path.join(paths.APP_CERTS_DIR, domain + '.key');
if (fs.existsSync(userCertFilePath) && fs.existsSync(userKeyFilePath)) {
debug('ensureCertificate: %s certificate already exists at %s', domain, userCertFilePath);
if (!needsRenewalSync(userCertFilePath)) return callback(null, userCertFilePath, userKeyFilePath);
debug('ensureCertificate: %s cert requires renewal', domain);
}
getApi(function (error, api, apiOptions) {
if (error) return callback(error);
debug('ensureCertificate: getting certificate for %s with options %j', domain, apiOptions);
api.getCertificate(domain, apiOptions, function (error, certFilePath, keyFilePath) {
if (error) {
debug('ensureCertificate: could not get certificate. using fallback certs', error);
return callback(null, 'cert/host.cert', 'cert/host.key'); // use fallback certs
}
callback(null, certFilePath, keyFilePath);
});
});
}
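The matchesDomain check inside validateCertificate above can be pulled out as a standalone function to make its wildcard rule visible. This is a hypothetical extract for illustration: a cert name matches either exactly, or as a single-level wildcard ('*.example.com' covers 'app.example.com' but not 'example.com' or 'a.b.example.com').

```javascript
function matchesDomain(domain, fqdn) {
    if (domain === fqdn) return true;
    // '*.example.com': strip the leading '*.' and compare against
    // everything after the fqdn's first label (exactly one label deep)
    if (domain.indexOf('*') === 0 && domain.slice(2) === fqdn.slice(fqdn.indexOf('.') + 1)) return true;
    return false;
}
```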


@@ -8,19 +8,27 @@ exports = module.exports = {
getAllWithTokenCountByIdentifier: getAllWithTokenCountByIdentifier,
add: add,
del: del,
update: update,
getByAppId: getByAppId,
-    delByAppId: delByAppId,
-    _clear: clear
+    getByAppIdAndType: getByAppIdAndType,
+    delByAppId: delByAppId,
+    delByAppIdAndType: delByAppIdAndType,
+    _clear: clear,
+    TYPE_EXTERNAL: 'external',
+    TYPE_OAUTH: 'addon-oauth',
+    TYPE_SIMPLE_AUTH: 'addon-simpleauth',
+    TYPE_PROXY: 'addon-proxy',
+    TYPE_ADMIN: 'admin'
};
var assert = require('assert'),
database = require('./database.js'),
DatabaseError = require('./databaseerror.js');
-var CLIENTS_FIELDS = [ 'id', 'appId', 'clientSecret', 'redirectURI', 'scope' ].join(',');
-var CLIENTS_FIELDS_PREFIXED = [ 'clients.id', 'clients.appId', 'clients.clientSecret', 'clients.redirectURI', 'clients.scope' ].join(',');
+var CLIENTS_FIELDS = [ 'id', 'appId', 'type', 'clientSecret', 'redirectURI', 'scope' ].join(',');
+var CLIENTS_FIELDS_PREFIXED = [ 'clients.id', 'clients.appId', 'clients.type', 'clients.clientSecret', 'clients.redirectURI', 'clients.scope' ].join(',');
function get(id, callback) {
assert.strictEqual(typeof id, 'string');
@@ -67,37 +75,33 @@ function getByAppId(appId, callback) {
});
}
-function add(id, appId, clientSecret, redirectURI, scope, callback) {
-    assert.strictEqual(typeof id, 'string');
+function getByAppIdAndType(appId, type, callback) {
     assert.strictEqual(typeof appId, 'string');
-    assert.strictEqual(typeof clientSecret, 'string');
-    assert.strictEqual(typeof redirectURI, 'string');
-    assert.strictEqual(typeof scope, 'string');
+    assert.strictEqual(typeof type, 'string');
     assert.strictEqual(typeof callback, 'function');
-    var data = [ id, appId, clientSecret, redirectURI, scope ];
-    database.query('INSERT INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (?, ?, ?, ?, ?)', data, function (error, result) {
-        if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
-        if (error || result.affectedRows === 0) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        callback(null);
+    database.query('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE appId = ? AND type = ? LIMIT 1', [ appId, type ], function (error, result) {
+        if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
+        if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        return callback(null, result[0]);
     });
 }
-function update(id, appId, clientSecret, redirectURI, scope, callback) {
+function add(id, appId, type, clientSecret, redirectURI, scope, callback) {
     assert.strictEqual(typeof id, 'string');
     assert.strictEqual(typeof appId, 'string');
+    assert.strictEqual(typeof type, 'string');
     assert.strictEqual(typeof clientSecret, 'string');
     assert.strictEqual(typeof redirectURI, 'string');
     assert.strictEqual(typeof scope, 'string');
     assert.strictEqual(typeof callback, 'function');
-    var data = [ appId, clientSecret, redirectURI, scope, id ];
-    database.query('UPDATE clients SET appId = ?, clientSecret = ?, redirectURI = ?, scope = ? WHERE id = ?', data, function (error, result) {
-        if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+    var data = [ id, appId, type, clientSecret, redirectURI, scope ];
+    database.query('INSERT INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (?, ?, ?, ?, ?, ?)', data, function (error, result) {
+        if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
+        if (error || result.affectedRows === 0) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
         callback(null);
     });
@@ -127,6 +131,19 @@ function delByAppId(appId, callback) {
});
}
function delByAppIdAndType(appId, type, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('DELETE FROM clients WHERE appId=? AND type=?', [ appId, type ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
return callback(null);
});
}
function clear(callback) {
assert.strictEqual(typeof callback, 'function');

@@ -5,7 +5,6 @@ exports = module.exports = {
add: add,
get: get,
update: update,
del: del,
getAllWithDetailsByUserId: getAllWithDetailsByUserId,
getClientTokensByUserId: getClientTokensByUserId,
@@ -43,6 +42,7 @@ function ClientsError(reason, errorOrMessage) {
}
util.inherits(ClientsError, Error);
ClientsError.INVALID_SCOPE = 'Invalid scope';
ClientsError.INVALID_CLIENT = 'Invalid client';
function validateScope(scope) {
assert.strictEqual(typeof scope, 'string');
@@ -55,8 +55,9 @@ function validateScope(scope) {
return null;
}
function add(appIdentifier, redirectURI, scope, callback) {
assert.strictEqual(typeof appIdentifier, 'string');
function add(appId, type, redirectURI, scope, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof redirectURI, 'string');
assert.strictEqual(typeof scope, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -67,12 +68,13 @@ function add(appIdentifier, redirectURI, scope, callback) {
var id = 'cid-' + uuid.v4();
var clientSecret = hat(256);
clientdb.add(id, appIdentifier, clientSecret, redirectURI, scope, function (error) {
clientdb.add(id, appId, type, clientSecret, redirectURI, scope, function (error) {
if (error) return callback(error);
var client = {
id: id,
appId: appIdentifier,
appId: appId,
type: type,
clientSecret: clientSecret,
redirectURI: redirectURI,
scope: scope
@@ -92,23 +94,6 @@ function get(id, callback) {
});
}
// we only allow appIdentifier and redirectURI to be updated
function update(id, appIdentifier, redirectURI, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof appIdentifier, 'string');
assert.strictEqual(typeof redirectURI, 'string');
assert.strictEqual(typeof callback, 'function');
clientdb.get(id, function (error, result) {
if (error) return callback(error);
clientdb.update(id, appIdentifier, result.clientSecret, redirectURI, result.scope, function (error, result) {
if (error) return callback(error);
callback(null, result);
});
});
}
function del(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -127,54 +112,29 @@ function getAllWithDetailsByUserId(userId, callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, []);
if (error) return callback(error);
// We have several types of records here
// 1) webadmin has an app id of 'webadmin'
// 2) oauth proxy records are always the app id prefixed with 'proxy-'
// 3) addon oauth records for apps prefixed with 'addon-'
// 4) external app records prefixed with 'external-'
// 5) normal apps on the cloudron without a prefix
var tmp = [];
async.each(results, function (record, callback) {
if (record.appId === constants.ADMIN_CLIENT_ID) {
if (record.type === clientdb.TYPE_ADMIN) {
record.name = constants.ADMIN_NAME;
record.location = constants.ADMIN_LOCATION;
record.type = 'webadmin';
tmp.push(record);
return callback(null);
} else if (record.appId === constants.TEST_CLIENT_ID) {
record.name = constants.TEST_NAME;
record.location = constants.TEST_LOCATION;
record.type = 'test';
tmp.push(record);
return callback(null);
}
var appId = record.appId;
var type = 'app';
// Handle our different types of oauth clients
if (record.appId.indexOf('addon-') === 0) {
appId = record.appId.slice('addon-'.length);
type = 'addon';
} else if (record.appId.indexOf('proxy-') === 0) {
appId = record.appId.slice('proxy-'.length);
type = 'proxy';
}
appdb.get(appId, function (error, result) {
appdb.get(record.appId, function (error, result) {
if (error) {
console.error('Failed to get app details for oauth client', result, error);
return callback(null); // ignore error so we continue listing clients
}
record.name = result.manifest.title + (record.appId.indexOf('proxy-') === 0 ? 'OAuth Proxy' : '');
if (record.type === clientdb.TYPE_PROXY) record.name = result.manifest.title + ' Website Proxy';
if (record.type === clientdb.TYPE_OAUTH) record.name = result.manifest.title + ' OAuth';
if (record.type === clientdb.TYPE_SIMPLE_AUTH) record.name = result.manifest.title + ' Simple Auth';
if (record.type === clientdb.TYPE_EXTERNAL) record.name = result.manifest.title + ' external';
record.location = result.location;
record.type = type;
tmp.push(record);

@@ -11,15 +11,24 @@ exports = module.exports = {
getConfig: getConfig,
getStatus: getStatus,
setCertificate: setCertificate,
sendHeartbeat: sendHeartbeat,
updateToLatest: updateToLatest,
update: update,
reboot: reboot,
migrate: migrate,
backup: backup,
ensureBackup: ensureBackup
retire: retire,
ensureBackup: ensureBackup,
isConfiguredSync: isConfiguredSync,
checkDiskSpace: checkDiskSpace,
events: new (require('events').EventEmitter)(),
EVENT_ACTIVATED: 'activated',
EVENT_CONFIGURED: 'configured'
};
var apps = require('./apps.js'),
@@ -31,14 +40,16 @@ var apps = require('./apps.js'),
clientdb = require('./clientdb.js'),
config = require('./config.js'),
debug = require('debug')('box:cloudron'),
df = require('node-df'),
fs = require('fs'),
locker = require('./locker.js'),
mailer = require('./mailer.js'),
os = require('os'),
path = require('path'),
paths = require('./paths.js'),
progress = require('./progress.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
SettingsError = settings.SettingsError,
shell = require('./shell.js'),
subdomains = require('./subdomains.js'),
superagent = require('superagent'),
@@ -51,14 +62,17 @@ var apps = require('./apps.js'),
util = require('util'),
webhooks = require('./webhooks.js');
var RELOAD_NGINX_CMD = path.join(__dirname, 'scripts/reloadnginx.sh'),
REBOOT_CMD = path.join(__dirname, 'scripts/reboot.sh'),
var REBOOT_CMD = path.join(__dirname, 'scripts/reboot.sh'),
BACKUP_BOX_CMD = path.join(__dirname, 'scripts/backupbox.sh'),
BACKUP_SWAP_CMD = path.join(__dirname, 'scripts/backupswap.sh'),
INSTALLER_UPDATE_URL = 'http://127.0.0.1:2020/api/v1/installer/update';
INSTALLER_UPDATE_URL = 'http://127.0.0.1:2020/api/v1/installer/update',
RETIRE_CMD = path.join(__dirname, 'scripts/retire.sh');
var gAddDnsRecordsTimerId = null,
gCloudronDetails = null; // cached cloudron details like region,size...
var NOOP_CALLBACK = function (error) { if (error) debug(error); };
var gUpdatingDns = false, // flag for dns update reentrancy
gCloudronDetails = null, // cached cloudron details like region,size...
gIsConfigured = null; // cached configured state so that return value is synchronous. null means we are not initialized yet
function debugApp(app, args) {
assert(!app || typeof app === 'object');
@@ -76,7 +90,6 @@ function ignoreError(func) {
};
}
function CloudronError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
@@ -105,27 +118,67 @@ CloudronError.BAD_EMAIL = 'Bad email';
CloudronError.BAD_PASSWORD = 'Bad password';
CloudronError.BAD_NAME = 'Bad name';
CloudronError.BAD_STATE = 'Bad state';
CloudronError.ALREADY_UPTODATE = 'No Update Available';
CloudronError.NOT_FOUND = 'Not found';
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
if (process.env.BOX_ENV !== 'test') {
addDnsRecords();
}
exports.events.on(exports.EVENT_CONFIGURED, addDnsRecords);
callback(null);
syncConfigState(callback);
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
clearTimeout(gAddDnsRecordsTimerId);
gAddDnsRecordsTimerId = null;
exports.events.removeListener(exports.EVENT_CONFIGURED, addDnsRecords);
callback(null);
}
function isConfiguredSync() {
return gIsConfigured === true;
}
function isConfigured(callback) {
// set of rules to see if we have the configs required for cloudron to function
// note this checks for missing configs and not invalid configs
settings.getDnsConfig(function (error, dnsConfig) {
if (error) return callback(error);
if (!dnsConfig) return callback(null, false);
var isConfigured = (config.isCustomDomain() && dnsConfig.provider === 'route53') ||
(!config.isCustomDomain() && dnsConfig.provider === 'caas');
callback(null, isConfigured);
});
}
function syncConfigState(callback) {
assert(!gIsConfigured);
callback = callback || NOOP_CALLBACK;
isConfigured(function (error, configured) {
if (error) return callback(error);
debug('syncConfigState: configured = %s', configured);
if (configured) {
exports.events.emit(exports.EVENT_CONFIGURED);
} else {
settings.events.once(settings.DNS_CONFIG_KEY, function () { syncConfigState(); }); // check again later
}
gIsConfigured = configured;
callback();
});
}
function setTimeZone(ip, callback) {
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -133,8 +186,8 @@ function setTimeZone(ip, callback) {
debug('setTimeZone ip:%s', ip);
superagent.get('http://www.telize.com/geoip/' + ip).end(function (error, result) {
if (error || result.statusCode !== 200) {
debug('Failed to get geo location', error);
if ((error && !error.response) || result.statusCode !== 200) {
debug('Failed to get geo location: %s', error ? error.message : result.statusCode);
return callback(null);
}
@@ -149,43 +202,39 @@ function setTimeZone(ip, callback) {
});
}
function activate(username, password, email, name, ip, callback) {
function activate(username, password, email, displayName, ip, callback) {
assert.strictEqual(typeof username, 'string');
assert.strictEqual(typeof password, 'string');
assert.strictEqual(typeof email, 'string');
assert.strictEqual(typeof displayName, 'string');
assert.strictEqual(typeof ip, 'string');
assert(!name || typeof name === 'string');
assert.strictEqual(typeof callback, 'function');
debug('activating user:%s email:%s', username, email);
setTimeZone(ip, function () { }); // TODO: get this from user. note that timezone is detected based on the browser location and not the cloudron region
if (!name) name = settings.getDefaultSync(settings.CLOUDRON_NAME_KEY);
settings.setCloudronName(name, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return callback(new CloudronError(CloudronError.BAD_NAME));
user.createOwner(username, password, email, displayName, function (error, userObject) {
if (error && error.reason === UserError.ALREADY_EXISTS) return callback(new CloudronError(CloudronError.ALREADY_PROVISIONED));
if (error && error.reason === UserError.BAD_USERNAME) return callback(new CloudronError(CloudronError.BAD_USERNAME));
if (error && error.reason === UserError.BAD_PASSWORD) return callback(new CloudronError(CloudronError.BAD_PASSWORD));
if (error && error.reason === UserError.BAD_EMAIL) return callback(new CloudronError(CloudronError.BAD_EMAIL));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
user.createOwner(username, password, email, function (error, userObject) {
if (error && error.reason === UserError.ALREADY_EXISTS) return callback(new CloudronError(CloudronError.ALREADY_PROVISIONED));
if (error && error.reason === UserError.BAD_USERNAME) return callback(new CloudronError(CloudronError.BAD_USERNAME));
if (error && error.reason === UserError.BAD_PASSWORD) return callback(new CloudronError(CloudronError.BAD_PASSWORD));
if (error && error.reason === UserError.BAD_EMAIL) return callback(new CloudronError(CloudronError.BAD_EMAIL));
clientdb.getByAppIdAndType('webadmin', clientdb.TYPE_ADMIN, function (error, result) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
clientdb.getByAppId('webadmin', function (error, result) {
// Also generate a token so the admin creation can also act as a login
var token = tokendb.generateToken();
var expires = Date.now() + 24 * 60 * 60 * 1000; // 1 day
tokendb.add(token, tokendb.PREFIX_USER + userObject.id, result.id, expires, '*', function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
// Also generate a token so the admin creation can also act as a login
var token = tokendb.generateToken();
var expires = Date.now() + 24 * 60 * 60 * 1000; // 1 day
// EventEmitter API is sync. do not keep the REST API response waiting
process.nextTick(function () { exports.events.emit(exports.EVENT_ACTIVATED); });
tokendb.add(token, tokendb.PREFIX_USER + userObject.id, result.id, expires, '*', function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, { token: token, expires: expires });
});
callback(null, { token: token, expires: expires });
});
});
});
@@ -203,6 +252,9 @@ function getStatus(callback) {
callback(null, {
activated: count !== 0,
version: config.version(),
boxVersionsUrl: config.get('boxVersionsUrl'),
apiServerOrigin: config.apiServerOrigin(), // used by CaaS tool
provider: config.provider(),
cloudronName: cloudronName
});
});
@@ -214,12 +266,21 @@ function getCloudronDetails(callback) {
if (gCloudronDetails) return callback(null, gCloudronDetails);
if (!config.token()) {
gCloudronDetails = {
region: null,
size: null
};
return callback(null, gCloudronDetails);
}
superagent
.get(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn())
.query({ token: config.token() })
.end(function (error, result) {
if (error) return callback(error);
if (result.status !== 200) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
gCloudronDetails = result.body.box;
@@ -230,10 +291,9 @@ function getCloudronDetails(callback) {
function getConfig(callback) {
assert.strictEqual(typeof callback, 'function');
// TODO avoid pyramid of awesomeness with async
getCloudronDetails(function (error, result) {
if (error) {
console.error('Failed to fetch cloudron details.', error);
debug('Failed to fetch cloudron details.', error);
// set fallback values to avoid dependency on appstore
result = {
@@ -248,20 +308,26 @@ function getConfig(callback) {
settings.getDeveloperMode(function (error, developerMode) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, {
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
isDev: config.isDev(),
fqdn: config.fqdn(),
ip: sysinfo.getIp(),
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
isCustomDomain: config.isCustomDomain(),
developerMode: developerMode,
region: result.region,
size: result.size,
cloudronName: cloudronName
sysinfo.getIp(function (error, ip) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, {
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
isDev: config.isDev(),
fqdn: config.fqdn(),
ip: ip,
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
isCustomDomain: config.isCustomDomain(),
developerMode: developerMode,
region: result.region,
size: result.size,
memory: os.totalmem(),
provider: config.provider(),
cloudronName: cloudronName
});
});
});
});
@@ -269,104 +335,124 @@ function getConfig(callback) {
}
function sendHeartbeat() {
// Only send heartbeats after the admin dns record is synced to give appstore a chance to know that fact
if (!config.get('dnsInSync')) return;
if (!config.token()) return;
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/heartbeat';
superagent.post(url).query({ token: config.token(), version: config.version() }).timeout(10000).end(function (error, result) {
if (error) debug('Error sending heartbeat.', error);
if (error && !error.response) debug('Network error sending heartbeat.', error);
else if (result.statusCode !== 200) debug('Server responded to heartbeat with %s %s', result.statusCode, result.text);
else debug('Heartbeat sent to %s', url);
});
}
function addDnsRecords() {
if (config.get('dnsInSync')) return sendHeartbeat(); // already registered send heartbeat
var DKIM_SELECTOR = 'mail';
var DMARC_REPORT_EMAIL = 'dmarc-report@cloudron.io';
function readDkimPublicKeySync() {
var dkimPublicKeyFile = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn() + '/public');
var publicKey = safe.fs.readFileSync(dkimPublicKeyFile, 'utf8');
if (publicKey === null) {
console.error('Error reading dkim public key. Stop DNS setup.');
return;
debug('Error reading dkim public key.', safe.error);
return null;
}
// remove header, footer and new lines
publicKey = publicKey.split('\n').slice(1, -2).join('');
// note that dmarc requires special DNS records for external RUF and RUA
var records = [
// naked domain
{ subdomain: '', type: 'A', value: sysinfo.getIp() },
// webadmin domain
{ subdomain: 'my', type: 'A', value: sysinfo.getIp() },
// softfail all mails not from our IP. Note that this uses IP instead of 'a' should we use a load balancer in the future
{ subdomain: '', type: 'TXT', value: '"v=spf1 ip4:' + sysinfo.getIp() + ' ~all"' },
// t=s limits the domainkey to this domain and not its subdomains
{ subdomain: DKIM_SELECTOR + '._domainkey', type: 'TXT', value: '"v=DKIM1; t=s; p=' + publicKey + '"' },
// DMARC requires special setup if report email id is in different domain
{ subdomain: '_dmarc', type: 'TXT', value: '"v=DMARC1; p=none; pct=100; rua=mailto:' + DMARC_REPORT_EMAIL + '; ruf=' + DMARC_REPORT_EMAIL + '"' }
];
return publicKey;
}
debug('addDnsRecords:', records);
function txtRecordsWithSpf(callback) {
assert.strictEqual(typeof callback, 'function');
subdomains.addMany(records, function (error, changeIds) {
if (error) {
console.error('Admin DNS record addition failed', error);
gAddDnsRecordsTimerId = setTimeout(addDnsRecords, 10000);
return;
subdomains.get('', 'TXT', function (error, txtRecords) {
if (error) return callback(error);
debug('txtRecordsWithSpf: current txt records - %j', txtRecords);
var i, validSpf;
for (i = 0; i < txtRecords.length; i++) {
if (txtRecords[i].indexOf('"v=spf1 ') !== 0) continue; // not SPF
validSpf = txtRecords[i].indexOf(' a:' + config.fqdn() + ' ') !== -1;
break;
}
function checkIfInSync() {
debug('addDnsRecords: Check if admin DNS record is in sync.');
if (validSpf) return callback(null, null);
async.eachSeries(changeIds, function (changeId, callback) {
subdomains.status(changeId, function (error, result) {
if (error) return callback(new Error('Failed to check if admin DNS record is in sync.', error));
if (result !== 'done') return callback(new Error(changeId + ' is not in sync. result:' + result));
callback(null);
});
}, function (error) {
if (error) {
console.error(error);
gAddDnsRecordsTimerId = setTimeout(checkIfInSync, 5000);
return;
}
debug('addDnsRecords: done');
config.set('dnsInSync', true);
sendHeartbeat(); // send heartbeat after the dns records are done
});
if (i == txtRecords.length) {
txtRecords[i] = '"v=spf1 a:' + config.fqdn() + ' ~all"';
} else {
txtRecords[i] = '"v=spf1 a:' + config.fqdn() + ' ' + txtRecords[i].slice('"v=spf1 '.length);
}
checkIfInSync();
return callback(null, txtRecords);
});
}
function setCertificate(certificate, key, callback) {
assert.strictEqual(typeof certificate, 'string');
assert.strictEqual(typeof key, 'string');
assert.strictEqual(typeof callback, 'function');
function addDnsRecords() {
var callback = NOOP_CALLBACK;
debug('Updating certificates');
if (process.env.BOX_ENV === 'test') return callback();
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), certificate)) {
return callback(new CloudronError(CloudronError.INTERNAL_ERROR, safe.error.message));
if (gUpdatingDns) {
debug('addDnsRecords: dns update already in progress');
return callback();
}
gUpdatingDns = true;
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), key)) {
return callback(new CloudronError(CloudronError.INTERNAL_ERROR, safe.error.message));
}
var DKIM_SELECTOR = 'cloudron';
var DMARC_REPORT_EMAIL = 'dmarc-report@cloudron.io';
shell.sudo('setCertificate', [ RELOAD_NGINX_CMD ], function (error) {
var dkimKey = readDkimPublicKeySync();
if (!dkimKey) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, new Error('internal error failed to read dkim public key')));
sysinfo.getIp(function (error, ip) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
return callback(null);
var nakedDomainRecord = { subdomain: '', type: 'A', values: [ ip ] };
var webadminRecord = { subdomain: 'my', type: 'A', values: [ ip ] };
// t=s limits the domainkey to this domain and not its subdomains
var dkimRecord = { subdomain: DKIM_SELECTOR + '._domainkey', type: 'TXT', values: [ '"v=DKIM1; t=s; p=' + dkimKey + '"' ] };
// DMARC requires special setup if report email id is in different domain
var dmarcRecord = { subdomain: '_dmarc', type: 'TXT', values: [ '"v=DMARC1; p=none; pct=100; rua=mailto:' + DMARC_REPORT_EMAIL + '; ruf=' + DMARC_REPORT_EMAIL + '"' ] };
var records = [ ];
if (config.isCustomDomain()) {
records.push(webadminRecord);
records.push(dkimRecord);
} else {
records.push(nakedDomainRecord);
records.push(webadminRecord);
records.push(dkimRecord);
records.push(dmarcRecord);
}
debug('addDnsRecords: %j', records);
async.retry({ times: 10, interval: 20000 }, function (retryCallback) {
txtRecordsWithSpf(function (error, txtRecords) {
if (error) return retryCallback(error);
if (txtRecords) records.push({ subdomain: '', type: 'TXT', values: txtRecords });
debug('addDnsRecords: will update %j', records);
async.mapSeries(records, function (record, iteratorCallback) {
subdomains.update(record.subdomain, record.type, record.values, iteratorCallback);
}, function (error, changeIds) {
if (error) debug('addDnsRecords: failed to update : %s. will retry', error);
else debug('addDnsRecords: records %j added with changeIds %j', records, changeIds);
retryCallback(error);
});
});
}, function (error) {
gUpdatingDns = false;
debug('addDnsRecords: done updating records with error:', error);
callback(error);
});
});
}
@@ -405,10 +491,10 @@ function migrate(size, region, callback) {
.query({ token: config.token() })
.send({ size: size, region: region, restoreKey: restoreKey })
.end(function (error, result) {
if (error) return unlock(error);
if (result.status === 409) return unlock(new CloudronError(CloudronError.BAD_STATE));
if (result.status === 404) return unlock(new CloudronError(CloudronError.NOT_FOUND));
if (result.status !== 202) return unlock(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
if (error && !error.response) return unlock(error);
if (result.statusCode === 409) return unlock(new CloudronError(CloudronError.BAD_STATE));
if (result.statusCode === 404) return unlock(new CloudronError(CloudronError.NOT_FOUND));
if (result.statusCode !== 202) return unlock(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return unlock(null);
});
@@ -426,12 +512,20 @@ function update(boxUpdateInfo, callback) {
var error = locker.lock(locker.OP_BOX_UPDATE);
if (error) return callback(new CloudronError(CloudronError.BAD_STATE, error.message));
// ensure tools can 'wait' on progress
progress.set(progress.UPDATE, 0, 'Starting');
// initiate the update/upgrade but do not wait for it
if (boxUpdateInfo.upgrade) {
if (config.version().match(/[-+]/) !== null && config.version().replace(/[-+].*/, '') === boxUpdateInfo.version) {
doShortCircuitUpdate(boxUpdateInfo, function (error) {
if (error) debug('Short-circuit update failed', error);
locker.unlock(locker.OP_BOX_UPDATE);
});
} else if (boxUpdateInfo.upgrade) {
debug('Starting upgrade');
doUpgrade(boxUpdateInfo, function (error) {
if (error) {
debug('Upgrade failed with error: %s', error);
console.error('Upgrade failed with error:', error);
locker.unlock(locker.OP_BOX_UPDATE);
}
});
@@ -439,7 +533,7 @@ function update(boxUpdateInfo, callback) {
debug('Starting update');
doUpdate(boxUpdateInfo, function (error) {
if (error) {
debug('Update failed with error: %s', error);
console.error('Update failed with error:', error);
locker.unlock(locker.OP_BOX_UPDATE);
}
});
@@ -448,6 +542,26 @@ function update(boxUpdateInfo, callback) {
callback(null);
}
function updateToLatest(callback) {
assert.strictEqual(typeof callback, 'function');
var boxUpdateInfo = updateChecker.getUpdateInfo().box;
if (!boxUpdateInfo) return callback(new CloudronError(CloudronError.ALREADY_UPTODATE, 'No update available'));
update(boxUpdateInfo, callback);
}
function doShortCircuitUpdate(boxUpdateInfo, callback) {
assert(boxUpdateInfo !== null && typeof boxUpdateInfo === 'object');
debug('Starting short-circuit from prerelease version %s to release version %s', config.version(), boxUpdateInfo.version);
config.setVersion(boxUpdateInfo.version);
progress.clear(progress.UPDATE);
updateChecker.resetUpdateInfo();
callback();
}
function doUpgrade(boxUpdateInfo, callback) {
assert(boxUpdateInfo !== null && typeof boxUpdateInfo === 'object');
@@ -465,14 +579,14 @@ function doUpgrade(boxUpdateInfo, callback) {
.query({ token: config.token() })
.send({ version: boxUpdateInfo.version })
.end(function (error, result) {
if (error) return upgradeError(new Error('Error making upgrade request: ' + error));
if (result.status !== 202) return upgradeError(new Error(util.format('Server not ready to upgrade. statusCode: %s body: %j', result.status, result.body)));
if (error && !error.response) return upgradeError(new Error('Network error making upgrade request: ' + error));
if (result.statusCode !== 202) return upgradeError(new Error(util.format('Server not ready to upgrade. statusCode: %s body: %j', result.status, result.body)));
progress.set(progress.UPDATE, 10, 'Updating base system');
// no need to unlock since this is the last thing we ever do on this box
callback(null);
callback();
retire();
});
});
}
@@ -490,46 +604,44 @@ function doUpdate(boxUpdateInfo, callback) {
backupBoxAndApps(function (error) {
if (error) return updateError(error);
// fetch a signed sourceTarballUrl
superagent.get(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/sourcetarballurl')
.query({ token: config.token(), boxVersion: boxUpdateInfo.version })
.end(function (error, result) {
if (error) return updateError(new Error('Error fetching sourceTarballUrl: ' + error));
if (result.status !== 200) return updateError(new Error('Error fetching sourceTarballUrl status: ' + result.status));
if (!safe.query(result, 'body.url')) return updateError(new Error('Error fetching sourceTarballUrl response: ' + JSON.stringify(result.body)));
// NOTE: the args here are tied to the installer revision, box code and appstore provisioning logic
var args = {
sourceTarballUrl: boxUpdateInfo.sourceTarballUrl,
// NOTE: the args here are tied to the installer revision, box code and appstore provisioning logic
var args = {
sourceTarballUrl: result.body.url,
// this data is opaque to the installer
data: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
fqdn: config.fqdn(),
tlsCert: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), 'utf8'),
tlsKey: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), 'utf8'),
isCustomDomain: config.isCustomDomain(),
// this data is opaque to the installer
data: {
apiServerOrigin: config.apiServerOrigin(),
aws: config.aws(),
backupKey: config.backupKey(),
boxVersionsUrl: config.get('boxVersionsUrl'),
fqdn: config.fqdn(),
isCustomDomain: config.isCustomDomain(),
restoreUrl: null,
restoreKey: null,
appstore: {
token: config.token(),
tlsCert: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), 'utf8'),
tlsKey: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), 'utf8'),
version: boxUpdateInfo.version,
apiServerOrigin: config.apiServerOrigin()
},
caas: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin()
}
};
},
debug('updating box %j', args);
version: boxUpdateInfo.version,
boxVersionsUrl: config.get('boxVersionsUrl')
}
};
superagent.post(INSTALLER_UPDATE_URL).send(args).end(function (error, result) {
if (error) return updateError(error);
if (result.status !== 202) return updateError(new Error('Error initiating update: ' + JSON.stringify(result.body)));
debug('updating box %j', args);
progress.set(progress.UPDATE, 10, 'Updating cloudron software');
superagent.post(INSTALLER_UPDATE_URL).send(args).end(function (error, result) {
if (error && !error.response) return updateError(error);
if (result.statusCode !== 202) return updateError(new Error('Error initiating update: ' + JSON.stringify(result.body)));
callback(null);
});
progress.set(progress.UPDATE, 10, 'Updating cloudron software');
callback(null);
});
// Do not add any code here. The installer script will stop the box code any instant
@@ -542,8 +654,8 @@ function backup(callback) {
var error = locker.lock(locker.OP_FULL_BACKUP);
if (error) return callback(new CloudronError(CloudronError.BAD_STATE, error.message));
// clearing backup ensures tools can 'wait' on progress
progress.clear(progress.BACKUP);
// ensure tools can 'wait' on progress
progress.set(progress.BACKUP, 0, 'Starting');
// start the backup operation in the background
backupBoxAndApps(function (error) {
@@ -556,7 +668,7 @@ function backup(callback) {
}
function ensureBackup(callback) {
callback = callback || function () { };
callback = callback || NOOP_CALLBACK;
backups.getAllPaged(1, 1, function (error, backups) {
if (error) {
@@ -613,7 +725,7 @@ function backupBox(callback) {
// this function expects you to have a lock
function backupBoxAndApps(callback) {
callback = callback || function () { }; // callback can be empty for timer triggered backup
callback = callback || NOOP_CALLBACK;
apps.getAll(function (error, allApps) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
@@ -627,13 +739,13 @@ function backupBoxAndApps(callback) {
++processed;
apps.backupApp(app, app.manifest.addons, function (error, backupId) {
progress.set(progress.BACKUP, step * processed, 'Backed up app at ' + app.location);
if (error && error.reason !== AppsError.BAD_STATE) {
debugApp(app, 'Unable to backup', error);
return iteratorCallback(error);
}
progress.set(progress.BACKUP, step * processed, 'Backed up app at ' + app.location);
iteratorCallback(null, backupId || null); // clear backupId if is in BAD_STATE and never backed up
});
}, function appsBackedUp(error, backupIds) {
@@ -651,3 +763,39 @@ function backupBoxAndApps(callback) {
});
});
}
function checkDiskSpace(callback) {
callback = callback || NOOP_CALLBACK;
debug('Checking disk space');
df(function (error, entries) {
if (error) {
debug('df error %s', error.message);
mailer.outOfDiskSpace(error.message);
return callback();
}
var oos = entries.some(function (entry) {
return (entry.mount === paths.DATA_DIR && entry.capacity >= 0.90) ||
(entry.mount === '/' && entry.used <= (1.25 * 1024 * 1024)); // ~1.25G in 1K blocks
});
debug('Disk space checked. ok: %s', !oos);
if (oos) mailer.outOfDiskSpace(JSON.stringify(entries, null, 4));
callback();
});
}
function retire(callback) {
callback = callback || NOOP_CALLBACK;
var data = {
isCustomDomain: config.isCustomDomain(),
fqdn: config.fqdn()
};
shell.sudo('retire', [ RETIRE_CMD, JSON.stringify(data) ], callback);
}

@@ -1,6 +1,6 @@
LoadPlugin "table"
<Plugin table>
<Table "/sys/fs/cgroup/memory/system.slice/docker-<%= containerId %>.scope/memory.stat">
<Table "/sys/fs/cgroup/memory/system.slice/docker.service/docker/<%= containerId %>/memory.stat">
Instance "<%= appId %>-memory"
Separator " \\n"
<Result>
@@ -10,7 +10,7 @@ LoadPlugin "table"
</Result>
</Table>
<Table "/sys/fs/cgroup/memory/system.slice/docker-<%= containerId %>.scope/memory.max_usage_in_bytes">
<Table "/sys/fs/cgroup/memory/system.slice/docker.service/docker/<%= containerId %>/memory.max_usage_in_bytes">
Instance "<%= appId %>-memory"
Separator "\\n"
<Result>
@@ -20,7 +20,7 @@ LoadPlugin "table"
</Result>
</Table>
<Table "/sys/fs/cgroup/cpuacct/system.slice/docker-<%= containerId %>.scope/cpuacct.stat">
<Table "/sys/fs/cgroup/cpuacct/system.slice/docker/<%= containerId %>/cpuacct.stat">
Instance "<%= appId %>-cpu"
Separator " \\n"
<Result>


@@ -4,6 +4,8 @@
exports = module.exports = {
baseDir: baseDir,
dnsInSync: dnsInSync,
setDnsInSync: setDnsInSync,
// values set here will be lost after an upgrade/update. Use the sqlite database
// for persistent values that need to be backed up
@@ -15,25 +17,25 @@ exports = module.exports = {
TEST: process.env.BOX_ENV === 'test',
// convenience getters
provider: provider,
apiServerOrigin: apiServerOrigin,
webServerOrigin: webServerOrigin,
fqdn: fqdn,
token: token,
version: version,
setVersion: setVersion,
isCustomDomain: isCustomDomain,
database: database,
// these values are derived
adminOrigin: adminOrigin,
internalAdminOrigin: internalAdminOrigin,
adminFqdn: adminFqdn,
appFqdn: appFqdn,
zoneName: zoneName,
isDev: isDev,
backupKey: backupKey,
aws: aws,
// for testing resets to defaults
_reset: initConfig
};
@@ -56,6 +58,14 @@ function baseDir() {
var cloudronConfigFileName = path.join(baseDir(), 'configs/cloudron.conf');
function dnsInSync() {
return !!safe.fs.statSync(require('./paths.js').DNS_IN_SYNC_FILE);
}
function setDnsInSync(content) {
safe.fs.writeFileSync(require('./paths.js').DNS_IN_SYNC_FILE, content || 'if this file exists, dns is in sync');
}
function saveSync() {
fs.writeFileSync(cloudronConfigFileName, JSON.stringify(data, null, 4)); // functions are ignored by JSON.stringify
}
@@ -65,9 +75,7 @@ function initConfig() {
data.fqdn = 'localhost';
data.token = null;
data.mailServer = null;
data.adminEmail = null;
data.mailDnsRecordIds = [ ];
data.boxVersionsUrl = null;
data.version = null;
data.isCustomDomain = false;
@@ -75,14 +83,8 @@ function initConfig() {
data.internalPort = 3001;
data.ldapPort = 3002;
data.oauthProxyPort = 3003;
data.backupKey = 'backupKey';
data.aws = {
backupBucket: null,
backupPrefix: null,
accessKeyId: null, // selfhosting only
secretAccessKey: null // selfhosting only
};
data.dnsInSync = false;
data.simpleAuthPort = 3004;
data.provider = 'caas';
if (exports.CLOUDRON) {
data.port = 3000;
@@ -99,9 +101,7 @@ function initConfig() {
name: 'boxtest'
};
data.token = 'APPSTORE_TOKEN';
data.aws.backupBucket = 'testbucket';
data.aws.backupPrefix = 'testprefix';
data.aws.endpoint = 'http://localhost:5353';
data.adminEmail = 'test@cloudron.foo';
} else {
assert(false, 'Unknown environment. This should not happen!');
}
@@ -160,6 +160,10 @@ function appFqdn(location) {
return isCustomDomain() ? location + '.' + fqdn() : location + '-' + fqdn();
}
function adminFqdn() {
return appFqdn(constants.ADMIN_LOCATION);
}
function adminOrigin() {
return 'https://' + appFqdn(constants.ADMIN_LOCATION);
}
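`appFqdn()` above picks the separator between the app location and the domain: a dot on custom domains, a hyphen otherwise (presumably so caas apps stay under a single wildcard certificate; that rationale is an assumption, not stated in the diff). A pure sketch of the rule:

```javascript
'use strict';

// Standalone version of the separator rule in appFqdn(); parameters
// replace the config getters used in the real code.
function appFqdn(location, fqdn, isCustomDomain) {
    return isCustomDomain ? location + '.' + fqdn : location + '-' + fqdn;
}

console.log(appFqdn('blog', 'example.com', true));       // 'blog.example.com'
console.log(appFqdn('blog', 'demo.cloudron.us', false)); // 'blog-demo.cloudron.us'
```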
@@ -176,6 +180,10 @@ function version() {
return get('version');
}
function setVersion(version) {
set('version', version);
}
function isCustomDomain() {
return get('isCustomDomain');
}
@@ -195,10 +203,6 @@ function isDev() {
return /dev/i.test(get('boxVersionsUrl'));
}
function backupKey() {
return get('backupKey');
}
function aws() {
return get('aws');
function provider() {
return get('provider');
}


@@ -7,10 +7,6 @@ exports = module.exports = {
ADMIN_NAME: 'Settings',
ADMIN_CLIENT_ID: 'webadmin', // oauth client id
ADMIN_APPID: 'admin', // admin appid (settingsdb)
TEST_NAME: 'Test',
TEST_LOCATION: '',
TEST_CLIENT_ID: 'test'
ADMIN_APPID: 'admin' // admin appid (settingsdb)
};


@@ -7,9 +7,13 @@ exports = module.exports = {
var apps = require('./apps.js'),
assert = require('assert'),
certificates = require('./certificates.js'),
cloudron = require('./cloudron.js'),
config = require('./config.js'),
CronJob = require('cron').CronJob,
debug = require('debug')('box:cron'),
janitor = require('./janitor.js'),
scheduler = require('./scheduler.js'),
settings = require('./settings.js'),
updateChecker = require('./updatechecker.js');
@@ -17,9 +21,12 @@ var gAutoupdaterJob = null,
gBoxUpdateCheckerJob = null,
gAppUpdateCheckerJob = null,
gHeartbeatJob = null,
gBackupJob = null;
var gInitialized = false;
gBackupJob = null,
gCleanupTokensJob = null,
gDockerVolumeCleanerJob = null,
gSchedulerSyncJob = null,
gCertificateRenewJob = null,
gCheckDiskSpaceJob = null;
var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
@@ -34,14 +41,19 @@ var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
if (gInitialized) return callback();
gHeartbeatJob = new CronJob({
cronTime: '00 */1 * * * *', // every minute
onTick: cloudron.sendHeartbeat,
start: true
});
cloudron.sendHeartbeat(); // latest unpublished version of CronJob has runOnInit
settings.events.on(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.on(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
gInitialized = true;
recreateJobs(callback);
if (cloudron.isConfiguredSync()) {
recreateJobs(callback);
} else {
cloudron.events.on(cloudron.EVENT_ACTIVATED, recreateJobs);
callback();
}
}
function recreateJobs(unusedTimeZone, callback) {
@@ -50,14 +62,6 @@ function recreateJobs(unusedTimeZone, callback) {
settings.getAll(function (error, allSettings) {
debug('Creating jobs with timezone %s', allSettings[settings.TIME_ZONE_KEY]);
if (gHeartbeatJob) gHeartbeatJob.stop();
gHeartbeatJob = new CronJob({
cronTime: '00 */1 * * * *', // every minute
onTick: cloudron.sendHeartbeat,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gBackupJob) gBackupJob.stop();
gBackupJob = new CronJob({
cronTime: '00 00 */4 * * *', // every 4 hours
@@ -66,6 +70,14 @@ function recreateJobs(unusedTimeZone, callback) {
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCheckDiskSpaceJob) gCheckDiskSpaceJob.stop();
gCheckDiskSpaceJob = new CronJob({
cronTime: '00 30 */4 * * *', // every 4 hours
onTick: cloudron.checkDiskSpace,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gBoxUpdateCheckerJob) gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = new CronJob({
cronTime: '00 */10 * * * *', // every 10 minutes
@@ -82,14 +94,52 @@ function recreateJobs(unusedTimeZone, callback) {
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCleanupTokensJob) gCleanupTokensJob.stop();
gCleanupTokensJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: janitor.cleanupTokens,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gDockerVolumeCleanerJob) gDockerVolumeCleanerJob.stop();
gDockerVolumeCleanerJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: janitor.cleanupDockerVolumes,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gSchedulerSyncJob) gSchedulerSyncJob.stop();
gSchedulerSyncJob = new CronJob({
cronTime: config.TEST ? '*/10 * * * * *' : '00 */1 * * * *', // every minute
onTick: scheduler.sync,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCertificateRenewJob) gCertificateRenewJob.stop();
gCertificateRenewJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: certificates.autoRenew,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
settings.events.removeListener(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
settings.events.on(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
autoupdatePatternChanged(allSettings[settings.AUTOUPDATE_PATTERN_KEY]);
settings.events.removeListener(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.on(settings.TIME_ZONE_KEY, recreateJobs);
if (callback) callback();
});
}
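Every job in `recreateJobs()` follows the same stop-then-recreate pattern so a timezone change rebuilds the schedule cleanly. A dependency-free sketch with a hypothetical `FakeJob` standing in for the cron module's `CronJob`:

```javascript
'use strict';

// FakeJob mimics just enough of CronJob to show the recreate pattern.
function FakeJob(spec) { this.spec = spec; this.running = true; }
FakeJob.prototype.stop = function () { this.running = false; };

var gJob = null;
function recreate(timeZone) {
    if (gJob) gJob.stop(); // stop any previous instance first
    gJob = new FakeJob({ cronTime: '00 00 */4 * * *', timeZone: timeZone });
    return gJob;
}

var first = recreate('UTC');
var second = recreate('Europe/Berlin');
console.log(first.running, second.spec.timeZone); // false 'Europe/Berlin'
```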
function autoupdatePatternChanged(pattern) {
assert.strictEqual(typeof pattern, 'string');
assert(gBoxUpdateCheckerJob);
debug('Auto update pattern changed to %s', pattern);
@@ -119,25 +169,37 @@ function autoupdatePatternChanged(pattern) {
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
if (!gInitialized) return callback();
cloudron.events.removeListener(cloudron.EVENT_ACTIVATED, recreateJobs);
settings.events.removeListener(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.removeListener(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
if (gAutoupdaterJob) gAutoupdaterJob.stop();
gAutoupdaterJob = null;
gBoxUpdateCheckerJob.stop();
if (gBoxUpdateCheckerJob) gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = null;
gAppUpdateCheckerJob.stop();
if (gAppUpdateCheckerJob) gAppUpdateCheckerJob.stop();
gAppUpdateCheckerJob = null;
gHeartbeatJob.stop();
if (gHeartbeatJob) gHeartbeatJob.stop();
gHeartbeatJob = null;
gBackupJob.stop();
if (gBackupJob) gBackupJob.stop();
gBackupJob = null;
gInitialized = false;
if (gCleanupTokensJob) gCleanupTokensJob.stop();
gCleanupTokensJob = null;
if (gDockerVolumeCleanerJob) gDockerVolumeCleanerJob.stop();
gDockerVolumeCleanerJob = null;
if (gSchedulerSyncJob) gSchedulerSyncJob.stop();
gSchedulerSyncJob = null;
if (gCertificateRenewJob) gCertificateRenewJob.stop();
gCertificateRenewJob = null;
callback();
}


@@ -126,6 +126,8 @@ function clear(callback) {
function beginTransaction(callback) {
assert.strictEqual(typeof callback, 'function');
if (gConnectionPool === null) return callback(new Error('No database connection pool.'));
gConnectionPool.getConnection(function (error, connection) {
if (error) return callback(error);


@@ -13,6 +13,7 @@ exports = module.exports = {
var assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:developer'),
tokendb = require('./tokendb.js'),
settings = require('./settings.js'),
superagent = require('superagent'),
@@ -38,6 +39,7 @@ function DeveloperError(reason, errorOrMessage) {
}
util.inherits(DeveloperError, Error);
DeveloperError.INTERNAL_ERROR = 'Internal Error';
DeveloperError.EXTERNAL_ERROR = 'External Error';
function enabled(callback) {
assert.strictEqual(typeof callback, 'function');
@@ -65,7 +67,7 @@ function issueDeveloperToken(user, callback) {
var token = tokendb.generateToken();
var expiresAt = Date.now() + 24 * 60 * 60 * 1000; // 1 day
tokendb.add(token, tokendb.PREFIX_DEV + user.id, '', expiresAt, 'apps,settings,roleDeveloper', function (error) {
tokendb.add(token, tokendb.PREFIX_DEV + user.id, '', expiresAt, 'developer,apps,settings,users', function (error) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
callback(null, { token: token, expiresAt: expiresAt });
@@ -77,8 +79,12 @@ function getNonApprovedApps(callback) {
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/apps';
superagent.get(url).query({ token: config.token(), boxVersion: config.version() }).end(function (error, result) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
if (result.status !== 200) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, util.format('App listing failed. %s %j', result.status, result.body)));
if (error && !error.response) return callback(new DeveloperError(DeveloperError.EXTERNAL_ERROR, error));
if (result.statusCode === 401) {
debug('Failed to list apps in development. Appstore token invalid or missing. Returning empty list.', result.body);
return callback(null, []);
}
if (result.statusCode !== 200) return callback(new DeveloperError(DeveloperError.EXTERNAL_ERROR, util.format('App listing failed. %s %j', result.status, result.body)));
callback(null, result.body.apps || []);
});


@@ -1,46 +0,0 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
checkPtrRecord: checkPtrRecord
};
var assert = require('assert'),
debug = require('debug')('box:digitalocean'),
dns = require('native-dns');
function checkPtrRecord(ip, fqdn, callback) {
assert(ip === null || typeof ip === 'string');
assert.strictEqual(typeof fqdn, 'string');
assert.strictEqual(typeof callback, 'function');
debug('checkPtrRecord: ' + ip);
if (!ip) return callback(new Error('Network down'));
dns.resolve4('ns1.digitalocean.com', function (error, rdnsIps) {
if (error || rdnsIps.length === 0) return callback(new Error('Failed to query DO DNS'));
var reversedIp = ip.split('.').reverse().join('.');
var req = dns.Request({
question: dns.Question({ name: reversedIp + '.in-addr.arpa', type: 'PTR' }),
server: { address: rdnsIps[0] },
timeout: 5000
});
req.on('timeout', function () { return callback(new Error('Timed out')); });
req.on('message', function (error, message) {
if (error || !message.answer || message.answer.length === 0) return callback(new Error('Failed to query PTR'));
debug('checkPtrRecord: Actual:%s Expecting:%s', message.answer[0].data, fqdn);
callback(null, message.answer[0].data === fqdn);
});
req.send();
});
}
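The removed `checkPtrRecord()` built its reverse-DNS query name by reversing the IPv4 octets and appending `.in-addr.arpa`. That step can be sketched on its own (`ptrName` is a hypothetical helper name):

```javascript
'use strict';

// Build the PTR query name for an IPv4 address, as the removed
// digitalocean checkPtrRecord() did before querying the DO nameserver.
function ptrName(ip) {
    return ip.split('.').reverse().join('.') + '.in-addr.arpa';
}

console.log(ptrName('104.16.181.15')); // '15.181.16.104.in-addr.arpa'
```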

src/dns/caas.js (new file, 136 lines)

@@ -0,0 +1,136 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
add: add,
del: del,
update: update,
getChangeStatus: getChangeStatus,
get: get
};
var assert = require('assert'),
config = require('../config.js'),
debug = require('debug')('box:dns/caas'),
SubdomainError = require('../subdomains.js').SubdomainError,
superagent = require('superagent'),
util = require('util'),
_ = require('underscore');
function add(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
var fqdn = subdomain !== '' && type === 'TXT' ? subdomain + '.' + config.fqdn() : config.appFqdn(subdomain);
debug('add: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
var data = {
type: type,
values: values
};
superagent
.post(config.apiServerOrigin() + '/api/v1/domains/' + fqdn)
.query({ token: dnsConfig.token })
.send(data)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.statusCode !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null, result.body.changeId);
});
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
var fqdn = subdomain !== '' && type === 'TXT' ? subdomain + '.' + config.fqdn() : config.appFqdn(subdomain);
debug('get: zoneName: %s subdomain: %s type: %s fqdn: %s', zoneName, subdomain, type, fqdn);
superagent
.get(config.apiServerOrigin() + '/api/v1/domains/' + fqdn)
.query({ token: dnsConfig.token, type: type })
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null, result.body.values);
});
}
function update(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
get(dnsConfig, zoneName, subdomain, type, function (error, result) {
if (error) return callback(error);
if (_.isEqual(values, result)) return callback();
add(dnsConfig, zoneName, subdomain, type, values, callback);
});
}
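`update()` above is an idempotent read-compare-write: fetch the current record values and only re-add when they differ. A dependency-free sketch; the in-memory `store` is hypothetical (the real code talks to the appstore DNS API via superagent), and `JSON.stringify` comparison stands in for underscore's `_.isEqual`:

```javascript
'use strict';

// Read-compare-write: skip the write when the stored values already match.
function makeUpdater(store) {
    return function update(name, values, callback) {
        store.get(name, function (error, current) {
            if (error) return callback(error);
            if (JSON.stringify(values) === JSON.stringify(current)) return callback(null, 'unchanged');
            store.add(name, values, callback);
        });
    };
}

var records = { www: [ '1.2.3.4' ] };
var store = {
    get: function (name, cb) { cb(null, records[name] || [ ]); },
    add: function (name, values, cb) { records[name] = values; cb(null, 'written'); }
};
var update = makeUpdater(store);
update('www', [ '1.2.3.4' ], function (error, result) { console.log(result); }); // 'unchanged'
update('www', [ '5.6.7.8' ], function (error, result) { console.log(result); }); // 'written'
```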
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
debug('del: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
var data = {
type: type,
values: values
};
superagent
.del(config.apiServerOrigin() + '/api/v1/domains/' + config.appFqdn(subdomain))
.query({ token: dnsConfig.token })
.send(data)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.statusCode === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND));
if (result.statusCode !== 204) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null);
});
}
function getChangeStatus(dnsConfig, changeId, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
superagent
.get(config.apiServerOrigin() + '/api/v1/domains/' + config.fqdn() + '/status/' + changeId)
.query({ token: dnsConfig.token })
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null, result.body.status);
});
}

src/dns/route53.js (new file, 214 lines)

@@ -0,0 +1,214 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
add: add,
get: get,
del: del,
update: update,
getChangeStatus: getChangeStatus
};
var assert = require('assert'),
AWS = require('aws-sdk'),
config = require('../config.js'),
debug = require('debug')('box:dns/route53'),
SubdomainError = require('../subdomains.js').SubdomainError,
util = require('util'),
_ = require('underscore');
function getDnsCredentials(dnsConfig) {
assert.strictEqual(typeof dnsConfig, 'object');
var credentials = {
accessKeyId: dnsConfig.accessKeyId,
secretAccessKey: dnsConfig.secretAccessKey,
region: dnsConfig.region
};
if (dnsConfig.endpoint) credentials.endpoint = new AWS.Endpoint(dnsConfig.endpoint);
return credentials;
}
function getZoneByName(dnsConfig, zoneName, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof callback, 'function');
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.listHostedZones({}, function (error, result) {
if (error) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
var zone = result.HostedZones.filter(function (zone) {
return zone.Name.slice(0, -1) === zoneName; // aws zone name contains a '.' at the end
})[0];
if (!zone) return callback(new SubdomainError(SubdomainError.NOT_FOUND, 'no such zone'));
callback(null, zone);
});
}
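`getZoneByName()` has to strip the trailing dot Route53 appends to zone names (`example.com.`) before comparing. A sketch of that filter; the `HostedZones` shape follows the AWS `listHostedZones` response:

```javascript
'use strict';

// Match a zone name against AWS's trailing-dot convention.
function findZone(hostedZones, zoneName) {
    return hostedZones.filter(function (zone) {
        return zone.Name.slice(0, -1) === zoneName; // aws zone name ends with '.'
    })[0] || null;
}

var zones = [ { Id: '/hostedzone/Z1', Name: 'example.com.' }, { Id: '/hostedzone/Z2', Name: 'other.org.' } ];
console.log(findZone(zones, 'example.com').Id); // '/hostedzone/Z1'
console.log(findZone(zones, 'missing.net'));    // null
```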
function add(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
debug('add: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
getZoneByName(dnsConfig, zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var records = values.map(function (v) { return { Value: v }; });
var params = {
ChangeBatch: {
Changes: [{
Action: 'UPSERT',
ResourceRecordSet: {
Type: type,
Name: fqdn,
ResourceRecords: records,
TTL: 1
}
}]
},
HostedZoneId: zone.Id
};
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.code === 'PriorRequestNotComplete') {
return callback(new SubdomainError(SubdomainError.STILL_BUSY, error.message));
} else if (error) {
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
callback(null, result.ChangeInfo.Id);
});
});
}
function update(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
get(dnsConfig, zoneName, subdomain, type, function (error, result) {
if (error) return callback(error);
if (_.isEqual(values, result)) return callback();
add(dnsConfig, zoneName, subdomain, type, values, callback);
});
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
getZoneByName(dnsConfig, zoneName, function (error, zone) {
if (error) return callback(error);
var params = {
HostedZoneId: zone.Id,
MaxItems: '1',
StartRecordName: (subdomain ? subdomain + '.' : '') + zoneName + '.',
StartRecordType: type
};
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.listResourceRecordSets(params, function (error, result) {
if (error) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
if (result.ResourceRecordSets.length === 0) return callback(null, [ ]);
if (result.ResourceRecordSets[0].Name !== params.StartRecordName && result.ResourceRecordSets[0].Type !== params.StartRecordType) return callback(null, [ ]);
var values = result.ResourceRecordSets[0].ResourceRecords.map(function (record) { return record.Value; });
callback(null, values);
});
});
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
getZoneByName(dnsConfig, zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var records = values.map(function (v) { return { Value: v }; });
var resourceRecordSet = {
Name: fqdn,
Type: type,
ResourceRecords: records,
TTL: 1
};
var params = {
ChangeBatch: {
Changes: [{
Action: 'DELETE',
ResourceRecordSet: resourceRecordSet
}]
},
HostedZoneId: zone.Id
};
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.message && error.message.indexOf('it was not found') !== -1) {
debug('del: resource record set not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'NoSuchHostedZone') {
debug('del: hosted zone not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'PriorRequestNotComplete') {
debug('del: resource is still busy', error);
return callback(new SubdomainError(SubdomainError.STILL_BUSY, new Error(error)));
} else if (error && error.code === 'InvalidChangeBatch') {
debug('del: invalid change batch. No such record to be deleted.');
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error) {
debug('del: error', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
}
callback(null);
});
});
}
function getChangeStatus(dnsConfig, changeId, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.getChange({ Id: changeId }, function (error, result) {
if (error) return callback(error);
callback(null, result.ChangeInfo.Status);
});
}


@@ -1,42 +1,348 @@
'use strict';
var Docker = require('dockerode'),
fs = require('fs'),
os = require('os'),
path = require('path'),
url = require('url');
var addons = require('./addons.js'),
async = require('async'),
assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:src/docker.js'),
Docker = require('dockerode'),
safe = require('safetydance'),
semver = require('semver'),
util = require('util'),
_ = require('underscore');
exports = module.exports = (function () {
exports = module.exports = {
connection: connectionInstance(),
downloadImage: downloadImage,
createContainer: createContainer,
startContainer: startContainer,
stopContainer: stopContainer,
stopContainerByName: stopContainer,
stopContainers: stopContainers,
deleteContainer: deleteContainer,
deleteContainerByName: deleteContainer,
deleteImage: deleteImage,
deleteContainers: deleteContainers,
createSubcontainer: createSubcontainer
};
function connectionInstance() {
var docker;
var options = connectOptions(); // the real docker
if (process.env.BOX_ENV === 'test') {
// test code runs a docker proxy on this port
docker = new Docker({ host: 'http://localhost', port: 5687 });
// proxy code uses this to route to the real docker
docker.options = { socketPath: '/var/run/docker.sock' };
} else {
docker = new Docker(options);
docker = new Docker({ socketPath: '/var/run/docker.sock' });
}
// proxy code uses this to route to the real docker
docker.options = options;
return docker;
})();
function connectOptions() {
if (os.platform() === 'linux') return { socketPath: '/var/run/docker.sock' };
// boot2docker configuration
var DOCKER_CERT_PATH = process.env.DOCKER_CERT_PATH || path.join(process.env.HOME, '.boot2docker/certs/boot2docker-vm');
var DOCKER_HOST = process.env.DOCKER_HOST || 'tcp://192.168.59.103:2376';
return {
protocol: 'https',
host: url.parse(DOCKER_HOST).hostname,
port: url.parse(DOCKER_HOST).port,
ca: fs.readFileSync(path.join(DOCKER_CERT_PATH, 'ca.pem')),
cert: fs.readFileSync(path.join(DOCKER_CERT_PATH, 'cert.pem')),
key: fs.readFileSync(path.join(DOCKER_CERT_PATH, 'key.pem'))
};
}
function debugApp(app, args) {
assert(!app || typeof app === 'object');
var prefix = app ? (app.location || '(bare)') : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function targetBoxVersion(manifest) {
if ('targetBoxVersion' in manifest) return manifest.targetBoxVersion;
if ('minBoxVersion' in manifest) return manifest.minBoxVersion;
return '99999.99999.99999'; // compatible with the latest version
}
function pullImage(manifest, callback) {
var docker = exports.connection;
docker.pull(manifest.dockerImage, function (err, stream) {
if (err) return callback(new Error('Error connecting to docker. statusCode: ' + err.statusCode));
// https://github.com/dotcloud/docker/issues/1074 says each status message
// is emitted as a chunk
stream.on('data', function (chunk) {
var data = safe.JSON.parse(chunk) || { };
debug('pullImage %s: %j', manifest.id, data);
// The information here is useless because this is per layer as opposed to per image
if (data.status) {
} else if (data.error) {
debug('pullImage error %s: %s', manifest.id, data.errorDetail.message);
}
});
stream.on('end', function () {
debug('downloaded image %s of %s successfully', manifest.dockerImage, manifest.id);
var image = docker.getImage(manifest.dockerImage);
image.inspect(function (err, data) {
if (err) return callback(new Error('Error inspecting image:' + err.message));
if (!data || !data.Config) return callback(new Error('Missing Config in image:' + JSON.stringify(data, null, 4)));
if (!data.Config.Entrypoint && !data.Config.Cmd) return callback(new Error('Only images with entry point are allowed'));
debug('This image of %s exposes ports: %j', manifest.id, data.Config.ExposedPorts);
callback(null);
});
});
stream.on('error', function (error) {
debug('error pulling image %s of %s: %j', manifest.dockerImage, manifest.id, error);
callback(error);
});
});
}
function downloadImage(manifest, callback) {
assert.strictEqual(typeof manifest, 'object');
assert.strictEqual(typeof callback, 'function');
debug('downloadImage %s %s', manifest.id, manifest.dockerImage);
var attempt = 1;
async.retry({ times: 10, interval: 15000 }, function (retryCallback) {
debug('Downloading image %s %s. attempt: %s', manifest.id, manifest.dockerImage, attempt++);
pullImage(manifest, function (error) {
if (error) console.error(error);
retryCallback(error);
});
}, callback);
}
function createSubcontainer(app, name, cmd, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof name, 'string');
assert(!cmd || util.isArray(cmd));
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var docker = exports.connection,
isAppContainer = !cmd;
var manifest = app.manifest;
var developmentMode = !!manifest.developmentMode;
var exposedPorts = {}, dockerPortBindings = { };
var stdEnv = [
'CLOUDRON=1',
'WEBADMIN_ORIGIN=' + config.adminOrigin(),
'API_ORIGIN=' + config.adminOrigin(),
'APP_ORIGIN=https://' + config.appFqdn(app.location),
'APP_DOMAIN=' + config.appFqdn(app.location)
];
// docker portBindings requires ports to be exposed
exposedPorts[manifest.httpPort + '/tcp'] = {};
dockerPortBindings[manifest.httpPort + '/tcp'] = [ { HostIp: '127.0.0.1', HostPort: app.httpPort + '' } ];
var portEnv = [];
for (var e in app.portBindings) {
var hostPort = app.portBindings[e];
var containerPort = manifest.tcpPorts[e].containerPort || hostPort;
exposedPorts[containerPort + '/tcp'] = {};
portEnv.push(e + '=' + hostPort);
dockerPortBindings[containerPort + '/tcp'] = [ { HostIp: '0.0.0.0', HostPort: hostPort + '' } ];
}
var memoryLimit = manifest.memoryLimit || (developmentMode ? 0 : 1024 * 1024 * 200); // 200mb by default
// for subcontainers, this should ideally be false. but docker does not allow network sharing if the app container is not running
// this means cloudron exec does not work
var isolatedNetworkNs = true;
addons.getEnvironment(app, function (error, addonEnv) {
if (error) return callback(new Error('Error getting addon environment : ' + error));
var containerOptions = {
name: name, // used for filtering logs
// do _not_ set hostname to app fqdn. doing so sets up the dns name to look up the internal docker ip. this makes curl from within container fail
// for subcontainers, this should not be set because we already share the network namespace with app container
Hostname: isolatedNetworkNs ? (semver.gte(targetBoxVersion(app.manifest), '0.0.77') ? app.location : config.appFqdn(app.location)) : null,
Tty: isAppContainer,
Image: app.manifest.dockerImage,
Cmd: (isAppContainer && developmentMode) ? [ '/bin/bash', '-c', 'echo "Development mode. Use cloudron exec to debug. Sleeping" && sleep infinity' ] : cmd,
Env: stdEnv.concat(addonEnv).concat(portEnv),
ExposedPorts: isAppContainer ? exposedPorts : { },
Volumes: { // see also ReadonlyRootfs
'/tmp': {},
'/run': {}
},
Labels: {
"location": app.location,
"appId": app.id,
"isSubcontainer": String(!isAppContainer)
},
HostConfig: {
Binds: addons.getBindsSync(app, app.manifest.addons),
Memory: memoryLimit / 2,
MemorySwap: memoryLimit, // Memory + Swap
PortBindings: isAppContainer ? dockerPortBindings : { },
PublishAllPorts: false,
ReadonlyRootfs: !developmentMode, // see also Volumes in startContainer
RestartPolicy: {
"Name": isAppContainer ? "always" : "no",
"MaximumRetryCount": 0
},
CpuShares: 512, // relative to 1024 for system processes
VolumesFrom: isAppContainer ? null : [ app.containerId + ":rw" ],
NetworkMode: isolatedNetworkNs ? 'default' : ('container:' + app.containerId), // share network namespace with parent
Links: isolatedNetworkNs ? addons.getLinksSync(app, app.manifest.addons) : null, // links is redundant with --net=container
SecurityOpt: config.CLOUDRON ? [ "apparmor:docker-cloudron-app" ] : null // profile available only on cloudron
}
};
containerOptions = _.extend(containerOptions, options);
debugApp(app, 'Creating container for %s with options %j', app.manifest.dockerImage, containerOptions);
docker.createContainer(containerOptions, callback);
});
}
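The port-binding loop above can be exercised in isolation. The following is a minimal sketch with made-up values (the `SSH_PORT` entry and its ports are hypothetical, not real Cloudron app objects): each named tcp port from the manifest becomes an exposed container port, an environment entry for the app, and a docker host binding.

```javascript
'use strict';

// hypothetical stand-ins for app.portBindings and manifest.tcpPorts
var portBindings = { SSH_PORT: 2222 };
var tcpPorts = { SSH_PORT: { containerPort: 22 } };

var exposedPorts = {};
var dockerPortBindings = {};
var portEnv = [];

for (var e in portBindings) {
    var hostPort = portBindings[e];
    // fall back to the host port when the manifest does not name a container port
    var containerPort = tcpPorts[e].containerPort || hostPort;
    exposedPorts[containerPort + '/tcp'] = {};
    portEnv.push(e + '=' + hostPort);
    dockerPortBindings[containerPort + '/tcp'] = [ { HostIp: '0.0.0.0', HostPort: hostPort + '' } ];
}

console.log(exposedPorts, portEnv, dockerPortBindings);
```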
function createContainer(app, callback) {
createSubcontainer(app, app.id /* name */, null /* cmd */, { } /* options */, callback);
}
function startContainer(containerId, callback) {
assert.strictEqual(typeof containerId, 'string');
assert.strictEqual(typeof callback, 'function');
var docker = exports.connection;
var container = docker.getContainer(containerId);
debug('Starting container %s', containerId);
container.start(function (error) {
if (error && error.statusCode !== 304) return callback(new Error('Error starting container: ' + error));
return callback(null);
});
}
function stopContainer(containerId, callback) {
assert(!containerId || typeof containerId === 'string');
assert.strictEqual(typeof callback, 'function');
if (!containerId) {
debug('No previous container to stop');
return callback();
}
var docker = exports.connection;
var container = docker.getContainer(containerId);
debug('Stopping container %s', containerId);
var options = {
t: 10 // wait for 10 seconds before killing it
};
container.stop(options, function (error) {
if (error && (error.statusCode !== 304 && error.statusCode !== 404)) return callback(new Error('Error stopping container:' + error));
debug('Waiting for container ' + containerId);
container.wait(function (error, data) {
if (error && (error.statusCode !== 304 && error.statusCode !== 404)) return callback(new Error('Error waiting on container:' + error));
debug('Container %s stopped with status code [%s]', containerId, data ? String(data.StatusCode) : '');
return callback(null);
});
});
}
function deleteContainer(containerId, callback) {
assert(!containerId || typeof containerId === 'string');
assert.strictEqual(typeof callback, 'function');
debug('deleting container %s', containerId);
if (containerId === null) return callback(null);
var docker = exports.connection;
var container = docker.getContainer(containerId);
var removeOptions = {
force: true, // kill container if it's running
v: true // removes volumes associated with the container (but not host mounts)
};
container.remove(removeOptions, function (error) {
if (error && error.statusCode === 404) return callback(null);
if (error) debug('Error removing container %s : %j', containerId, error);
callback(error);
});
}
function deleteContainers(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
var docker = exports.connection;
debug('deleting containers of %s', appId);
docker.listContainers({ all: 1, filters: JSON.stringify({ label: [ 'appId=' + appId ] }) }, function (error, containers) {
if (error) return callback(error);
async.eachSeries(containers, function (container, iteratorDone) {
deleteContainer(container.Id, iteratorDone);
}, callback);
});
}
function stopContainers(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
var docker = exports.connection;
debug('stopping containers of %s', appId);
docker.listContainers({ all: 1, filters: JSON.stringify({ label: [ 'appId=' + appId ] }) }, function (error, containers) {
if (error) return callback(error);
async.eachSeries(containers, function (container, iteratorDone) {
stopContainer(container.Id, iteratorDone);
}, callback);
});
}
function deleteImage(manifest, callback) {
assert(!manifest || typeof manifest === 'object');
assert.strictEqual(typeof callback, 'function');
var dockerImage = manifest ? manifest.dockerImage : null;
if (!dockerImage) return callback(null);
var docker = exports.connection;
var removeOptions = {
force: false, // might be shared with another instance of this app
noprune: false // delete untagged parents
};
// registry v1 used to pull down all *tags*. this meant that deleting image by tag was not enough (since that
// just removes the tag). we used to remove the image by id. this is not required anymore because aliases are
// not created anymore after https://github.com/docker/docker/pull/10571
docker.getImage(dockerImage).remove(removeOptions, function (error) {
if (error && error.statusCode === 404) return callback(null);
if (error && error.statusCode === 409) return callback(null); // another container using the image
if (error) debug('Error removing image %s : %j', dockerImage, error);
callback(error);
});
}

src/janitor.js Normal file

@@ -0,0 +1,103 @@
'use strict';
var assert = require('assert'),
async = require('async'),
authcodedb = require('./authcodedb.js'),
debug = require('debug')('box:src/janitor'),
docker = require('./docker.js').connection,
tokendb = require('./tokendb.js');
exports = module.exports = {
cleanupTokens: cleanupTokens,
cleanupDockerVolumes: cleanupDockerVolumes
};
var NOOP_CALLBACK = function () { };
function ignoreError(func) {
return function (callback) {
func(function (error) {
if (error) console.error('Ignored error:', error);
callback();
});
};
}
function cleanupExpiredTokens(callback) {
assert.strictEqual(typeof callback, 'function');
tokendb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired tokens.', result);
callback(null);
});
}
function cleanupExpiredAuthCodes(callback) {
assert.strictEqual(typeof callback, 'function');
authcodedb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired authcodes.', result);
callback(null);
});
}
function cleanupTokens(callback) {
assert(!callback || typeof callback === 'function'); // callback is null when called from cronjob
debug('Cleaning up expired tokens');
async.series([
ignoreError(cleanupExpiredTokens),
ignoreError(cleanupExpiredAuthCodes)
], callback);
}
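The `ignoreError` wrapper above can be illustrated standalone; this sketch (plain callbacks, no `async` dependency) shows that a failing task still hands control back without an error, which is what lets the janitor's series continue past individual cleanup failures.

```javascript
'use strict';

// same shape as the janitor's ignoreError: log the error, swallow it, continue
function ignoreError(func) {
    return function (callback) {
        func(function (error) {
            if (error) console.error('Ignored error:', error);
            callback();
        });
    };
}

var calls = [];
var failingTask = function (done) { done(new Error('boom')); };

ignoreError(failingTask)(function () {
    calls.push('continued'); // reached even though failingTask errored
});

console.log(calls);
```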
function cleanupTmpVolume(containerInfo, callback) {
assert.strictEqual(typeof containerInfo, 'object');
assert.strictEqual(typeof callback, 'function');
var cmd = 'find /tmp -mtime +10 -exec rm -rf {} +'.split(' '); // 10 days old
debug('cleanupTmpVolume %j', containerInfo.Names);
docker.getContainer(containerInfo.Id).exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true, Tty: false }, function (error, execContainer) {
if (error) return callback(new Error('Failed to exec container : ' + error.message));
execContainer.start(function (error, stream) {
if (error) return callback(new Error('Failed to start exec container : ' + error.message));
stream.on('error', callback);
stream.on('end', callback);
stream.setEncoding('utf8');
stream.pipe(process.stdout);
});
});
}
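One detail worth noting in `cleanupTmpVolume`: the exec argv is built by splitting the command string on spaces. This sketch shows the resulting array; the approach only works because none of the arguments themselves contain whitespace.

```javascript
'use strict';

// same construction as the janitor's cleanup command: files older than 10 days
var cmd = 'find /tmp -mtime +10 -exec rm -rf {} +'.split(' ');
console.log(cmd);
```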
function cleanupDockerVolumes(callback) {
assert(!callback || typeof callback === 'function'); // callback is null when called from cronjob
callback = callback || NOOP_CALLBACK;
debug('Cleaning up docker volumes');
docker.listContainers({ all: 0 }, function (error, containers) {
if (error) return callback(error);
async.eachSeries(containers, function (container, iteratorDone) {
cleanupTmpVolume(container, function (error) {
if (error) debug('Error cleaning tmp: %s', error);
iteratorDone(); // intentionally ignore error
});
}, callback);
});
}


@@ -54,7 +54,7 @@ function start(callback) {
cn: entry.id,
uid: entry.id,
mail: entry.email,
displayname: entry.username,
displayname: entry.displayName || entry.username,
username: entry.username,
samaccountname: entry.username, // to support ActiveDirectory clients
memberof: groups
@@ -115,9 +115,11 @@ function start(callback) {
gServer.bind('ou=users,dc=cloudron', function(req, res, next) {
debug('ldap user bind: %s', req.dn.toString());
if (!req.dn.rdns[0].cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));
// extract the common name which might have different attribute names
var commonName = req.dn.rdns[0][Object.keys(req.dn.rdns[0])[0]];
if (!commonName) return next(new ldap.NoSuchObjectError(req.dn.toString()));
user.verify(req.dn.rdns[0].cn, req.credentials || '', function (error, result) {
user.verify(commonName, req.credentials || '', function (error, result) {
if (error && error.reason === UserError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
if (error && error.reason === UserError.WRONG_PASSWORD) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
if (error) return next(new ldap.OperationsError(error));


@@ -5,8 +5,7 @@ Dear Admin,
The application titled '<%= title %>' that you installed at <%= appFqdn %>
is not responding.
This is most likely a problem in the application. Please report this issue to
support@cloudron.io (by forwarding this email).
This is most likely a problem in the application.
You are receiving this email because you are an Admin of the Cloudron at <%= fqdn %>.


@@ -4,10 +4,10 @@ Dear Admin,
A new version of the app '<%= app.manifest.title %>' installed at <%= app.fqdn %> is available!
Please update at your convenience at <%= webadminUrl %>.
The app will update automatically tonight. Alternately, update immediately at <%= webadminUrl %>.
Thank you,
Update Manager
your Cloudron
<% } else { %>


@@ -4,7 +4,7 @@ Dear Admin,
A new version of Cloudron <%= fqdn %> is available!
Please update at your convenience at <%= webadminUrl %>.
Your Cloudron will update automatically tonight. Alternately, update immediately at <%= webadminUrl %>.
Changelog:
<% for (var i = 0; i < changelog.length; i++) { %>
@@ -12,7 +12,7 @@ Changelog:
<% } %>
Thank you,
Update Manager
your Cloudron
<% } else { %>


@@ -0,0 +1,19 @@
<%if (format === 'text') { %>
Dear Cloudron Team,
<%= fqdn %> is running out of disk space.
Please see some excerpts of the logs below.
Thank you,
Your Cloudron
-------------------------------------
<%- message %>
<% } else { %>
<% } %>


@@ -0,0 +1,23 @@
<%if (format === 'text') { %>
Dear Admin,
User with name '<%= username %>' (<%= email %>) was added in the Cloudron at <%= fqdn %>.
You are receiving this email because you are an Admin of the Cloudron at <%= fqdn %>.
<% if (inviteLink) { %>
This user has not been invited yet; you will have to invite them manually later, using the "send invite" button in the admin panel.
To perform any configuration on behalf of the user, please use this link
<%= inviteLink %>
It lets you set up a temporary password, which the user can change once they are invited.
This link becomes invalid as soon as the user is invited.
<% } %>
Thank you,
User Manager
<% } else { %>
<% } %>


@@ -2,9 +2,9 @@
Dear <%= user.username %>,
I am excited to welcome you to my Cloudron <%= fqdn %>!
Welcome to my Cloudron <%= fqdn %>!
The Cloudron is our own Private Cloud. You can read more about it
The Cloudron is our own Smart Server. You can read more about it
at https://www.cloudron.io.
Your username is '<%= user.username %>'
@@ -15,8 +15,12 @@ To get started, create your account by visiting the following page:
When you visit the above page, you will be prompted to enter a new password.
After you have submitted the form, you can login using the new password.
<% if (invitor && invitor.email) { %>
Thank you,
<%= invitor.email %>
<% } else { %>
Thank you
<% } %>
<% } else { %>


@@ -13,29 +13,35 @@ exports = module.exports = {
boxUpdateAvailable: boxUpdateAvailable,
appUpdateAvailable: appUpdateAvailable,
sendInvite: sendInvite,
sendCrashNotification: sendCrashNotification,
appDied: appDied,
outOfDiskSpace: outOfDiskSpace,
FEEDBACK_TYPE_FEEDBACK: 'feedback',
FEEDBACK_TYPE_TICKET: 'ticket',
FEEDBACK_TYPE_APP: 'app',
sendFeedback: sendFeedback
sendFeedback: sendFeedback,
_getMailQueue: _getMailQueue,
_clearMailQueue: _clearMailQueue
};
var assert = require('assert'),
async = require('async'),
cloudron = require('./cloudron.js'),
config = require('./config.js'),
debug = require('debug')('box:mailer'),
digitalocean = require('./digitalocean.js'),
docker = require('./docker.js'),
dns = require('native-dns'),
docker = require('./docker.js').connection,
ejs = require('ejs'),
nodemailer = require('nodemailer'),
path = require('path'),
safe = require('safetydance'),
smtpTransport = require('nodemailer-smtp-transport'),
sysinfo = require('./sysinfo.js'),
userdb = require('./userdb.js'),
users = require('./user.js'),
util = require('util'),
_ = require('underscore');
@@ -48,13 +54,20 @@ var gMailQueue = [ ],
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
checkDns();
if (cloudron.isConfiguredSync()) {
checkDns();
} else {
cloudron.events.on(cloudron.EVENT_CONFIGURED, checkDns);
}
callback(null);
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
cloudron.events.removeListener(cloudron.EVENT_CONFIGURED, checkDns);
// TODO: interrupt processQueue as well
clearTimeout(gCheckDnsTimerId);
gCheckDnsTimerId = null;
@@ -65,20 +78,76 @@ function uninitialize(callback) {
callback(null);
}
function getTxtRecords(callback) {
dns.resolveNs(config.zoneName(), function (error, nameservers) {
if (error || !nameservers) return callback(error || new Error('Unable to get nameservers'));
var nameserver = nameservers[0];
dns.resolve4(nameserver, function (error, nsIps) {
if (error || !nsIps || nsIps.length === 0) return callback(error);
var req = dns.Request({
question: dns.Question({ name: config.fqdn(), type: 'TXT' }),
server: { address: nsIps[0] },
timeout: 5000
});
req.on('timeout', function () { return callback(new Error('ETIMEOUT')); });
req.on('message', function (error, message) {
if (error || !message.answer || message.answer.length === 0) return callback(null, null);
var records = message.answer.map(function (a) { return a.data[0]; });
callback(null, records);
});
req.send();
});
});
}
function checkDns() {
digitalocean.checkPtrRecord(sysinfo.getIp(), config.fqdn(), function (error, ok) {
if (error || !ok) {
debug('PTR record not setup yet');
gCheckDnsTimerId = setTimeout(checkDns, 10000);
getTxtRecords(function (error, records) {
if (error || !records) {
debug('checkDns: DNS error or no records looking up TXT records for %s: %s %j', config.fqdn(), error, records);
gCheckDnsTimerId = setTimeout(checkDns, 60000);
return;
}
var allowedToSendMail = false;
for (var i = 0; i < records.length; i++) {
if (records[i].indexOf('v=spf1 ') !== 0) continue; // not SPF
allowedToSendMail = records[i].indexOf('a:' + config.fqdn()) !== -1;
break; // only one SPF record can exist (https://support.google.com/a/answer/4568483?hl=en)
}
if (!allowedToSendMail) {
debug('checkDns: SPF records disallow sending email from cloudron. %j', records);
gCheckDnsTimerId = setTimeout(checkDns, 60000);
return;
}
debug('checkDns: SPF check passed. commencing mail processing');
gDnsReady = true;
processQueue();
});
}
function processQueue() {
assert(gDnsReady);
sendMails(gMailQueue);
gMailQueue = [ ];
}
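The SPF gate inside `checkDns` above reduces to a small predicate: take the first (and only permitted) SPF record and check whether it authorizes this host. This standalone sketch mirrors the same prefix and `a:` checks, using hypothetical TXT records and hostnames.

```javascript
'use strict';

// mirrors the checkDns SPF loop: only the first record starting with 'v=spf1 '
// counts, and sending is allowed only if it lists a:<fqdn>
function allowedToSendMail(records, fqdn) {
    for (var i = 0; i < records.length; i++) {
        if (records[i].indexOf('v=spf1 ') !== 0) continue; // not an SPF record
        return records[i].indexOf('a:' + fqdn) !== -1;
    }
    return false; // no SPF record at all
}

var records = [ 'some-verification=xyz', 'v=spf1 a:my.cloudron.example ~all' ];
console.log(allowedToSendMail(records, 'my.cloudron.example')); // authorized
console.log(allowedToSendMail(records, 'other.example'));       // not listed
```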
// note : this function should NOT access the database. it is called by the crashnotifier
// which does not initialize mailer or the database
function sendMails(queue) {
assert(util.isArray(queue));
docker.getContainer('mail').inspect(function (error, data) {
if (error) return console.error(error);
@@ -90,12 +159,9 @@ function processQueue() {
port: 2500 // this value comes from mail container
}));
var mailQueueCopy = gMailQueue;
gMailQueue = [ ];
debug('Processing mail queue of size %d (through %s:2500)', queue.length, mailServerIp);
debug('Processing mail queue of size %d (through %s:2500)', mailQueueCopy.length, mailServerIp);
async.mapSeries(mailQueueCopy, function iterator(mailOptions, callback) {
async.mapSeries(queue, function iterator(mailOptions, callback) {
transport.sendMail(mailOptions, function (error) {
if (error) return console.error(error); // TODO: requeue?
debug('Email sent to ' + mailOptions.to);
@@ -110,6 +176,9 @@ function processQueue() {
function enqueue(mailOptions) {
assert.strictEqual(typeof mailOptions, 'object');
if (!mailOptions.from) console.error('from is missing');
if (!mailOptions.to) console.error('to is missing');
debug('Queued mail for ' + mailOptions.from + ' to ' + mailOptions.to);
gMailQueue.push(mailOptions);
@@ -124,7 +193,7 @@ function render(templateFile, params) {
}
function getAdminEmails(callback) {
userdb.getAllAdmins(function (error, admins) {
users.getAllAdmins(function (error, admins) {
if (error) return callback(error);
var adminEmails = [ ];
@@ -154,11 +223,11 @@ function mailUserEventToAdmins(user, event) {
});
}
function userAdded(user, invitor) {
function sendInvite(user, invitor) {
assert.strictEqual(typeof user, 'object');
assert(typeof invitor === 'object');
debug('Sending mail for userAdded');
debug('Sending invite mail');
var templateData = {
user: user,
@@ -177,8 +246,30 @@ function userAdded(user, invitor) {
};
enqueue(mailOptions);
}
mailUserEventToAdmins(user, 'was added');
function userAdded(user, inviteSent) {
assert.strictEqual(typeof user, 'object');
assert.strictEqual(typeof inviteSent, 'boolean');
debug('Sending mail for userAdded %s invite link', inviteSent ? 'without' : 'with');
getAdminEmails(function (error, adminEmails) {
if (error) return console.log('Error getting admins', error);
adminEmails = _.difference(adminEmails, [ user.email ]);
var inviteLink = inviteSent ? null : config.adminOrigin() + '/api/v1/session/password/setup.html?reset_token=' + user.resetToken;
var mailOptions = {
from: config.get('adminEmail'),
to: adminEmails.join(', '),
subject: util.format('%s added in Cloudron %s', user.username, config.fqdn()),
text: render('user_added.ejs', { fqdn: config.fqdn(), username: user.username, email: user.email, inviteLink: inviteLink, format: 'text' }),
};
enqueue(mailOptions);
});
}
function userRemoved(username) {
@@ -224,7 +315,7 @@ function appDied(app) {
var mailOptions = {
from: config.get('adminEmail'),
to: adminEmails.join(', '),
to: adminEmails.concat('support@cloudron.io').join(', '),
subject: util.format('App %s is down', app.location),
text: render('app_down.ejs', { fqdn: config.fqdn(), title: app.manifest.title, appFqdn: config.appFqdn(app.location), format: 'text' })
};
@@ -269,6 +360,21 @@ function appUpdateAvailable(app, updateInfo) {
});
}
function outOfDiskSpace(message) {
assert.strictEqual(typeof message, 'string');
var mailOptions = {
from: config.get('adminEmail'),
to: 'admin@cloudron.io',
subject: util.format('[%s] Out of disk space alert', config.fqdn()),
text: render('out_of_disk_space.ejs', { fqdn: config.fqdn(), message: message, format: 'text' })
};
sendMails([ mailOptions ]);
}
// this function bypasses the queue intentionally. it is also expected to work without the mailer module initialized
// crashnotifier should be able to send mail when there is no db
function sendCrashNotification(program, context) {
assert.strictEqual(typeof program, 'string');
assert.strictEqual(typeof context, 'string');
@@ -280,7 +386,7 @@ function sendCrashNotification(program, context) {
text: render('crash_notification.ejs', { fqdn: config.fqdn(), program: program, context: context, format: 'text' })
};
enqueue(mailOptions);
sendMails([ mailOptions ]);
}
function sendFeedback(user, type, subject, description) {
@@ -300,3 +406,11 @@ function sendFeedback(user, type, subject, description) {
enqueue(mailOptions);
}
function _getMailQueue() {
return gMailQueue;
}
function _clearMailQueue() {
gMailQueue = [];
}


@@ -1,8 +0,0 @@
'use strict';
module.exports = function contentType(type) {
return function (req, res, next) {
res.setHeader('Content-Type', type);
next();
};
};


@@ -1,7 +1,6 @@
'use strict';
exports = module.exports = {
contentType: require('./contentType'),
cookieParser: require('cookie-parser'),
cors: require('./cors'),
csrf: require('csurf'),

src/nginx.js Normal file

@@ -0,0 +1,93 @@
/* jslint node:true */
'use strict';
var assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:src/nginx'),
ejs = require('ejs'),
fs = require('fs'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
shell = require('./shell.js');
exports = module.exports = {
configureAdmin: configureAdmin,
configureApp: configureApp,
unconfigureApp: unconfigureApp,
reload: reload
};
var NGINX_APPCONFIG_EJS = fs.readFileSync(__dirname + '/../setup/start/nginx/appconfig.ejs', { encoding: 'utf8' }),
RELOAD_NGINX_CMD = path.join(__dirname, 'scripts/reloadnginx.sh');
function configureAdmin(certFilePath, keyFilePath, callback) {
assert.strictEqual(typeof certFilePath, 'string');
assert.strictEqual(typeof keyFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
var data = {
sourceDir: path.resolve(__dirname, '..'),
adminOrigin: config.adminOrigin(),
vhost: config.adminFqdn(),
endpoint: 'admin',
certFilePath: certFilePath,
keyFilePath: keyFilePath
};
var nginxConf = ejs.render(NGINX_APPCONFIG_EJS, data);
var nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, 'admin.conf');
if (!safe.fs.writeFileSync(nginxConfigFilename, nginxConf)) return callback(safe.error);
reload(callback);
}
function configureApp(app, certFilePath, keyFilePath, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof certFilePath, 'string');
assert.strictEqual(typeof keyFilePath, 'string');
assert.strictEqual(typeof callback, 'function');
var sourceDir = path.resolve(__dirname, '..');
var endpoint = app.oauthProxy ? 'oauthproxy' : 'app';
var vhost = config.appFqdn(app.location);
var data = {
sourceDir: sourceDir,
adminOrigin: config.adminOrigin(),
vhost: vhost,
port: app.httpPort,
endpoint: endpoint,
certFilePath: certFilePath,
keyFilePath: keyFilePath
};
var nginxConf = ejs.render(NGINX_APPCONFIG_EJS, data);
var nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, app.id + '.conf');
debug('writing config for "%s" to %s', app.location, nginxConfigFilename);
if (!safe.fs.writeFileSync(nginxConfigFilename, nginxConf)) {
debug('Error creating nginx config for "%s" : %s', app.location, safe.error.message);
return callback(safe.error);
}
reload(callback);
}
function unconfigureApp(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
var nginxConfigFilename = path.join(paths.NGINX_APPCONFIG_DIR, app.id + '.conf');
if (!safe.fs.unlinkSync(nginxConfigFilename)) {
debug('Error removing nginx configuration of "%s": %s', app.location, safe.error.message);
return callback(null);
}
reload(callback);
}
function reload(callback) {
shell.sudo('reload', [ RELOAD_NGINX_CMD ], callback);
}


@@ -1,4 +1,4 @@
<% include header %>
<!-- callback tester -->
<script>
@@ -35,13 +35,11 @@ args.forEach(function (arg) {
});
if (code && redirectURI) {
window.location.href = redirectURI + '?code=' + code + (state ? '&state=' + state : '');
window.location.href = redirectURI + (redirectURI.indexOf('?') !== -1 ? '&' : '?') + 'code=' + code + (state ? '&state=' + state : '');
} else if (redirectURI && accessToken) {
window.location.href = redirectURI + '?token=' + accessToken + (state ? '&state=' + state : '');
window.location.href = redirectURI + (redirectURI.indexOf('?') !== -1 ? '&' : '?') + 'token=' + accessToken + (state ? '&state=' + state : '');
} else {
window.location.href = '/api/v1/session/login';
}
</script>
<% include footer %>;
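The template change above replaces an unconditional `'?code='` with a check for an existing query string on the redirect URI. The fix boils down to a one-line helper (the function name and URLs here are illustrative, not part of the source):

```javascript
'use strict';

// append a query parameter, using '&' when the URI already carries a query
// string and '?' otherwise — the same check the callback template now performs
function appendParam(redirectURI, key, value) {
    return redirectURI + (redirectURI.indexOf('?') !== -1 ? '&' : '?') + key + '=' + value;
}

console.log(appendParam('https://app.example.com/cb', 'code', 'abc'));
console.log(appendParam('https://app.example.com/cb?foo=1', 'code', 'abc'));
```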


@@ -1,38 +0,0 @@
<% include header %>
<form action="/api/v1/oauth/dialog/authorize/decision" method="post">
<input name="transaction_id" type="hidden" value="<%= transactionID %>">
<input type="hidden" name="_csrf" value="<%= csrf %>"/>
<div class="row">
<div class="col-md-3"></div>
<div class="col-md-6">
<div class="container-fluid">
<div class="row">
<div class="col-sm-12">
Hi <%= user.username %>!
</div>
</div>
<div class="row">
<div class="col-sm-12">
<b><%= client.name %></b> is requesting access to your account.
</div>
</div>
<div class="row">
<div class="col-sm-12">
Do you approve?
</div>
</div>
<div class="row">
<div class="col-sm-12">
<input class="btn btn-danger btn-outline" type="submit" value="Deny" name="cancel" id="deny"/>
<input class="btn btn-success btn-outline" type="submit" value="Allow"/>
</div>
</div>
</div>
</div>
<div class="col-md-3"></div>
</div>
</form>
<% include footer %>


@@ -1,5 +1,7 @@
<% include header %>
<!-- error tester -->
<br/>
<div class="container">


@@ -4,7 +4,9 @@
<meta charset="utf-8" />
<meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height" />
<title> Cloudron Login </title>
<title> <%= title %> </title>
<link href="/api/v1/cloudron/avatar" rel="icon" type="image/png">
<!-- Custom Fonts -->
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
@@ -26,3 +28,13 @@
</head>
<body class="oauth">
<!-- Navigation -->
<nav class="navbar navbar-default navbar-static-top shadow" role="navigation" style="margin-bottom: 0">
<div class="container-fluid">
<div class="navbar-header">
<span class="navbar-brand navbar-brand-icon"><img src="/api/v1/cloudron/avatar" width="40" height="40"/></span>
<span class="navbar-brand">Cloudron</span>
</div>
</div>
</nav>


@@ -1,5 +1,7 @@
<% include header %>
<!-- login tester -->
<div class="container">
<div class="row">
<div class="col-md-6 col-md-offset-3">
@@ -7,7 +9,7 @@
<div class="row">
<div class="col-md-12" style="text-align: center;">
<img width="128" height="128" src="<%= applicationLogo %>"/>
<h1>Login to <%= applicationName %> on <%= cloudronName %></h1>
<h1><small>Login to</small> <%= applicationName %></h1>
<br/>
</div>
</div>


@@ -25,10 +25,16 @@ app.controller('Controller', [function () {}]);
<div class="form-group" ng-class="{ 'has-error': resetForm.password.$dirty && resetForm.password.$invalid }">
<label class="control-label" for="inputPassword">New Password</label>
<input type="password" class="form-control" id="inputPassword" ng-model="password" name="password" ng-maxlength="512" ng-minlength="5" autofocus required>
<div class="control-label" ng-show="resetForm.password.$dirty && resetForm.password.$invalid">
<small ng-show="resetForm.password.$dirty && resetForm.password.$invalid">Password must be 8-30 characters with at least one uppercase, one numeric and one special character</small>
</div>
<input type="password" class="form-control" id="inputPassword" ng-model="password" name="password" ng-pattern="/^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[^a-zA-Z0-9])(?!.*\s).{8,30}$/" autofocus required>
</div>
<div class="form-group" ng-class="{ 'has-error': resetForm.passwordRepeat.$dirty && (resetForm.passwordRepeat.$invalid || password !== passwordRepeat) }">
<div class="form-group" ng-class="{ 'has-error': resetForm.passwordRepeat.$dirty && (password !== passwordRepeat) }">
<label class="control-label" for="inputPasswordRepeat">Repeat Password</label>
<div class="control-label" ng-show="resetForm.passwordRepeat.$dirty && (password !== passwordRepeat)">
<small ng-show="resetForm.passwordRepeat.$dirty && (password !== passwordRepeat)">Passwords don't match</small>
</div>
<input type="password" class="form-control" id="inputPasswordRepeat" ng-model="passwordRepeat" name="passwordRepeat" required>
</div>
<input class="btn btn-primary btn-outline pull-right" type="submit" value="Create" ng-disabled="resetForm.$invalid || password !== passwordRepeat"/>


@@ -25,10 +25,16 @@ app.controller('Controller', [function () {}]);
<div class="form-group" ng-class="{ 'has-error': setupForm.password.$dirty && setupForm.password.$invalid }">
<label class="control-label" for="inputPassword">New Password</label>
<input type="password" class="form-control" id="inputPassword" ng-model="password" name="password" ng-maxlength="512" ng-minlength="5" autofocus required>
<div class="control-label" ng-show="setupForm.password.$dirty && setupForm.password.$invalid">
<small ng-show="setupForm.password.$dirty && setupForm.password.$invalid">Password must be 8-30 characters with at least one uppercase, one numeric and one special character</small>
</div>
<input type="password" class="form-control" id="inputPassword" ng-model="password" name="password" ng-pattern="/^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[^a-zA-Z0-9])(?!.*\s).{8,30}$/" autofocus required>
</div>
<div class="form-group" ng-class="{ 'has-error': setupForm.passwordRepeat.$dirty && (setupForm.passwordRepeat.$invalid || password !== passwordRepeat) }">
<div class="form-group" ng-class="{ 'has-error': setupForm.passwordRepeat.$dirty && (password !== passwordRepeat) }">
<label class="control-label" for="inputPasswordRepeat">Repeat Password</label>
<div class="control-label" ng-show="setupForm.passwordRepeat.$dirty && (password !== passwordRepeat)">
<small ng-show="setupForm.passwordRepeat.$dirty && (password !== passwordRepeat)">Passwords don't match</small>
</div>
<input type="password" class="form-control" id="inputPasswordRepeat" ng-model="passwordRepeat" name="passwordRepeat" required>
</div>
<input class="btn btn-primary btn-outline pull-right" type="submit" value="Create" ng-disabled="setupForm.$invalid || password !== passwordRepeat"/>


@@ -9,12 +9,14 @@ var appdb = require('./appdb.js'),
assert = require('assert'),
clientdb = require('./clientdb.js'),
config = require('./config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:proxy'),
express = require('express'),
http = require('http'),
proxy = require('proxy-middleware'),
session = require('cookie-session'),
superagent = require('superagent'),
tokendb = require('./tokendb.js'),
url = require('url'),
uuid = require('node-uuid');
@@ -24,13 +26,20 @@ var gHttpServer = null;
var CALLBACK_URI = '/callback';
function clearSession(req) {
delete gSessions[req.session.id];
req.session.id = uuid.v4();
gSessions[req.session.id] = {};
req.sessionData = gSessions[req.session.id];
}
function attachSessionData(req, res, next) {
assert.strictEqual(typeof req.session, 'object');
if (!req.session.id || !gSessions[req.session.id]) {
req.session.id = uuid.v4();
gSessions[req.session.id] = {};
}
if (!req.session.id || !gSessions[req.session.id]) clearSession(req);
// attach the session data to the request
req.sessionData = gSessions[req.session.id];
@@ -46,16 +55,10 @@ function verifySession(req, res, next) {
return next();
}
// use http admin origin so that it works with self-signed certs
superagent
.get(config.internalAdminOrigin() + '/api/v1/profile')
.query({ access_token: req.sessionData.accessToken})
.end(function (error, result) {
if (error) {
console.error(error);
req.authenticated = false;
} else if (result.statusCode !== 200) {
req.sessionData.accessToken = null;
tokendb.get(req.sessionData.accessToken, function (error, token) {
if (error) {
if (error.reason !== DatabaseError.NOT_FOUND) console.error(error);
clearSession(req);
req.authenticated = false;
} else {
req.authenticated = true;
@@ -89,7 +92,7 @@ function authenticate(req, res, next) {
.post(config.internalAdminOrigin() + '/api/v1/oauth/token')
.query(query).send(data)
.end(function (error, result) {
if (error) {
if (error && !error.response) {
console.error(error);
return res.send(500, 'Unable to contact the oauth server.');
}
@@ -121,7 +124,7 @@ function authenticate(req, res, next) {
return res.send(500, 'Unknown app.');
}
clientdb.getByAppId('proxy-' + result.id, function (error, result) {
clientdb.getByAppIdAndType(result.id, clientdb.TYPE_PROXY, function (error, result) {
if (error) {
console.error('Unknown OAuth client.', error);
return res.send(500, 'Unknown OAuth client.');
@@ -133,7 +136,7 @@ function authenticate(req, res, next) {
req.sessionData.clientSecret = result.clientSecret;
var callbackUrl = result.redirectURI + CALLBACK_URI;
var scope = 'profile,roleUser';
var scope = 'profile';
var oauthLogin = config.adminOrigin() + '/api/v1/oauth/dialog/authorize?response_type=code&client_id=' + result.id + '&redirect_uri=' + callbackUrl + '&scope=' + scope;
debug('begin OAuth flow for client %s.', result.name);

src/password.js Normal file

@@ -0,0 +1,47 @@
/* jslint node:true */
'use strict';
// From https://www.npmjs.com/package/password-generator
exports = module.exports = {
generate: generate,
validate: validate
};
var assert = require('assert'),
generatePassword = require('password-generator');
// http://www.w3resource.com/javascript/form/example4-javascript-form-validation-password.html
// WARNING: if this is changed, the UI parts in the setup and account views have to be adjusted!
var gPasswordTestRegExp = /^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[^a-zA-Z0-9])(?!.*\s).{8,30}$/;
var UPPERCASE_RE = /([A-Z])/g;
var LOWERCASE_RE = /([a-z])/g;
var NUMBER_RE = /([\d])/g;
var SPECIAL_CHAR_RE = /([\?\-])/g;
function isStrongEnough(password) {
var uc = password.match(UPPERCASE_RE);
var lc = password.match(LOWERCASE_RE);
var n = password.match(NUMBER_RE);
var sc = password.match(SPECIAL_CHAR_RE);
return uc && lc && n && sc;
}
function generate() {
var password = '';
while (!isStrongEnough(password)) password = generatePassword(8, false, /[\w\d\?\-]/);
return password;
}
function validate(password) {
assert.strictEqual(typeof password, 'string');
if (!password.match(gPasswordTestRegExp)) return new Error('Password must be 8-30 characters with at least one uppercase, one lowercase, one numeric and one special character');
return null;
}
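A minimal sketch of how the new password helper behaves; the regex is copied verbatim from src/password.js above, and the test strings are illustrative only.

```javascript
// Regex copied from src/password.js: 8-30 chars, at least one digit,
// one lowercase, one uppercase, one special character, no whitespace.
var gPasswordTestRegExp = /^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[^a-zA-Z0-9])(?!.*\s).{8,30}$/;

function validate(password) {
    if (!password.match(gPasswordTestRegExp)) return new Error('Password does not meet the requirements');
    return null; // null means the password is acceptable
}

console.assert(validate('Secret1?') === null);          // all character classes present
console.assert(validate('secret1?') instanceof Error);  // no uppercase
console.assert(validate('Short1?') instanceof Error);   // only 7 characters
console.assert(validate('Secret 1?') instanceof Error); // whitespace is rejected
```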


@@ -13,18 +13,22 @@ exports = module.exports = {
ADDON_CONFIG_DIR: path.join(config.baseDir(), 'data/addons'),
DNS_IN_SYNC_FILE: path.join(config.baseDir(), 'data/dns_in_sync'),
COLLECTD_APPCONFIG_DIR: path.join(config.baseDir(), 'data/collectd/collectd.conf.d'),
DATA_DIR: path.join(config.baseDir(), 'data'),
BOX_DATA_DIR: path.join(config.baseDir(), 'data/box'),
// this is not part of appdata because an icon may be set before install
APPICONS_DIR: path.join(config.baseDir(), 'data/box/appicons'),
APP_CERTS_DIR: path.join(config.baseDir(), 'data/box/certs'),
MAIL_DATA_DIR: path.join(config.baseDir(), 'data/box/mail'),
CLOUDRON_AVATAR_FILE: path.join(config.baseDir(), 'data/box/avatar.png'),
CLOUDRON_DEFAULT_AVATAR_FILE: path.join(__dirname + '/../assets/avatar.png'),
FAVICON_FILE: path.join(__dirname + '/../assets/favicon.ico'),
UPDATE_CHECKER_FILE: path.join(config.baseDir(), 'data/box/updatechecker.json'),
UPDATE_CHECKER_FILE: path.join(config.baseDir(), 'data/box/updatechecker.json')
ACME_CHALLENGES_DIR: path.join(config.baseDir(), 'data/acme'),
ACME_ACCOUNT_KEY_FILE: path.join(config.baseDir(), 'data/box/acme/acme.key')
};


@@ -15,6 +15,7 @@ exports = module.exports = {
updateApp: updateApp,
getLogs: getLogs,
getLogStream: getLogStream,
listBackups: listBackups,
stopApp: stopApp,
startApp: startApp,
@@ -43,6 +44,7 @@ function removeInternalAppFields(app) {
health: app.health,
location: app.location,
accessRestriction: app.accessRestriction,
oauthProxy: app.oauthProxy,
lastBackupId: app.lastBackupId,
manifest: app.manifest,
portBindings: app.portBindings,
@@ -113,20 +115,28 @@ function installApp(req, res, next) {
if (typeof data.appStoreId !== 'string') return next(new HttpError(400, 'appStoreId is required'));
if (typeof data.location !== 'string') return next(new HttpError(400, 'location is required'));
if (('portBindings' in data) && typeof data.portBindings !== 'object') return next(new HttpError(400, 'portBindings must be an object'));
if (typeof data.accessRestriction !== 'string') return next(new HttpError(400, 'accessRestriction is required'));
if (typeof data.accessRestriction !== 'object') return next(new HttpError(400, 'accessRestriction is required'));
if (typeof data.oauthProxy !== 'boolean') return next(new HttpError(400, 'oauthProxy must be a boolean'));
if ('icon' in data && typeof data.icon !== 'string') return next(new HttpError(400, 'icon is not a string'));
if (data.cert && typeof data.cert !== 'string') return next(new HttpError(400, 'cert must be a string'));
if (data.key && typeof data.key !== 'string') return next(new HttpError(400, 'key must be a string'));
if (data.cert && !data.key) return next(new HttpError(400, 'key must be provided'));
if (!data.cert && data.key) return next(new HttpError(400, 'cert must be provided'));
// allow tests to provide an appId for testing
var appId = (process.env.BOX_ENV === 'test' && typeof data.appId === 'string') ? data.appId : uuid.v4();
debug('Installing app id:%s storeid:%s loc:%s port:%j restrict:%s manifest:%j', appId, data.appStoreId, data.location, data.portBindings, data.accessRestriction, data.manifest);
debug('Installing app id:%s storeid:%s loc:%s port:%j accessRestriction:%j oauthproxy:%s manifest:%j', appId, data.appStoreId, data.location, data.portBindings, data.accessRestriction, data.oauthProxy, data.manifest);
apps.install(appId, data.appStoreId, data.manifest, data.location, data.portBindings || null, data.accessRestriction, data.icon || null, function (error) {
apps.install(appId, data.appStoreId, data.manifest, data.location, data.portBindings || null, data.accessRestriction, data.oauthProxy, data.icon || null, data.cert || null, data.key || null, function (error) {
if (error && error.reason === AppsError.ALREADY_EXISTS) return next(new HttpError(409, error.message));
if (error && error.reason === AppsError.PORT_RESERVED) return next(new HttpError(409, 'Port ' + error.message + ' is reserved.'));
if (error && error.reason === AppsError.PORT_CONFLICT) return next(new HttpError(409, 'Port ' + error.message + ' is already in use.'));
if (error && error.reason === AppsError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error && error.reason === AppsError.BILLING_REQUIRED) return next(new HttpError(402, 'Billing required'));
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_CERTIFICATE) return next(new HttpError(400, error.message));
if (error && error.reason === AppsError.USER_REQUIRED) return next(new HttpError(400, 'accessRestriction must specify one user'));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, { id: appId } ));
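The install route now expects `accessRestriction` as an object and `oauthProxy` as a boolean, and requires cert and key to be supplied together. A hypothetical helper mirroring those guards (the function name and the `users` field in the sample payload are illustrative, not from the source):

```javascript
// Mirrors the tightened installApp validation above, returning the
// error message instead of calling next(new HttpError(400, ...)).
function checkInstallPayload(data) {
    if (typeof data.accessRestriction !== 'object') return 'accessRestriction is required';
    if (typeof data.oauthProxy !== 'boolean') return 'oauthProxy must be a boolean';
    if (data.cert && typeof data.cert !== 'string') return 'cert must be a string';
    if (data.key && typeof data.key !== 'string') return 'key must be a string';
    if (data.cert && !data.key) return 'key must be provided';
    if (!data.cert && data.key) return 'cert must be provided';
    return null; // payload passes these checks
}

console.assert(checkInstallPayload({ accessRestriction: { users: ['uid-1'] }, oauthProxy: false }) === null);
console.assert(checkInstallPayload({ accessRestriction: 'roleUser', oauthProxy: false }) !== null); // old string form is rejected
console.assert(checkInstallPayload({ accessRestriction: {}, oauthProxy: true, cert: 'PEM' }) === 'key must be provided');
```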
@@ -149,17 +159,23 @@ function configureApp(req, res, next) {
if (!data) return next(new HttpError(400, 'Cannot parse data field'));
if (typeof data.location !== 'string') return next(new HttpError(400, 'location is required'));
if (('portBindings' in data) && typeof data.portBindings !== 'object') return next(new HttpError(400, 'portBindings must be an object'));
if (typeof data.accessRestriction !== 'string') return next(new HttpError(400, 'accessRestriction is required'));
if (typeof data.accessRestriction !== 'object') return next(new HttpError(400, 'accessRestriction is required'));
if (typeof data.oauthProxy !== 'boolean') return next(new HttpError(400, 'oauthProxy must be a boolean'));
if (data.cert && typeof data.cert !== 'string') return next(new HttpError(400, 'cert must be a string'));
if (data.key && typeof data.key !== 'string') return next(new HttpError(400, 'key must be a string'));
if (data.cert && !data.key) return next(new HttpError(400, 'key must be provided'));
if (!data.cert && data.key) return next(new HttpError(400, 'cert must be provided'));
debug('Configuring app id:%s location:%s bindings:%j', req.params.id, data.location, data.portBindings);
debug('Configuring app id:%s location:%s bindings:%j accessRestriction:%j oauthProxy:%s', req.params.id, data.location, data.portBindings, data.accessRestriction, data.oauthProxy);
apps.configure(req.params.id, data.location, data.portBindings || null, data.accessRestriction, function (error) {
apps.configure(req.params.id, data.location, data.portBindings || null, data.accessRestriction, data.oauthProxy, data.cert || null, data.key || null, function (error) {
if (error && error.reason === AppsError.ALREADY_EXISTS) return next(new HttpError(409, error.message));
if (error && error.reason === AppsError.PORT_RESERVED) return next(new HttpError(409, 'Port ' + error.message + ' is reserved.'));
if (error && error.reason === AppsError.PORT_CONFLICT) return next(new HttpError(409, 'Port ' + error.message + ' is already in use.'));
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_STATE) return next(new HttpError(409, error.message));
if (error && error.reason === AppsError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error && error.reason === AppsError.BAD_CERTIFICATE) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, { }));
@@ -253,7 +269,7 @@ function updateApp(req, res, next) {
if ('icon' in data && typeof data.icon !== 'string') return next(new HttpError(400, 'icon is not a string'));
if ('force' in data && typeof data.force !== 'boolean') return next(new HttpError(400, 'force must be a boolean'));
debug('Update app id:%s to manifest:%j', req.params.id, data.manifest);
debug('Update app id:%s to manifest:%j with portBindings:%j', req.params.id, data.manifest, data.portBindings);
apps.update(req.params.id, data.force || false, data.manifest, data.portBindings, data.icon, function (error) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
@@ -272,14 +288,14 @@ function getLogStream(req, res, next) {
debug('Getting logstream of app id:%s', req.params.id);
var fromLine = req.query.fromLine ? parseInt(req.query.fromLine, 10) : -10; // we ignore last-event-id
if (isNaN(fromLine)) return next(new HttpError(400, 'fromLine must be a valid number'));
var lines = req.query.lines ? parseInt(req.query.lines, 10) : -10; // we ignore last-event-id
if (isNaN(lines)) return next(new HttpError(400, 'lines must be a valid number'));
function sse(id, data) { return 'id: ' + id + '\ndata: ' + data + '\n\n'; }
if (req.headers.accept !== 'text/event-stream') return next(new HttpError(400, 'This API call requires EventStream'));
apps.getLogStream(req.params.id, fromLine, function (error, logStream) {
apps.getLogs(req.params.id, lines, true /* follow */, function (error, logStream) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_STATE) return next(new HttpError(412, error.message));
if (error) return next(new HttpError(500, error));
@@ -295,7 +311,7 @@ function getLogStream(req, res, next) {
res.on('close', logStream.close);
logStream.on('data', function (data) {
var obj = JSON.parse(data);
res.write(sse(obj.lineNumber, JSON.stringify(obj)));
res.write(sse(obj.monotonicTimestamp, JSON.stringify(obj))); // send timestamp as id
});
logStream.on('end', res.end.bind(res));
logStream.on('error', res.end.bind(res, null));
@@ -305,9 +321,12 @@ function getLogStream(req, res, next) {
function getLogs(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
var lines = req.query.lines ? parseInt(req.query.lines, 10) : 100;
if (isNaN(lines)) return next(new HttpError(400, 'lines must be a number'));
debug('Getting logs of app id:%s', req.params.id);
apps.getLogs(req.params.id, function (error, logStream) {
apps.getLogs(req.params.id, lines, false /* follow */, function (error, logStream) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_STATE) return next(new HttpError(412, error.message));
if (error) return next(new HttpError(500, error));
@@ -339,7 +358,9 @@ function exec(req, res, next) {
var rows = req.query.rows ? parseInt(req.query.rows, 10) : null;
if (isNaN(rows)) return next(new HttpError(400, 'rows must be a number'));
apps.exec(req.params.id, { cmd: cmd, rows: rows, columns: columns }, function (error, duplexStream) {
var tty = req.query.tty === 'true' ? true : false;
apps.exec(req.params.id, { cmd: cmd, rows: rows, columns: columns, tty: tty }, function (error, duplexStream) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error && error.reason === AppsError.BAD_STATE) return next(new HttpError(409, error.message));
if (error) return next(new HttpError(500, error));
@@ -354,3 +375,13 @@ function exec(req, res, next) {
});
}
function listBackups(req, res, next) {
assert.strictEqual(typeof req.params.id, 'string');
apps.listBackups(req.params.id, function (error, result) {
if (error && error.reason === AppsError.NOT_FOUND) return next(new HttpError(404, 'No such app'));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200, { backups: result }));
});
}
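The log-stream change above switches the server-sent-event id from the log line's `lineNumber` to its `monotonicTimestamp`. A small sketch of the framing, using the `sse()` helper as defined in the diff (the sample log object is illustrative):

```javascript
// Same framing helper as in getLogStream above.
function sse(id, data) { return 'id: ' + id + '\ndata: ' + data + '\n\n'; }

// Hypothetical log line; the id sent to the client is the monotonic timestamp.
var logLine = { monotonicTimestamp: 1453848000123, message: 'app started' };
var frame = sse(logLine.monotonicTimestamp, JSON.stringify(logLine));

console.assert(frame.indexOf('id: 1453848000123\n') === 0); // id field comes first
console.assert(frame.slice(-2) === '\n\n'); // events are terminated by a blank line
```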


@@ -5,7 +5,6 @@
exports = module.exports = {
add: add,
get: get,
update: update,
del: del,
getAllByUserId: getAllByUserId,
getClientTokens: getClientTokens,
@@ -13,12 +12,13 @@ exports = module.exports = {
};
var assert = require('assert'),
validUrl = require('valid-url'),
clientdb = require('../clientdb.js'),
clients = require('../clients.js'),
ClientsError = clients.ClientsError,
DatabaseError = require('../databaseerror.js'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess;
HttpSuccess = require('connect-lastmile').HttpSuccess,
validUrl = require('valid-url');
function add(req, res, next) {
var data = req.body;
@@ -29,10 +29,7 @@ function add(req, res, next) {
if (typeof data.scope !== 'string' || !data.scope) return next(new HttpError(400, 'scope is required'));
if (!validUrl.isWebUri(data.redirectURI)) return next(new HttpError(400, 'redirectURI must be a valid uri'));
// prefix as this route only allows external apps for developers
var appId = 'external-' + data.appId;
clients.add(appId, data.redirectURI, data.scope, function (error, result) {
clients.add(data.appId, clientdb.TYPE_EXTERNAL, data.redirectURI, data.scope, function (error, result) {
if (error && error.reason === ClientsError.INVALID_SCOPE) return next(new HttpError(400, 'Invalid scope'));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(201, result));
@@ -49,22 +46,6 @@ function get(req, res, next) {
});
}
function update(req, res, next) {
assert.strictEqual(typeof req.params.clientId, 'string');
var data = req.body;
if (!data) return next(new HttpError(400, 'Cannot parse data field'));
if (typeof data.appId !== 'string' || !data.appId) return next(new HttpError(400, 'appId is required'));
if (typeof data.redirectURI !== 'string' || !data.redirectURI) return next(new HttpError(400, 'redirectURI is required'));
if (!validUrl.isWebUri(data.redirectURI)) return next(new HttpError(400, 'redirectURI must be a valid uri'));
clients.update(req.params.clientId, data.appId, data.redirectURI, function (error, result) {
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, result));
});
}
function del(req, res, next) {
assert.strictEqual(typeof req.params.clientId, 'string');


@@ -11,7 +11,6 @@ exports = module.exports = {
getConfig: getConfig,
update: update,
migrate: migrate,
setCertificate: setCertificate,
feedback: feedback
};
@@ -24,9 +23,7 @@ var assert = require('assert'),
debug = require('debug')('box:routes/cloudron'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess,
superagent = require('superagent'),
safe = require('safetydance'),
updateChecker = require('../updatechecker.js');
superagent = require('superagent');
/**
* Creating an admin user and activate the cloudron.
@@ -44,30 +41,32 @@ function activate(req, res, next) {
if (typeof req.body.username !== 'string') return next(new HttpError(400, 'username must be string'));
if (typeof req.body.password !== 'string') return next(new HttpError(400, 'password must be string'));
if (typeof req.body.email !== 'string') return next(new HttpError(400, 'email must be string'));
if ('name' in req.body && typeof req.body.name !== 'string') return next(new HttpError(400, 'name must be a string'));
if ('displayName' in req.body && typeof req.body.displayName !== 'string') return next(new HttpError(400, 'displayName must be string'));
var username = req.body.username;
var password = req.body.password;
var email = req.body.email;
var name = req.body.name || null;
var displayName = req.body.displayName || '';
var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
debug('activate: username:%s ip:%s', username, ip);
cloudron.activate(username, password, email, name, ip, function (error, info) {
cloudron.activate(username, password, email, displayName, ip, function (error, info) {
if (error && error.reason === CloudronError.ALREADY_PROVISIONED) return next(new HttpError(409, 'Already setup'));
if (error && error.reason === CloudronError.BAD_USERNAME) return next(new HttpError(400, 'Bad username'));
if (error && error.reason === CloudronError.BAD_PASSWORD) return next(new HttpError(400, 'Bad password'));
if (error && error.reason === CloudronError.BAD_EMAIL) return next(new HttpError(400, 'Bad email'));
if (error && error.reason === CloudronError.BAD_NAME) return next(new HttpError(400, 'Bad name'));
if (error) return next(new HttpError(500, error));
// only in caas case do we have to notify the api server about activation
if (config.provider() !== 'caas') return next(new HttpSuccess(201, info));
// Now let the api server know we got activated
superagent.post(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/setup/done').query({ setupToken:req.query.setupToken }).end(function (error, result) {
if (error) return next(new HttpError(500, error));
superagent.post(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/setup/done').query({ setupToken: req.query.setupToken }).end(function (error, result) {
if (error && !error.response) return next(new HttpError(500, error));
if (result.statusCode === 403) return next(new HttpError(403, 'Invalid token'));
if (result.statusCode === 409) return next(new HttpError(409, 'Already setup'));
if (result.statusCode !== 201) return next(new HttpError(500, result.text ? result.text.message : 'Internal error'));
if (result.statusCode !== 201) return next(new HttpError(500, result.text || 'Internal error'));
next(new HttpSuccess(201, info));
});
@@ -77,13 +76,16 @@ function activate(req, res, next) {
function setupTokenAuth(req, res, next) {
assert.strictEqual(typeof req.query, 'object');
// skip setupToken auth for non caas case
if (config.provider() !== 'caas') return next();
if (typeof req.query.setupToken !== 'string') return next(new HttpError(400, 'no setupToken provided'));
superagent.get(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/setup/verify').query({ setupToken:req.query.setupToken }).end(function (error, result) {
if (error) return next(new HttpError(500, error));
if (error && !error.response) return next(new HttpError(500, error));
if (result.statusCode === 403) return next(new HttpError(403, 'Invalid token'));
if (result.statusCode === 409) return next(new HttpError(409, 'Already setup'));
if (result.statusCode !== 200) return next(new HttpError(500, result.text ? result.text.message : 'Internal error'));
if (result.statusCode !== 200) return next(new HttpError(500, result.text || 'Internal error'));
next();
});
@@ -117,11 +119,9 @@ function getConfig(req, res, next) {
}
function update(req, res, next) {
var boxUpdateInfo = updateChecker.getUpdateInfo().box;
if (!boxUpdateInfo) return next(new HttpError(422, 'No update available'));
// this only initiates the update, progress can be checked via the progress route
cloudron.update(boxUpdateInfo, function (error) {
cloudron.updateToLatest(function (error) {
if (error && error.reason === CloudronError.ALREADY_UPTODATE) return next(new HttpError(422, error.message));
if (error && error.reason === CloudronError.BAD_STATE) return next(new HttpError(409, error.message));
if (error) return next(new HttpError(500, error));
@@ -143,23 +143,6 @@ function migrate(req, res, next) {
});
}
function setCertificate(req, res, next) {
assert.strictEqual(typeof req.files, 'object');
if (!req.files.certificate) return next(new HttpError(400, 'certificate must be provided'));
var certificate = safe.fs.readFileSync(req.files.certificate.path, 'utf8');
if (!req.files.key) return next(new HttpError(400, 'key must be provided'));
var key = safe.fs.readFileSync(req.files.key.path, 'utf8');
cloudron.setCertificate(certificate, key, function (error) {
if (error && error.reason === CloudronError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, {}));
});
}
function feedback(req, res, next) {
assert.strictEqual(typeof req.user, 'object');


@@ -3,7 +3,9 @@
'use strict';
exports = module.exports = {
backup: backup
backup: backup,
update: update,
retire: retire
};
var cloudron = require('../cloudron.js'),
@@ -13,7 +15,7 @@ var cloudron = require('../cloudron.js'),
HttpSuccess = require('connect-lastmile').HttpSuccess;
function backup(req, res, next) {
debug('trigger backup');
debug('triggering backup');
// note that cloudron.backup only waits for backup initiation and not for backup to complete
// backup progress can be checked by polling the progress api call
@@ -24,3 +26,29 @@ function backup(req, res, next) {
next(new HttpSuccess(202, {}));
});
}
function update(req, res, next) {
debug('triggering update');
// this only initiates the update, progress can be checked via the progress route
cloudron.updateToLatest(function (error) {
if (error && error.reason === CloudronError.ALREADY_UPTODATE) return next(new HttpError(422, error.message));
if (error && error.reason === CloudronError.BAD_STATE) return next(new HttpError(409, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, {}));
});
}
function retire(req, res, next) {
debug('triggering retire');
// note that cloudron.retire only waits for retire initiation and not for it to complete
// progress can be checked by polling the progress api call
cloudron.retire(function (error) {
if (error && error.reason === CloudronError.BAD_STATE) return next(new HttpError(409, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, {}));
});
}


@@ -3,6 +3,7 @@
'use strict';
var assert = require('assert'),
apps = require('../apps'),
authcodedb = require('../authcodedb'),
clientdb = require('../clientdb'),
config = require('../config.js'),
@@ -27,34 +28,21 @@ var assert = require('assert'),
// create OAuth 2.0 server
var gServer = oauth2orize.createServer();
// Register serialialization and deserialization functions.
//
// When a client redirects a user to user authorization endpoint, an
// authorization transaction is initiated. To complete the transaction, the
// user must authenticate and approve the authorization request. Because this
// may involve multiple HTTP request/response exchanges, the transaction is
// stored in the session.
//
// An application must supply serialization functions, which determine how the
// client object is serialized into the session. Typically this will be a
// simple matter of serializing the client's ID, and deserializing by finding
// the client by ID from the database.
// The client id is stored in the session and can thus be retrieved for each
// step in the oauth flow transaction, which involves multiple http requests.
gServer.serializeClient(function (client, callback) {
debug('server serialize:', client);
return callback(null, client.id);
});
gServer.deserializeClient(function (id, callback) {
debug('server deserialize:', id);
clientdb.get(id, function (error, client) {
if (error) { return callback(error); }
return callback(null, client);
});
clientdb.get(id, callback);
});
// Register supported grant types.
// Grant authorization codes. The callback takes the `client` requesting
@@ -64,21 +52,17 @@ gServer.deserializeClient(function (id, callback) {
// the application. The application issues a code, which is bound to these
// values, and will be exchanged for an access token.
// we use , (comma) as scope separator
gServer.grant(oauth2orize.grant.code({ scopeSeparator: ',' }, function (client, redirectURI, user, ares, callback) {
debug('grant code:', client, redirectURI, user.id, ares);
debug('grant code:', client.id, redirectURI, user.id, ares);
var code = hat(256);
var expiresAt = Date.now() + 60 * 60000; // 1 hour
var scopes = client.scope ? client.scope.split(',') : ['profile','roleUser'];
if (scopes.indexOf('roleAdmin') !== -1 && !user.admin) {
debug('grant code: not allowed, you need to be admin');
return callback(new Error('Admin capabilities required'));
}
authcodedb.add(code, client.id, user.username, expiresAt, function (error) {
if (error) return callback(error);
debug('grant code: new auth code for client %s code %s', client.id, code);
callback(null, code);
});
}));
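The grant above registers `oauth2orize.grant.code` with `,` (comma) as the scope separator and falls back to a default scope list when the client has none. A minimal sketch of that scope handling, copied from the grant's first line:

```javascript
// Same fallback expression as in the grant.code callback above.
function resolveScopes(clientScope) {
    return clientScope ? clientScope.split(',') : ['profile', 'roleUser'];
}

console.assert(resolveScopes('profile,roleUser').length === 2);   // comma-separated scopes split cleanly
console.assert(resolveScopes('')[1] === 'roleUser');              // empty scope falls back to the default list
```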
@@ -93,7 +77,7 @@ gServer.grant(oauth2orize.grant.token({ scopeSeparator: ',' }, function (client,
tokendb.add(token, tokendb.PREFIX_USER + user.id, client.id, expires, client.scope, function (error) {
if (error) return callback(error);
debug('new access token for client ' + client.id + ' token ' + token);
debug('grant token: new access token for client %s token %s', client.id, token);
callback(null, token);
});
@@ -123,7 +107,7 @@ gServer.exchange(oauth2orize.exchange.code(function (client, code, redirectURI,
tokendb.add(token, tokendb.PREFIX_USER + authCode.userId, authCode.clientId, expires, client.scope, function (error) {
if (error) return callback(error);
debug('new access token for client ' + client.id + ' token ' + token);
debug('exchange: new access token for client %s token %s', client.id, token);
callback(null, token);
});
@@ -148,24 +132,36 @@ session.ensureLoggedIn = function (redirectTo) {
};
};
function renderTemplate(res, template, data) {
assert.strictEqual(typeof res, 'object');
assert.strictEqual(typeof template, 'string');
assert.strictEqual(typeof data, 'object');
res.render(template, data);
}
function sendErrorPageOrRedirect(req, res, message) {
assert.strictEqual(typeof req, 'object');
assert.strictEqual(typeof res, 'object');
assert.strictEqual(typeof message, 'string');
debug('sendErrorPageOrRedirect: returnTo "%s".', req.query.returnTo, message);
debug('sendErrorPageOrRedirect: returnTo %s.', req.query.returnTo, message);
if (typeof req.query.returnTo !== 'string') {
res.render('error', {
renderTemplate(res, 'error', {
adminOrigin: config.adminOrigin(),
message: message
message: message,
title: 'Cloudron Error'
});
} else {
var u = url.parse(req.query.returnTo);
if (!u.protocol || !u.host) return res.render('error', {
adminOrigin: config.adminOrigin(),
message: 'Invalid request. returnTo query is not a valid URI. ' + message
});
if (!u.protocol || !u.host) {
return renderTemplate(res, 'error', {
adminOrigin: config.adminOrigin(),
message: 'Invalid request. returnTo query is not a valid URI. ' + message,
title: 'Cloudron Error'
});
}
res.redirect(util.format('%s//%s', u.protocol, u.host));
}
@@ -178,65 +174,51 @@ function sendError(req, res, message) {
assert.strictEqual(typeof res, 'object');
assert.strictEqual(typeof message, 'string');
res.render('error', {
renderTemplate(res, 'error', {
adminOrigin: config.adminOrigin(),
message: message
message: message,
title: 'Cloudron Error'
});
}
// Main login form username and password
// -> GET /api/v1/session/login
function loginForm(req, res) {
if (typeof req.session.returnTo !== 'string') return sendErrorPageOrRedirect(req, res, 'Invalid login request. No returnTo provided.');
var u = url.parse(req.session.returnTo, true);
if (!u.query.client_id) return sendErrorPageOrRedirect(req, res, 'Invalid login request. No client_id provided.');
var cloudronName = '';
function render(applicationName, applicationLogo) {
res.render('login', {
renderTemplate(res, 'login', {
adminOrigin: config.adminOrigin(),
csrf: req.csrfToken(),
cloudronName: cloudronName,
applicationName: applicationName,
applicationLogo: applicationLogo,
error: req.query.error || null
error: req.query.error || null,
title: applicationName + ' Login'
});
}
settings.getCloudronName(function (error, name) {
if (error) return sendError(req, res, 'Internal Error');
clientdb.get(u.query.client_id, function (error, result) {
if (error) return sendError(req, res, 'Unknown OAuth client');
cloudronName = name;
switch (result.type) {
case clientdb.TYPE_ADMIN: return render(constants.ADMIN_NAME, '/api/v1/cloudron/avatar');
case clientdb.TYPE_EXTERNAL: return render('External Application', '/api/v1/cloudron/avatar');
case clientdb.TYPE_SIMPLE_AUTH: return sendError(req, res, 'Unknown OAuth client');
default: break;
}
clientdb.get(u.query.client_id, function (error, result) {
if (error) return sendError(req, res, 'Unknown OAuth client');
appdb.get(result.appId, function (error, result) {
if (error) return sendErrorPageOrRedirect(req, res, 'Unknown Application for those OAuth credentials');
// Handle our different types of oauth clients
var appId = result.appId;
if (appId === constants.ADMIN_CLIENT_ID) {
return render(constants.ADMIN_NAME, '/api/v1/cloudron/avatar');
} else if (appId === constants.TEST_CLIENT_ID) {
return render(constants.TEST_NAME, '/api/v1/cloudron/avatar');
} else if (appId.indexOf('external-') === 0) {
return render('External Application', '/api/v1/cloudron/avatar');
} else if (appId.indexOf('addon-') === 0) {
appId = appId.slice('addon-'.length);
} else if (appId.indexOf('proxy-') === 0) {
appId = appId.slice('proxy-'.length);
}
appdb.get(appId, function (error, result) {
if (error) return sendErrorPageOrRedirect(req, res, 'Unknown Application for those OAuth credentials');
var applicationName = result.location || config.fqdn();
render(applicationName, '/api/v1/cloudron/avatar');
});
var applicationName = result.location || config.fqdn();
render(applicationName, '/api/v1/apps/' + result.id + '/icon');
});
});
}
// performs the login POST from the login form
// -> POST /api/v1/session/login
function login(req, res) {
var returnTo = req.session.returnTo || req.query.returnTo;
@@ -248,7 +230,7 @@ function login(req, res) {
});
}
// ends the current session
// -> GET /api/v1/session/logout
function logout(req, res) {
req.logout();
@@ -259,7 +241,7 @@ function logout(req, res) {
// Form to enter email address to send a password reset request mail
// -> GET /api/v1/session/password/resetRequest.html
function passwordResetRequestSite(req, res) {
res.render('password_reset_request', { adminOrigin: config.adminOrigin(), csrf: req.csrfToken() });
renderTemplate(res, 'password_reset_request', { adminOrigin: config.adminOrigin(), csrf: req.csrfToken(), title: 'Cloudron Password Reset' });
}
// This route is used for above form submission
@@ -283,19 +265,23 @@ function passwordResetRequest(req, res, next) {
// -> GET /api/v1/session/password/sent.html
function passwordSentSite(req, res) {
res.render('password_reset_sent', { adminOrigin: config.adminOrigin() });
renderTemplate(res, 'password_reset_sent', { adminOrigin: config.adminOrigin(), title: 'Cloudron Password Reset' });
}
// -> GET /api/v1/session/password/setup.html
function passwordSetupSite(req, res, next) {
if (!req.query.reset_token) return next(new HttpError(400, 'Missing reset_token'));
debug('passwordSetupSite: with token %s.', req.query.reset_token);
user.getByResetToken(req.query.reset_token, function (error, user) {
if (error) return next(new HttpError(401, 'Invalid reset_token'));
res.render('password_setup', { adminOrigin: config.adminOrigin(), user: user, csrf: req.csrfToken(), resetToken: req.query.reset_token });
renderTemplate(res, 'password_setup', {
adminOrigin: config.adminOrigin(),
user: user,
csrf: req.csrfToken(),
resetToken: req.query.reset_token,
title: 'Cloudron Password Setup'
});
});
}
@@ -303,12 +289,16 @@ function passwordSetupSite(req, res, next) {
function passwordResetSite(req, res, next) {
if (!req.query.reset_token) return next(new HttpError(400, 'Missing reset_token'));
debug('passwordResetSite: with token %s.', req.query.reset_token);
user.getByResetToken(req.query.reset_token, function (error, user) {
if (error) return next(new HttpError(401, 'Invalid reset_token'));
res.render('password_reset', { adminOrigin: config.adminOrigin(), user: user, csrf: req.csrfToken(), resetToken: req.query.reset_token });
renderTemplate(res, 'password_reset', {
adminOrigin: config.adminOrigin(),
user: user,
csrf: req.csrfToken(),
resetToken: req.query.reset_token,
title: 'Cloudron Password Reset'
});
});
}
@@ -326,6 +316,7 @@ function passwordReset(req, res, next) {
// setPassword clears the resetToken
user.setPassword(userObject.id, req.body.password, function (error, result) {
if (error && error.reason === UserError.BAD_PASSWORD) return next(new HttpError(406, 'Password does not meet the requirements'));
if (error) return next(new HttpError(500, error));
res.redirect(util.format('%s?accessToken=%s&expiresAt=%s', config.adminOrigin(), result.token, result.expiresAt));
@@ -334,50 +325,28 @@ function passwordReset(req, res, next) {
}
/*
The callback page takes the redirectURI and the authCode and redirects the browser accordingly
*/
// The callback page takes the redirectURI and the authCode and redirects the browser accordingly
//
// -> GET /api/v1/session/callback
var callback = [
session.ensureLoggedIn('/api/v1/session/login'),
function (req, res) {
debug('callback: with callback server ' + req.query.redirectURI);
res.render('callback', { adminOrigin: config.adminOrigin(), callbackServer: req.query.redirectURI });
renderTemplate(res, 'callback', { adminOrigin: config.adminOrigin(), callbackServer: req.query.redirectURI });
}
];
/*
This indicates a missing OAuth client session or invalid client ID
*/
var error = [
session.ensureLoggedIn('/api/v1/session/login'),
function (req, res) {
sendErrorPageOrRedirect(req, res, 'Invalid OAuth Client');
}
];
/*
The authorization endpoint is the entry point for an OAuth login.
Each app would start OAuth by redirecting the user to:
/api/v1/oauth/dialog/authorize?response_type=code&client_id=<clientId>&redirect_uri=<callbackURL>&scope=<ignored>
- First, this will ensure the user is logged in.
- Then in normal OAuth it would ask the user for permissions to the scopes, which we will do on app installation
- Then it will redirect the browser to the given <callbackURL> containing the authcode in the query
Scopes are set by the app during installation; the ones given at the start of the OAuth transaction are simply ignored.
*/
// The authorization endpoint is the entry point for an OAuth login.
//
// Each app would start OAuth by redirecting the user to:
//
// /api/v1/oauth/dialog/authorize?response_type=code&client_id=<clientId>&redirect_uri=<callbackURL>&scope=<ignored>
//
// - First, this will ensure the user is logged in.
// - Then it will redirect the browser to the given <callbackURL> containing the authcode in the query
//
// -> GET /api/v1/oauth/dialog/authorize
var authorization = [
// extract the returnTo origin and set as query param
function (req, res, next) {
if (!req.query.redirect_uri) return sendErrorPageOrRedirect(req, res, 'Invalid request. redirect_uri query param is not set.');
if (!req.query.client_id) return sendErrorPageOrRedirect(req, res, 'Invalid request. client_id query param is not set.');
@@ -386,10 +355,10 @@ var authorization = [
session.ensureLoggedIn('/api/v1/session/login?returnTo=' + req.query.redirect_uri)(req, res, next);
},
gServer.authorization(function (clientID, redirectURI, callback) {
debug('authorization: client %s with callback to %s.', clientID, redirectURI);
gServer.authorization({}, function (clientId, redirectURI, callback) {
debug('authorization: client %s with callback to %s.', clientId, redirectURI);
clientdb.get(clientID, function (error, client) {
clientdb.get(clientId, function (error, client) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, false);
if (error) return callback(error);
@@ -397,50 +366,36 @@ var authorization = [
var redirectPath = url.parse(redirectURI).path;
var redirectOrigin = client.redirectURI;
callback(null, client, '/api/v1/session/callback?redirectURI=' + url.resolve(redirectOrigin, redirectPath));
callback(null, client, '/api/v1/session/callback?redirectURI=' + encodeURIComponent(url.resolve(redirectOrigin, redirectPath)));
});
}),
// Until we have OAuth scopes, skip decision dialog
// OAuth scopes skip START
function (req, res, next) {
assert.strictEqual(typeof req.body, 'object');
assert.strictEqual(typeof req.oauth2, 'object');
// Handle our different types of oauth clients
var type = req.oauth2.client.type;
var scopes = req.oauth2.client.scope ? req.oauth2.client.scope.split(',') : ['profile','roleUser'];
if (type === clientdb.TYPE_ADMIN) return next();
if (type === clientdb.TYPE_EXTERNAL) return next();
if (type === clientdb.TYPE_SIMPLE_AUTH) return sendError(req, res, 'Unknown OAuth client.');
if (scopes.indexOf('roleAdmin') !== -1 && !req.user.admin) {
return sendErrorPageOrRedirect(req, res, 'Admin capabilities required');
}
appdb.get(req.oauth2.client.appId, function (error, appObject) {
if (error) return sendErrorPageOrRedirect(req, res, 'Invalid request. Unknown app for this client_id.');
req.body.transaction_id = req.oauth2.transactionID;
next();
if (!apps.hasAccessTo(appObject, req.oauth2.user)) return sendErrorPageOrRedirect(req, res, 'No access to this app.');
next();
});
},
gServer.decision(function(req, done) {
debug('decision: with scope', req.oauth2.req.scope);
return done(null, { scope: req.oauth2.req.scope });
})
// OAuth scopes skip END
// function (req, res) {
// res.render('dialog', { transactionID: req.oauth2.transactionID, user: req.user, client: req.oauth2.client, csrf: req.csrfToken() });
// }
];
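Per the comment above, an app starts OAuth by sending the browser to the authorize endpoint. A sketch of how such a URL might be assembled (the `client_id` and `redirect_uri` values are hypothetical):

```javascript
// Sketch of the authorize URL an app would redirect the browser to (hypothetical values).
var query = new URLSearchParams({
    response_type: 'code',
    client_id: 'cid-someapp',
    redirect_uri: 'https://app.example.com/auth/callback',
    scope: 'ignored' // scopes given here are ignored; they are set at app installation
});
var authorizeUrl = '/api/v1/oauth/dialog/authorize?' + query.toString();
console.log(authorizeUrl);
```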
// this triggers the above grant middleware and handles the user's decision if they accept the access
var decision = [
session.ensureLoggedIn('/api/v1/session/login'),
gServer.decision()
gServer.decision({ loadTransaction: false })
];
/*
The token endpoint allows an OAuth client to exchange an authcode for an accesstoken.
Authcodes are obtained using the authorization endpoint. The route is authenticated by
providing a Basic auth with clientID as username and clientSecret as password.
An authcode is only good for one such exchange to an accesstoken.
*/
// The token endpoint allows an OAuth client to exchange an authcode for an accesstoken.
//
// Authcodes are obtained using the authorization endpoint. The route is authenticated by
// providing a Basic auth with clientID as username and clientSecret as password.
// An authcode is only good for one such exchange to an accesstoken.
//
// -> POST /api/v1/oauth/token
var token = [
passport.authenticate(['basic', 'oauth2-client-password'], { session: false }),
gServer.token(),
@@ -448,18 +403,15 @@ var token = [
];
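As the comment notes, the token route authenticates the client with HTTP Basic auth, using the clientID as username and the clientSecret as password. A sketch of the client side of the exchange, with hypothetical credentials:

```javascript
// Sketch of an OAuth client exchanging an authcode for an accesstoken (hypothetical values).
var clientId = 'cid-someapp';
var clientSecret = 'secret123';

// Basic auth header: base64("clientID:clientSecret")
var authHeader = 'Basic ' + Buffer.from(clientId + ':' + clientSecret).toString('base64');

// Form body for POST /api/v1/oauth/token; the authcode is good for one exchange only.
var body = new URLSearchParams({
    grant_type: 'authorization_code',
    code: 'AUTHCODE',
    redirect_uri: 'https://app.example.com/auth/callback'
}).toString();
console.log(authHeader);
```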
/*
The scope middleware provides an auth middleware for routes.
It is used for API routes, which are authenticated using accesstokens.
Those accesstokens carry OAuth scopes and the middleware takes the required
scope as an argument and will verify the accesstoken against it.
See server.js:
var profileScope = routes.oauth2.scope('profile');
*/
// The scope middleware provides an auth middleware for routes.
//
// It is used for API routes, which are authenticated using accesstokens.
// Those accesstokens carry OAuth scopes and the middleware takes the required
// scope as an argument and will verify the accesstoken against it.
//
// See server.js:
// var profileScope = routes.oauth2.scope('profile');
//
function scope(requestedScope) {
assert.strictEqual(typeof requestedScope, 'string');
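The body of `scope()` is cut off by the hunk boundary here. Conceptually, an accesstoken carries a comma-separated scope string, and the middleware verifies the route's required scope is among the granted ones. A minimal sketch of that containment check (`hasScope` is a hypothetical helper, not the actual implementation):

```javascript
// Hypothetical helper sketching how a token's scope string could be checked
// against the scope a route requires. Not the actual implementation.
function hasScope(tokenScope, requestedScope) {
    var granted = tokenScope.split(',');
    // assume '*' grants everything; otherwise require exact membership
    return granted.indexOf('*') !== -1 || granted.indexOf(requestedScope) !== -1;
}
```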
@@ -501,7 +453,6 @@ exports = module.exports = {
login: login,
logout: logout,
callback: callback,
error: error,
passwordResetRequestSite: passwordResetRequestSite,
passwordResetRequest: passwordResetRequest,
passwordSentSite: passwordSentSite,
@@ -509,7 +460,6 @@ exports = module.exports = {
passwordSetupSite: passwordSetupSite,
passwordReset: passwordReset,
authorization: authorization,
decision: decision,
token: token,
scope: scope,
csrf: csrf


@@ -10,10 +10,21 @@ exports = module.exports = {
setCloudronName: setCloudronName,
getCloudronAvatar: getCloudronAvatar,
setCloudronAvatar: setCloudronAvatar
setCloudronAvatar: setCloudronAvatar,
getDnsConfig: getDnsConfig,
setDnsConfig: setDnsConfig,
getBackupConfig: getBackupConfig,
setBackupConfig: setBackupConfig,
setCertificate: setCertificate,
setAdminCertificate: setAdminCertificate
};
var assert = require('assert'),
certificates = require('../certificates.js'),
CertificatesError = require('../certificates.js').CertificatesError,
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess,
safe = require('safetydance'),
@@ -83,3 +94,75 @@ function getCloudronAvatar(req, res, next) {
res.status(200).send(avatar);
});
}
function getDnsConfig(req, res, next) {
settings.getDnsConfig(function (error, config) {
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200, config));
});
}
function setDnsConfig(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (typeof req.body.provider !== 'string') return next(new HttpError(400, 'provider is required'));
settings.setDnsConfig(req.body, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200));
});
}
function getBackupConfig(req, res, next) {
settings.getBackupConfig(function (error, config) {
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200, config));
});
}
function setBackupConfig(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (typeof req.body.provider !== 'string') return next(new HttpError(400, 'provider is required'));
settings.setBackupConfig(req.body, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(200));
});
}
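The four settings handlers above share one error-mapping convention: a `SettingsError.BAD_FIELD` becomes a 400 carrying the error message, and any other error a 500. That mapping can be sketched as a standalone function (the `BAD_FIELD` constant's value is assumed):

```javascript
// Sketch of the settings routes' error-to-status mapping (simplified; constant value assumed).
var BAD_FIELD = 'Bad Field'; // stand-in for SettingsError.BAD_FIELD

function statusForSettingsError(error) {
    if (!error) return 200;                     // success -> HttpSuccess(200)
    if (error.reason === BAD_FIELD) return 400; // validation failure -> HttpError(400, error.message)
    return 500;                                 // everything else -> HttpError(500, error)
}
```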
// default fallback cert
function setCertificate(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.cert || typeof req.body.cert !== 'string') return next(new HttpError(400, 'cert must be a string'));
if (!req.body.key || typeof req.body.key !== 'string') return next(new HttpError(400, 'key must be a string'));
certificates.setFallbackCertificate(req.body.cert, req.body.key, function (error) {
if (error && error.reason === CertificatesError.INVALID_CERT) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, {}));
});
}
// only webadmin cert, until it can be treated just like a normal app
function setAdminCertificate(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.cert || typeof req.body.cert !== 'string') return next(new HttpError(400, 'cert must be a string'));
if (!req.body.key || typeof req.body.key !== 'string') return next(new HttpError(400, 'key must be a string'));
certificates.setAdminCertificate(req.body.cert, req.body.key, function (error) {
if (error && error.reason === CertificatesError.INVALID_CERT) return next(new HttpError(400, error.message));
if (error) return next(new HttpError(500, error));
next(new HttpSuccess(202, {}));
});
}
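Both certificate routes validate the request body identically before handing off to the certificates module. That shared check can be sketched as a helper (`validateCertBody` is hypothetical; the routes inline these checks):

```javascript
// Hypothetical helper mirroring the inline cert/key body validation of both routes.
function validateCertBody(body) {
    if (!body.cert || typeof body.cert !== 'string') return 'cert must be a string';
    if (!body.key || typeof body.key !== 'string') return 'key must be a string';
    return null; // body is valid
}
```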

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff