Compare commits


1167 Commits

Author SHA1 Message Date
Girish Ramakrishnan 0cee6de476 Check if cloudron.conf file exists 2017-01-31 01:53:06 -08:00
Girish Ramakrishnan 854d29330c Fix email display logic again 2017-01-30 22:55:20 -08:00
Girish Ramakrishnan 34a3dd6d46 Always generate default nginx config
If we don't, https://ip won't work (caas relies on this for
health checks)
2017-01-30 16:17:07 -08:00
Girish Ramakrishnan 4787ee3301 Fix email note display logic 2017-01-30 15:49:50 -08:00
Girish Ramakrishnan 7b547e7ae9 Revert scaleway specific overlay2 support
This reverts commit 16d65d3665.

Rainloop app breaks with overlay2
2017-01-30 15:43:42 -08:00
Girish Ramakrishnan fe5e31e528 Save update json in /root
/tmp is not very secure. But the real reason is so that we can
re-run the setup script should things fail.

/home/yellowtent/box/scripts/installer.sh --data-file /root/cloudron-update-data.json
2017-01-30 15:21:04 -08:00
Girish Ramakrishnan 841a838910 Fix text 2017-01-30 15:08:51 -08:00
Girish Ramakrishnan 4f27fe4f1e Fix email text 2017-01-30 14:24:08 -08:00
Girish Ramakrishnan 96eab86341 Applications -> Apps 2017-01-30 14:20:11 -08:00
Girish Ramakrishnan 95d7a991dc install grub2 2017-01-30 14:01:33 -08:00
Girish Ramakrishnan dc309afbbd Add --allow-downgrades
The following packages will be DOWNGRADED:
  docker-engine
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
E: Packages were downgraded and -y was used without --allow-downgrades.
2017-01-30 14:01:32 -08:00
Girish Ramakrishnan 16d65d3665 Use overlay2 for scaleway
https://github.com/scaleway/image-ubuntu/issues/68
2017-01-30 14:01:29 -08:00
Girish Ramakrishnan ccb340cf80 Use systemd drop in to configure docker
The built-in service files get overwritten by updates

Fixes #203
2017-01-30 12:41:07 -08:00
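The drop-in approach from commit ccb340cf80 works because files under /etc/systemd/system/<unit>.d/ override the vendor unit without editing it, so package updates cannot clobber the customization. A minimal sketch — the path and the dockerd flags are illustrative, not the actual Cloudron drop-in:

```ini
# /etc/systemd/system/docker.service.d/cloudron.conf  (hypothetical path)
[Service]
# An empty ExecStart= clears the vendor value before overriding it
ExecStart=
ExecStart=/usr/bin/dockerd --storage-driver=devicemapper
```

After adding or changing a drop-in, `systemctl daemon-reload` followed by `systemctl restart docker` applies it.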
Girish Ramakrishnan 56b0f57e11 Move unbound systemd config to separate file 2017-01-30 12:39:19 -08:00
Girish Ramakrishnan 7c1e056152 Add 0.99.0 changes 2017-01-30 10:25:11 -08:00
Girish Ramakrishnan 08ffa99c78 Use %s instead of %d
awk's %d behaves differently with mawk (Scaleway) and gawk (DigitalOcean)

Fixes #200
2017-01-30 10:24:26 -08:00
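Commit 08ffa99c78 swaps %d for %s because mawk's %d is limited to the C int range, while %s prints integral values the same way in both awks. A sketch under that assumption — the byte count is an example value, not taken from the script:

```shell
# Format a byte count with %s so mawk and gawk agree on the output.
bytes=21474836480    # 20 GiB, larger than a 32-bit int
gib=$(echo "$bytes" | awk '{ printf "%s", $1 / 1024 / 1024 / 1024 }')
echo "$gib"
```

With %d instead, mawk would mangle any intermediate value above 2^31-1.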
Johannes Zellner cdede5a009 Add dns provider information on change dialog 2017-01-29 15:00:30 -08:00
Johannes Zellner 4cadffa6ea Remove automatic appstore account signup in setup view 2017-01-29 14:39:54 -08:00
Johannes Zellner 04e13eac55 Improve appstore signup 2017-01-29 14:38:38 -08:00
Johannes Zellner 2b3ae69f63 Selectively show the correct labels when email is enabled in users view 2017-01-29 14:27:05 -08:00
Johannes Zellner 8f4813f691 Fix text for emails 2017-01-29 14:23:27 -08:00
Johannes Zellner 5b05baeced Make oauth view navbar entries links 2017-01-29 13:33:34 -08:00
Johannes Zellner 3d60e36c98 Fix top margin in oauth views 2017-01-29 13:33:34 -08:00
Johannes Zellner 40c7bd114a Add footer to oauth views 2017-01-29 13:33:34 -08:00
Johannes Zellner e0033b31f2 Fix text on settings and support views 2017-01-29 13:33:34 -08:00
Girish Ramakrishnan 2d3bdda1c8 Make tests pass 2017-01-29 13:01:09 -08:00
Girish Ramakrishnan fd40940ef5 Reserve ports <= 1023
Just being conservative here

Fixes #202
2017-01-29 12:43:24 -08:00
Girish Ramakrishnan 6d58f65a1a Reserve ssh ports 2017-01-29 12:38:58 -08:00
Johannes Zellner 44775e1791 Cleanup the graphs ui 2017-01-29 11:39:28 -08:00
Johannes Zellner 4be1f4dd73 Remove developerMode toggle in token ui 2017-01-29 10:26:14 -08:00
Johannes Zellner 93bab552c9 Fix text in certs, tokens and settings views 2017-01-29 02:50:26 -08:00
Johannes Zellner 023c03ddcd Use the same busy indicator everywhere 2017-01-29 02:01:01 -08:00
Johannes Zellner a5bffad556 Improve text on users page and remove username validation on delete 2017-01-29 01:40:33 -08:00
Johannes Zellner 836348cbc0 Improve text for app installation and configuration 2017-01-29 01:00:15 -08:00
Johannes Zellner 1ac7570cfb Autofocus appstore search field 2017-01-28 20:26:38 -08:00
Johannes Zellner 0dceba8a1c Do not reload all apps when search is empty 2017-01-28 19:57:32 -08:00
Johannes Zellner 599b070779 Remove appstore view title 2017-01-28 19:52:42 -08:00
Johannes Zellner c581e0ad09 webadmin: only show backup settings notification in settings view 2017-01-28 19:22:56 -08:00
Johannes Zellner e14b59af5d Append random query to ensure the avatar is refetched 2017-01-28 19:10:55 -08:00
Johannes Zellner eff9de3ded Adjust dns wait text 2017-01-28 18:33:37 -08:00
Johannes Zellner 4f128c6503 setup: improve text on dnssetup page 2017-01-28 18:27:22 -08:00
Johannes Zellner 8dc9d4c083 webadmin: Give better feedback on update schedule saving 2017-01-28 14:50:30 -08:00
Girish Ramakrishnan 21e3300396 tutorial: fix node version 2017-01-28 14:44:13 -08:00
Girish Ramakrishnan d136895598 Generate cert with cloudron.self CN instead of ip 2017-01-28 09:10:53 -08:00
Girish Ramakrishnan dac3eef57c Skip generating self-signed cert if we have a domain 2017-01-28 09:10:53 -08:00
Girish Ramakrishnan 2fac7dd736 delete old nginx configs on infra update
we changed the cert location and reloading nginx fails...
2017-01-28 09:10:49 -08:00
Girish Ramakrishnan 74e2415308 Make this an infra update
This has to be an infra update since the nginx configuration has
to be rewritten for the new data layout
2017-01-28 01:01:24 -08:00
Girish Ramakrishnan 41fae04b69 more 0.98.0 changes 2017-01-27 10:14:10 -08:00
Johannes Zellner 32a88a342c Add update notification mail tests 2017-01-27 09:51:26 -08:00
Johannes Zellner b5bcde5093 Fix update email tests 2017-01-27 09:51:26 -08:00
Johannes Zellner 68c36e8a18 Only send update notification mails if autoupdate is disabled 2017-01-27 09:51:26 -08:00
Johannes Zellner f6a9e1f4d8 Revert "Fix tests: we do not send mails anymore"
This reverts commit 7c72cd4399.
2017-01-27 09:51:26 -08:00
Johannes Zellner 2abd42096e Add showdown node module for update mails 2017-01-27 09:51:26 -08:00
Johannes Zellner 922e214c52 Revert "Remove now unused mailer.boxUpdateAvailable()"
This reverts commit 558093eab1.
2017-01-27 09:51:26 -08:00
Johannes Zellner 6ce8899231 Revert "Do not send box update emails to admins"
This reverts commit 865b041474.
2017-01-27 09:51:26 -08:00
Girish Ramakrishnan cbfad632c2 Handle 401 in app purchase 2017-01-27 07:47:56 -08:00
Johannes Zellner 7804aed5d7 Query graphite for 10 apps at a time at most
If many apps are installed, we may reach graphite's query string
size limit, so we now fetch the app details 10 at a time
2017-01-26 22:53:52 -08:00
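The batching in commit 7804aed5d7 lives in Node.js and is not shown here, but the idea is easy to sketch in shell with xargs — the app ids below are stand-ins:

```shell
# 25 hypothetical app ids, at most 10 per request, one batch per line
batches=$(seq 1 25 | xargs -n 10)
count=$(( $(printf '%s\n' "$batches" | wc -l) ))
echo "$count"    # 3 requests instead of 25
```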
Johannes Zellner b90b1dbbbe Show graph labels on the side 2017-01-26 22:38:00 -08:00
Johannes Zellner 020ec54264 Allow changing the autoupdate pattern in the settings view 2017-01-26 21:31:05 -08:00
Johannes Zellner 0568093a2a Add rest wrapper for autoupdate pattern route 2017-01-26 21:31:05 -08:00
Johannes Zellner c9281bf863 docs: Remove oauth proxy from the authentication docs 2017-01-26 16:17:21 -08:00
Johannes Zellner de451b2fe8 Redirect to the webadmin if update progress is 100 2017-01-26 15:52:57 -08:00
Girish Ramakrishnan ddf5c51737 Make it 90 instead 2017-01-26 15:45:07 -08:00
Johannes Zellner a33ccb32d2 Use autoupdate pattern constant in tests 2017-01-26 15:38:29 -08:00
Johannes Zellner 0b03018a7b Add constant for special 'never' autoupdate pattern 2017-01-26 15:36:24 -08:00
Johannes Zellner 1b688410e7 Add more changes 2017-01-26 15:27:29 -08:00
Johannes Zellner 6d031af012 Allow changing domain on caas always 2017-01-26 15:22:02 -08:00
Johannes Zellner 67a5151070 Also pick the token when migrating a caas cloudron to a different domain 2017-01-26 15:22:02 -08:00
Johannes Zellner a4b299bf6e Use domain validation for dns setup dialog 2017-01-26 15:22:02 -08:00
Johannes Zellner 383d1eb406 Add angular directive for domain validation input fields 2017-01-26 15:22:02 -08:00
Johannes Zellner 3901144eae Do not use the caas token as a do token 2017-01-26 15:22:02 -08:00
Johannes Zellner 317c6db1d5 Show all DNS providers also for caas 2017-01-26 15:22:02 -08:00
Johannes Zellner 1e14f8e2b9 Update and sync the footer in all webadmin pages 2017-01-26 15:22:02 -08:00
Girish Ramakrishnan 88fc7ca915 move the files and not the directory
... because box is a btrfs subvolume
2017-01-26 14:16:27 -08:00
Girish Ramakrishnan b983e205d2 Add more changes 2017-01-26 13:24:59 -08:00
Girish Ramakrishnan 9cdbc6ba36 capitalize 2017-01-26 13:08:56 -08:00
Girish Ramakrishnan 895f5f7398 Expand backup error in the mail 2017-01-26 13:03:36 -08:00
Girish Ramakrishnan f41b08d573 Add timestamp to emails 2017-01-26 12:47:23 -08:00
Girish Ramakrishnan 3e21b6cad3 Add ensureBackup log 2017-01-26 12:47:23 -08:00
Johannes Zellner 1a32482f66 Remove unused code in ami creation script 2017-01-26 11:11:07 -08:00
Johannes Zellner ee1e083f32 Add initial version of the AMI creation script 2017-01-25 14:06:26 -08:00
Girish Ramakrishnan ebd3a15140 always restart nginx 2017-01-25 12:04:52 -08:00
Girish Ramakrishnan d93edc6375 box.service: start after nginx 2017-01-25 11:28:31 -08:00
Girish Ramakrishnan 3ed17f3a2a doc: restore-url -> encryption-key 2017-01-25 09:47:25 -08:00
Girish Ramakrishnan 8d9cfbd3de Add 0.98.0 changes 2017-01-24 19:20:47 -08:00
Girish Ramakrishnan f142d34f83 Move box data out of appdata volume
This lets us restore the box if the app volume becomes full

Fixes #186
2017-01-24 13:48:09 -08:00
Girish Ramakrishnan 357ca55dec remove unused var 2017-01-24 10:41:58 -08:00
Girish Ramakrishnan d7a8731027 remove unused var 2017-01-24 10:41:38 -08:00
Girish Ramakrishnan 9117c7d141 Use $USER 2017-01-24 10:32:32 -08:00
Girish Ramakrishnan 472020f90c APPICONS_DIR -> APP_ICONS_DIR 2017-01-24 10:13:25 -08:00
Girish Ramakrishnan 2256a0dd3a group paths together 2017-01-24 10:12:05 -08:00
Girish Ramakrishnan 458b5d1e32 bump mail container 2017-01-23 16:26:44 -08:00
Girish Ramakrishnan 1e6abed4aa tests: create mail directory 2017-01-23 15:09:08 -08:00
Girish Ramakrishnan cdd4b426d5 use elif 2017-01-23 14:03:36 -08:00
Girish Ramakrishnan 75b60a2949 Make restore work without a domain
Fixes #195
2017-01-23 13:04:08 -08:00
Girish Ramakrishnan 9ab34ee43a Check for ubuntu version 2017-01-23 12:58:08 -08:00
Johannes Zellner 3c9d7706de Let the api call fail instead of explicitly checking the token 2017-01-23 21:40:06 +01:00
Johannes Zellner 8b5b954cbb Only ever send heartbeats for caas cloudrons 2017-01-23 21:38:22 +01:00
Johannes Zellner b2204925d3 Remove unused setup_start.sh creation 2017-01-23 21:36:47 +01:00
Girish Ramakrishnan 63734155f2 doc: domain arg is redundant 2017-01-23 11:10:21 -08:00
Girish Ramakrishnan eb0ae3400a send mailConfig stat 2017-01-23 10:01:54 -08:00
Johannes Zellner db8db430b9 Avoid warning from systemd by reloading the daemon after changing journald config 2017-01-23 11:01:02 +01:00
Johannes Zellner c0b2b1c26d Escape shell vars in the unbound unit file 2017-01-23 10:27:23 +01:00
Johannes Zellner 7da20e95e3 Use a proper systemd unit file for unbound
Part of #191
2017-01-23 10:14:20 +01:00
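A proper unit file for unbound, as commit 7da20e95e3 describes, might look like the sketch below. The actual file from the commit is not shown in this log; the path and options here are illustrative:

```ini
# /etc/systemd/system/unbound.service  (illustrative)
[Unit]
Description=Unbound DNS resolver
After=network.target

[Service]
# -d keeps unbound in the foreground so systemd can supervise it
ExecStart=/usr/sbin/unbound -d
Restart=always

[Install]
WantedBy=multi-user.target
```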
Girish Ramakrishnan f30f90e6be Stop mail container before moving the dirs 2017-01-22 21:57:34 -08:00
Girish Ramakrishnan 7f05b48bd7 Revert "Migrate mail data after downloading restore data"
This reverts commit e7c399c36a.
2017-01-22 02:42:14 -08:00
Girish Ramakrishnan ea257b95d9 Fix dirnames when backing up 2017-01-21 23:40:41 -08:00
Girish Ramakrishnan e7c399c36a Migrate mail data after downloading restore data
This allows us to be backward compatible
2017-01-21 23:33:57 -08:00
Girish Ramakrishnan d84666fb43 Move mail data out of box
This will help us with putting a size on box data

Mail container version is bumped because we want to recreate it

Part of #186
2017-01-20 20:22:08 -08:00
Girish Ramakrishnan 1eb33099af dkim directory is now automatically created in cloudron.js 2017-01-20 15:18:03 -08:00
Girish Ramakrishnan e35dbd522f More debugMode fixes 2017-01-20 09:56:44 -08:00
Girish Ramakrishnan db6474ef2a Merge readonlyRootfs and development mode into debug mode
The core issue we want to solve is to debug a running app.
Let's make it explicit that it is in debugging mode because
functions like update/backup/restore don't work.

Part of #171
2017-01-20 09:29:32 -08:00
Johannes Zellner e437671baf Add basic --help for gulp develop 2017-01-20 15:11:17 +01:00
Johannes Zellner f60d640c8e Set developmentMode default to false 2017-01-20 12:07:25 +01:00
Johannes Zellner 56c992e51b Check for 19GB instead of 20GB in cloudron-setup
This is because the reported disk size may vary from the one selected
when creating the server. E.g. EC2 20GB storage results in 21474836480
bytes, which the script then calculates as less than 20GB.
2017-01-20 11:22:43 +01:00
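The effect described in commit 56c992e51b comes from integer division: a disk reported even slightly under 20 GiB truncates to 19, so a strict ">= 20" check fails. A sketch — the byte count is a made-up slightly-under value, and the threshold logic of cloudron-setup itself is not shown here:

```shell
bytes=21472739840                          # hypothetical: a hair under 20 GiB
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "$gib"                                # truncates to 19
```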
Girish Ramakrishnan 12ee7b9521 send readonly and dev mode fields 2017-01-19 19:01:29 -08:00
Girish Ramakrishnan c8de557ff7 More 0.97.0 changes 2017-01-19 15:59:52 -08:00
Girish Ramakrishnan 90adaf29d7 Update manifestformat (remove developmentMode)
Fixes #171
2017-01-19 15:57:29 -08:00
Girish Ramakrishnan a71323f8b3 Add developmentMode flag to appdb
Part of #171
2017-01-19 15:57:24 -08:00
Girish Ramakrishnan 155995c7f3 Allow memoryLimit to be unrestricted programmatically 2017-01-19 15:11:40 -08:00
Girish Ramakrishnan 319632e996 add readonlyRootfs to the database 2017-01-19 15:11:40 -08:00
Johannes Zellner 33d55318d8 Do not read oauth details in gulpfile from env 2017-01-19 23:41:07 +01:00
Johannes Zellner ec1abf8926 Remove creation of now unused and broken provision.sh 2017-01-19 23:18:01 +01:00
Girish Ramakrishnan 9a41f111b0 Fix failing tests 2017-01-19 12:51:16 -08:00
Girish Ramakrishnan 7ef6bd0d3f Add readonlyRootfs flag to apps table
When turned off, it will put the app in a writable rootfs. This
allows us to debug live/production apps (like change start.sh) and
just get them up and running. Once turned off, this app cannot be
updated anymore (unless the force flag is set). This way we can
then update it using the CLI if we are convinced that the upcoming
update fixes the problem.

Part of #171
2017-01-19 11:55:25 -08:00
Girish Ramakrishnan 02f0bb3ea5 Add readonly flag
Part of #171
2017-01-19 10:55:13 -08:00
Girish Ramakrishnan e12b236617 More 0.97.0 changes 2017-01-19 10:45:41 -08:00
Girish Ramakrishnan 6662a4d7d6 Collect every 60min
If we are crashing often enough for this to be a problem, we have bigger problems...
2017-01-19 10:11:36 -08:00
Girish Ramakrishnan 85315d8fc5 Do not stash more than 2mb in log file
For reference, each crash increases the file size by 112K.
So we can store around 20 crashes.

Fixes #190
2017-01-19 10:09:49 -08:00
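The headroom quoted in commit 85315d8fc5 checks out with integer division — 2MB divided by the ~112K-per-crash figure gives 18, i.e. "around 20" as the message says:

```shell
# 2MB budget divided by ~112K per crash (both in KB, integer division)
crashes=$(( (2 * 1024) / 112 ))
echo "$crashes"
```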
Girish Ramakrishnan 9f5a7e4c08 cloudron-setup: keep the cursor in the same line 2017-01-19 10:09:47 -08:00
Girish Ramakrishnan ea0e61e6a4 Remove unused function 2017-01-19 09:12:54 -08:00
Johannes Zellner c301e9b088 Show better backup progress in settings ui 2017-01-19 17:30:01 +01:00
Johannes Zellner 70e861b106 Distinguish between app task and backup in progress 2017-01-19 17:08:18 +01:00
Johannes Zellner f5c6862627 Improve backup creation UI
- Do not prompt the user to confirm that they really want to create a backup
- Show error message if a backup can't be created at the moment
2017-01-19 17:04:22 +01:00
Johannes Zellner d845f1ae5b Indicate in the mail subject if it contains more than one crash 2017-01-19 16:52:44 +01:00
Johannes Zellner 7c7d67c6c2 Append the log separator so it looks nicer 2017-01-19 16:30:20 +01:00
Johannes Zellner c9fcbcc61c No need to print the unitName in the separator 2017-01-19 15:42:30 +01:00
Johannes Zellner 9ac06e7f85 Stash crash logs for up to 30min
This avoids spamming us with crash logs

Part of #190
2017-01-19 15:23:20 +01:00
Johannes Zellner 6eafac2cad Do not rely on fdisk's human readable unit output
Using the bytes output fixes an issue where the disk size was reported
either in terabytes or in megabytes.
So far we disallowed 1TB disks but allowed 20MB disks.
2017-01-19 13:53:50 +01:00
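Parsing the raw byte count instead of the human-readable figure, as commit 6eafac2cad does, sidesteps the unit problem entirely. A sketch against a sample `fdisk -l` header line — the device and sizes are illustrative, and the actual parsing in the script is not shown here:

```shell
line='Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors'
# Take the comma-separated "NNN bytes" field, then its first word
bytes=$(printf '%s\n' "$line" | awk -F', ' '{ print $2 }' | awk '{ print $1 }')
echo "$bytes"
```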
Johannes Zellner 60cb0bdfb1 Add 0.97.0 changes 2017-01-19 13:17:09 +01:00
Johannes Zellner 979956315c Only ever remove the app icon on uninstall 2017-01-19 12:39:31 +01:00
Johannes Zellner 62ba031702 Skip icon download without an appStoreId 2017-01-19 12:38:41 +01:00
Girish Ramakrishnan 284cb7bee5 doc: remove double header 2017-01-18 23:41:41 -08:00
Girish Ramakrishnan 735c22bc98 doc: more cleanup on selfhosting doc 2017-01-18 23:37:33 -08:00
Girish Ramakrishnan a2beed01a1 doc: move cli section down 2017-01-18 23:31:21 -08:00
Girish Ramakrishnan 93fc6b06a2 doc: add alerts section 2017-01-18 23:14:22 -08:00
Girish Ramakrishnan a327ce8a82 doc: cleanup selfhosting guide 2017-01-18 23:09:06 -08:00
Girish Ramakrishnan f8374929ac generate mail.ini and not mail_vars.ini 2017-01-18 09:11:34 -08:00
Girish Ramakrishnan 5f93290fc7 Fix crash 2017-01-18 08:43:11 -08:00
Johannes Zellner 4d139232bf caas always has a valid appstore token to show the appstore view 2017-01-18 13:05:25 +01:00
Girish Ramakrishnan 804947f039 use dir mount instead of file mount
file mounting is fraught with problems wrt change notifications.

first, we must be careful that the inode does not change.

second, changes outside the container do not result in fs events inside the container.
haraka caches settings files and relies on fs events. So, even
though the file gets updated inside the container, haraka doesn't
see it.

https://github.com/docker/docker/issues/15793
2017-01-17 23:59:23 -08:00
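The inode pitfall from commit 804947f039 is easy to reproduce without docker: atomically replacing a file gives it a new inode, so a container that bind-mounted the old file keeps seeing stale content. A sketch — the filename is a stand-in:

```shell
tmp=$(mktemp -d)
printf 'v1\n' > "$tmp/settings.ini"
ino1=$(ls -i "$tmp/settings.ini" | awk '{ print $1 }')

# atomic replace, as editors and config writers commonly do
printf 'v2\n' > "$tmp/settings.ini.new"
mv "$tmp/settings.ini.new" "$tmp/settings.ini"
ino2=$(ls -i "$tmp/settings.ini" | awk '{ print $1 }')

[ "$ino1" != "$ino2" ] && echo "inode changed"
rm -rf "$tmp"
```

A bind mount pins the old inode, which is why mounting the containing directory is the robust choice.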
Girish Ramakrishnan 89fb2b57ff recreate mail config when we have owner email id 2017-01-17 23:34:05 -08:00
Girish Ramakrishnan 1262d11cb3 Prefix event enum with EVENT_ 2017-01-17 23:18:08 -08:00
Girish Ramakrishnan 1ba72db4f8 Add prerelease option 2017-01-17 21:23:57 -08:00
Girish Ramakrishnan 7d2304e4a1 Move 0.94.1 changes 2017-01-17 11:01:12 -08:00
Girish Ramakrishnan ebf1dc1b08 listen for cert changed events and restart mail container
neither haraka nor dovecot restarts on cert change

Fixes #47
2017-01-17 10:59:00 -08:00
Girish Ramakrishnan ce31f56eb6 Keep configurePlainIP private 2017-01-17 10:32:46 -08:00
Girish Ramakrishnan 7dd52779dc generate cert files for mail container
this allows us to not track paths anymore

part of #47
2017-01-17 10:21:44 -08:00
Girish Ramakrishnan 2eb5cab74b enable route to set admin certificate 2017-01-17 10:01:05 -08:00
Girish Ramakrishnan db50382b18 check user cert and then the le cert
part of #47
2017-01-17 09:59:40 -08:00
Girish Ramakrishnan 32b061c768 user certs are saved with extension user.cert/key
part of #47
2017-01-17 09:59:30 -08:00
Girish Ramakrishnan 740e85d28c make code a bit readable 2017-01-17 09:57:15 -08:00
Girish Ramakrishnan 568a7f814d rename func 2017-01-17 09:51:04 -08:00
Girish Ramakrishnan b99438e550 remove unused function 2017-01-17 09:18:48 -08:00
Girish Ramakrishnan bcdf90a8d9 typo 2017-01-17 09:17:09 -08:00
Girish Ramakrishnan 536c16929b Remove showTutorial 2017-01-17 09:11:34 -08:00
Johannes Zellner d392293b50 Remove unused require 2017-01-17 16:32:22 +01:00
Johannes Zellner 16371d4528 Use the apps.js layer instead of the raw appdb in apphealthmonitor.js 2017-01-17 16:32:12 +01:00
Johannes Zellner cdd0b48023 Remove redundant information in user event email 2017-01-17 16:16:39 +01:00
Johannes Zellner 15cac726c4 Use the correct var 2017-01-17 16:15:19 +01:00
Johannes Zellner 6dc69a4d5d Streamline the email subject lines 2017-01-17 16:02:42 +01:00
Johannes Zellner c52dfcf52f Adjust user deletion dialog based on feedback 2017-01-17 16:02:26 +01:00
Johannes Zellner eaac13b1c1 app.fqdn already takes care of altDomain 2017-01-17 16:01:10 +01:00
Johannes Zellner 3e83f3d4ee Put our link to all mails and sync the formatting 2017-01-17 15:47:18 +01:00
Johannes Zellner 3845a8f02b HTMLify user added email to admins 2017-01-17 15:34:50 +01:00
Johannes Zellner c932be77f8 Mention that backup storage configuration is about S3 configuration 2017-01-17 15:23:52 +01:00
Johannes Zellner d89324162f Remove tutorial route tests 2017-01-17 13:05:47 +01:00
Johannes Zellner a0ef86f287 Remove now unused tutorial route and business logic
We can bring that back again if needed
2017-01-17 12:50:59 +01:00
Johannes Zellner 7255a86b32 Remove welcome tutorial css parts 2017-01-17 12:47:05 +01:00
Johannes Zellner 81862bf934 Remove the tutorial components and logic 2017-01-17 12:44:07 +01:00
Johannes Zellner 81b7e5645c This is not an error if a cloudron is not yet registered
The change avoids scary logs with backtraces
2017-01-17 11:41:50 +01:00
Johannes Zellner 801367b68d Use specific functions for configureAdmin (with domain) and configurePlainIp (always)
This prevents double configuration on startup on caas cloudrons
2017-01-17 11:38:33 +01:00
Johannes Zellner f2e8f325d1 Correct debug lines for cert renewal or not existing 2017-01-17 10:35:42 +01:00
Girish Ramakrishnan 138743b55f More 0.94.1 changes 2017-01-16 16:39:18 -08:00
Johannes Zellner 7f8db644d1 Use in-memory rate limit
Related to #187
2017-01-16 16:49:03 +01:00
Johannes Zellner c7e410c41b Add express-rate-limit module 2017-01-16 16:48:43 +01:00
Johannes Zellner 08f3b0b612 Add rate limit test 2017-01-16 16:48:17 +01:00
Johannes Zellner a2782ef7a6 Normal users do not have access to the tutorial 2017-01-16 12:59:21 +01:00
Johannes Zellner 34fac8eb05 Do not show appstore for non-admins 2017-01-16 12:58:05 +01:00
Johannes Zellner 56338beae1 Ensure the appstore login input field has focus 2017-01-16 12:53:34 +01:00
Johannes Zellner 17e9f3b41d Move error label in app error dialog to the title 2017-01-16 12:47:58 +01:00
Johannes Zellner 2c06b9325f Add missing callback 2017-01-16 12:35:26 +01:00
Johannes Zellner 2dfb91dcc9 Embed the appstore login instead of a dialog 2017-01-16 12:34:33 +01:00
Johannes Zellner 9f20dfb237 Allow installation on reported main memory of 990 2017-01-16 10:36:16 +01:00
Girish Ramakrishnan da2aecc76a Save generated fallback certs as part of the backup
this way we don't get a new cert across restarts
2017-01-14 13:18:54 -08:00
Girish Ramakrishnan 7c72cd4399 Fix tests: we do not send mails anymore 2017-01-14 13:01:21 -08:00
Girish Ramakrishnan 5647b0430a Simplify onConfigured logic
We had all this logic because we allowed the user to create a CaaS
cloudron with a custom domain from the appstore. This flow has changed
now.

One can only set the DNS config after verification. The only thing
that is required is a domain check.
2017-01-14 12:59:16 -08:00
Girish Ramakrishnan 7c94543da8 bump test version 2017-01-13 20:06:15 -08:00
Girish Ramakrishnan 2118952120 send the ownerType as part of mailbox query 2017-01-13 19:53:58 -08:00
Girish Ramakrishnan d45927cdf4 unbound: listen on 0.0.0.0 2017-01-13 15:22:54 -08:00
Johannes Zellner c8e99e351e Update the selfhosting installation docs to reflect the dns setup changes 2017-01-13 15:15:25 +01:00
Girish Ramakrishnan fb56237122 0.94.1 changes 2017-01-12 19:28:27 -08:00
Girish Ramakrishnan 89152fabde use latest test image 2017-01-12 19:28:27 -08:00
Girish Ramakrishnan 726463d497 use le-staging in dev for better testing 2017-01-12 19:28:27 -08:00
Girish Ramakrishnan 055e41ac90 Make unbound reply on cloudron network
Because of the docker upgrade, dnsbl queries are failing again
since we are not using the unbound server from the containers.

For some reason, docker cannot query 127.0.0.1 (https://github.com/docker/docker/issues/14627).

Make unbound listen on the cloudron network and let docker proxy
DNS calls to unbound (docker always uses the embedded DNS server
when using a user-defined network).

See also #130
2017-01-12 19:28:23 -08:00
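Making unbound answer on the cloudron network amounts to binding beyond loopback and allowing the docker subnet, per commits 055e41ac90 and d45927cdf4. An illustrative unbound.conf fragment — the subnet is a placeholder, not Cloudron's actual network:

```
server:
    interface: 0.0.0.0
    # allow queries from the docker network (placeholder subnet)
    access-control: 172.18.0.0/16 allow
```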
Girish Ramakrishnan 878878e5e4 Bump mail container for testing 2017-01-12 12:04:24 -08:00
Girish Ramakrishnan 7742c8a58e Remove unused function 2017-01-12 11:50:59 -08:00
Girish Ramakrishnan 04476999f7 Fix grammar 2017-01-12 11:48:03 -08:00
Girish Ramakrishnan 5bff7ebaa1 remove dead comment 2017-01-12 11:46:52 -08:00
Girish Ramakrishnan 44742ea3ae Fix bug where cloudron cannot be setup if initial dns credentials were invalid
To reproduce:
* https://ip
* provide invalid dns creds. at this point, config.fqdn gets set already
* cannot setup anymore
2017-01-12 11:46:52 -08:00
Girish Ramakrishnan d6ea7fc3a0 Move setupDns to cloudron.js 2017-01-12 11:46:49 -08:00
Girish Ramakrishnan 2b49cde2c2 cloudron-setup: validate tlsProvider 2017-01-12 10:31:54 -08:00
Johannes Zellner 1008981306 Adapt to new notification library version
the notification template is now in the html pages themselves
2017-01-12 16:00:57 +01:00
Johannes Zellner 146f3ad00e Do not show 0 progress in update
If the initial app takes very long to backup, do not show 0 progress for
a long time
2017-01-12 16:00:57 +01:00
Johannes Zellner 5219eff190 Remove 'app at' for app backup message 2017-01-12 16:00:57 +01:00
Johannes Zellner abfd7b8aea Update angular notification library to support maxCount 2017-01-12 16:00:57 +01:00
Johannes Zellner d98f64094e Set the correct progress percentage 2017-01-12 16:00:56 +01:00
Johannes Zellner a8d254738e Only set the update page title to Cloudron 2017-01-12 16:00:56 +01:00
Johannes Zellner 1c9f2495e3 Show the detailed backup progress during update
Fixes #157
2017-01-12 16:00:34 +01:00
Johannes Zellner aa4d95f352 Remove unused node module showdown 2017-01-12 13:13:37 +01:00
Johannes Zellner 558093eab1 Remove now unused mailer.boxUpdateAvailable() 2017-01-12 13:11:18 +01:00
Johannes Zellner 865b041474 Do not send box update emails to admins
Fixes #160
2017-01-12 13:09:12 +01:00
Johannes Zellner 1888319313 Send altDomain as Host header if it is set
At least nextcloud will respond with 400 if the Host header does not
match
2017-01-12 10:45:16 +01:00
Girish Ramakrishnan 0be7679619 Hold the docker package
One idea was to use docker binary packages. However, docker binaries
are statically linked and are incompatible with devicemapper.

See https://github.com/docker/docker/issues/14035 for more info.

Holding will let the user turn on automatic updates for non-security
packages as well.

Fixes #183
2017-01-12 01:09:19 -08:00
Girish Ramakrishnan bbef6c2bc2 Fix docker storage driver detection
When docker is not passed the --storage-driver option, it tries to
auto detect the storage driver. Roughly:
1. If existing storage paths like /var/lib/docker/aufs exist, it will
   choose that driver.

2. It has a priority list of drivers to scan in order (driver.go)
   As it stands the ordering is aufs, btrfs and then devicemapper.

3. Docker will attempt to "init" each driver. aufs, for example,
   tests for insmod'ing aufs and also looks into /proc/filesystems.

The fact that we installed aufs-tools and linux drivers (for aufs
driver) was a programming error since we want docker to use devicemapper.

However, what is curious is why docker still ended up choosing devicemapper
despite having all aufs requirements (as we do not pass --storage-driver explicitly).

The answer is that "apt-get install aufs-tools linux-image-* docker-engine"
can install packages in any order! This means there is a race in how docker
chooses the storage engine. In most cases, since linux-image-* is a big package,
docker gets installed first and ends up using devicemapper since the aufs module is not available yet.
For some people, linux-image-* possibly installs first and thus docker
chooses aufs!

Mystery solved.

Part of #183
2017-01-12 01:08:22 -08:00
Girish Ramakrishnan be59267747 Enable unattended upgrades
This is usually installed and enabled by default

https://help.ubuntu.com/community/AutomaticSecurityUpdates

Note that automatic reboot is not enabled. Not clear if it should be.

Part of #183
2017-01-11 22:36:51 -08:00
Girish Ramakrishnan b4477d26b7 Reload the docker service file 2017-01-11 15:40:16 -08:00
Girish Ramakrishnan ce0afb3d80 Explicitly specify the storage driver as devicemapper
For reasons unknown, the images built by the buildbot (which currently
uses btrfs) do not work with devicemapper.

Existing cloudrons with aufs will not be affected because docker will
just ignore it.

devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem will be ignored.

Existing AUFS users can move to devicemapper either by restoring to
a new cloudron (recommended) OR
* systemctl stop box
* systemctl stop docker
* rm -rf /var/lib/docker
* Edit /home/yellowtent/data/INFRA_VERSION. Change the "version" field to "1"
* systemctl start docker
* systemctl start box # this will download images all over

Fixes #182
2017-01-11 14:53:11 -08:00
Johannes Zellner 0b5cd304ea We also don't need to prefix with my. when using the adminFqdn 2017-01-11 23:09:06 +01:00
Girish Ramakrishnan e54ad97fa7 cloudron-setup: set the apiServerOrigin for --env 2017-01-11 12:36:01 -08:00
Girish Ramakrishnan 66960ea785 cloudron-setup: Add --env flag 2017-01-10 20:42:24 -08:00
Girish Ramakrishnan 72dd3026ca collect docker info output
this has information like the storage driver
2017-01-10 20:42:24 -08:00
Girish Ramakrishnan 4c719de86c restart docker only if config changed 2017-01-10 18:50:21 -08:00
Girish Ramakrishnan c7a0b017b4 Fix crash 2017-01-10 18:50:21 -08:00
Johannes Zellner 91c931b53c Revert "Remove broken external domain validation"
This reverts commit 9b1b833fac.
2017-01-11 03:46:41 +01:00
Girish Ramakrishnan 6f2b2adca9 Enable apparmor explicitly 2017-01-10 18:15:10 -08:00
Girish Ramakrishnan 3176bc1afa Fix failing tests 2017-01-10 16:54:15 -08:00
Girish Ramakrishnan b929adf2dd Fix migration 2017-01-10 16:23:01 -08:00
Girish Ramakrishnan f3d3b31bed Fix error return type 2017-01-10 16:16:42 -08:00
Girish Ramakrishnan f17eaaf025 Add TODO note 2017-01-10 16:16:37 -08:00
Girish Ramakrishnan 80d65acd0d Set the domain only during dns setup
If we change the domain when dns settings are changed, then migration
fails because we callout to appstore API via the domain (for example,
backup url call will fail because it uses the new domain name).
2017-01-10 16:16:32 -08:00
Girish Ramakrishnan ba02d333d1 remove unused requires 2017-01-10 16:16:25 -08:00
Johannes Zellner 9b9d30c092 Remove commented out section of the nginx.conf 2017-01-11 00:09:51 +01:00
Johannes Zellner d47de31744 Rename nakeddomain.html to noapp.html 2017-01-11 00:08:13 +01:00
Johannes Zellner edc7efae5f Do not overwrite the provider previously set 2017-01-11 00:02:19 +01:00
Johannes Zellner 18007be9e1 Also use adminFqdn in setup.js 2017-01-10 23:58:28 +01:00
Johannes Zellner d68ae4866c The adminFqdn already has the my. part 2017-01-10 23:58:28 +01:00
Girish Ramakrishnan f4b635a169 Fix error type 2017-01-10 14:21:36 -08:00
Johannes Zellner d674d72508 Add missing https:// for adminFqdn 2017-01-10 22:54:45 +01:00
Johannes Zellner 6ee76f8ee4 No need for my. my- magic anymore 2017-01-10 22:54:45 +01:00
Johannes Zellner 06338e0a1f Redirect to naked domain if we are not on a webadmin origin 2017-01-10 22:54:45 +01:00
Johannes Zellner 349c261238 Remove configStatus.domain and replace with toplevel adminFqdn 2017-01-10 22:54:45 +01:00
Girish Ramakrishnan eb057fb399 Add note that port 25 is blocked on some DO accounts 2017-01-10 12:38:34 -08:00
Johannes Zellner 5d739f012c Never use the cloudron email account for LetsEncrypt 2017-01-10 18:14:59 +01:00
Johannes Zellner 741d56635f show a maximum of 3 error notifications at once 2017-01-10 15:58:15 +01:00
Johannes Zellner 35404a2832 Return expected dns records also if we hit NXDOMAIN 2017-01-10 15:51:53 +01:00
Johannes Zellner 99505fc287 Call the correct function to get dns email records in the webadmin 2017-01-10 15:43:14 +01:00
Johannes Zellner a20b331095 Convert settings JSON to objects 2017-01-10 15:24:16 +01:00
Johannes Zellner 06a9a82da0 Disable query for non approved apps 2017-01-10 14:01:46 +01:00
Johannes Zellner 03383eecbc Also remind the user on app install if manual dns is used 2017-01-10 13:47:58 +01:00
Johannes Zellner 89ae1a8b92 Ensure wildcard backend is pre-selected on configure 2017-01-10 13:43:33 +01:00
Johannes Zellner 7061195059 Show different text for manual and wildcard dns backends 2017-01-10 13:41:20 +01:00
Johannes Zellner 9556d4b72c Fix the busy state of the dns backend change form 2017-01-10 13:34:00 +01:00
Johannes Zellner dd764f1508 Sync the dns provider selection in the ui parts 2017-01-10 13:16:25 +01:00
Johannes Zellner 0a154339e6 Fix the normal case of changing dns provider 2017-01-10 13:15:14 +01:00
Johannes Zellner 2502b94f20 Remind the user to setup the DNS record on app configuration 2017-01-10 13:11:37 +01:00
Johannes Zellner 9b1b833fac Remove broken external domain validation 2017-01-10 13:05:06 +01:00
Johannes Zellner 848ca9817d Give better DNS error feedback after app installation 2017-01-10 13:01:15 +01:00
Johannes Zellner 9a159b50c6 Do not recommend manual dns backend 2017-01-10 12:34:28 +01:00
Johannes Zellner 11fb0d9850 Verify the my.domain instead of the zone 2017-01-10 12:30:14 +01:00
Johannes Zellner 3f925e5b96 Improve manual dns backend error message 2017-01-10 12:09:30 +01:00
Johannes Zellner 714ae18658 Fix the manual dns verification 2017-01-10 12:07:32 +01:00
Johannes Zellner 226164c591 This error is already a SubdomainError 2017-01-10 11:40:05 +01:00
Johannes Zellner 1d44d0a987 Remove dns validation code in settings.js 2017-01-10 11:33:33 +01:00
Johannes Zellner babfb5efbb Make the verifyDnsConfig() api return the valid credentials 2017-01-10 11:32:44 +01:00
Johannes Zellner badbb89c92 Add INVALID_PROVIDER to SubdomainError 2017-01-10 11:32:24 +01:00
Johannes Zellner 50e705fb25 Remove unused requires 2017-01-10 11:14:16 +01:00
Johannes Zellner b9e0530ced Fill in the noops in the other backends 2017-01-10 11:13:33 +01:00
Johannes Zellner 9c793f1317 Make the new interface available in subdomains.js 2017-01-10 11:13:02 +01:00
Johannes Zellner cef93012bf Implement verifyDnsConfig() for manual dns 2017-01-10 11:12:38 +01:00
Johannes Zellner bd099cc844 Implement verifyDnsConfig() for route53 2017-01-10 11:12:25 +01:00
Johannes Zellner c1029ba3b0 Implement verifyDnsConfig() for digitalocean 2017-01-10 11:12:13 +01:00
Johannes Zellner 152025baa7 Add verifyDnsConfig() to the dns backend where it belongs 2017-01-10 11:11:41 +01:00
Johannes Zellner 94f0f48cba Send backend provider with stats route 2017-01-10 10:22:47 +01:00
Girish Ramakrishnan 9b5c312aa1 Disable Testing tab
Part of #180
2017-01-09 21:08:01 -08:00
Girish Ramakrishnan fdb488a4c3 installApp bundle first because syncConfigState might block 2017-01-09 19:06:32 -08:00
Girish Ramakrishnan 69536e2263 Do not show multiple Access control sections for email apps 2017-01-09 19:00:15 -08:00
Girish Ramakrishnan 3f8ea6f2ee Make app auto install as part of async flow
It was called in nextTick() and ran async, but it had no chance to
run because platform.initialize(), which is synchronous, was blocking it
2017-01-09 18:24:41 -08:00
Girish Ramakrishnan 3b035405b0 debug.formatArgs API has changed 2017-01-09 16:41:04 -08:00
Girish Ramakrishnan 7b1a6e605b ensure backup directory exists
this is because the filename can now contain subpaths
2017-01-09 16:09:54 -08:00
Girish Ramakrishnan 26ed331f8e Add default clients in clients.js 2017-01-09 15:41:29 -08:00
Johannes Zellner 29581b1f48 cog is a circle 2017-01-09 22:58:01 +01:00
Girish Ramakrishnan 16ea13b88c Check status for cloudron to be ready 2017-01-09 13:29:17 -08:00
Girish Ramakrishnan 2311107465 remove misleading comments 2017-01-09 12:35:39 -08:00
Girish Ramakrishnan 35cf9c454a taskmanager: track paused state 2017-01-09 12:26:18 -08:00
Girish Ramakrishnan 4c2a57daf3 0.94.0 changes 2017-01-09 11:26:29 -08:00
Girish Ramakrishnan ed9889af11 Add note about alive and heartbeat job 2017-01-09 11:14:11 -08:00
Girish Ramakrishnan 89dc2ec3f6 Remove configured event 2017-01-09 11:02:33 -08:00
Girish Ramakrishnan 7811359b2f Move cron.initialize to cloudron.js 2017-01-09 11:00:09 -08:00
Girish Ramakrishnan 21c66915a6 Refactor taskmanager resume flow 2017-01-09 10:49:34 -08:00
Girish Ramakrishnan e3e99408d5 say the container was restarted automatically 2017-01-09 10:46:43 -08:00
Girish Ramakrishnan 01f16659ac remove unused requires 2017-01-09 10:33:23 -08:00
Girish Ramakrishnan 9e8f120fdd Make ensureFallbackCertificate error without a domain 2017-01-09 10:28:28 -08:00
Girish Ramakrishnan 3b9b9a1629 ensure fallback cert exists before platform is started 2017-01-09 10:28:28 -08:00
Girish Ramakrishnan 9e2f43c3b1 initialize platform only when domain is available 2017-01-09 10:28:25 -08:00
Girish Ramakrishnan 588bb2df2f Pull docker images in initialize script
This allows us to move platform.initialize to whenever the domain
is set up, allowing the box code to start up faster the first time
around.
2017-01-09 09:22:23 -08:00
Girish Ramakrishnan 3c55ba1ea9 doc: clarify httpPort 2017-01-09 09:17:35 -08:00
Johannes Zellner 2a86216a4a Fix race for mailConfig in settings view 2017-01-09 13:58:11 +01:00
Johannes Zellner e3ea2323c5 Defer configure checks to after tutorial
Fixes #154
2017-01-09 13:45:01 +01:00
Johannes Zellner 6b55f3ae11 Highlight the domain for the manual/wildcard DNS setup 2017-01-09 13:37:54 +01:00
Johannes Zellner f3496a421b Remove tooltip for memory requirement 2017-01-09 11:53:18 +01:00
Girish Ramakrishnan a4bba37606 Call mailer.start on configured 2017-01-07 23:40:34 -08:00
Girish Ramakrishnan 56c4908365 restart mail container on configure event 2017-01-07 23:33:20 -08:00
Girish Ramakrishnan 18f6c4f2cd Refactor configure event handling into onConfigured event 2017-01-07 23:31:29 -08:00
Girish Ramakrishnan d0ea1a4cf4 Send bounce alerts to cloudron owner
Fixes #166
2017-01-07 23:24:12 -08:00
Girish Ramakrishnan aa75824cc6 Pass alerts_from and alerts_to to mail container
Part of #166
2017-01-07 22:31:40 -08:00
Girish Ramakrishnan 61d5005c4b Use mail_vars.ini to pass mail container config 2017-01-07 16:42:24 -08:00
Girish Ramakrishnan 72d58f48e4 Remove invalid event 2017-01-07 14:28:33 -08:00
Girish Ramakrishnan 3f3b97dc16 Send oom email to cloudron admins
Part of #166
2017-01-07 13:52:33 -08:00
Girish Ramakrishnan 8a05fdcb10 Fix language 2017-01-07 12:35:26 -08:00
Girish Ramakrishnan 6fd3466db1 Send cert renewal errors to support@cloudron.io as well
Part of #166
2017-01-07 12:29:43 -08:00
Girish Ramakrishnan f354baf685 Inc -> UG 2017-01-07 11:59:13 -08:00
Girish Ramakrishnan d009acf8e0 doc: upgrading from filesystem backend
Fixes #156
2017-01-07 11:57:37 -08:00
Johannes Zellner fd479d04a0 Fix nginx config to make non vhost configs default_server
Nginx does not match on the ip as a vhost. This now basically replaces
the commented-out section in the nginx.conf
2017-01-06 22:09:10 +01:00
Girish Ramakrishnan a3dc641be1 Skip sending heartbeat if we have no fqdn 2017-01-06 09:42:56 -08:00
Johannes Zellner a59f179e9d warn the user in manual and wildcard cert case 2017-01-06 18:42:22 +01:00
Johannes Zellner 4128bc437b Ensure text is centered in the footer 2017-01-06 18:23:59 +01:00
Johannes Zellner e1b176594a The matching location needs to be my.domain 2017-01-06 18:17:27 +01:00
Johannes Zellner 35b11d7b22 Add footers to the setup views 2017-01-06 17:57:22 +01:00
Johannes Zellner bd65e1f35d Put some redirects in the setup pages to end up in the correct one always 2017-01-06 17:25:24 +01:00
Johannes Zellner a243478fff Create separate ip and my. domain nginx configs 2017-01-06 16:01:49 +01:00
Johannes Zellner f0fdc00e78 Always setup an nginx config for ip as the webadmin config 2017-01-06 12:42:21 +01:00
Johannes Zellner a21210ab29 Fix bug where we check for mail dns records without mail being enabled 2017-01-06 12:20:48 +01:00
Johannes Zellner 684e7df939 At least resolve nameservers for dns settings validator 2017-01-06 11:08:10 +01:00
Johannes Zellner 9be5f5d837 If we already have a domain set, directly wait for dns 2017-01-06 10:54:56 +01:00
Johannes Zellner 6c5fb67b58 Give the actual domain in status if set
This allows the webui served up on ip to redirect correctly
2017-01-06 10:47:42 +01:00
Girish Ramakrishnan 616ec408d6 Remove redundant reboot message 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 5969b4825c dns_ready is not required since it is part of status 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 64c888fbdb Send config state as part of the status 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 8a0fe413ba Visit IP if no domain provided 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 270a1f4b95 Merge gIsConfigured into config state 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 8f4ed47b63 track the config state in cloudron.js 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 09997398b1 Disallow dnsSetup if domain already set 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 0b68d1c9aa Reconfigure admin when domain gets set 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan cc9904c8c7 Move nginx config and cert generation to box code 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 16ab523cb2 Store IP certs as part of nginx cert dir (otherwise, it will get backed up) 2017-01-06 10:23:10 +01:00
Girish Ramakrishnan 20a75b7819 tag -> prefix 2017-01-05 23:20:02 -08:00
Girish Ramakrishnan 49e299b62d Add ubuntu-standard
Fixes #170
2017-01-05 14:05:46 -08:00
Girish Ramakrishnan 98a2090c72 install curl and python before using them 2017-01-05 14:03:30 -08:00
Johannes Zellner 38c542b05a Add route to check dns and cert status 2017-01-05 20:37:26 +01:00
Johannes Zellner fc5fa621f3 Ensure the dkim folder for the domain exists 2017-01-05 17:14:27 +01:00
Johannes Zellner 6ec1a75cbb Ensure Dkim key in the readDkimPublicKeySync() function 2017-01-05 17:04:03 +01:00
Johannes Zellner bbba16cc9a make input fields shorter 2017-01-05 16:35:38 +01:00
Johannes Zellner 564d3d563c Preselect dns provider if possible 2017-01-05 16:32:34 +01:00
Johannes Zellner a858a4b4c1 Let the user know what we are waiting for 2017-01-05 16:31:23 +01:00
Johannes Zellner 2d6d8a7ea8 Create fallback certs only if fqdn is already set 2017-01-05 16:29:10 +01:00
Johannes Zellner 5b5ed9e043 Always create box/mail/dkim folder 2017-01-05 16:15:00 +01:00
Johannes Zellner 801c40420c Create setup nginx config and cert for ip setup 2017-01-05 16:02:03 +01:00
Johannes Zellner c185b3db71 Set correct busy states in setup views 2017-01-05 15:59:07 +01:00
Johannes Zellner 0f70b73e81 Cleanup some of the setup html code 2017-01-05 14:43:18 +01:00
Johannes Zellner d9865f9b0f Allow box to startup without fqdn 2017-01-05 14:02:04 +01:00
Johannes Zellner 59deb8b708 Do not fire configured event if no fqdn is set 2017-01-05 13:05:36 +01:00
Johannes Zellner 617fa98dee Further improve the dns setup ui 2017-01-05 12:31:37 +01:00
Johannes Zellner c9cb1cabc4 Improve dns setup ui 2017-01-05 12:08:52 +01:00
Johannes Zellner 92ab6b5aa4 Cleanup the dns setup code 2017-01-05 11:53:45 +01:00
Johannes Zellner a66f250350 Redirect to setupdns.html for non caas if not activated 2017-01-05 11:53:23 +01:00
Johannes Zellner 39200f4418 Add client.js wrapper for dns setup route 2017-01-05 11:53:05 +01:00
Johannes Zellner 4f1c7742ef Add public route for dns setup
This route is only available until the Cloudron is activated and also
only in self-hosted ones
2017-01-05 11:52:38 +01:00
Johannes Zellner e812cbcbe9 add setupdns to gulpfile 2017-01-05 11:17:39 +01:00
Johannes Zellner 2e0670a5c1 Strip dns setup from normal setup.html 2017-01-05 11:02:52 +01:00
Johannes Zellner 92c92db595 Add separate file for dns setup 2017-01-05 11:02:43 +01:00
Johannes Zellner 1764567e1f Make domain optional in cloudron-setup 2017-01-05 10:49:41 +01:00
Johannes Zellner 7eeb8bcac1 Only mark dns fields red if dirty and invalid 2017-01-05 10:49:41 +01:00
Johannes Zellner c718b4ccdd ngEnter directive is now unused 2017-01-05 10:49:41 +01:00
Johannes Zellner 4f5ffc92a6 Cleanup setup.js 2017-01-05 10:49:41 +01:00
Johannes Zellner 4c485f7bd0 Remove old setup wizard step templates 2017-01-05 10:49:41 +01:00
Johannes Zellner 7076a31821 Also send domain with dns credentials 2017-01-05 10:49:41 +01:00
Johannes Zellner 68965f6da3 Change the location to the new domain at the end of setup 2017-01-05 10:49:41 +01:00
Johannes Zellner b6a545d1f5 Add separate entry for wildcard in dns setup
Fixes #168
2017-01-05 10:49:41 +01:00
Johannes Zellner c0afff4d13 Add view for dns credentials in setup 2017-01-05 10:49:41 +01:00
Johannes Zellner 604faa6669 Skip forward for caas after admin setup 2017-01-05 10:49:41 +01:00
Johannes Zellner d94d1af7f5 Avoid angular flicker in setup 2017-01-05 10:49:41 +01:00
Johannes Zellner 9feb5dedd5 Remove all the wizard step logic from setup 2017-01-05 10:49:41 +01:00
Johannes Zellner 99948c4ed5 Use class nesting for setup 2017-01-05 10:49:41 +01:00
Girish Ramakrishnan 967bab678d Fix listing of app backups
The id can now contain path and not just the filename
2017-01-05 01:03:44 -08:00
Girish Ramakrishnan 135c296ac7 Remove the Z suffix 2017-01-05 00:12:31 -08:00
Girish Ramakrishnan e83ee48ed5 Pass collation tag to backup functions
Fixes #159
2017-01-05 00:10:16 -08:00
Girish Ramakrishnan 1539fe0906 preserve msecs portion in backup file format
this is required because second-level precision causes backups to fail
with duplicate file names. this happens in tests.

part of #159
2017-01-04 21:57:03 -08:00
Girish Ramakrishnan c06bddd19e Fix backup filename prefix in sql query 2017-01-04 21:41:31 -08:00
Girish Ramakrishnan ceb78f21bb remove redundant reuseOldAppBackup 2017-01-04 21:20:36 -08:00
Girish Ramakrishnan 5af201d4ee remove unused require 2017-01-04 19:37:39 -08:00
Girish Ramakrishnan 794efb5ef5 Merge backupDone webhook into caas storage backend 2017-01-04 16:29:25 -08:00
Girish Ramakrishnan 31a9437b2c Add backupDone hook 2017-01-04 16:23:12 -08:00
Girish Ramakrishnan 2b27e554fd Change backup filenames
appbackup_%s_%s-v%s.tar.gz -> app_%s_%s_v%s.tar.gz
    drop 'backup'. rationale: it is known these files are backups
    timestamp has '-'. rationale: colon in filename confuses tools like scp (they think it is a hostname)

backup_%s-v%s.tar.gz -> box_%s_v%s.tar.gz
    drop 'backup' and name it 'box'. this makes it clear it is related to the box backup
    timestamp has '-'. rationale: colon in filename confuses tools like scp (they think it is a hostname)

Part of #159
2017-01-04 13:36:25 -08:00
Girish Ramakrishnan 4784b7b00e Fix coding style 2017-01-04 13:36:16 -08:00
Girish Ramakrishnan e547a719f6 remove dead code 2017-01-04 13:35:39 -08:00
Johannes Zellner 24f2d201ed Remove ip cache in sysinfo 2017-01-04 21:40:47 +01:00
Girish Ramakrishnan 792dfc731c Revert "Make virtualbox 20GB vdi work"
This reverts commit 67d840a1b3.

Change the docs for virtualbox for now to create a bigger VDI
2017-01-04 10:14:57 -08:00
Johannes Zellner 6697b39e79 Set password digest explicitly
sha1 used to be the fallback but with node 6.* the fallback is deprecated
2017-01-04 09:59:14 -08:00
Girish Ramakrishnan db1eeff2c3 Add test to check if user can be readded after removal
Fixes #162
2017-01-03 19:12:00 -08:00
Girish Ramakrishnan fc624701bf Use cloudron-setup from CDN
Fixes #165
2017-01-03 15:39:17 -08:00
Girish Ramakrishnan 591cc52944 Run initializeBaseImage script from the release tarball
Part of #165
2017-01-03 14:48:39 -08:00
Girish Ramakrishnan 67d840a1b3 Make virtualbox 20GB vdi work 2017-01-03 14:30:59 -08:00
Girish Ramakrishnan 8ffa951407 Clearly mark message as an error 2017-01-03 14:28:04 -08:00
Girish Ramakrishnan af39c2c7ae Replace cloudron-version with a python script
This will allow us to check version without node installed

Part of #165
2017-01-03 14:23:00 -08:00
Girish Ramakrishnan 5903c7d0bc remove x-bit from logcollector.js 2017-01-03 09:46:53 -08:00
Johannes Zellner dbb79fc9e6 Remove unused customDomain check in setup flow 2017-01-03 14:58:41 +01:00
Johannes Zellner ef1408fddb Remove unused vars in cloudron-setup 2017-01-03 09:26:08 +01:00
Johannes Zellner 47ecb0e1cf Test minimum requirements before continue in cloudron-setup
Fixes #153
2017-01-02 18:03:28 +01:00
Johannes Zellner 55fad3d57e Convert booleans for the correct object 2017-01-02 14:15:20 +01:00
Johannes Zellner 496a44d412 Also update app dns records in dynamic dns case 2017-01-02 14:00:07 +01:00
Johannes Zellner 05721f73cc Fix typo 2017-01-02 13:51:58 +01:00
Johannes Zellner 424c36ea49 Convert boolean settings values
The db table only stores strings
2017-01-02 13:47:51 +01:00
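Since the settings table only stores strings, booleans have to be converted on the way in and out, as the commit above describes. A sketch of that conversion (function names are illustrative):

```javascript
// booleans are stored as the strings 'true'/'false' in the settings table
function serializeSettingValue(value) {
    return typeof value === 'boolean' ? String(value) : value;
}

// convert stored strings back to booleans; leave other values untouched
function deserializeSettingValue(value) {
    if (value === 'true') return true;
    if (value === 'false') return false;
    return value;
}
```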
Johannes Zellner a38097e2f5 Refresh dns if dynamic dns is enabled 2017-01-02 13:14:03 +01:00
Johannes Zellner b26cb4d339 Add dynamic dns settings key 2017-01-02 13:05:48 +01:00
Johannes Zellner 3523974163 Add initial refreshDNS() function 2017-01-02 13:00:30 +01:00
Johannes Zellner a2bdd294a8 update the version tag in the selfhosting docs 2017-01-01 17:17:24 +01:00
Girish Ramakrishnan f85bfdf451 Explain what the MB is 2016-12-31 09:39:17 -08:00
Girish Ramakrishnan cfad186a6b Highlight the reboot message little more 2016-12-30 15:20:27 -08:00
Girish Ramakrishnan c8a9412995 suppress error message 2016-12-30 14:23:16 -08:00
Girish Ramakrishnan 318ea04efc Set "version" to the resolved version in config.json 2016-12-30 13:12:22 -08:00
Girish Ramakrishnan 90c1fd4c31 rename the service to cloudron-resize-fs 2016-12-30 11:27:00 -08:00
Girish Ramakrishnan fad6221750 Run cloudron-system-setup before box 2016-12-30 11:23:53 -08:00
Johannes Zellner 9f0047478d Remove now unused dependency dnsutils 2016-12-30 17:26:39 +01:00
Johannes Zellner 591ef3271b Do not wait for apt, but skip install if we have a base image already 2016-12-30 17:25:23 +01:00
Johannes Zellner 9afbbde062 Actually this is about apt-get update for the mirror listing 2016-12-30 16:29:29 +01:00
Johannes Zellner 73e6e519a3 Wait for apt to finish before proceeding with cloudron-setup 2016-12-30 16:08:06 +01:00
Johannes Zellner 4268ba54bf If app purchase failed, show appstore login
Since we don't have cases like failing to charge credit card so far, the
only reason it can fail here is that the appstore token or userId is
incorrect/expired

Fixes #52
2016-12-30 15:50:43 +01:00
Johannes Zellner 47037b0066 Add hosttech referral link
Part of #140
2016-12-30 14:07:49 +01:00
Johannes Zellner 05a6a36a62 Add linode referral link
Part of #140
2016-12-30 13:56:03 +01:00
Johannes Zellner d72b1d8bd5 Show required memory in app install dialog
Fixes #150
2016-12-30 12:51:44 +01:00
Johannes Zellner 0f1a4422f5 Add prettyMemory angular filter 2016-12-30 12:51:30 +01:00
Johannes Zellner 7d06f9e1e3 Add comment why the script might fail on unsupported small disks 2016-12-30 11:53:35 +01:00
Johannes Zellner 1e4e76b0dd give disk size a unit in cloudron-system-setup.sh 2016-12-30 11:49:57 +01:00
Johannes Zellner 49d70f487e show dots at the end in cloudron-setup log lines 2016-12-30 11:35:03 +01:00
Johannes Zellner 456cb22ac0 this and that typo 2016-12-30 11:32:56 +01:00
Girish Ramakrishnan ba1dfee5ca Actually remove dev deps (npm is a mystery) 2016-12-30 01:04:43 -08:00
Girish Ramakrishnan 143a600a5c remove unused dev deps 2016-12-30 01:02:19 -08:00
Girish Ramakrishnan 68b4bf0a7f Remove ini and tail-stream unused modules 2016-12-30 01:00:23 -08:00
Girish Ramakrishnan bc75d07391 Remove ursa dependency
ursa uses native code and doing an npm rebuild often runs out of
memory on low-memory cloudrons
2016-12-30 00:13:35 -08:00
Girish Ramakrishnan 7eaa3ef52e Use the ejs-cli of the new box code 2016-12-29 19:17:31 -08:00
Girish Ramakrishnan af69ddc220 Email needs at least 256m even on a 1gb droplet 2016-12-29 18:33:59 -08:00
Girish Ramakrishnan b25d61fbb5 installer.sh is unused in base image 2016-12-29 15:56:14 -08:00
Girish Ramakrishnan 81a60b029d bash is dangerous (script_dir was marked readonly in parent script!) 2016-12-29 15:34:30 -08:00
Girish Ramakrishnan 751fd8cc4b update gulp-sass 2016-12-29 15:03:17 -08:00
Girish Ramakrishnan 503e3d6ff2 Add trailing slash 2016-12-29 14:36:19 -08:00
Girish Ramakrishnan decbfe0505 More start.sh cleanup 2016-12-29 14:35:48 -08:00
Girish Ramakrishnan 379042616f Ensure box.service starts after mysql.service 2016-12-29 14:24:29 -08:00
Girish Ramakrishnan df2878bc2e Prettify start.sh 2016-12-29 14:22:42 -08:00
Girish Ramakrishnan 1ff35461a2 Remove obsolete design doc 2016-12-29 13:21:09 -08:00
Girish Ramakrishnan 7de94fff1b Merge container logic into start.sh
This whole container thinking is over-engineered and we will get to
it if and when we need to.
2016-12-29 12:01:59 -08:00
Girish Ramakrishnan 3236f70d8b Show email records for manual dns
Fixes #151
2016-12-29 11:32:42 -08:00
Girish Ramakrishnan cf7cef19f9 Fix wording 2016-12-29 11:32:06 -08:00
Girish Ramakrishnan e159cdad5b Remove activated event
Simply go ahead and create cron jobs
2016-12-28 14:21:58 -08:00
Girish Ramakrishnan 2ddb533ef2 remove redundant permission change 2016-12-28 09:54:30 -08:00
Girish Ramakrishnan 36a6e02269 remove unused variable 2016-12-28 09:49:18 -08:00
Girish Ramakrishnan 6fbbf0ad61 Use curl with options 2016-12-28 09:49:04 -08:00
Girish Ramakrishnan 1040fbddc6 Improve data-file handling 2016-12-28 09:46:04 -08:00
Girish Ramakrishnan bbd63b2c57 Prettify container.sh 2016-12-28 08:59:26 -08:00
Girish Ramakrishnan 905bdb1d27 only reboot if base image script was called 2016-12-28 08:59:25 -08:00
Girish Ramakrishnan 11ce5ffa4c 0.93.0 changelog 2016-12-28 08:59:25 -08:00
Girish Ramakrishnan b1854f82f2 prettify init base image script 2016-12-28 08:59:25 -08:00
Girish Ramakrishnan 745b7a26b7 validate arguments only if data is not provided 2016-12-28 08:59:24 -08:00
Girish Ramakrishnan 764a38f23e Fix DO image script to not use installer 2016-12-28 08:59:24 -08:00
Girish Ramakrishnan 7873fdc7bb typo 2016-12-28 08:59:23 -08:00
Girish Ramakrishnan 76435460f0 redirect error 2016-12-28 08:59:20 -08:00
Girish Ramakrishnan 7e3a54ff1b force the link for idempotency 2016-12-28 08:59:15 -08:00
Girish Ramakrishnan 61789e3fda Use the installer.sh from the source tarball
This redesigns how update works. installer.sh now rebuilds the package,
stops the old code and starts the new code. Importantly, it does not
download the new package; that is left to the caller. cloudron-setup
downloads the code and calls installer.sh of the downloaded code.
The same goes for updater.sh. This means that installer.sh itself is now
easily updatable.

Part of #152
2016-12-28 08:59:07 -08:00
Girish Ramakrishnan 441c5fe534 Add --data to pass raw data
This will be used by CaaS
2016-12-28 08:58:54 -08:00
Girish Ramakrishnan f30001d98b Add option to skip the base image init
This will be used for CaaS

Part of #152
2016-12-28 08:58:48 -08:00
Girish Ramakrishnan fae0ba5678 Decouple installer from the base image script
This means that the base image does not have the installer anymore
and needs to be copied over.

Part of #152
2016-12-28 08:58:10 -08:00
Girish Ramakrishnan 7e592f34bd base image is now port 22 (becomes 202 only after install) 2016-12-28 08:57:48 -08:00
Girish Ramakrishnan 691f6c7c5c Use docker 1.12.5
Docker uses an embedded DNS server (127.0.0.11) for user defined networks (UDN).

With the latest releases of docker, specifying 127.0.0.1 as --dns makes the
containers resolve 127.0.0.1 _inside_ the container's networking namespace
(not sure how it worked before this).

The next idea was to only specify --dns-search=. but this does not work.
This makes docker set up the containers to use 127.0.0.1 (or 127.0.0.11 for UDN).
In my mind, the UDN case should work but doesn't (not sure why).

So, the solution is to simply go with no --dns or --dns-search. Sadly,
setting dns-search just at container level does not work either :/ Strangely,

    docker run --network=cloudron --dns-search=. appimage  # does not work

    docker run --network=cloudron appimage # works if you manually remove search from /etc/resolv.conf

So clearly, something inside docker triggers when one of the dns* options is set.

This means that #130 has to be fixed at app level (For Go, this means to use the cgo resolver).
2016-12-28 08:57:48 -08:00
Girish Ramakrishnan f5eb5d545f use node 6.9.2 LTS 2016-12-28 08:57:43 -08:00
Girish Ramakrishnan 91e4f6fcec Add CLOUDRON chain first
This allows us to not issue an 'upgrade' yet.

Part of #152
2016-12-28 08:57:38 -08:00
Girish Ramakrishnan b759b12e90 Move cloudron-system-setup.sh out of installer
Part of #152
2016-12-28 08:57:30 -08:00
Girish Ramakrishnan 103019984b Move firewall setup to container.sh
Part of #152
2016-12-28 08:57:20 -08:00
Girish Ramakrishnan 01126aaeea move ssh configuration to container.sh
Note: the appstore needs to be fixed to start provisioning on port 22

Part of #152
2016-12-28 08:57:13 -08:00
Girish Ramakrishnan a6ab8ff02f Mount the btrfs user home data in container.sh
This allows it to be configurable easily at some point

Part of #152
2016-12-28 08:56:55 -08:00
Girish Ramakrishnan b89886a945 Move systemd service creation scripts to container.sh
Part of #152
2016-12-28 08:56:46 -08:00
Girish Ramakrishnan d12b71f69c move journald configuration to container.sh
Part of #152
2016-12-28 08:56:06 -08:00
Girish Ramakrishnan 53c2ed3c82 configure time in container.sh 2016-12-28 08:55:56 -08:00
Girish Ramakrishnan 148c8e6250 Give user access to system logs in container.sh
Part of #152
2016-12-28 08:55:43 -08:00
Girish Ramakrishnan 4a99eb105a cloudron-system-setup does not need to be run
we reboot anyway and the service is run on startup
2016-12-28 08:46:40 -08:00
Girish Ramakrishnan c5ca64af50 cloudron-version is cloudron-setup specific 2016-12-28 08:46:40 -08:00
Girish Ramakrishnan 984b920fde Use 0.92.1 2016-12-27 22:39:53 -08:00
Girish Ramakrishnan 54dae6827e Add 0.92.1 changes 2016-12-27 22:10:12 -08:00
Girish Ramakrishnan 58cf214bf2 Fix license 2016-12-26 20:17:26 -08:00
Girish Ramakrishnan eeefdf5927 Add link to chat 2016-12-22 13:28:04 -08:00
Girish Ramakrishnan 29c172deab Switch to master again for DO fix 2016-12-22 13:27:05 -08:00
Girish Ramakrishnan af1e83f12a Remove DO specific grub cmd line
The new DO images have a different label, causing them to not boot
    root@ubuntu-2gb-sfo1-01:~# e2label /dev/vda1
    cloudimg-rootfs

net.ifnames=0 is used to get unpredictable names as per
https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/.
Not sure why we want that.

Not sure about notsc and clocksource.

This change also preserves any existing cmdline
2016-12-22 12:34:23 -08:00
Girish Ramakrishnan 3a3edc4617 Use version 0.92.0 2016-12-21 18:20:06 -08:00
Girish Ramakrishnan e13f52e371 Use env vars if they exist 2016-12-21 15:36:40 -08:00
Girish Ramakrishnan 5687b4bee0 More 0.92.0 changes 2016-12-21 15:24:18 -08:00
Girish Ramakrishnan 48d0e73e9b Repin the cloudron-setup
There was a bug in how the platform ready event was fired
because the isConfigureSync detection was buggy
2016-12-21 15:15:37 -08:00
Girish Ramakrishnan 3d4e3638be Only check for platformReady prefix 2016-12-21 15:13:51 -08:00
Girish Ramakrishnan f07e6b29a3 Check for manual DNS provider 2016-12-21 15:10:56 -08:00
Girish Ramakrishnan a92f75f7d4 Pin to specific sha1 2016-12-21 14:45:28 -08:00
Girish Ramakrishnan 6e87111c99 Pin cloudron-setup
Required for preparing for the next upgrade release
2016-12-21 14:35:08 -08:00
Girish Ramakrishnan ad3594eebc Waiting for cloudron also takes some time 2016-12-20 11:56:18 -08:00
Girish Ramakrishnan af99e31c63 encryption key is now optional 2016-12-19 14:24:53 -08:00
Girish Ramakrishnan c8ee5b10be Add 0.92.0 changes 2016-12-19 14:19:11 -08:00
Girish Ramakrishnan cd471040b4 Move endpoint down (since it's a rare thing) 2016-12-19 14:14:09 -08:00
Girish Ramakrishnan f7beecc510 Create a new backup when backup config changes
This is required so that app restore UI works
2016-12-19 14:14:05 -08:00
Girish Ramakrishnan ca8b61caba Allow backup encryption key to be set 2016-12-19 12:41:35 -08:00
Girish Ramakrishnan d672b1e3f6 Make encryption key optional 2016-12-19 12:33:52 -08:00
Girish Ramakrishnan 22ae39323b use Math.floor instead of parseInt 2016-12-19 11:56:35 -08:00
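The preference for Math.floor over parseInt in the commit above matters because parseInt coerces its argument to a string first, which misbehaves for numbers rendered in exponent notation. A small illustration (the values here are a generic example, not taken from the box code):

```javascript
// 0.0000008 stringifies as '8e-7', so parseInt reads the leading '8'
const tiny = 0.0000008;
const viaParseInt = parseInt(tiny); // → 8 (wrong: parsed from '8e-7')
const viaFloor = Math.floor(tiny);  // → 0 (correct numeric truncation)
```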
Johannes Zellner 420a57aef9 Randomize appstore requests for updates and alive status
Fixes #137
2016-12-19 16:55:39 +01:00
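Randomizing the update/alive request timing, as in the commit above, keeps all Cloudrons from polling the appstore at the same instant. A minimal jitter sketch in that spirit — interval values and the function name are illustrative, not the actual implementation:

```javascript
// spread periodic polls over [baseMs, baseMs + jitterMs) to avoid a
// thundering herd of simultaneous appstore requests
function randomizedInterval(baseMs, jitterMs) {
    return baseMs + Math.floor(Math.random() * jitterMs);
}
```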
Johannes Zellner 7d76c32334 Only show mail dns record warnings if email is enabled 2016-12-19 16:22:37 +01:00
Johannes Zellner 2fa4f4c66a We now always reboot no need to mention in the docs 2016-12-19 12:09:12 +01:00
Johannes Zellner 37d146a683 Reboot the server after installation
This solves two issues:
* activate bootloader settings
* ensure the yellowtent user can view journald logs
2016-12-19 12:06:22 +01:00
Johannes Zellner b95808be54 Move AWS env var checks to upload section 2016-12-19 09:43:08 +01:00
Girish Ramakrishnan dbdbdd9a2a 0.91.0 changes 2016-12-16 15:35:41 -08:00
Girish Ramakrishnan 16b8df7b9c Minor doc fixes 2016-12-16 15:31:53 -08:00
Johannes Zellner 293d4b4a47 Remove unused --publish argument 2016-12-16 18:05:59 +01:00
Johannes Zellner da7b2e62f5 Order apps in store listing based on installCount 2016-12-16 17:36:16 +01:00
Johannes Zellner 33e87c7ffa Add analytics pixel tracking in html mails
This is currently hardcoded to our piwik instance including the website
id
2016-12-16 13:11:13 +01:00
Johannes Zellner f417a35ad7 Add DO referral link
Part of #140
2016-12-16 11:45:46 +01:00
Johannes Zellner c86acff698 Add vultr referral link in selfhosting docs
Part of #140
2016-12-16 11:36:10 +01:00
Girish Ramakrishnan 0ec55b0cd4 Unset dns search
This makes sure that the host dns search is not carried over to the
containers

Fixes #130
2016-12-15 14:13:39 -08:00
Girish Ramakrishnan cf98d2a9d5 Remove ip from config
This is unused. But more importantly, it causes an internal error in
the cloudron and the whole UI goes down just because we cannot
detect the IP via the generic sysinfo provider.
2016-12-15 12:15:06 -08:00
Girish Ramakrishnan ec75b14d9e Set timeout for dns queries 2016-12-15 12:00:51 -08:00
Johannes Zellner 4bad31f7cc Skip mailbox update if name has not changed 2016-12-15 16:57:29 +01:00
Johannes Zellner 288baa7e94 Rename mailbox when location changes
Fixes #118
2016-12-15 16:57:29 +01:00
Johannes Zellner d1161b3ff8 Add mailboxdb.updateName() 2016-12-15 16:57:29 +01:00
Johannes Zellner 27e5886a0b Add tests for mail dns records 2016-12-15 16:57:29 +01:00
Johannes Zellner eaebf9fd73 Fix typo when comparing dkim values 2016-12-15 16:57:29 +01:00
Johannes Zellner ea4c16604b Add refresh button to retest mail config 2016-12-15 16:57:29 +01:00
Johannes Zellner 66a4abeb50 Ensure txtRecords is a valid array
The dns api will respond with undefined if no records are found

Mostly related to https://github.com/tjfontaine/node-dns/issues/95
2016-12-15 16:57:29 +01:00
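Because the dns api responds with undefined when no records are found, the result has to be normalized before use, as the commit above describes. A sketch of that guard (the function name is illustrative):

```javascript
// the dns api yields undefined when no records exist; normalize to an
// array so callers can iterate txt records unconditionally
function ensureTxtRecords(txtRecords) {
    return Array.isArray(txtRecords) ? txtRecords : [];
}
```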
Johannes Zellner a57705264f Fixup the frontend for manual mail dns records
This was a bit broken after my merge attempt
2016-12-15 16:57:29 +01:00
Johannes Zellner e7fc40cfdd Minor code style changes 2016-12-15 16:57:29 +01:00
Johannes Zellner 55d306c938 we use single quotes 2016-12-15 16:57:29 +01:00
Johannes Zellner 8fe1f2fef1 Rename email dns records route 2016-12-15 16:57:29 +01:00
Dennis Schwerdel 1065b56380 Check dns records for generic dns providers 2016-12-15 16:57:29 +01:00
Girish Ramakrishnan e58068688c Add dns-provider to arg list 2016-12-15 07:41:09 -08:00
Girish Ramakrishnan 9a51feed0a Add --dns-provider argument
Maybe someday we can set other providers like route53 etc here
2016-12-15 07:35:56 -08:00
Girish Ramakrishnan 9ac8cc2cd7 Do not override the tls config provider when restoring 2016-12-15 07:32:10 -08:00
Girish Ramakrishnan 54a388af5e Add debug 2016-12-15 07:30:38 -08:00
Girish Ramakrishnan 5dda872917 Add note about the log message 2016-12-14 19:21:43 -08:00
Girish Ramakrishnan 3277cfdc6b Remove IP detection logic
This code was here to check if user will get an admin certificate.
It doesn't work well for intranet cloudrons. The check is also not
complete since just DNS is not enough for LE to succeed, we also
require port forwarding.
2016-12-14 19:19:00 -08:00
Girish Ramakrishnan c759a1c3f6 Fix test 2016-12-14 15:04:14 -08:00
Girish Ramakrishnan b77b2ab82d add manual dns provider to the ui 2016-12-14 14:59:16 -08:00
Girish Ramakrishnan 855de8565e Allow setting manual dns provider in api 2016-12-14 14:58:08 -08:00
Girish Ramakrishnan f1ad003b41 Switch dns backend default to manual
Existing cloudrons should be OK because there is no entry in the db
by default for dnsConfig.
2016-12-14 14:56:48 -08:00
Girish Ramakrishnan f6507ecbe3 noop dns backend does not wait for dns anymore 2016-12-14 14:56:03 -08:00
Girish Ramakrishnan 79083925d1 Add manual dns backend
The manual backend differs from noop in that it will perform the
wait-for-dns check.
2016-12-14 14:54:14 -08:00
Girish Ramakrishnan de1c677e75 Simply get admin cert after waiting for dns
Removes some specialized code that was in installAdminCertificate.
2016-12-14 14:52:42 -08:00
Girish Ramakrishnan 3ede9af34b remove subdomains.status 2016-12-14 14:47:03 -08:00
Girish Ramakrishnan d475d9bcbf Make waitForDns provider specific
This will allow us to create a proper 'noop' backend that does
not wait for dns to be in sync. This is required for local/intranet
setups.
2016-12-14 14:43:20 -08:00
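The provider-specific dispatch described in this commit could look roughly like the sketch below; the backend names and the `waitForDns` signature are assumptions for illustration, not the actual box modules.

```javascript
'use strict';

// Illustrative sketch: every dns backend exposes the same waitForDns
// function, and the 'noop' backend completes immediately instead of
// polling. Names and shapes here are assumptions, not box code.
const backends = {
    noop: {
        // local/intranet setups: nothing to wait for
        waitForDns: function (domain, ip, callback) { callback(null); }
    },
    manual: {
        // a real backend would poll dns until the record is in sync;
        // the polling loop is elided in this sketch
        waitForDns: function (domain, ip, callback) { callback(null); }
    }
};

// Dispatch to whichever backend the cloudron is configured with.
function waitForDns(provider, domain, ip, callback) {
    backends[provider].waitForDns(domain, ip, callback);
}
```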
Girish Ramakrishnan bf095f0698 Skip admin cert installation with fallback tls provider 2016-12-13 18:58:07 -08:00
Girish Ramakrishnan 90d9d6da8b doc: reword text a bit 2016-12-13 17:34:11 -08:00
Girish Ramakrishnan 5ed4d66dfe Make apps test pass 2016-12-13 11:31:14 -08:00
Girish Ramakrishnan 60b45912ce update nock 2016-12-13 10:58:12 -08:00
Girish Ramakrishnan 29aad624d5 Remove redundant fallback 2016-12-13 10:18:16 -08:00
Johannes Zellner 2bf8584f30 Do not take the addon object, but the boolean 2016-12-12 17:29:42 +01:00
Johannes Zellner d083ff3400 Add documentation for minio 2016-12-12 15:33:21 +01:00
Johannes Zellner b6e96d77aa Thanks shell 2016-12-12 14:59:30 +01:00
Johannes Zellner 6e1751d0ed Set empty endpoint url for caas storage backend
Caas storage backend also uses the s3 code branches
2016-12-12 14:00:07 +01:00
Johannes Zellner c1700069dc Ensure we run inside a sane folder when switching the codes 2016-12-12 12:43:37 +01:00
Johannes Zellner 17c2aa4faf singleUser is gone, welcome optionalSso docs 2016-12-12 12:22:09 +01:00
Johannes Zellner 8f47861b6d Mention if a manifest field is required for store submission 2016-12-12 12:19:18 +01:00
Johannes Zellner 8f2ee9a7cd mediaLinks does not yet support videos 2016-12-12 12:15:42 +01:00
Johannes Zellner 93e976fdb0 Add 0.90.0 changes 2016-12-12 12:10:41 +01:00
Johannes Zellner c737ea1954 Show contact help line for manual email dns setup 2016-12-12 12:05:12 +01:00
Johannes Zellner 700d815d54 Show warning with more description when enabling email
Fixes #132
2016-12-12 12:05:04 +01:00
Johannes Zellner 382219a29f Show help bubble for backups configuration 2016-12-12 11:32:37 +01:00
Johannes Zellner a372853777 Simplify help bubbles 2016-12-12 11:26:47 +01:00
Johannes Zellner 79f1cd16a3 Add placeholder text for s3 config 2016-12-12 09:51:52 +01:00
Johannes Zellner b2dbb5a100 Fixup bugs with updated backup scripts 2016-12-12 09:51:52 +01:00
Johannes Zellner 01631e0477 Allow optional endpoint in s3 settings ui
Part of #123
2016-12-12 09:51:52 +01:00
Johannes Zellner 816911d071 Make s3 backup scripts aware of endpoints
Part of #123
2016-12-12 09:51:52 +01:00
Girish Ramakrishnan 2cf0d6db9d customAuth is obsolete 2016-12-09 18:43:26 -08:00
Johannes Zellner 1df47b7c05 Mention lightsail as a supported provider 2016-12-09 17:15:17 +01:00
Johannes Zellner 622ac54213 Remove first. 2016-12-08 22:43:58 +01:00
Johannes Zellner e2d8853704 Sync non existing group text between install and configure dialogs 2016-12-08 22:43:17 +01:00
Johannes Zellner 4993c5010b Align access control groups 2016-12-08 22:40:40 +01:00
Johannes Zellner 8bd0d7c143 Move disable user management option to the top 2016-12-08 22:35:49 +01:00
Johannes Zellner 761ce99f8e Show overall system disk usage in graphs
Also adds a bit of description what the numbers include.

Fixes #83, since any more folder-level information is currently too much work.
2016-12-07 16:48:39 +01:00
Johannes Zellner ba7c901d7a Add more appstore categories
We do not have real categories, but only do filtering
based on the tags an app mentions. This change adds more such tags, so
one by one we should ensure the correct tags are used in each app.

Apps not part of any such category can be found via the full text
search field in the ui

Fixes #114
2016-12-07 14:50:23 +01:00
Johannes Zellner 99c88ed7a0 Update to latest font-awesome 2016-12-07 14:50:09 +01:00
Johannes Zellner c27244cfbd Add more appstore categories 2016-12-07 14:19:17 +01:00
Johannes Zellner 099a42a2d4 Add 0.80.1 changes 2016-12-07 13:35:21 +01:00
Johannes Zellner 74c89cf7d4 Do not print out error if app nginx file does not exist 2016-12-07 13:20:37 +01:00
Johannes Zellner 805125b17f Only reload sshd for caas 2016-12-06 18:41:06 +01:00
Johannes Zellner 7d93cfaac1 Add missing return
Fixes #128
2016-12-06 17:26:56 +01:00
Johannes Zellner 3cd1e7a972 Do not exit on js uglify error in gulp develop 2016-12-06 15:52:22 +01:00
Johannes Zellner 4ed2651c5f Add app restart button in configure dialog
This has come up often now: people install the cli just for
app restarts, or click the restore button and pick an older
backup, when a simple restart of the app would have been sufficient.

Did this now after a live-chat user asked for it again while an app got
stuck without anything obvious in the app logs.
2016-12-06 15:31:24 +01:00
Johannes Zellner e83cb0fb3c Add missing comma 2016-12-05 22:36:55 +01:00
Johannes Zellner b1be65d9ce Add fallback certificate backend 2016-12-05 17:01:23 +01:00
Johannes Zellner eacc4412ba We don't use tabs but 4 spaces 2016-12-05 16:07:06 +01:00
Johannes Zellner 0baf092ba4 Ensure we have iptables installed
Fixes #122
2016-12-02 17:13:47 +01:00
Johannes ebd9249f87 Check dns record change and dns lookup for app install/configure
Fixes #121
2016-11-30 18:51:54 +01:00
Johannes e1ee4973eb Add route53 dns tests
Fixes #120
2016-11-30 18:04:47 +01:00
Johannes ac09ad3393 Handle ETRYAGAIN app error
Fixes #100
2016-11-30 17:34:15 +01:00
Johannes 2bba87d951 Add app message angular filter 2016-11-30 17:31:37 +01:00
Johannes d54e02eed4 Enable and fix test for multiple dns upserts with digitalocean 2016-11-30 17:00:47 +01:00
Johannes db41633663 Support multiple DNS record upserts with digitalocean
Fixes #99
2016-11-30 17:00:16 +01:00
Johannes 0568387679 Add digitalocean dns tests
Part of #120
2016-11-30 16:36:54 +01:00
Johannes ffbbb88917 Add dns noop test
Part of #120
2016-11-30 15:36:03 +01:00
Johannes 756b36d227 Ask the api server for public ip instead of local interface
Part of #106 and #86
Might fix #115 pending testing
2016-11-29 16:20:56 +01:00
Johannes a2afadfe92 Actually exit if the user answer is negative 2016-11-29 14:47:46 +01:00
Johannes 0c76cee737 Check if any ip was found 2016-11-29 14:47:46 +01:00
Johannes b1ec3fe271 dig package is dnsutils 2016-11-29 14:47:46 +01:00
Johannes 19bf130ccd Ask on installation if the DNS is correctly setup 2016-11-29 14:47:46 +01:00
Johannes 32c14e0aa1 Support --api-server-origin in cloudron-setup 2016-11-29 14:47:46 +01:00
Johannes 0ff5050452 Check if any DNS answer matches
Fixes #111
2016-11-29 14:47:32 +01:00
Johannes ca83d4afb8 Show better text for self-hosted cloudrons when low on resources
Fixes #119
2016-11-29 13:28:20 +01:00
Johannes 21c1591f58 Remove dummy record 2016-11-28 16:06:34 +01:00
Johannes cb64ac1b7f Add unit tests for eventlog search 2016-11-28 16:02:59 +01:00
Johannes 337f808a62 Search in source and data of eventlog 2016-11-28 16:02:18 +01:00
Johannes 48d97947c1 Allow setting the event item count in the listing
Part of #113
2016-11-28 15:48:31 +01:00
Johannes df4dd4f93a Ensure the nakeddomain placeholder can deal with custom domains
Fixes #112
2016-11-28 15:25:10 +01:00
Johannes a5eb34d680 Carry over sso on app clone 2016-11-28 12:45:32 +01:00
Johannes eba03caa23 Change syntax to avoid shell warning 2016-11-25 15:16:41 +01:00
Johannes 61a41a10ce Add apt-get update to cloudron-setup
This was reported to be needed on some providers
to be able to install curl
2016-11-25 14:26:38 +01:00
Johannes d3109022b1 Only show the configure link if the app is healthy 2016-11-24 15:48:18 +01:00
Johannes 1c828f19a3 Remove console.log() 2016-11-24 15:46:21 +01:00
Johannes 2f1572b404 Protect against undefined filter text 2016-11-24 15:42:41 +01:00
Johannes 2ca12db362 Introduce the sso marker for postInstallMessage
The marker is "=== sso ==="
The part before the marker is shown if sso is disabled,
the remaining part is shown when sso is enabled.

If no marker is found, the whole text is shown
2016-11-24 15:33:47 +01:00
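A minimal sketch of the marker rule described above, assuming the message is split on the literal marker string (the function name is hypothetical):

```javascript
'use strict';

// "=== sso ===" marker rule from the commit message: text before the
// marker is for sso-disabled installs, text after it for sso-enabled
// ones; without a marker the whole text is shown. Function name is an
// assumption for illustration.
const SSO_MARKER = '=== sso ===';

function renderPostInstallMessage(message, ssoEnabled) {
    const index = message.indexOf(SSO_MARKER);
    if (index === -1) return message.trim(); // no marker: show everything

    const before = message.slice(0, index);
    const after = message.slice(index + SSO_MARKER.length);

    return (ssoEnabled ? after : before).trim();
}
```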
Johannes 14ef7688b8 Add app configure link in app grid
This was asked for many times now for the wp-admin and ghost

In addition we could make that information in the postinstall
a link as well
2016-11-24 13:02:22 +01:00
Johannes a1c83c79b2 Do not break the layout when no access control group is selected 2016-11-24 12:11:23 +01:00
Johannes 376678881c Use light font for app location 2016-11-24 12:05:45 +01:00
Johannes 0f7b11decd Give more space to the access restriction options 2016-11-23 17:28:05 +01:00
Johannes 22b8540843 Add more changes 2016-11-23 15:26:16 +01:00
Johannes afe5a1aa6c Increase readability by not always using light fonts 2016-11-23 15:25:39 +01:00
Johannes 83b5bb394c Specify sso for apps not using optionalSso 2016-11-23 12:09:08 +01:00
Johannes 539d430f60 Show correct ui parts for apps configured to not use sso 2016-11-22 16:15:03 +01:00
Johannes Zellner 6d898398df Add paypal donation link 2016-11-22 13:28:22 +00:00
Johannes 23a2077056 Only specify sso on app install when optionalSso is true 2016-11-22 14:20:19 +01:00
Johannes d5bb797224 Fix typo for sso check 2016-11-22 13:46:15 +01:00
Johannes 907bae53ba Update to new manifestformat 2016-11-22 13:45:35 +01:00
Johannes 97122ed2be Include sso in the app install call 2016-11-22 11:51:53 +01:00
Johannes 7b65529f63 Use the correct accessRestrictionOption variable 2016-11-22 11:13:01 +01:00
Johannes a87831b48c Include sso field in the app object delivered over the rest api 2016-11-22 11:12:46 +01:00
Johannes baba7ca80d Changes for 0.80.0 2016-11-21 16:26:26 +01:00
Johannes d39a84ea53 Do not redirect on app upstream error but show static error page
Fixes #4
2016-11-21 16:25:23 +01:00
Johannes 3bcd255a07 Ugly hack to ensure the modal backdrop is removed when changing views
Couldn't figure a way to make this generic
2016-11-21 13:22:58 +01:00
Johannes 67a87cd040 Show link to group creation when no group exists 2016-11-21 13:22:24 +01:00
Johannes be2aa70f7d A bit more relayouting in the app install dialog 2016-11-21 13:12:14 +01:00
Johannes 2fac681b62 Clarify what customAuth means in install dialog 2016-11-21 12:57:42 +01:00
Johannes dd4f7bf176 Ensure we show apps within an angular digest context
This ensures the app is shown immediately, not only after
the next digest run happens
2016-11-21 12:30:11 +01:00
Johannes 00a4b7ba09 Fix typo: missing comma
Fixes #105
2016-11-20 20:44:03 +01:00
Johannes 51799f7f14 Only set backupConfig in setup when no restore key is provided
When a restore is performed, the backupConfig is part of the
backup. Otherwise provide a default file based config which
contains the encryption key
2016-11-20 18:17:55 +01:00
Girish Ramakrishnan 1b291365d5 Fix appdb.add to set sso 2016-11-19 21:59:06 +05:30
Girish Ramakrishnan 9337f832d3 optionalAuth -> optionalSso 2016-11-19 21:37:39 +05:30
Girish Ramakrishnan ab540cb3e4 update cloudron-manifestformat 2016-11-19 21:22:06 +05:30
Girish Ramakrishnan 1adc47ab32 make ordering of results predictable 2016-11-19 18:24:32 +05:30
Girish Ramakrishnan 94037e5266 remove oauth proxy backend logic 2016-11-19 17:13:08 +05:30
Girish Ramakrishnan 3457890b24 derive customAuth from usage of auth addon
we can get rid of this value from the manifest since the oauth proxy
is going away.
2016-11-19 17:12:58 +05:30
Girish Ramakrishnan b23c06d443 remove oauth proxy from ui code 2016-11-19 17:12:40 +05:30
Girish Ramakrishnan f5ebb782c0 remove support for singleUser 2016-11-19 17:12:31 +05:30
Girish Ramakrishnan 72f31744e3 remove singleUser from ui code 2016-11-19 17:12:24 +05:30
Girish Ramakrishnan 2065a5f7f2 Add optional SSO to install dialog 2016-11-19 17:12:15 +05:30
Girish Ramakrishnan 2ecf0c32cb Skip auth setup if user did not want sso 2016-11-19 17:12:00 +05:30
Girish Ramakrishnan 9c0f2175f7 add sso route parameter to app install
presumably, we don't allow this to be changed post installation
2016-11-19 17:11:46 +05:30
Girish Ramakrishnan 6064db9467 read sso field in db code 2016-11-19 17:10:54 +05:30
Girish Ramakrishnan 8cb8510d72 Add sso db field
SSO field tracks whether the user wants to enable SSO integration
or not.
2016-11-19 17:10:26 +05:30
Johannes Zellner 552ca43175 Only cleanup high frequency events in eventlog
Those are currently the login and backup events
2016-11-18 11:32:12 +01:00
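The cleanup rule could be expressed as a filter over eventlog entries; the action names and the event shape here are assumptions, not the actual box schema.

```javascript
'use strict';

// Sketch of cleaning up only high-frequency eventlog entries (per the
// commit message, currently login and backup events). Action names and
// the event object shape are illustrative assumptions.
const HIGH_FREQUENCY_ACTIONS = ['user.login', 'backup.finish'];

// Return the events that are both high-frequency and older than the
// cutoff; everything else is kept in the eventlog.
function eventsToPurge(events, cutoffTime) {
    return events.filter(function (event) {
        return HIGH_FREQUENCY_ACTIONS.indexOf(event.action) !== -1 &&
               event.creationTime < cutoffTime;
    });
}
```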
Johannes Zellner 7c27f01ab8 Do not automatically enable root ssh access
With our current self-hosting installation process, this
is no longer required. It should be the user's responsibility
to gain access to their server. For Cloudron managed hosting,
this does not apply as we always create servers with ssh keys.

Also do not tinker with the sshd configs. The user may choose
to use access via password.

Fixes #104
2016-11-17 16:28:32 +01:00
Johannes Zellner a8ec9a4329 Ensure the server has curl installed
Fixes #103
2016-11-17 15:03:37 +01:00
Johannes Zellner 797cf31969 Add note about possible restart requirement 2016-11-17 14:50:00 +01:00
Johannes Zellner 37e365f679 Remove hash in front of install commands to allow copy'n'paste 2016-11-17 14:47:12 +01:00
Johannes Zellner f53a9ab1aa Add known provider section to selfhosting docs 2016-11-17 14:46:03 +01:00
Johannes Zellner 4579de85bf Only log exposed ports if there are any 2016-11-16 22:18:12 +01:00
Johannes Zellner affc5ee7d9 Add changes for 0.70.1 2016-11-16 16:29:53 +01:00
Johannes Zellner 40fa3818cc Send alive beacon every hour 2016-11-16 15:01:23 +01:00
Johannes Zellner 4a264ba8c5 Also send provider alongside 2016-11-16 14:45:27 +01:00
Johannes Zellner 8a47c36e20 CloudronError does not have BILLING_REQUIRED and also doesn't need it 2016-11-15 16:59:45 +01:00
Johannes Zellner 2dc06a01b6 Add cronjob to send alive signal 2016-11-15 15:25:21 +01:00
Johannes Zellner f6695c9567 Add sendAliveStatus() 2016-11-15 15:24:40 +01:00
Johannes Zellner fc3768101d Token exchange route does not need appstoreId 2016-11-15 15:24:28 +01:00
Johannes Zellner 5645954686 This route does not exist anymore 2016-11-14 17:16:42 +01:00
Johannes Zellner f16d1c80f4 Do not log if no update is available 2016-11-14 17:00:30 +01:00
Johannes Zellner a25b884dbb Fix typo, use .body 2016-11-14 16:29:47 +01:00
Johannes Zellner 567401c337 Fetch appstore credentials on app un-/purchase for caas 2016-11-14 15:40:53 +01:00
Johannes 1c80f3d667 Update selfhosting docs for --encryption-key
Concludes and fixes #98
2016-11-13 14:11:27 +01:00
Johannes 17ebc67d36 Set default backupConfig in cloudron-setup
If we provide the backup key we have to provide other values
to prevent having to perform value merging in settings.js
defaults
2016-11-13 13:37:38 +01:00
Johannes 4248776c16 Give details what encryption key is 2016-11-13 11:49:09 +01:00
Johannes 3e0d6f698e Verify --provider string 2016-11-13 11:47:37 +01:00
Johannes 67e2589a15 Remove noisy ' 2016-11-13 11:35:56 +01:00
Johannes 2398a515b5 Make --encryption-key mandatory 2016-11-13 11:34:02 +01:00
Johannes ad83d805ac Support supplying an encryption key during cloudron-setup 2016-11-13 11:20:50 +01:00
Johannes a6ba3535df Add flattr button to readme 2016-11-11 15:59:10 +01:00
Johannes 3510d8f097 Mention preferred medialinks aspect ratio 2016-11-11 09:40:54 +01:00
Johannes d0100218c9 Add information about metadata for app upload 2016-11-11 09:40:39 +01:00
Johannes 2cdeb40f33 Do not include docs folder in release tarball 2016-11-09 12:28:05 +01:00
Johannes e033dce93e Run update short circuit earlier
This allows short-circuiting non-caas upgrades as well

Fixes #97
2016-11-09 12:25:39 +01:00
Johannes 4c62338e97 Add even more logs for upgrades 2016-11-09 10:44:48 +01:00
Johannes 606599a65b Add a hint about S3 for upgrades 2016-11-08 21:38:42 +01:00
Johannes d091ac4e0a Add screenshot showing how to make s3 backup public 2016-11-08 21:20:51 +01:00
Johannes b676ebf9d7 Temporarily ensure the box update link anchor is fine 2016-11-08 18:32:26 +01:00
Girish Ramakrishnan e270c27cb0 Remove hardcoded cert 2016-11-08 18:04:07 +05:30
Girish Ramakrishnan 63561a51a4 Fix failing cert test
The hardcoded cert has expired
2016-11-08 17:33:45 +05:30
Girish Ramakrishnan cde7599f87 Choose default confs
Fixes #92
2016-11-08 15:36:48 +05:30
Johannes c9e7308f49 Attempt to set kernel params for generic provider
This is useful for running ubuntu on hardware or in virtualbox
2016-11-08 09:35:18 +01:00
Johannes 0088d9d5fc Renew expired certs in the cert tests 2016-11-08 09:28:48 +01:00
Johannes 4fd5b369f8 Reset app update indicator when an update was triggered
Fixes #48
2016-11-07 15:14:08 +01:00
Johannes 5e0ed1dff3 Don't just center the whole update email
Finally fixes #88
2016-11-07 13:35:02 +01:00
Johannes 215a16cd18 Render update changelog mail with markdown 2016-11-07 13:34:48 +01:00
Johannes cd5ae290bc Add showdown node module 2016-11-07 13:34:47 +01:00
Johannes bd0b66aaad Improve update email 2016-11-07 13:34:47 +01:00
Johannes 45b83232d7 Enable html mails for box updates 2016-11-07 12:32:57 +01:00
Johannes bf2885d7d3 Show markdown in update dialog
Part of #88
2016-11-07 12:20:28 +01:00
Johannes eeb8cc10ae Show error message in update dialog if a backup is currently happening
Fixes #89
2016-11-07 12:17:57 +01:00
Johannes 4668e3a771 Rename box-setup to cloudron-system-setup
This shell script and the associated systemd service
are hooks to setup the system like swap and volumes
It is part of the base image
2016-11-06 14:30:26 +01:00
Johannes 95a90dd050 Check on the installer service to be able to cancel update from box side 2016-11-06 14:30:26 +01:00
Johannes 908aa6f426 Reset the systemd-run service in case it failed earlier
systemd will refuse to start a transient unit if a previous
run with the same unit name failed
2016-11-06 14:30:26 +01:00
Johannes 15f7ada958 We now use systemd-run, no need for sudoDetached 2016-11-06 14:30:26 +01:00
Johannes 18b58ced8d Run the updater through systemd-run
This ensures it can start and stop the box process.
Due to the control-group setting that kills all children,
the updater itself would get killed if the box service
restarts
2016-11-06 14:30:26 +01:00
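A rough sketch of how the box could assemble the systemd-run invocation; the unit name and the update.sh path are illustrative, and the --data-file flag follows the --data-file change in this series.

```javascript
'use strict';

// Sketch of launching the updater in its own transient unit via
// systemd-run, so it survives a restart of the box service. The unit
// name and script path are assumptions for illustration.
function buildUpdaterCommand(dataFile) {
    return [
        'systemd-run',
        '--unit', 'box-updater',
        '/home/yellowtent/box/scripts/update.sh',
        '--data-file', dataFile
    ];
}

// The box process would then spawn this command, e.g.:
// const { spawn } = require('child_process');
// const [ cmd, ...args ] = buildUpdaterCommand('/root/cloudron-update-data.json');
// spawn(cmd, args, { stdio: 'inherit' });
```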
Johannes 4f6f5bf3b7 Support --data-file instead of passing JSON as arguments
This is required for systemd-run, which limits the process
argument length and would otherwise truncate the data

https://github.com/coreos/fleet/issues/992
2016-11-06 14:30:26 +01:00
Johannes 50cbae420c Only retry 10 times in installer.sh 2016-11-06 14:30:26 +01:00
Johannes a1207de93f set --unsafe-perm for npm rebuild 2016-11-06 14:30:26 +01:00
Johannes a6824d8272 Ensure various scripts are run as root 2016-11-06 14:30:26 +01:00
Johannes 0eaeb67ba0 Run the box-setup init service
This ensures we have enough swap setup
2016-11-06 14:30:26 +01:00
Johannes b40a9803a8 Adjust script paths for installer.sh movement 2016-11-06 14:30:26 +01:00
Johannes f1ab8fde76 Move installer.sh one level up 2016-11-06 14:30:26 +01:00
Johannes 55d11b2832 Remove unused certs/ folder in installer 2016-11-06 14:30:26 +01:00
Johannes e01da9b065 Add an installer readme
This file is to clarify why this folder is special,
what it does and why it is there.
2016-11-06 14:30:26 +01:00
Johannes b703dbd7f7 Add changes for 0.70.0 2016-11-06 14:30:26 +01:00
Johannes c70c7462bf hooks for installer are just local sysadmin webhooks 2016-11-06 14:29:41 +01:00
Johannes 342dd26645 No need to run npm install for the installer anymore 2016-11-06 14:29:41 +01:00
Johannes 8e03295362 Remove the cloudron-installer systemd unit file 2016-11-06 14:29:41 +01:00
Johannes 18cc3537d6 No more cloudron-installer for the docs 2016-11-06 14:29:41 +01:00
Johannes 16deb001bf No more cloudron-installer to stop 2016-11-06 14:29:41 +01:00
Johannes 78035e0b2e Remove installer tests 2016-11-06 14:29:41 +01:00
Johannes c23755c028 Remove all nodejs code from installer 2016-11-06 14:29:41 +01:00
Johannes 38ddf12542 Instead of calling the installer, just run update.sh
update.sh runs detached and triggers installer.sh
2016-11-06 14:29:41 +01:00
Johannes 525c7f2685 add shell.sudoDetached() 2016-11-06 14:29:41 +01:00
Johannes 4d360e3798 Allow update.sh to be run as root 2016-11-06 14:29:41 +01:00
Johannes 8adf9f3643 Add initial update.sh script to trigger installer.sh from box 2016-11-06 14:29:41 +01:00
Johannes 6236a9c15e Changes for 0.60.1 2016-11-04 11:46:13 +01:00
Johannes cc6b260189 Bump mail container version 2016-11-04 10:07:14 +01:00
Johannes 01953ded0f Fix typo in size slugs 2016-11-02 10:25:50 +01:00
Johannes 645dc21f7a Mention the need for an AWS account for S3 setup 2016-11-01 10:44:20 +01:00
Johannes 34acb38d40 Some typo fixes to the new selfhosting docs 2016-10-31 11:26:36 +01:00
Girish Ramakrishnan 73918f8808 doc: new selfhosting docs 2016-10-30 19:53:44 -07:00
Johannes 9f973133e8 Give correct feedback if S3 region is wrong
Fixes #87
2016-10-28 16:48:13 +02:00
Johannes 5ba86d5c35 Use aws s3 cli to test credentials
This allows us to test the exact same usage of the api
through the cli tool, not the javascript api
2016-10-28 16:36:05 +02:00
Johannes 7b1b369e40 Add select box for S3 region 2016-10-28 15:28:48 +02:00
Johannes 894384cf3c Remove unused change handler on dns provider selection 2016-10-28 14:58:28 +02:00
Johannes 9768f8171c Add possible provider 'digitalocean' 2016-10-28 11:21:58 +02:00
Girish Ramakrishnan 7672bc0c40 Add -y to update 2016-10-26 11:07:36 -07:00
Girish Ramakrishnan 064c584b45 Make provider mandatory 2016-10-26 10:53:25 -07:00
Johannes 586fc4fe2d Revert "CaaS: bring back the userdata.json provision code path"
This reverts commit 830972e8ae.
2016-10-26 10:20:26 +02:00
Johannes ca22939298 Revert "keep probing for userdata.json like before"
This reverts commit f8cc68b78d.
2016-10-26 10:20:20 +02:00
Girish Ramakrishnan f8cc68b78d keep probing for userdata.json like before
there can be a race between the server starting up and the scp happening
from the appstore
2016-10-25 18:29:43 -07:00
Girish Ramakrishnan 830972e8ae CaaS: bring back the userdata.json provision code path 2016-10-25 16:24:28 -07:00
Girish Ramakrishnan 871f5728f8 Add 0.60.0 changes 2016-10-25 15:58:50 -07:00
Girish Ramakrishnan 3560af1b1e Fix restore blob format 2016-10-25 14:34:48 -07:00
Girish Ramakrishnan 859d27522b Using -q causes the pipe to fail and the script aborts 2016-10-25 14:01:40 -07:00
Girish Ramakrishnan 9c90f88af4 Add --help 2016-10-25 13:34:12 -07:00
Girish Ramakrishnan 8142ad3989 Fix various bugs 2016-10-25 13:15:19 -07:00
Girish Ramakrishnan 984c506c81 hard to center the semver 2016-10-25 12:57:24 -07:00
Girish Ramakrishnan 124c04167f Verify box version the first thing 2016-10-25 12:55:41 -07:00
Girish Ramakrishnan 105b8e0aeb suppress stderr output 2016-10-25 12:49:51 -07:00
Girish Ramakrishnan a22591a89f Handle download and install errors 2016-10-25 12:47:51 -07:00
Girish Ramakrishnan c91464accc Enable -e and handle init script error 2016-10-25 12:00:54 -07:00
Girish Ramakrishnan d36af33269 default dns config has changed 2016-10-25 11:37:24 -07:00
Girish Ramakrishnan eaa747fe39 do not install admin certs during test 2016-10-25 11:36:56 -07:00
Johannes 25243970ad Only allow email to be enabled if a real dns provider is setup 2016-10-25 16:31:22 +02:00
Johannes fc09cf2205 Update the webui when dns config changed 2016-10-25 16:21:37 +02:00
Johannes e1be8659fa Also validate DNS config for digitalocean backend 2016-10-25 16:18:54 +02:00
Johannes eb963f3e1b Report auth issues in digitalocean dns backend 2016-10-25 16:18:33 +02:00
Johannes a983fb144f Only caas currently allows dynamic domain change 2016-10-25 16:06:44 +02:00
Johannes a23f5d45b0 Improve error feedback when setting Route53 credentials 2016-10-25 16:06:31 +02:00
Johannes e4b7b9c9fb Fix typo 2016-10-25 15:28:26 +02:00
Johannes 0c6a2008ff Also support noop dns provider in settings backend 2016-10-25 14:55:20 +02:00
Johannes e7c82b3bf7 Make label clickable 2016-10-25 14:52:52 +02:00
Johannes 048f3e0614 Show selection box for dns provider 2016-10-25 14:51:57 +02:00
Johannes ae402f7afb Make the DNS setup button normal size 2016-10-25 14:43:16 +02:00
Johannes e848b23bc8 Let the user know when no DNS provider is setup
This is the case when noop provider is used
2016-10-25 14:41:35 +02:00
Johannes 012fbe926f Wait for the configure event to be received 2016-10-25 14:33:32 +02:00
Johannes e94cae88ab Cleanup package.json from unused node modules 2016-10-25 14:29:04 +02:00
Johannes d7a91429f3 noop dns provider is a valid one 2016-10-25 14:15:54 +02:00
Johannes 254e0ef8e1 Print information on how to follow logs in the setup script 2016-10-25 14:07:49 +02:00
Johannes 2e7cc4847e the folder is called /var/log/ without s 2016-10-25 14:01:35 +02:00
Johannes 8cfc8bb893 Redirect init and installer script output to log file 2016-10-25 13:58:46 +02:00
Johannes bd163327be Do not disable nginx service 2016-10-25 13:57:25 +02:00
Johannes 9adc6d2ba5 No more data subobject 2016-10-25 13:41:51 +02:00
Johannes 5539710a25 Explicitly specify npm bin 2016-10-25 13:27:31 +02:00
Johannes 6b6af13c5f Do not set -e in cloudron-setup
This needs to be re-enabled, but I can't make out
why having it set makes the parent script stop
after calling an external one with /bin/bash,
even though the external one has a 0 exit code
2016-10-25 13:14:01 +02:00
Johannes 6660ef2ff3 Let the cloudron-version tool resolve the version string 2016-10-25 13:13:04 +02:00
Johannes 2ca5b3c197 Directly call installer.sh from cloudron-setup 2016-10-25 11:27:58 +02:00
Johannes 049ab4d744 Remove initial install feature in installer 2016-10-25 11:27:41 +02:00
Johannes dd9c594387 Install cloudron-version tool 2016-10-25 11:27:04 +02:00
Girish Ramakrishnan 15cfbe3f99 Initial version of configure style cloudron-setup script 2016-10-25 00:07:46 -07:00
Girish Ramakrishnan 0180dcf0ec Allow specific version to be installed 2016-10-25 00:01:06 -07:00
Girish Ramakrishnan c8a04f8707 remove code that stops nginx 2016-10-24 14:41:26 -07:00
Girish Ramakrishnan 37185b1058 Move cloudron-setup script to top level 2016-10-24 14:28:37 -07:00
Johannes f4aacfa2d0 tls config property is called tlsConfig 2016-10-24 18:04:28 +02:00
Johannes bc285a0965 Allow tls-provider to be set for development 2016-10-24 17:30:47 +02:00
Johannes e9a35ec549 Allow to specify box versions url for development 2016-10-24 17:28:40 +02:00
Johannes 595787a898 Add missing 'then' 2016-10-24 16:46:14 +02:00
Johannes 235d969890 Add cloudron-setup script 2016-10-24 16:18:02 +02:00
Johannes 8efa75e5d6 Only use ssh port 202 with caas 2016-10-24 15:56:24 +02:00
Johannes e700eb1551 Remove setup webui, for now we rely on a shell script with args 2016-10-24 15:51:51 +02:00
Johannes b7e36a6f33 Retry dns check 2016-10-23 23:10:49 +02:00
Johannes 30e91eb812 Basic ui to wait for dns record 2016-10-23 22:58:56 +02:00
Johannes 468e5e7e89 Add route to check dns record 2016-10-23 22:58:38 +02:00
Girish Ramakrishnan 86a31b8f5a start nginx properly 2016-10-21 16:43:40 -07:00
Girish Ramakrishnan b9ff8a2cef start the installer 2016-10-21 16:22:25 -07:00
Girish Ramakrishnan e63ef4c991 Extract properly 2016-10-21 16:21:09 -07:00
Girish Ramakrishnan 1244a73a19 run the install web ui on port 80 2016-10-21 16:04:08 -07:00
Girish Ramakrishnan 64f3b45eef download installer in base image script 2016-10-21 15:52:40 -07:00
Girish Ramakrishnan d494129353 default provider to generic 2016-10-21 12:58:01 -07:00
Johannes Zellner 0c3dda8ee0 Add web ui to create config file 2016-10-21 12:30:47 -07:00
Johannes Zellner 3038521916 Set fallback versions url 2016-10-21 12:27:58 -07:00
Johannes Zellner d4d3eced56 Wait forever for user data and support js format 2016-10-21 12:21:30 -07:00
Johannes Zellner 2c279dc77e Set LE as default tls config 2016-10-21 10:31:55 -07:00
Johannes Zellner 5d8b46e015 Add more fallbacks for settings 2016-10-21 10:31:30 -07:00
Johannes Zellner 723c7307d2 Set default provider to generic 2016-10-21 10:28:40 -07:00
Johannes Zellner db55a7ad3c Create fallback cert if not passed in via user data 2016-10-21 10:28:22 -07:00
Johannes Zellner 09b4325ecc Set some more fallbacks in argparser.sh 2016-10-21 10:26:32 -07:00
Johannes Zellner 66999f7454 custom domain is actually the default by now 2016-10-21 10:25:33 -07:00
Johannes Zellner 2c511ccc5a Do not create a swap file if swap is already more than physical memory
This is the case for example on the default ubuntu 16.04 virtualbox image
2016-10-20 15:32:02 +02:00
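Expressed as a pure predicate (helper name illustrative), the rule from this commit is: only create a swap file when existing swap does not already exceed physical memory.

```javascript
'use strict';

// Sketch of the swap rule described above: skip swap file creation when
// existing swap already exceeds physical memory (values as read from
// /proc/meminfo, in kB). Helper name is an assumption.
function shouldCreateSwapFile(swapTotalKb, memTotalKb) {
    return swapTotalKb <= memTotalKb;
}
```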
Girish Ramakrishnan 6b72ee61f9 Show good error message for invalid username 2016-10-17 19:02:48 -07:00
Girish Ramakrishnan 0a7303e50d lower case message 2016-10-17 18:56:10 -07:00
Girish Ramakrishnan 906beaca29 add link to packaging guide 2016-10-16 11:24:58 -07:00
Girish Ramakrishnan daf8250e44 do not skip scripts!
all our sudo scripts are here.
2016-10-14 15:14:24 -07:00
Girish Ramakrishnan 4313d8a28c Send mail when backup fails
Fixes #9
2016-10-14 15:08:41 -07:00
Girish Ramakrishnan 4fbce26877 Turns out git archive is used in createDOImage to get installer code 2016-10-14 11:24:10 -07:00
Girish Ramakrishnan 702b93fe7c Do not include baseimage and installer in archive
The CLI tool will be fixed to download the file from gitlab.

Fixes #39
2016-10-14 10:46:21 -07:00
Girish Ramakrishnan 6755d13f1b Revert "Do not include baseimage and installer in archive"
This reverts commit f80ce1778a.

We cannot just remove it because the CLI tool relies on this right
now.
2016-10-14 10:35:33 -07:00
Girish Ramakrishnan f80ce1778a Do not include baseimage and installer in archive
These are part of the base image

Fixes #39
2016-10-14 09:49:24 -07:00
Girish Ramakrishnan db7958c934 remove reference to dead directories 2016-10-14 09:40:08 -07:00
Girish Ramakrishnan 02e7c4eaef Do not display "caas" 2016-10-14 09:34:55 -07:00
Girish Ramakrishnan ae299f5838 Fix failing test 2016-10-14 09:30:42 -07:00
Girish Ramakrishnan bafc35f99e Revert "Use in-place replacement ursa-purejs for native ursa"
This reverts commit 8e033dc387.

Lots of things in ursa-purejs are unimplemented. We get errors like:

    /home/yellowtent/box/node_modules/ursa-purejs/lib/ursa.js:331
          throw new Error("Unsupported operation : sign");
          ^
    Error: Unsupported operation : sign
        at Object.sign (/home/yellowtent/box/node_modules/ursa-purejs/lib/ursa.js:331:13)
        at Object.sign (/home/yellowtent/box/node_modules/ursa-purejs/lib/ursa.js:624:27)
        at /home/yellowtent/box/src/cert/acme.js:112:50
        at /home/yellowtent/box/src/cert/acme.js:70:16
2016-10-13 21:41:04 -07:00
Girish Ramakrishnan 32eb1edead center it 2016-10-13 16:26:29 -07:00
Girish Ramakrishnan 1187e6a101 Add powered by footer to password reset 2016-10-13 16:18:26 -07:00
Girish Ramakrishnan f94a653e80 Add powered by footer
Fixes #77
2016-10-13 16:18:22 -07:00
Girish Ramakrishnan 1c22cb8443 Pass invitor object when reinviting user 2016-10-13 15:57:58 -07:00
Girish Ramakrishnan 49f7fb552b settings api: key if present must be a string 2016-10-13 15:32:18 -07:00
Girish Ramakrishnan d460c36e14 Simply use settings.setBackupConfig 2016-10-13 15:32:00 -07:00
Girish Ramakrishnan 6e8eea6876 Use getBackupConfig instead and allow key to be settable 2016-10-13 15:23:49 -07:00
Girish Ramakrishnan fd1b56b9e9 Fix failing sysadmin test 2016-10-13 15:13:28 -07:00
Girish Ramakrishnan 92106a2a52 Fix failing simple auth test 2016-10-13 15:11:03 -07:00
Girish Ramakrishnan 8809552fb2 Fix failing apps test 2016-10-13 15:04:12 -07:00
Girish Ramakrishnan 3652d7f186 Fix failing cloudron-test 2016-10-13 14:55:14 -07:00
Girish Ramakrishnan 74abb26016 Fix failing backup test 2016-10-13 14:50:54 -07:00
Girish Ramakrishnan 606f28c724 fix failing setting test 2016-10-13 14:45:18 -07:00
Girish Ramakrishnan 427f72fb24 bump the infra version
this is redundant since we have an upgrade coming up...
2016-10-13 13:23:28 -07:00
Girish Ramakrishnan 21b28d3dcc Dynamically scale addon memory
Simple math for now: we bump up memory in slabs of 4gb

Fixes #79
2016-10-13 13:13:09 -07:00
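The "slabs of 4gb" math might look roughly like the sketch below; the base limits and the exact rounding in the box code may differ.

```javascript
'use strict';

const GB = 1024 * 1024 * 1024;

// Hypothetical rendering of the "slabs of 4gb" rule: scale an addon's
// base memory limit by how many (partial) 4 GB slabs of RAM the server
// has. The exact factors used by the box may differ.
function addonMemoryLimit(baseLimitBytes, systemMemoryBytes) {
    const slabs = Math.max(1, Math.ceil(systemMemoryBytes / (4 * GB)));
    return baseLimitBytes * slabs;
}
```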
Girish Ramakrishnan 1116bbe731 Add more 0.50.0 changes 2016-10-13 10:02:40 -07:00
Johannes Zellner 4099a7a32e Also use cloudronName in account setup 2016-10-13 17:40:00 +02:00
Johannes Zellner 97a17ff25f Amend common template values in a central place 2016-10-13 17:34:21 +02:00
Johannes Zellner 68d37b7260 Render the cloudronName in oauth views 2016-10-13 17:24:26 +02:00
Johannes Zellner 7513817d41 Add newline for password reset 2016-10-13 17:19:41 +02:00
Johannes Zellner fadef230e9 Fix avatar change after code refactoring 2016-10-13 17:10:30 +02:00
Johannes Zellner a672a930f8 Show cloudron name in password reset mail subject 2016-10-13 17:03:01 +02:00
Johannes Zellner e6f8c83a6b Remove dead code in webadmin 2016-10-13 16:55:55 +02:00
Johannes Zellner f8d50f6ea8 Ensure we hide tutorial and footer until angular is loaded 2016-10-13 16:53:38 +02:00
Johannes Zellner 62b803624f HTMLify the password reset mail 2016-10-13 16:48:58 +02:00
Johannes Zellner 9872ac424f Increase mail container memory
This is only a temporary fix for the next release, in case
we have not yet implemented a dynamic setting
2016-10-13 13:56:55 +02:00
Johannes Zellner bca57b5e47 Show cloudron name for webadmin login
Fixes #80
2016-10-13 13:56:29 +02:00
Johannes Zellner e533f506cc Remove redundant "Cloudron Cloudron" 2016-10-13 12:44:08 +02:00
Johannes Zellner 0b8857e1bb Fix the user add email 2016-10-13 12:37:25 +02:00
Johannes Zellner 5a1729d715 Improve the invite mail 2016-10-13 11:56:23 +02:00
Johannes Zellner 946d4f1b70 Actually set the html content for the invite mail 2016-10-13 11:38:52 +02:00
Johannes Zellner 8e033dc387 Use in-place replacement ursa-purejs for native ursa
The native modules often cause headaches with rebuilds
2016-10-13 11:23:57 +02:00
Johannes Zellner cf09f0995f Remove unused requires 2016-10-13 11:21:40 +02:00
Johannes Zellner 19c7dd0de8 Add html version to user welcome mail 2016-10-13 11:21:29 +02:00
Girish Ramakrishnan 1d8df65fbf Fix mailbox name for naked domains
Fixes #81
2016-10-12 19:54:04 -07:00
Girish Ramakrishnan 2be17eeb52 Add semi-tested scaleway backend 2016-10-11 19:47:27 -07:00
Girish Ramakrishnan 5c34cb24c6 doc: add understand section 2016-10-11 19:29:42 -07:00
Girish Ramakrishnan c12ee50b3b dump the body for debugging 2016-10-11 19:29:23 -07:00
Girish Ramakrishnan c54a825eb8 doc: add linode/scaleway notes 2016-10-11 18:22:44 -07:00
Girish Ramakrishnan ef27a17cae Only update grub if we modified grub 2016-10-11 18:22:27 -07:00
Girish Ramakrishnan 8cf8661c2f it turns out 0.5 is less than 0.22 2016-10-11 16:41:51 -07:00
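The gotcha behind the commit above: comparing version strings numerically or lexically puts 0.5 below 0.22, even though 0.5.0 is an older release than 0.22.0. A component-wise compare avoids it (an illustrative sketch, not the actual box code):

```javascript
// Naive comparison gets versions wrong: parseFloat('0.5') > parseFloat('0.22'),
// but as release versions 0.5.0 precedes 0.22.0. Compare each dotted
// component as an integer instead.
function compareVersions(a, b) {
    const pa = a.split('.').map(Number);
    const pb = b.split('.').map(Number);
    for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
        const diff = (pa[i] || 0) - (pb[i] || 0);
        if (diff !== 0) return diff;
    }
    return 0;
}

console.log(compareVersions('0.5.0', '0.22.0') < 0); // true: 5 < 22
```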
Girish Ramakrishnan 7cdbab446d Add big update (0.5.0) 2016-10-11 16:39:52 -07:00
Girish Ramakrishnan 74ffd5c2d3 Fix bash syntax 2016-10-11 16:24:47 -07:00
Girish Ramakrishnan 3a259e9ce0 add some hacks for scaleway
* load loop module if not autoloaded
* allow NBD ports (https://community.online.net/t/how-to-configures-iptables-with-input-rules-with-dynamic-nbd/303/31)
2016-10-11 15:21:10 -07:00
Johannes Zellner f9e47ac3c0 Ensure we always keep the backup key 2016-10-11 15:56:07 +02:00
Johannes Zellner 0c85f96b27 Allow setting up a backup region 2016-10-11 14:20:31 +02:00
Johannes Zellner b30300b8b2 Fix backup config prefix display 2016-10-11 14:14:24 +02:00
Johannes Zellner 6663a6bd66 More error feedback on backup config change form 2016-10-11 14:14:04 +02:00
Johannes Zellner c1fc2ce095 Give error response if aws accessKeyId is unknown 2016-10-11 14:07:36 +02:00
Johannes Zellner e614b930a5 Report with a distinguished status code if upstream validation failed 2016-10-11 11:49:30 +02:00
Johannes Zellner 9b4228f373 No need for a separate function 2016-10-11 11:47:33 +02:00
Johannes Zellner 6e6d4f7413 Actually verify s3 credentials by using the api 2016-10-11 11:46:28 +02:00
Johannes Zellner cac85b17bc Add backup config test for each backend 2016-10-11 11:36:25 +02:00
Johannes Zellner 449f8b03ad The backup setting route does not require password for now 2016-10-11 11:21:06 +02:00
Johannes Zellner 6eacc76281 wire up the backup backend settings save button 2016-10-11 11:18:12 +02:00
Johannes Zellner 33f764f6aa Properly setup the backup backend change dialog 2016-10-11 11:17:41 +02:00
Johannes Zellner 9ab845ef8a Set the backup janitor back to every 30min 2016-10-11 10:55:00 +02:00
Johannes Zellner eaee3ffbc9 Cleanup the storage backend change ui 2016-10-11 10:54:33 +02:00
Johannes Zellner e1f268a325 remove unused require 2016-10-11 10:32:22 +02:00
Johannes Zellner 1fc16d0fe8 Warn admins in the webui if they use the filesystem backend 2016-10-11 10:32:05 +02:00
Johannes Zellner d7ea06e80e Simply remove all backups up to the last two when using the filesystem
backend
2016-10-11 10:31:21 +02:00
Johannes Zellner 2d39a9bfa1 Only store last two days of backups 2016-10-11 09:56:42 +02:00
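The "last two days" retention from the commit above amounts to a timestamp filter along these lines (field names are assumptions, not the actual box schema):

```javascript
// Hypothetical sketch: pick out backups older than two days so the
// janitor can remove them, keeping only the recent ones.
const TWO_DAYS_MS = 2 * 24 * 60 * 60 * 1000;

function filterExpiredBackups(backups, nowMs) {
    return backups.filter(function (backup) {
        return (nowMs - backup.creationTime) > TWO_DAYS_MS;
    });
}
```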
Johannes Zellner f576f38e4c Calculate the backup checksum for client side verification
Fixes #54
2016-10-10 18:11:25 +02:00
Johannes Zellner 734506eb41 add checksum node module 2016-10-10 18:11:07 +02:00
Johannes Zellner 8ac8ea7d8a Reduce debug output 2016-10-10 16:27:39 +02:00
Johannes Zellner 9d3f8f23ef Also remove the app backup json files 2016-10-10 16:25:43 +02:00
Johannes Zellner b0a8ba85e1 Also remove the db records for deleted backups 2016-10-10 16:25:43 +02:00
Johannes Zellner 7e41ea9c31 Make the script executable 2016-10-10 16:25:43 +02:00
Johannes Zellner 1e65142f47 Use rmbackup.sh instead of fs.unlink() due to root ownership 2016-10-10 16:25:43 +02:00
Johannes Zellner f05a5226ba Add new sudo file rmbackup.sh as backups are owned by root currently 2016-10-10 16:25:43 +02:00
Johannes Zellner c129328828 There is no result 2016-10-10 16:25:43 +02:00
Johannes Zellner acc644160a Remove the old backups from the storage 2016-10-10 15:45:48 +02:00
Johannes Zellner c7e5c09bb9 Adjust removeBackup() api 2016-10-10 15:45:48 +02:00
Johannes Zellner 1b3ae1f178 Add new storage.removeBackup() api
This currently is only used in the filesystem backend,
but may be expanded to also clean up S3 in the future
2016-10-10 15:45:48 +02:00
Johannes Zellner bceeb092bf Remove unused require 2016-10-10 14:50:53 +02:00
Johannes Zellner 0d0229e531 Filter potential backups to cleanup 2016-10-10 14:43:47 +02:00
Johannes Zellner 629e061743 Use specific error if app backup for restore can't be found 2016-10-10 13:21:45 +02:00
Girish Ramakrishnan d53657fa61 doc: generic machine 2016-10-09 21:03:56 -07:00
Girish Ramakrishnan 437c582be6 doc: reduce indentation 2016-10-09 20:51:08 -07:00
Girish Ramakrishnan 12ce714df4 Allow backup configuration to be changed 2016-10-09 20:23:21 -07:00
Girish Ramakrishnan f09a1c577b doc: more docs for backup api 2016-10-09 20:23:21 -07:00
Girish Ramakrishnan 4e3ba4c96f Check type of bucket and prefix as well 2016-10-09 20:17:42 -07:00
Girish Ramakrishnan 26c67d2d36 refactor settings ui: scope the methods 2016-10-09 20:07:59 -07:00
Girish Ramakrishnan 1e6b09c0da reduce task concurrency
trying to restore many apps in low memory, just crashes everything
2016-10-09 13:27:46 -07:00
Girish Ramakrishnan 4ed74a8164 bump postgresql and mail images 2016-10-09 12:53:55 -07:00
Girish Ramakrishnan 131cd96840 allow various provider in backup config 2016-10-09 00:41:24 -07:00
Girish Ramakrishnan fb4d6f7649 doc: fix dns config api docs 2016-10-09 00:24:30 -07:00
Girish Ramakrishnan da5e40db66 verify token type 2016-10-09 00:23:23 -07:00
Girish Ramakrishnan 6c1c7e74c1 detect if aa is available (linode has it disabled) 2016-10-08 23:04:24 -07:00
Girish Ramakrishnan 5a18c4dc26 in some systems, there is already some swap allocated 2016-10-08 21:55:13 -07:00
Girish Ramakrishnan 0fbe2709ea bash cannot handle float arithmetic 2016-10-08 21:40:05 -07:00
Girish Ramakrishnan 6fdf5bd7ec Find rootfs device the hard way 2016-10-08 21:31:11 -07:00
Girish Ramakrishnan f2948483df rename eth0 to generic
sysinfo caters to more than IP...
2016-10-08 16:40:58 -07:00
Girish Ramakrishnan 1ef6eefaf6 dns: fix noop get/upsert 2016-10-08 14:38:59 -07:00
Girish Ramakrishnan ae0f90c621 check for generic provider 2016-10-08 14:09:32 -07:00
Girish Ramakrishnan 63a0c69e76 modify grub only for ec2 2016-10-08 13:23:45 -07:00
Girish Ramakrishnan 370e4f7c25 rename wildcard to noop 2016-10-08 13:00:40 -07:00
Girish Ramakrishnan 7cb8745029 change provider name to ssh 2016-10-07 14:22:49 -07:00
Girish Ramakrishnan ba5f261f33 Fix spelling 2016-10-07 14:21:26 -07:00
Girish Ramakrishnan 72f287c4e5 Fix typos 2016-10-07 14:19:44 -07:00
Girish Ramakrishnan c385abe416 return wildcard dns backend 2016-10-07 14:10:28 -07:00
Girish Ramakrishnan 49e3dba1f2 Add DNS wildcard backend
It assumes that the user set up the wildcard DNS entry manually.
2016-10-07 14:09:20 -07:00
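A wildcard (later renamed "noop") backend like the one above satisfies the DNS interface while doing nothing — the admin has already created a `*.example.com` record by hand. Roughly (the function signatures are assumptions about the internal DNS interface, not the actual box code):

```javascript
// Hypothetical sketch of a noop DNS backend: every call succeeds
// without touching any records, since the user maintains a manual
// wildcard entry pointing at the server.
function upsert(zoneName, subdomain, type, values, callback) {
    return callback(null, 'noop-record-id');
}

function get(zoneName, subdomain, type, callback) {
    return callback(null, [ ]); // report no existing records
}

function del(zoneName, subdomain, type, values, callback) {
    return callback(null);
}
```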
Girish Ramakrishnan e456c4b39c Add eth0 sysinfo backend 2016-10-07 14:09:20 -07:00
Girish Ramakrishnan 9b83a4d776 add certificate interface file 2016-10-07 14:09:20 -07:00
Girish Ramakrishnan 0ae1238233 Add sysinfo interface definition 2016-10-07 14:09:20 -07:00
Johannes Zellner b45fca6468 Add 0.22.0 changes 2016-10-07 12:44:36 +02:00
Johannes Zellner d7245b5e1e Cleanup the provisioning code 2016-10-06 14:14:48 +02:00
Johannes Zellner 81c443d637 Use the correct callback 2016-10-06 14:08:53 +02:00
Johannes Zellner 84e4c0033e Do not support meta data api for user data
From this version on, only a local /root/userdata.json
is supported. We will poll for that file every 5 seconds.
The file is either uploaded via boxtask in caas or by
the cli tool.
2016-10-06 11:48:17 +02:00
Girish Ramakrishnan d7be1d7d03 open usermanual in new page 2016-10-05 12:54:59 -07:00
Girish Ramakrishnan c8bf858ab0 doc: make to/from more clear 2016-10-05 10:21:24 -07:00
Johannes Zellner e2c206b755 Add cron job stub for backup cleaning in janitor 2016-10-05 17:19:53 +02:00
Johannes Zellner 882ed72f14 Remove --ssh-key in update docs for selfhosting 2016-10-05 16:41:17 +02:00
Johannes Zellner 29451f8e07 Remove unused code in installer 2016-10-05 14:35:40 +02:00
Johannes Zellner 29d3ad6cd3 Rename provision.json to userdata.json 2016-10-05 14:31:22 +02:00
Johannes Zellner 4642d4c8c5 First try to get the user data from a local json file 2016-10-05 14:30:37 +02:00
Girish Ramakrishnan ca7f26d5c7 Bump postgresql to fix clone issue 2016-10-03 23:15:30 -07:00
Girish Ramakrishnan 98773160d0 sync before reboot 2016-10-03 17:43:22 -07:00
Girish Ramakrishnan 6f0708eff2 Add mailbox with new app id 2016-10-03 16:11:36 -07:00
Girish Ramakrishnan a2db4312b8 give dummy callback to reboot 2016-10-03 15:49:47 -07:00
Girish Ramakrishnan 1e744c24f0 Fix typo 2016-10-03 15:08:21 -07:00
Girish Ramakrishnan 602265329d Add 0.21.1 changes 2016-10-03 14:42:43 -07:00
Girish Ramakrishnan 833e19a239 add note on cloning 2016-10-03 14:22:05 -07:00
Girish Ramakrishnan 1a25ad77ca use latest mail container 2016-10-03 13:53:11 -07:00
Girish Ramakrishnan 13e1b7060e doc: add note on second level domains for DO creation 2016-10-03 13:52:40 -07:00
Girish Ramakrishnan 3adf183569 Fix apps.clone to allocate mailbox 2016-10-03 13:27:27 -07:00
Girish Ramakrishnan 8e3db8fa2e Fix typo 2016-10-02 18:28:50 -07:00
Girish Ramakrishnan 2c357e022b add note about ldap restrictions as well 2016-10-01 23:52:01 -07:00
Girish Ramakrishnan 0f882614b1 Fix color of help links 2016-10-01 18:05:50 -07:00
Girish Ramakrishnan 3ae7a514ef Change the put route for setting group members 2016-10-01 17:33:50 -07:00
Girish Ramakrishnan 7779e5da3b Move unrestricted as first entry since the spacing is awkward below the groups 2016-09-30 14:26:08 -07:00
Girish Ramakrishnan cd0243d700 always store the group names as lower case 2016-09-30 12:33:18 -07:00
Girish Ramakrishnan ba588a1cd7 Fix group name validation to not allow hyphen
Fixes #70
2016-09-30 12:28:29 -07:00
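The validation change above (fixes #70) is essentially a character whitelist; a sketch of such a check (the exact length limits are assumptions — a later commit allows names as short as 2 characters):

```javascript
// Hypothetical group name validator: lowercase alphanumerics only,
// 2 to 64 characters, no hyphens (see #70) — group names double as
// mailbox names, where '-' caused trouble.
function validateGroupName(name) {
    return /^[a-z0-9]{2,64}$/.test(name);
}

console.log(validateGroupName('admins'));    // true
console.log(validateGroupName('my-group'));  // false: hyphen not allowed
```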
Girish Ramakrishnan f71b55c9e2 Fix apps test 2016-09-30 12:09:33 -07:00
Girish Ramakrishnan d62cecff88 Display group name instead of id 2016-09-30 11:19:49 -07:00
Girish Ramakrishnan 93fb01a9b9 Fix more of the group tests 2016-09-30 10:17:50 -07:00
Girish Ramakrishnan 39043736e5 Give empty location a label 2016-09-30 10:17:34 -07:00
Girish Ramakrishnan 475fd06ac0 use unique ids for groups 2016-09-30 09:33:10 -07:00
Girish Ramakrishnan 1d12808b13 test setting group members 2016-09-29 15:15:25 -07:00
Girish Ramakrishnan 430ac330dc add groupdb tests 2016-09-29 15:11:56 -07:00
Girish Ramakrishnan 8e712da2c8 Add route and API to set members of a group 2016-09-29 14:48:14 -07:00
Girish Ramakrishnan 79d2b0c11c improve PTR docs for email 2016-09-29 12:53:54 -07:00
Girish Ramakrishnan 02e15dc413 Add link to user manual 2016-09-29 12:46:40 -07:00
Girish Ramakrishnan cf8282691b highlight mail server requirement 2016-09-29 12:38:42 -07:00
Girish Ramakrishnan 8c52221d26 More 0.21.0 changes 2016-09-29 12:38:35 -07:00
Girish Ramakrishnan 450d644f71 bump infra version
the email addon authentication has changed. this means that we have
to recreate apps (that use the recvmail/sendmail addons).
2016-09-28 20:51:51 -07:00
Girish Ramakrishnan f9c6fbee72 Fix field name in migration script 2016-09-28 19:37:04 -07:00
Girish Ramakrishnan d5b50f48fd Fix crash in migration script 2016-09-28 17:51:43 -07:00
Girish Ramakrishnan cdf0b8c1b0 Display error in accounts UI 2016-09-28 16:11:10 -07:00
Girish Ramakrishnan 90aeeb3896 Use profile route to update the display name 2016-09-28 15:50:15 -07:00
Girish Ramakrishnan 2ea772b862 use profile route to update the email 2016-09-28 15:49:41 -07:00
Girish Ramakrishnan 1a4bb4d119 Add Client.updateProfile 2016-09-28 15:49:33 -07:00
Girish Ramakrishnan 079bf3aed1 Fix invite template (again) 2016-09-28 15:25:16 -07:00
Girish Ramakrishnan 7c892706c3 Fix invite sent message 2016-09-28 15:25:16 -07:00
Girish Ramakrishnan c1063112e8 Fix Ok casing 2016-09-28 15:23:22 -07:00
Girish Ramakrishnan 7a07b52e7c Show place holder text if mail is enabled but no email yet 2016-09-28 15:04:17 -07:00
Girish Ramakrishnan 08a45897c3 Fix spacing 2016-09-28 15:02:17 -07:00
Girish Ramakrishnan 27d911addc Fix mail templates to use alternate email when email is null 2016-09-28 14:47:09 -07:00
Girish Ramakrishnan 441ea1af05 set email to null if we have no username 2016-09-28 14:39:47 -07:00
Girish Ramakrishnan 85c16ca43a use display name since email may not be valid 2016-09-28 14:39:29 -07:00
Girish Ramakrishnan e1ef118d7b Use alternateEmail if user was removed without ever signing up 2016-09-28 13:25:41 -07:00
Girish Ramakrishnan 823e6575a6 hide user@fqdn when mail is enabled 2016-09-28 13:21:26 -07:00
Girish Ramakrishnan ec13938042 add hack to clear alias error on change 2016-09-28 13:17:31 -07:00
Girish Ramakrishnan 1a17627f83 make space add tag 2016-09-28 13:06:02 -07:00
Girish Ramakrishnan 61292c4df9 display alias errors 2016-09-28 12:54:56 -07:00
Girish Ramakrishnan 10ff0f559c Show error if mailbox already exists 2016-09-28 12:00:05 -07:00
Girish Ramakrishnan 601aa7f5cd group useredit functions 2016-09-28 11:52:00 -07:00
Girish Ramakrishnan 36a91bb51a group userremove functions 2016-09-28 11:47:50 -07:00
Girish Ramakrishnan 149c90e8f7 group useradd functions 2016-09-28 11:45:39 -07:00
Girish Ramakrishnan c357efe4da just ignore error if we cannot import mailbox
this allows the box code to not crash if the user already has existing
conflicting group and user names
2016-09-28 11:09:53 -07:00
Girish Ramakrishnan c43bc24a6a Revert "Show group email ids when mail is enabled"
This reverts commit cca9780f51.

The UI looks very cluttered with this
2016-09-28 10:53:04 -07:00
Girish Ramakrishnan a78e17b036 Do not return aliases as mailboxes 2016-09-28 10:26:41 -07:00
Girish Ramakrishnan cca9780f51 Show group email ids when mail is enabled 2016-09-28 10:17:04 -07:00
Girish Ramakrishnan 1d31975e2a Groupname can be 2 chars long 2016-09-28 10:11:43 -07:00
Girish Ramakrishnan 7cb6961052 Show aliases based on whether email is enabled 2016-09-28 10:06:01 -07:00
Girish Ramakrishnan 18e23e47df Fix help text a bit 2016-09-28 09:55:05 -07:00
Johannes Zellner ac469ddffc Point self-hosters to the self-hosting backup docs from the user manual 2016-09-28 16:46:35 +02:00
Johannes Zellner a3401cdc3d Ensure user listing is fine 2016-09-28 15:00:41 +02:00
Johannes Zellner c6dc7d5c99 Only show email help for users and groups if email is enabled 2016-09-28 12:57:17 +02:00
Johannes Zellner 48e602273a Fetch mail config in users view 2016-09-28 12:57:03 +02:00
Johannes Zellner de25b34f71 Add some help text how users and groups work wrt email 2016-09-28 12:54:26 +02:00
Johannes Zellner adc3c13a01 Change how supertext is displayed 2016-09-28 12:54:04 +02:00
Johannes Zellner b28c239dbf Show error if email already taken on user edit form 2016-09-28 12:29:18 +02:00
Johannes Zellner b0c470da5a show if user is not activated yet 2016-09-28 12:20:45 +02:00
Johannes Zellner 11cfa2efaa Fix user edit with alternateEmail 2016-09-28 12:12:37 +02:00
Johannes Zellner 3a30310e2f Select the alternateEmail for client side gravatar 2016-09-28 12:05:48 +02:00
Johannes Zellner 08ae43ca13 Show alternateEmail in user profile if email is enabled 2016-09-28 11:49:03 +02:00
Johannes Zellner d426856883 Use alternateEmail for gravatar 2016-09-28 11:48:48 +02:00
Johannes Zellner 9fb6a537ed Take alternateEmail into the client side profile 2016-09-28 11:47:26 +02:00
Johannes Zellner 58b5613c6b Send alternateEmail with profile and user rest api 2016-09-28 11:08:11 +02:00
Girish Ramakrishnan ae9838a869 alternateEmail already checks if email is enabled now 2016-09-27 23:54:48 -07:00
Girish Ramakrishnan 4204d76616 lower case the alias and mailing list cn 2016-09-27 23:41:50 -07:00
Girish Ramakrishnan 20b6df3cb8 Make the button as big as other buttons 2016-09-27 23:00:53 -07:00
Girish Ramakrishnan 6a4b60436e alternativeEmail -> alternateEmail 2016-09-27 22:25:50 -07:00
Girish Ramakrishnan e2b28d3286 Allow enabling email on dev 2016-09-27 19:23:12 -07:00
Girish Ramakrishnan 7d5dfb64eb set ready when users got loaded 2016-09-27 19:20:28 -07:00
Girish Ramakrishnan 9111174b50 Add 0.21.0 changes 2016-09-27 18:40:16 -07:00
Girish Ramakrishnan f61842fc30 admin is reserved but not because we use it 2016-09-27 16:36:57 -07:00
Girish Ramakrishnan a91ae2b9aa add mailboxdb.getGroup tests 2016-09-27 16:34:28 -07:00
Girish Ramakrishnan 20708ad25a return members of mailing list 2016-09-27 16:27:22 -07:00
Girish Ramakrishnan c152580df0 Revert "make rfc822MailMember a complete address"
This reverts commit b9823fff44.

Most examples on the internet don't have the complete address.
https://wiki.debian.org/LDAP/MigrationTools/Examples
2016-09-27 16:04:50 -07:00
Girish Ramakrishnan b9823fff44 make rfc822MailMember a complete address 2016-09-27 16:04:11 -07:00
Girish Ramakrishnan bd2848932e test ldap mailing list search 2016-09-27 15:56:02 -07:00
Girish Ramakrishnan 0327333be2 Add test to check mailbox gets add/removed with group API 2016-09-27 15:49:06 -07:00
Girish Ramakrishnan a8861dd4f8 Add missing return 2016-09-27 13:09:05 -07:00
Girish Ramakrishnan 0c4a9d8bc9 Choose the first non-alias as app email 2016-09-27 12:51:33 -07:00
Girish Ramakrishnan c1aa1eb33f Fix group listing 2016-09-27 12:51:33 -07:00
Girish Ramakrishnan 0d3169c787 remove mailboxdb.listGroups 2016-09-27 12:51:33 -07:00
Johannes Zellner 519dd2b889 Fix typo in schema 2016-09-27 21:48:39 +02:00
Johannes Zellner c9d5af8424 Adjust tests to fail with invite email if cloudron mail is enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner a6547676a1 Do not allow invite email for login if cloudron mail is enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner 34f624abef Give auth codes much longer expiration
Since the expiration is calculated when mocha loads the tests,
5000 was too low if some tests take longer
2016-09-27 21:48:39 +02:00
Johannes Zellner bd8acf763e Only allow bind by cloudron mail if enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner 4ba0504e7a Add ldap tests for login with cloudron mail 2016-09-27 21:48:39 +02:00
Johannes Zellner 2a7de5dab7 extracting username from email for cloudron mail is now done in user.js 2016-09-27 21:48:39 +02:00
Johannes Zellner ea87b3e876 Ensure lowercasing the email 2016-09-27 21:48:39 +02:00
Johannes Zellner 23bf358bbe Fix case when username is not the same as the email 2016-09-27 21:48:39 +02:00
Johannes Zellner 656356732e LDAP tests need more time on my end 2016-09-27 21:48:39 +02:00
Johannes Zellner 35a964bd00 Allow users to be verified with both emails if cloudron mail is enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner 5cff9df632 Add tests for user getter 2016-09-27 21:48:39 +02:00
Johannes Zellner 84de6c0583 Add user creation tests when Cloudron mail is enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner ca1c48b4b5 Send mails to alternativeEmail if enabled 2016-09-27 21:48:39 +02:00
Johannes Zellner 64278a9ff9 Introduce alternativeEmail in case the Cloudron has email enabled 2016-09-27 21:48:39 +02:00
Girish Ramakrishnan 8bd790c1e0 remove unused variable 2016-09-27 11:58:02 -07:00
Girish Ramakrishnan c9a0db0127 remove the alias and mailbox ldap listing code
it's unused and complicates things. besides, this is not going to be
possible to implement for the mailgroup code.
2016-09-27 11:51:21 -07:00
Girish Ramakrishnan a75cefa38f Email now allows relay from 172.18.0.1 with no auth 2016-09-27 10:28:20 -07:00
Girish Ramakrishnan 374f4be08f bump mail container version 2016-09-27 10:19:30 -07:00
Girish Ramakrishnan 3fc17d38a5 Merge reserved groups and usernames into one list
This is because now the mailbox names are shared
2016-09-27 07:48:44 -07:00
Johannes Zellner cfcf9f48cd Remove dead code 2016-09-27 13:17:31 +02:00
Johannes Zellner d26859acb4 Make it clear that the cli tool has to be run from the laptop
This is based on several self-hosters installing it on the server
2016-09-27 13:05:41 +02:00
Johannes Zellner adcdd45053 Specifically handle MX records for digitalocean to suit their api 2016-09-27 12:10:31 +02:00
Girish Ramakrishnan 33f803cd1c allow mailbox search by email 2016-09-26 21:03:07 -07:00
Girish Ramakrishnan 4856fc7de6 Fix mailAlias LDAP listing 2016-09-26 14:38:23 -07:00
Girish Ramakrishnan 9d9278b6f2 s/by/for 2016-09-26 14:02:23 -07:00
Girish Ramakrishnan 7d7de9e900 allow login via email cn to access mailbox 2016-09-26 12:03:37 -07:00
Girish Ramakrishnan 4a37747cfe authenticate mailbox based on owner 2016-09-26 11:55:16 -07:00
Girish Ramakrishnan 3e8cba08e3 add test for user alias routes 2016-09-26 11:12:12 -07:00
Girish Ramakrishnan 703e76ceb6 Check if there was an old username when deleting mailbox 2016-09-26 11:05:13 -07:00
Girish Ramakrishnan 577b509731 authorize logic is redundant
The authorization has to be done in the mail server. There is no
information on the ldap side to authorize.
2016-09-26 10:20:49 -07:00
Girish Ramakrishnan 3c9beb1add ldap: fix mailbox search and bind 2016-09-26 10:18:58 -07:00
Girish Ramakrishnan 46d8047599 fix ldapjs usage 2016-09-26 09:08:04 -07:00
Girish Ramakrishnan d39fa041bf update ldapjs 2016-09-26 09:04:02 -07:00
Johannes Zellner a7140412c4 Do not use userdb.get() directly in auth 2016-09-26 16:29:50 +02:00
Girish Ramakrishnan 3591452184 test that invalid alias cannot be set 2016-09-26 00:20:47 -07:00
Girish Ramakrishnan a8d57bb036 test that user.del removed mailbox and aliases 2016-09-26 00:18:45 -07:00
Girish Ramakrishnan d92e99a092 fix user alias API 2016-09-26 00:11:25 -07:00
Girish Ramakrishnan b40e740110 test if mailbox is updated with username change 2016-09-25 23:58:21 -07:00
Girish Ramakrishnan cd500adfe4 test that user.del deletes mailbox 2016-09-25 23:54:27 -07:00
Girish Ramakrishnan 55b80ac81f update mailbox on username change 2016-09-25 23:51:39 -07:00
Girish Ramakrishnan 1f1f56b431 Fix mailboxdb API 2016-09-25 23:21:55 -07:00
Girish Ramakrishnan baa2dbbf39 Add alias and list ldap routes 2016-09-25 21:34:52 -07:00
Girish Ramakrishnan 4b34f823a7 implement ldap mailbox get 2016-09-25 16:46:11 -07:00
Girish Ramakrishnan c158548c19 remove unused mailboxdb.getAll 2016-09-25 16:46:08 -07:00
Girish Ramakrishnan 8ce22c5656 ldap: remove unnecessary global 2016-09-25 16:11:54 -07:00
Girish Ramakrishnan e4e54d87f2 Fix angular code to match new mailbox aliases API 2016-09-23 17:55:21 -07:00
Girish Ramakrishnan 2b1a94dc8d Add mailboxdb.getByOwnerId 2016-09-23 17:35:48 -07:00
Girish Ramakrishnan afa352528f read send/recv config from mailbox database 2016-09-23 17:28:57 -07:00
Girish Ramakrishnan 6a32f89bf2 add/remove mailbox entry for app 2016-09-23 17:26:07 -07:00
Girish Ramakrishnan 49baad349c remove mailbox routes and move it to users 2016-09-23 15:45:40 -07:00
Girish Ramakrishnan 00ee2eea39 Remove code to push aliases
The mail-addon will query via LDAP
2016-09-23 15:14:07 -07:00
Girish Ramakrishnan 1d77c42269 Add ownerId to mailbox fields 2016-09-22 15:51:57 -07:00
Girish Ramakrishnan f24eee026e add ownerId, ownerType to mailboxes table
ownerId is the app id or user id or the group id.
2016-09-22 15:51:16 -07:00
Girish Ramakrishnan 5773f26548 doc: add note to delete the dummy record
if the record remains, then installs to the naked domain will fail.
this is because we do not overwrite existing DNS entries that we
did not create.
2016-09-22 09:59:43 -07:00
Girish Ramakrishnan 563b2a3042 Do not add dmarc record unless mail is enabled
the dmarc record depends on DKIM signing as well. if the
cloudron is not using the cloudron mail service, the mails
are not dkim signed and thus get rejected.
2016-09-22 09:52:25 -07:00
Girish Ramakrishnan 565b0e13c8 remove unused variable 2016-09-22 09:34:18 -07:00
Johannes Zellner b863f3f89d Be explicit what to show as the backup location 2016-09-22 16:14:26 +02:00
Johannes Zellner e3aeb4daf3 Allow selfhosters to trigger a backup manually 2016-09-22 16:10:28 +02:00
Johannes Zellner 6480975ea7 Show backup config for non caas or dev 2016-09-22 16:10:03 +02:00
Johannes Zellner 5ebddf7df6 Fetch backup config in settings view 2016-09-22 16:09:52 +02:00
Johannes Zellner 78367ea781 add getter and setter for backup config 2016-09-22 16:09:34 +02:00
Johannes Zellner 9bb4bf6eca Always set the current domain as the default 2016-09-22 15:30:58 +02:00
Johannes Zellner 54543aa536 Show provider specific settings in DNS settings dialog 2016-09-22 15:26:21 +02:00
Johannes Zellner cdc337862f Improve the reveal directive to be able to deal with changing values 2016-09-22 15:26:04 +02:00
Johannes Zellner 4d983f2a19 Click reveal the secret and token for dns provider 2016-09-22 14:56:15 +02:00
Johannes Zellner 80b70bf0a9 Add ng-click-reveal directive 2016-09-22 14:52:29 +02:00
Johannes Zellner 505f4de55d Only show AWS related dns settings if that provider is used 2016-09-22 14:23:43 +02:00
Johannes Zellner 4ee6a440fe Show provider in settings 2016-09-22 14:19:02 +02:00
Johannes Zellner 52ae3e24d0 Add link to change billing in settings view for caas 2016-09-22 14:01:47 +02:00
Girish Ramakrishnan 503a1d7229 reserve .app namespace for apps 2016-09-21 11:55:53 -07:00
Girish Ramakrishnan 9a000ddaf0 make ADMIN_GROUP_ID a constant 2016-09-20 15:07:11 -07:00
Girish Ramakrishnan 7fde57f7de clear db ignoring foreign key checks 2016-09-20 14:33:22 -07:00
Girish Ramakrishnan cf039b7964 Fix typo 2016-09-20 14:14:04 -07:00
Girish Ramakrishnan f552a8ac0d doc: cleanup 2016-09-20 11:33:20 -07:00
Johannes Zellner c38abaa1c3 Update the DigitalOcean selfhosting docs 2016-09-20 15:20:48 +02:00
Johannes Zellner 7b9eff94b3 No need to set always empty headers for app restore curl 2016-09-20 09:25:48 +02:00
Johannes Zellner 4a9a6dc232 Move backup config fetching into storage backend 2016-09-20 09:25:48 +02:00
Johannes Zellner 0bfc533e44 Fixup function naming 2016-09-20 09:25:48 +02:00
Johannes Zellner b937a86426 Download backups is GET 2016-09-20 09:25:48 +02:00
Johannes Zellner 6352064e6c Add backup download route if backend supports it 2016-09-20 09:25:48 +02:00
Johannes Zellner c9c1964e09 The storage backends don't need a backup listing function 2016-09-20 09:25:48 +02:00
Johannes Zellner 3ac786ba6d Define shell variable regardless of backend 2016-09-20 09:25:48 +02:00
Johannes Zellner e8be76f2e8 Fixup typos 2016-09-20 09:25:48 +02:00
Johannes Zellner 0ef9102b50 Set default backup folder to /var/backups 2016-09-20 09:25:48 +02:00
Johannes Zellner 746afb2b21 Shell obviously uses == not === 2016-09-20 09:25:48 +02:00
Johannes Zellner 02d1238853 filename is our backup id 2016-09-20 09:25:48 +02:00
Johannes Zellner d8de9555f2 Add storage interface definition 2016-09-20 09:25:48 +02:00
Johannes Zellner f348fedd50 Caas backend has to use the AWS credentials provided by appstore 2016-09-20 09:25:48 +02:00
Johannes Zellner 2a92d4772c Fix typo 2016-09-20 09:25:48 +02:00
Johannes Zellner fa828cc661 Basic backup listing for filesystem backend 2016-09-20 09:25:48 +02:00
Johannes Zellner 04b7822be5 Implement filesystem storage backend getRestoreUrl() 2016-09-20 09:25:48 +02:00
Johannes Zellner 1fd96a847f Implement filesystem storage backend copy 2016-09-20 09:25:48 +02:00
Johannes Zellner bf177473fe Rename getBackupDetails() -> getBoxBackupDetails() 2016-09-20 09:25:48 +02:00
Johannes Zellner 2ce768e29a Refactor getAppBackupCredentials() 2016-09-20 09:25:48 +02:00
Johannes Zellner 96c8f96c52 Group exports 2016-09-20 09:25:48 +02:00
Johannes Zellner 83ed87a8eb Refactor getBackupCredentials() 2016-09-20 09:25:48 +02:00
Johannes Zellner 5ac12452a1 Give MX records a priority on digitalocean 2016-09-20 09:25:48 +02:00
Johannes Zellner 6cecad89ec Remove a console.log 2016-09-20 09:25:48 +02:00
Johannes Zellner 6c23bce8e8 Prepare support for provider specific backup scripts 2016-09-20 09:25:48 +02:00
Johannes Zellner 73df6a8dd7 empty subdomain value is represented as @ in DO 2016-09-20 09:25:48 +02:00
Johannes Zellner be1cc76006 Also allow digitalocean dns settings to be changed 2016-09-20 09:25:48 +02:00
Johannes Zellner 528f71ab0f Support digitalocean dns backend for configured state 2016-09-20 09:25:48 +02:00
Johannes Zellner 6fa643049f Fix status code check 2016-09-20 09:25:48 +02:00
Johannes Zellner 835176ad75 Add support to update a domain in digitalocean 2016-09-20 09:25:48 +02:00
Johannes Zellner 56c272f34e Support digitalocean dns backend 2016-09-20 09:25:48 +02:00
Johannes Zellner 98bb7e3a1a Add initial digitalocean dns backend 2016-09-20 09:25:48 +02:00
Johannes Zellner 487fb23836 Add DNS interface description 2016-09-20 09:25:48 +02:00
Johannes Zellner cffc6d5fa5 Reorder dns backend exports 2016-09-20 09:25:48 +02:00
Johannes Zellner 1736d50260 Add filesystem storage backend only as noop currently 2016-09-20 09:25:48 +02:00
Girish Ramakrishnan 982caee380 doc: hotfixing 2016-09-19 21:30:37 -07:00
Girish Ramakrishnan 3cd7f47fbb doc: enabling email 2016-09-19 15:26:47 -07:00
Girish Ramakrishnan f5e71233c1 doc: customAuth 2016-09-19 15:11:38 -07:00
Girish Ramakrishnan 679c8a7d09 Fix all usages of ldap.parseFilter
Part of #56
2016-09-19 13:53:48 -07:00
Girish Ramakrishnan 402c875874 ldap : Fix crash with invalid queries
Fixes #56
2016-09-19 13:40:26 -07:00
Girish Ramakrishnan 5333311a35 setup dmarc record for custom domains
See http://www.zytrax.com/books/dns/ch9/dmarc.html for more info

Fixes #55
2016-09-19 10:56:51 -07:00
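For reference, the DMARC record from the commit above is a TXT record on the `_dmarc` subdomain of the custom domain; a minimal example (the policy values here are illustrative, not necessarily what the box publishes):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

DMARC tells receiving servers what to do with mail that fails SPF/DKIM alignment, which is why it only makes sense when the Cloudron's own mail service is doing the DKIM signing.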
Girish Ramakrishnan e2a22c3a5e collect more docker logs for IP mapping 2016-09-16 22:10:33 -07:00
Johannes Zellner f251d4e511 Add changes for 0.20.3 2016-09-16 11:38:47 +02:00
Girish Ramakrishnan c39c1b9b51 remove jshint 2016-09-15 23:15:06 -07:00
Girish Ramakrishnan 28c8aa3222 Do not use Floating IP
We do not use a floating IP for 3 reasons:
1. The PTR record is not set to floating IP.
2. The outbound interface is not changeable to floating IP.
3. There are reports that port 25 on the floating IP is blocked.
2016-09-15 22:14:21 -07:00
Girish Ramakrishnan 056b3dcb56 doc: add note on marking spam 2016-09-15 13:19:31 -07:00
Girish Ramakrishnan 9465c24c33 doc: add forwarding address section 2016-09-15 13:14:19 -07:00
Girish Ramakrishnan f62bed5898 our md converter does not like brackets 2016-09-15 13:07:58 -07:00
Girish Ramakrishnan 9b49c7ada7 Fix linter warnings 2016-09-15 12:41:50 -07:00
Girish Ramakrishnan a40abaf1a0 do not crash if the service was never started
fixes #51
2016-09-15 11:54:20 -07:00
Girish Ramakrishnan 7f2eadcd4e All apps have moved to 0.9.0 2016-09-14 20:59:28 -07:00
Johannes Zellner c839e119b1 remove EC2 base image creation script 2016-09-14 14:34:59 +02:00
Johannes Zellner 4a2e5ddc12 Add initial documentation for digitalocean selfhosting 2016-09-14 13:34:46 +02:00
Girish Ramakrishnan c10302f146 Preserve the isDemo flag across updates 2016-09-13 18:33:21 -07:00
Girish Ramakrishnan 8ef8f08b28 Take into account the configure memory limit 2016-09-13 18:05:38 -07:00
Girish Ramakrishnan 2ae4f76af5 x 2016-09-13 18:01:10 -07:00
226 changed files with 15410 additions and 9278 deletions
+1 -2
View File
@@ -1,7 +1,6 @@
# following files are skipped when exporting using git archive
/release export-ignore
/admin export-ignore
test export-ignore
docs export-ignore
.gitattributes export-ignore
.gitignore export-ignore
+113
View File
@@ -624,3 +624,116 @@
* Save user certs separately from automatic certs (#44)
* Fix access control display for email apps (#45)
[0.20.3]
* Make DigitalOcean selfhosting independent
[0.21.0]
* Delivery of email to aliases is now case insensitive (#35)
* Mailing list support via Groups (#15)
* Fix issue where non-admin users could not update their profile
[0.21.1]
* Fix app clone error (mailbox was not allocated)
* Do not allow "-" in group names
[0.22.0]
* Rebuild server instances instead of recreating
[0.50.0]
* Add UI to configure backup location
* Add DNS backend to make it easy to run on any server with SSH access
* Update wildcard certificate
* Fix crash in mail container with SPF plugin
* Fix postgresql addon to restore correctly
* Periodically cleanup file system backups
* Improve invitation emails
* Fix bug where mailbox name was generated incorrectly for naked domain (#81)
[0.60.0]
* Implement new approach to selfhosting. `cloudron machine create` is now deprecated.
Please see the [selfhosting guide](https://cloudron.io/references/selfhosting.html)
for more details
* Send email to admins if backup fails
* Add UI to set digitalocean as DNS provider
[0.60.1]
* Apply less strict hostname checking for email
* Fix bug in Cloudron plan listing
* Improved storage provider interface
[0.70.0]
* Remove standalone installer daemon
[0.70.1]
* Add additional platform healthcheck
[0.80.0]
* Add optional SSO for apps
* Improve app status page
* Several web interface improvements
[0.80.1]
* Improved DNS handling
* Better error messages in UI
[0.90.0]
* Remove customAuth support
* Support non-AWS S3 object storage
* Settings UI improvements
[0.91.0]
* Support installing Cloudron on intranet and VirtualBox
* Fix bug where relocating an app did not free the old location
* Allow Email server to be enabled with wildcard DNS
[0.92.0]
* Backup encryption key is now optional
* Fix bug where DNS mail record warning was shown by mistake
* Make cloudron-setup finish with `manual` DNS provider
[0.92.1]
* Remove DO specific grub cmd line
* Fix License text
[0.93.0]
* Smoother upgrades
[0.94.0]
* Cloudron domain can now be set after installation
* Backups are now organized by directory
* Document upgrading from Filesystem backend
* Send certificate renewal errors, OOM errors to cloudron admins
* Email bounce alerts are sent to the Cloudron owner
[0.94.1]
* Suppress upgrade emails
* Enable unattended upgrades
* Standardize on using devicemapper for docker storage backend
* Show detailed backup progress
* Fix DNSBL issue in mail container
* Fix issue where bounce emails were not sent to aliases
* Remove tutorial
* Restart mail container on certificate change
[0.97.0]
* Fix missing app icon issue
* Fix issue where box sends out crash reports incessantly
* (API) Allow memory limit to be set to -1 (unlimited)
* (API) Move developmentMode flag from manifest to apps route
[0.98.0]
* Send stat on whether email is enabled
* Fix bug where heartbeat was sent for self-hosted Cloudrons
* Make Cloudron function even when disk is full
* Fix thunderbird connection issue
* Send more detailed logs for backup failures
* Automatically restart nginx if it crashes
* Support all DNS providers for managed Cloudrons
* Add granular configuration for auto-updates
[0.99.0]
* Fix bug where ports <= 1023 were not reserved
* Cleanup graphs UI
* Polish webadmin UI
* Fix bug where hard disk size was detected incorrectly
* Use overlay2 as docker storage backend for scaleway
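The docker storage-driver entries in the changelog above are applied through a systemd drop-in file (the same pattern appears later in this diff, echoing into `/etc/systemd/system/docker.service.d/cloudron.conf`). A minimal runnable sketch of that pattern — it writes to a temp dir here so it can run without root; the real path is the drop-in directory under `/etc`:

```shell
# Sketch of a systemd drop-in overriding docker's storage driver.
# A temp dir stands in for /etc/systemd/system/docker.service.d/.
dropin_dir="$(mktemp -d)"
cat > "${dropin_dir}/cloudron.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --log-driver=journald --storage-driver=overlay2
EOF
# the empty ExecStart= clears the packaged unit's command before restating it
grep -c '^ExecStart=' "${dropin_dir}/cloudron.conf"
```

Drop-ins survive package updates, unlike editing the shipped unit file; after writing one for real, `systemctl daemon-reload` is required.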
+1 -1
View File
@@ -630,7 +630,7 @@ state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
box
Copyright (C) 2016 yellowtent
Copyright (C) 2016 Cloudron UG
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
+4
View File
@@ -9,6 +9,10 @@ a complex task.
We are building the ultimate platform for self-hosting web apps. The Cloudron allows
anyone to effortlessly host web applications on their server on their own terms.
Support us on
[![Flattr Cloudron](https://button.flattr.com/flattr-badge-large.png)](https://flattr.com/submit/auto?user_id=cloudron&url=https://cloudron.io&title=Cloudron&tags=opensource&category=software)
or [pay us a coffee](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=8982CKNM46D8U)
## Features
* Single click install for apps. Check out the [App Store](https://cloudron.io/appstore.html).
BIN
View File
Binary image changed (5.5 KiB before, 14 KiB after).
+165
View File
@@ -0,0 +1,165 @@
#!/bin/bash
set -eu -o pipefail
assertNotEmpty() {
: "${!1:? "$1 is not set."}"
}
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"
IMAGE_ID="ami-5aee2235" # ubuntu 16.04 eu-central-1
INSTANCE_TYPE="t2.micro"
SECURITY_GROUP="sg-19f5a770" # everything open on eu-central-1
BLOCK_DEVICE="DeviceName=/dev/sda1,Ebs={VolumeSize=20,DeleteOnTermination=true,VolumeType=gp2}"
SSH_KEY_NAME="id_rsa_yellowtent"
revision=$(git rev-parse HEAD)
ami_name=""
server_id=""
server_ip=""
destroy_server="yes"
deploy_env="prod"
args=$(getopt -o "" -l "revision:,name:,no-destroy,env:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--env) deploy_env="$2"; shift 2;;
--revision) revision="$2"; shift 2;;
--name) ami_name="$2"; shift 2;;
--no-destroy) destroy_server="no"; shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
export AWS_DEFAULT_REGION="eu-central-1" # we have to use us-east-1 to publish
# TODO fix this
export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY}"
export AWS_SECRET_ACCESS_KEY="${AWS_ACCESS_SECRET}"
echo "=> Creating AMI"
readonly ssh_keys="${HOME}/.ssh/id_rsa_yellowtent"
readonly SSH="ssh -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
if [[ ! -f "${ssh_keys}" ]]; then
echo "caas ssh key is missing at ${ssh_keys} (pick it up from secrets repo)"
exit 1
fi
function get_pretty_revision() {
local git_rev="$1"
local sha1=$(git rev-parse --short "${git_rev}" 2>/dev/null)
echo "${sha1}"
}
now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${revision}")
if [[ -z "${ami_name}" ]]; then
# if you change this, change the regexp in appstore/janitor.js
ami_name="box-${deploy_env}-${pretty_revision}-${now}" # remove slashes
fi
echo "=> Create EC2 instance"
id=$(aws ec2 run-instances --image-id "${IMAGE_ID}" --instance-type "${INSTANCE_TYPE}" --security-group-ids "${SECURITY_GROUP}" --block-device-mappings "${BLOCK_DEVICE}" --key-name "${SSH_KEY_NAME}"\
| $JSON Instances \
| $JSON 0.InstanceId)
[[ -z "$id" ]] && exit 1
echo "Instance created with ID $id"
echo "=> Waiting for instance to get a public IP"
while true; do
server_ip=$(aws ec2 describe-instances --instance-ids ${id} \
| $JSON Reservations.0.Instances \
| $JSON 0.PublicIpAddress)
if [[ ! -z "${server_ip}" ]]; then
echo ""
break
fi
echo -n "."
sleep 1
done
echo "Got public IP ${server_ip}"
echo "=> Waiting for ssh connection"
while true; do
echo -n "."
if $SSH ubuntu@${server_ip} echo "hello"; then
echo ""
break
fi
sleep 5
done
echo "=> Fetching cloudron-setup"
while true; do
if $SSH ubuntu@${server_ip} wget "https://cloudron.io/cloudron-setup" -O "cloudron-setup"; then
echo ""
break
fi
echo -n "."
sleep 5
done
echo "=> Running cloudron-setup"
$SSH ubuntu@${server_ip} sudo /bin/bash "cloudron-setup" --env "${deploy_env}" --provider "ec2"
echo "=> Creating AMI"
image_id=$(aws ec2 create-image --instance-id "${id}" --name "${ami_name}" | $JSON ImageId)
[[ -z "$image_id" ]] && exit 1
echo "Creating AMI with Id ${image_id}"
echo "=> Waiting for AMI to be created"
while true; do
state=$(aws ec2 describe-images --image-ids ${image_id} \
| $JSON Images \
| $JSON 0.State)
if [[ "${state}" == "available" ]]; then
echo ""
break
fi
echo -n "."
sleep 5
done
if [[ "${destroy_server}" == "yes" ]]; then
echo "=> Deleting EC2 instance"
while true; do
state=$(aws ec2 terminate-instances --instance-id "${id}" \
| $JSON TerminatingInstances \
| $JSON 0.CurrentState.Name)
if [[ "${state}" == "shutting-down" ]]; then
echo ""
break
fi
echo -n "."
sleep 5
done
fi
echo ""
echo "Done."
echo ""
echo "New AMI is: ${image_id}"
echo ""
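The `assertNotEmpty` helper at the top of this script relies on bash indirect parameter expansion: `${!1}` dereferences the variable whose *name* is passed in `$1`, and the `:?` operator aborts with a message when that variable is unset or empty. A standalone sketch of the idiom (the key name is hypothetical):

```shell
# Bash indirect expansion with the :? abort operator, as used by assertNotEmpty.
assertNotEmpty() {
    : "${!1:? "$1 is not set."}"
}
AWS_ACCESS_KEY="dummy-key"        # hypothetical value for illustration
assertNotEmpty AWS_ACCESS_KEY     # passes silently
# run the failing case in a subshell so the abort does not kill this script
( assertNotEmpty NO_SUCH_VAR ) 2>/dev/null && result="passed" || result="aborted"
echo "${result}"
```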
+5 -5
View File
@@ -10,7 +10,7 @@ readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"
installer_revision=$(git rev-parse HEAD)
revision=$(git rev-parse HEAD)
box_name=""
server_id=""
server_ip=""
@@ -28,7 +28,7 @@ eval set -- "${args}"
while true; do
case "$1" in
--env) deploy_env="$2"; shift 2;;
--revision) installer_revision="$2"; shift 2;;
--revision) revision="$2"; shift 2;;
--name) box_name="$2"; destroy_server="no"; shift 2;;
--no-destroy) destroy_server="no"; shift 2;;
--) break;;
@@ -73,7 +73,7 @@ function get_pretty_revision() {
}
now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${installer_revision}")
pretty_revision=$(get_pretty_revision "${revision}")
if [[ -z "${box_name}" ]]; then
# if you change this, change the regexp in appstore/janitor.js
@@ -138,13 +138,13 @@ cd "${SOURCE_DIR}"
git archive --format=tar HEAD | $ssh22 "root@${server_ip}" "cat - > /tmp/box.tar.gz"
echo "Executing init script"
if ! $ssh22 "root@${server_ip}" "/bin/bash /root/initializeBaseUbuntuImage.sh ${installer_revision} caas"; then
if ! $ssh22 "root@${server_ip}" "/bin/bash /root/initializeBaseUbuntuImage.sh caas"; then
echo "Init script failed"
exit 1
fi
echo "Shutting down server with id : ${server_id}"
$ssh202 "root@${server_ip}" "shutdown -f now" || true # shutdown sometimes terminates ssh connection immediately making this command fail
$ssh22 "root@${server_ip}" "shutdown -f now" || true # shutdown sometimes terminates ssh connection immediately making this command fail
# wait 10 secs for actual shutdown
echo "Waiting for 10 seconds for server to shutdown"
-192
View File
@@ -1,192 +0,0 @@
#!/bin/bash
set -eu -o pipefail
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
export JSON="${SOURCE_DIR}/node_modules/.bin/json"
installer_revision=$(git rev-parse HEAD)
instance_id=""
server_ip=""
destroy_server="yes"
ami_id="ami-f9e30f96"
region="eu-central-1"
aws_credentials="baseimage"
security_group="sg-b9a473d1"
instance_type="t2.small"
subnet_id="subnet-801402e9"
key_pair_name="id_rsa_yellowtent"
# Only GNU getopt supports long options. OS X comes bundled with the BSD getopt
# brew install gnu-getopt to get the GNU getopt on OS X
[[ $(uname -s) == "Darwin" ]] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
args=$(${GNU_GETOPT} -o "" -l "revision:,no-destroy" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--revision) installer_revision="$2"; shift 2;;
--no-destroy) destroy_server="no"; shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
readonly ssh_keys="${HOME}/.ssh/id_rsa_yellowtent"
readonly scp202="scp -P 202 -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly scp22="scp -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly ssh202="ssh -p 202 -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
readonly ssh22="ssh -o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${ssh_keys}"
if [[ ! -f "${ssh_keys}" ]]; then
echo "caas ssh key is missing at ${ssh_keys} (pick it up from secrets repo)"
exit 1
fi
function debug() {
echo "$@" >&2
}
function get_pretty_revision() {
local git_rev="$1"
local sha1=$(git rev-parse --short "${git_rev}" 2>/dev/null)
echo "${sha1}"
}
now=$(date "+%Y-%m-%d-%H%M%S")
pretty_revision=$(get_pretty_revision "${installer_revision}")
echo "Creating EC2 instance"
instance_id=$(aws ec2 run-instances --image-id ${ami_id} --region ${region} --profile ${aws_credentials} --security-group-ids ${security_group} --instance-type ${instance_type} --key-name ${key_pair_name} --subnet-id ${subnet_id} --associate-public-ip-address | $JSON Instances[0].InstanceId)
echo "Got InstanceId: ${instance_id}"
# name the instance
aws ec2 create-tags --profile ${aws_credentials} --resources ${instance_id} --tags "Key=Name,Value=baseimage-${pretty_revision}"
echo "Waiting for instance to be running..."
while true; do
event_status=`aws ec2 describe-instances --instance-id ${instance_id} --region ${region} --profile ${aws_credentials} | $JSON Reservations[0].Instances[0].State.Name`
if [[ "${event_status}" == "running" ]]; then
break
fi
debug -n "."
sleep 10
done
server_ip=$(aws ec2 describe-instances --instance-id ${instance_id} --region ${region} --profile ${aws_credentials} | $JSON Reservations[0].Instances[0].PublicIpAddress)
echo "Server IP is: ${server_ip}"
while true; do
echo "Trying to copy init script to server"
if $scp22 "${SCRIPT_DIR}/initializeBaseUbuntuImage.sh" ubuntu@${server_ip}:.; then
break
fi
echo "Timed out, trying again in 30 seconds"
sleep 30
done
echo "Copying infra_version.js"
$scp22 "${SCRIPT_DIR}/../src/infra_version.js" ubuntu@${server_ip}:.
echo "Copying box source"
cd "${SOURCE_DIR}"
git archive --format=tar HEAD | $ssh22 "ubuntu@${server_ip}" "cat - > /tmp/box.tar.gz"
echo "Enabling root ssh access"
if ! $ssh22 "ubuntu@${server_ip}" "sudo sed -e 's/.* \(ssh-rsa.*\)/\1/' -i /root/.ssh/authorized_keys"; then
echo "Unable to enable root access"
echo "Make sure to cleanup the ec2 instance ${instance_id}"
exit 1
fi
echo "Executing init script"
if ! $ssh22 "root@${server_ip}" "/bin/bash /home/ubuntu/initializeBaseUbuntuImage.sh ${installer_revision} ec2"; then
echo "Init script failed"
echo "Make sure to cleanup the ec2 instance ${instance_id}"
exit 1
fi
echo "Strip ssh key"
if ! $ssh202 "root@${server_ip}" "rm /root/.ssh/authorized_keys"; then
echo "Unable to remove ssh access"
echo "Make sure to cleanup the ec2 instance ${instance_id}"
exit 1
fi
snapshot_name="cloudron-${pretty_revision}-${now}"
echo "Creating ami image ${snapshot_name}"
image_id=$(aws ec2 create-image --region ${region} --profile ${aws_credentials} --instance-id ${instance_id} --name ${snapshot_name} | $JSON ImageId)
echo "Image creation started for image id: ${image_id}"
echo "Waiting for image creation to finish..."
while true; do
event_status=`aws ec2 describe-images --region ${region} --profile ${aws_credentials} --image-id ${image_id} | $JSON Images[0].State`
if [[ "${event_status}" == "available" ]]; then
break
fi
debug -n "."
sleep 10
done
echo "Terminating instance"
aws ec2 terminate-instances --region ${region} --profile ${aws_credentials} --instance-ids ${instance_id}
echo "Make image public"
aws ec2 modify-image-attribute --region ${region} --profile ${aws_credentials} --image-id ${image_id} --launch-permission "{\"Add\":[{\"Group\":\"all\"}]}"
# http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region
# Images are currently created in eu-central-1
echo "Copying image to other regions"
ec2_regions=( "us-east-1" "us-west-1" "us-west-2" "ap-south-1" "ap-northeast-2" "ap-southeast-1" "ap-southeast-2" "ap-northeast-1" "eu-west-1" "sa-east-1" )
ec2_amis=( )
for r in ${ec2_regions[@]}; do
echo "=> ${r}"
ami_id=$(aws ec2 copy-image --region ${r} --profile ${aws_credentials} --source-image-id ${image_id} --source-region ${region} --name ${snapshot_name} | $JSON ImageId)
# append in the same order as the regions
ec2_amis+=( ${ami_id} )
done
# wait for all images to be available
echo "Waiting for images to be ready (first will take the longest)..."
region_string="${region}=${image_id}"
i=0
while [ $i -lt ${#ec2_regions[*]} ]; do
echo "=> ${ec2_regions[$i]} ${ec2_amis[$i]}"
while true; do
event_status=`aws ec2 describe-images --region ${ec2_regions[$i]} --profile ${aws_credentials} --image-id ${ec2_amis[$i]} | $JSON Images[0].State`
if [[ "${event_status}" == "available" ]]; then
echo "done"
break
fi
debug -n "."
sleep 10
done
# now make it public
aws ec2 modify-image-attribute --region ${ec2_regions[$i]} --profile ${aws_credentials} --image-id ${ec2_amis[$i]} --launch-permission "{\"Add\":[{\"Group\":\"all\"}]}"
# append to output string for release tool
region_string+=",${ec2_regions[$i]}=${ec2_amis[$i]}"
# inc the iteration counter
i=$(( $i + 1));
done
echo ""
echo "--------------------------------------------------"
echo "New image id is: ${image_id}"
echo "Image region string for release:"
echo "${region_string}"
echo "--------------------------------------------------"
echo ""
+61 -269
View File
@@ -2,301 +2,93 @@
set -euv -o pipefail
readonly USER=yellowtent
readonly USER_HOME="/home/${USER}"
readonly INSTALLER_SOURCE_DIR="${USER_HOME}/installer"
readonly INSTALLER_REVISION="$1"
readonly PROVIDER="$2"
readonly USER_DATA_FILE="/root/user_data.img"
readonly USER_DATA_DIR="/home/yellowtent/data"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly arg_provider="${1:-generic}"
readonly arg_infraversionpath="${SOURCE_DIR}/${2:-}"
function die {
echo "$1"
exit 1
}
[[ "$(systemd --version 2>&1)" == *"systemd 229"* ]] || die "Expecting systemd to be 229"
echo "==== Create User ${USER} ===="
if ! id "${USER}"; then
useradd "${USER}" -m
fi
echo "=== Yellowtent base image preparation (installer revision - ${INSTALLER_REVISION}) ==="
echo "=== Prepare installer source ==="
rm -rf "${INSTALLER_SOURCE_DIR}" && mkdir -p "${INSTALLER_SOURCE_DIR}"
rm -rf /tmp/box && mkdir -p /tmp/box
tar xvf /tmp/box.tar.gz -C /tmp/box && rm /tmp/box.tar.gz
cp -rf /tmp/box/installer/* "${INSTALLER_SOURCE_DIR}"
echo "${INSTALLER_REVISION}" > "${INSTALLER_SOURCE_DIR}/REVISION"
export DEBIAN_FRONTEND=noninteractive
echo "=== Upgrade ==="
apt-get update
apt-get dist-upgrade -y
apt-get install -y curl
apt-get -o Dpkg::Options::="--force-confdef" update -y
apt-get -o Dpkg::Options::="--force-confdef" dist-upgrade -y
# Setup firewall before everything. docker creates its own chain and the -X below will remove it
# Do NOT use iptables-persistent because it's startup ordering conflicts with docker
echo "=== Setting up firewall ==="
# clear tables and set default policy
iptables -F # flush all chains
iptables -X # delete all chains
# default policy for filter table
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT # TODO: disable icc and make this as reject
iptables -P OUTPUT ACCEPT
echo "==> Installing required packages"
# NOTE: keep these in sync with src/apps.js validatePortBindings
# allow ssh, http, https, ping, dns
iptables -I INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -m tcp -m multiport --dports 25,80,202,443,587,993,4190 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
iptables -A INPUT -p udp --sport 53 -j ACCEPT
iptables -A INPUT -s 172.18.0.0/16 -j ACCEPT # required to accept any connections from apps to our IP:<public port>
debconf-set-selections <<< 'mysql-server mysql-server/root_password password password'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password password'
# loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# this enables automatic security upgrades (https://help.ubuntu.com/community/AutomaticSecurityUpdates)
apt-get -y install \
acl \
awscli \
btrfs-tools \
build-essential \
cron \
curl \
iptables \
logrotate \
mysql-server-5.7 \
nginx-full \
openssh-server \
pwgen \
rcconf \
swaks \
unattended-upgrades \
unbound
# prevent DoS
# iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
# log dropped incoming. keep this at the end of all the rules
iptables -N LOGGING # new chain
iptables -A INPUT -j LOGGING # last rule in INPUT chain
iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
iptables -A LOGGING -j DROP
echo "==== Install btrfs tools ==="
apt-get -y install btrfs-tools
echo "==== Install docker ===="
# install docker from binary to pin it to a specific version. the current debian repo does not allow pinning
# IMPORTANT: docker 1.11.x breaks the --dns option hack that we use below
curl https://get.docker.com/builds/Linux/x86_64/docker-1.10.2 > /usr/bin/docker
apt-get -y install aufs-tools
chmod +x /usr/bin/docker
groupadd docker
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API
PartOf=docker.service
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
After=network.target docker.socket
Requires=docker.socket
[Service]
ExecStart=/usr/bin/docker daemon -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --dns 127.0.0.1
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
echo "=== Setup btrfs data ==="
truncate -s "8192m" "${USER_DATA_FILE}" # 8gb start (this will get resized dynamically by box-setup.service)
mkfs.btrfs -L UserHome "${USER_DATA_FILE}"
mkdir -p "${USER_DATA_DIR}"
mount -t btrfs -o loop,nosuid "${USER_DATA_FILE}" ${USER_DATA_DIR}
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# give docker some time to start up and create iptables rules
# those rules come in after docker has started, and we want to wait for them to be sure iptables-save has all of them
sleep 10
# Disable forwarding to metadata route from containers
iptables -I FORWARD -d 169.254.169.254 -j DROP
# ubuntu will restore iptables from this file automatically. this is here so that docker's chain is saved to this file
mkdir /etc/iptables && iptables-save > /etc/iptables/rules.v4
echo "=== Enable memory accounting =="
if [[ "${PROVIDER}" == "digitalocean" ]] || [[ "${PROVIDER}" == "caas" ]]; then
sed -e 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="console=tty1 root=LABEL=DOROOT notsc clocksource=kvm-clock net.ifnames=0 cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
else
sed -e 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
fi
update-grub
# now add the user to the docker group
usermod "${USER}" -a -G docker
echo "==== Install nodejs ===="
# Cannot use anything above 4.1.1 - https://github.com/nodejs/node/issues/3803
mkdir -p /usr/local/node-4.1.1
curl -sL https://nodejs.org/dist/v4.1.1/node-v4.1.1-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-4.1.1
ln -s /usr/local/node-4.1.1/bin/node /usr/bin/node
ln -s /usr/local/node-4.1.1/bin/npm /usr/bin/npm
echo "==> Installing node.js"
mkdir -p /usr/local/node-6.9.2
curl -sL https://nodejs.org/dist/v6.9.2/node-v6.9.2-linux-x64.tar.gz | tar zxvf - --strip-components=1 -C /usr/local/node-6.9.2
ln -sf /usr/local/node-6.9.2/bin/node /usr/bin/node
ln -sf /usr/local/node-6.9.2/bin/npm /usr/bin/npm
apt-get install -y python # Install python which is required for npm rebuild
[[ "$(python --version 2>&1)" == "Python 2.7."* ]] || die "Expecting python version to be 2.7.x"
echo "==== Downloading docker images ===="
if [ -f ${SOURCE_DIR}/infra_version.js ]; then
images=$(node -e "var i = require('${SOURCE_DIR}/infra_version.js'); console.log(i.baseImages.join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")
# https://docs.docker.com/engine/installation/linux/ubuntulinux/
echo "==> Installing Docker"
apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" > /etc/apt/sources.list.d/docker.list
apt-get -y update
echo "Pulling images: ${images}"
for image in ${images}; do
docker pull "${image}"
done
else
echo "No infra_version.js found, skipping image download"
# create systemd drop-in file
mkdir -p /etc/systemd/system/docker.service.d
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/docker daemon -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=devicemapper" > /etc/systemd/system/docker.service.d/cloudron.conf
apt-get -y --allow-downgrades install docker-engine=1.12.5-0~ubuntu-xenial # apt-cache madison docker-engine
apt-mark hold docker-engine # do not update docker
storage_driver=$(docker info | grep "Storage Driver" | sed 's/.*: //')
if [[ "${storage_driver}" != "devicemapper" ]]; then
echo "Docker is using ${storage_driver} instead of devicemapper"
exit 1
fi
echo "==== Install nginx ===="
apt-get -y install nginx-full
[[ "$(nginx -v 2>&1)" == *"nginx/1.10."* ]] || die "Expecting nginx version to be 1.10.x"
echo "==> Enable memory accounting"
apt-get -y install grub2
sed -e 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 panic_on_oops=1 panic=5"/' -i /etc/default/grub
update-grub
echo "==== Install build-essential ===="
apt-get -y install build-essential rcconf
echo "==> Downloading docker images"
if [ ! -f "${arg_infraversionpath}/infra_version.js" ]; then
echo "No infra_version.js found"
exit 1
fi
echo "==== Install mysql ===="
debconf-set-selections <<< 'mysql-server mysql-server/root_password password password'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password password'
apt-get -y install mysql-server-5.7
[[ "$(mysqld --version 2>&1)" == *"5.7."* ]] || die "Expecting mysql version to be 5.7.x"
images=$(node -e "var i = require('${arg_infraversionpath}/infra_version.js'); console.log(i.baseImages.join(' '), Object.keys(i.images).map(function (x) { return i.images[x].tag; }).join(' '));")
echo "==== Install pwgen and swaks awscli ===="
apt-get -y install pwgen swaks awscli
echo -e "\tPulling docker images: ${images}"
for image in ${images}; do
docker pull "${image}"
done
echo "==== Install collectd ==="
echo "==> Install collectd"
if ! apt-get install -y collectd collectd-utils; then
# FQDNLookup is true in default debian config. The box code has a custom collectd.conf that fixes this
echo "Failed to install collectd. Presumably because of http://mailman.verplant.org/pipermail/collectd/2015-March/006491.html"
sed -e 's/^FQDNLookup true/FQDNLookup false/' -i /etc/collectd/collectd.conf
fi
update-rc.d -f collectd remove
# this simply makes it explicit that we run logrotate via cron. it's already part of base ubuntu
echo "==== Install logrotate ==="
apt-get install -y cron logrotate
systemctl enable cron
echo "=== Rebuilding npm packages ==="
cd "${INSTALLER_SOURCE_DIR}" && while ! npm install --production; do sleep 1; done
chown "${USER}:${USER}" -R "${INSTALLER_SOURCE_DIR}"
echo "==== Install installer systemd script ===="
cat > /etc/systemd/system/cloudron-installer.service <<EOF
[Unit]
Description=Cloudron Installer
; journald crashes result in an EPIPE in node. Cannot ignore it as it results in loss of logs.
BindsTo=systemd-journald.service
After=box-setup.service
[Service]
Type=idle
ExecStart="${INSTALLER_SOURCE_DIR}/src/server.js"
Environment="DEBUG=installer*,connect-lastmile"
; kill any child (installer.sh) as well
KillMode=control-group
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Restore iptables before docker
echo "==== Install iptables-restore systemd script ===="
cat > /etc/systemd/system/iptables-restore.service <<EOF
[Unit]
Description=IPTables Restore
Before=docker.service
[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
# Allocate swap files
# https://bbs.archlinux.org/viewtopic.php?id=194792 ensures this runs after do-resize.service
# On ubuntu ec2 we use cloud-init https://wiki.archlinux.org/index.php/Cloud-init
echo "==== Install box-setup systemd script ===="
cat > /etc/systemd/system/box-setup.service <<EOF
[Unit]
Description=Box Setup
Before=docker.service collectd.service mysql.service sshd.service nginx.service
After=cloud-init.service
[Service]
Type=oneshot
ExecStart="${INSTALLER_SOURCE_DIR}/systemd/box-setup.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable cloudron-installer
systemctl enable iptables-restore
systemctl enable box-setup
# Configure systemd
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
-i /etc/systemd/journald.conf
# When rotating logs, systemd kills journald too soon sometimes
# See https://github.com/systemd/systemd/issues/1353 (this is upstream default)
sed -e "s/^WatchdogSec=.*$/WatchdogSec=3min/" \
-i /lib/systemd/system/systemd-journald.service
sync
# Configure time
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
timedatectl set-ntp 1
timedatectl set-timezone UTC
# Give user access to system logs
apt-get -y install acl
usermod -a -G systemd-journal ${USER}
mkdir -p /var/log/journal # in some images, this directory is not created, making the system log to /run/systemd instead
chown root:systemd-journal /var/log/journal
systemctl restart systemd-journald
setfacl -n -m u:${USER}:r /var/log/journal/*/system.journal
echo "==== Install ssh ==="
apt-get -y install openssh-server
# https://stackoverflow.com/questions/4348166/using-with-sed on why ? must be escaped
sed -e 's/^#\?Port .*/Port 202/g' \
-e 's/^#\?PermitRootLogin .*/PermitRootLogin without-password/g' \
-e 's/^#\?PermitEmptyPasswords .*/PermitEmptyPasswords no/g' \
-e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/g' \
-i /etc/ssh/sshd_config
# DO uses Google nameservers by default. This causes RBL queries to fail (host 2.0.0.127.zen.spamhaus.org)
# We do not use dnsmasq because it is not a recursive resolver and defaults to the value in the interfaces file (which is Google DNS!)
echo "==== Install unbound DNS ==="
apt-get -y install unbound
# required so we can connect to this machine since port 22 is blocked by iptables by now
systemctl reload sshd
+3 -7
View File
@@ -5,16 +5,14 @@
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
require('debug').formatArgs = function formatArgs() {
arguments[0] = this.namespace + ' ' + arguments[0];
return arguments;
require('debug').formatArgs = function formatArgs(args) {
args[0] = this.namespace + ' ' + args[0];
};
var appHealthMonitor = require('./src/apphealthmonitor.js'),
async = require('async'),
config = require('./src/config.js'),
ldap = require('./src/ldap.js'),
oauthproxy = require('./src/oauthproxy.js'),
server = require('./src/server.js'),
simpleauth = require('./src/simpleauth.js');
@@ -37,12 +35,12 @@ async.series([
ldap.start,
simpleauth.start,
appHealthMonitor.start,
oauthproxy.start
], function (error) {
if (error) {
console.error('Error starting server', error);
process.exit(1);
}
console.log('Cloudron is up and running');
});
var NOOP_CALLBACK = function () { };
@@ -51,7 +49,6 @@ process.on('SIGINT', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
simpleauth.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
@@ -59,6 +56,5 @@ process.on('SIGTERM', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
simpleauth.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
Binary file not shown (image added, 132 KiB)
Binary file not shown (image added, 71 KiB)
Binary file not shown (image added, 16 KiB)
Binary file not shown (image added, 36 KiB)
+17 -19
@@ -1,6 +1,4 @@
# Addons
## Overview
# Overview
Addons are services like database, authentication, email, caching that are part of the
Cloudron runtime. Setup, provisioning, scaling and maintenance of addons is taken care of
@@ -10,7 +8,7 @@ The fundamental idea behind addons is to allow sharing of Cloudron resources acr
For example, a single MySQL server instance can be used across multiple apps. The Cloudron
runtime sets up addons in such a way that apps are isolated from each other.
## Using Addons
# Using Addons
Addons are opt-in and must be specified in the [Cloudron Manifest](/references/manifest.html).
When the app runs, environment variables contain the necessary information to access the addon.
@@ -36,9 +34,9 @@ for this purpose to setup and update the DB schema.
}
```
## All addons
# All addons
### email
## email
This addon allows an app to send and receive emails on behalf of the user. The intended use case is webmail applications.
@@ -60,7 +58,7 @@ MAIL_SIEVE_PORT= # ManageSieve server port
MAIL_DOMAIN= # Domain of the mail server
```
### ldap
## ldap
This addon provides LDAP based authentication via LDAP version 3.
@@ -92,7 +90,7 @@ cloudron exec
> ldapsearch -x -h "${LDAP_SERVER}" -p "${LDAP_PORT}" -b "${LDAP_GROUPS_BASE_DN}"
```
### localstorage
## localstorage
Since all Cloudron apps run within a read-only filesystem, this addon provides a writeable folder under `/app/data/`.
All contents in that folder are included in the backup. On first run, this folder will be empty. Files added in this path
@@ -107,7 +105,7 @@ If the app is running under the recommended `cloudron` user, this can be achiev
chown -R cloudron:cloudron /app/data
```
### mongodb
## mongodb
By default, this addon provides MongoDB 2.6.3.
@@ -128,7 +126,7 @@ cloudron exec
# mongo -u "${MONGODB_USERNAME}" -p "${MONGODB_PASSWORD}" ${MONGODB_HOST}:${MONGODB_PORT}/${MONGODB_DATABASE}
```
### mysql
## mysql
By default, this addon provides a single database on MySQL 5.6.19. The database is already created and the application
only needs to create the tables.
@@ -158,7 +156,7 @@ the following environment variables are injected:
MYSQL_DATABASE_PREFIX= # prefix to use to create databases
```
### oauth
## oauth
The Cloudron OAuth 2.0 provider can be used in an app to implement Single Sign-On.
@@ -188,7 +186,7 @@ is so that apps cannot make undesired changes to the user's Cloudron.
We currently provide OAuth2 integration for Ruby [omniauth](https://github.com/cloudron-io/omniauth-cloudron) and Node.js [passport](https://github.com/cloudron-io/passport-cloudron).
### postgresql
## postgresql
By default, this addon provides PostgreSQL 9.4.4.
@@ -211,7 +209,7 @@ cloudron exec
> PGPASSWORD=${POSTGRESQL_PASSWORD} psql -h ${POSTGRESQL_HOST} -p ${POSTGRESQL_PORT} -U ${POSTGRESQL_USERNAME} -d ${POSTGRESQL_DATABASE}
```
### recvmail
## recvmail
The recvmail addon can be used to receive email for the application.
@@ -221,7 +219,7 @@ MAIL_IMAP_SERVER= # the IMAP server. this can be an IP or DNS name
MAIL_IMAP_PORT= # the IMAP server port
MAIL_IMAP_USERNAME= # the username to use for authentication
MAIL_IMAP_PASSWORD= # the password to use for authentication
MAIL_TO= # the to address to use
MAIL_TO= # the "To" address to use
MAIL_DOMAIN= # the domain for which email will be received
```
@@ -237,7 +235,7 @@ cloudron exec
The IMAP command `? LOGIN username password` can then be used to test the authentication.
### redis
## redis
By default, this addon provides redis 2.8.13. The redis is configured to be persistent and data is preserved across updates
and restarts.
@@ -257,7 +255,7 @@ cloudron exec
> redis-cli -h "${REDIS_HOST}" -p "${REDIS_PORT}" -a "${REDIS_PASSWORD}"
```
### scheduler
## scheduler
The scheduler addon can be used to run tasks at periodic intervals (cron).
@@ -297,7 +295,7 @@ If a task is still running when a new instance of the task is scheduled to be st
task instance is killed.
### sendmail
## sendmail
The sendmail addon can be used to send email from the application.
@@ -307,7 +305,7 @@ MAIL_SMTP_SERVER= # the mail server (relay) that apps can use. this can be a
MAIL_SMTP_PORT= # the mail server port
MAIL_SMTP_USERNAME= # the username to use for authentication as well as the `from` username when sending emails
MAIL_SMTP_PASSWORD= # the password to use for authentication
MAIL_FROM= # the from address to use
MAIL_FROM= # the "From" address to use
MAIL_DOMAIN= # the domain name to use for email sending (i.e username@domain)
```
@@ -320,7 +318,7 @@ cloudron exec
> swaks --server "${MAIL_SMTP_SERVER}" -p "${MAIL_SMTP_PORT}" --from "${MAIL_SMTP_USERNAME}@${MAIL_DOMAIN}" --body "Test mail from cloudron app at $(hostname -f)" --auth-user "${MAIL_SMTP_USERNAME}" --auth-password "${MAIL_SMTP_PASSWORD}"
```
### simpleauth
## simpleauth
Simple Auth can be used for authenticating users with an HTTP request. This method of authentication is targeted
at applications which, for whatever reason, cannot use the ldap addon.
+39 -149
@@ -151,6 +151,8 @@ If `altDomain` is set, the app can be accessed from `https://<altDomain>`.
* `SAMEORIGIN` - allows embedding from the same domain as the app. This is the default.
* `ALLOW-FROM https://example.com/` - allows this app to be embedded from example.com
`memoryLimit` is the maximum memory this app can use (in bytes) including swap. If set to 0, the app uses the `memoryLimit` value set in the manifest. If set to -1, the app gets unlimited memory.
Read more about the options at [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options).
Response (200):
@@ -315,7 +317,7 @@ GET `/api/v1/apps/:appId/backups` <scope>admin</scope>
Gets the backups of the application with id `appId`.
Use the [Backup](/references/api.html#download-backup) API to download the backup.
Use the [Backup](/references/api.html#download-backup) API to download the backup. Use the [Clone](/references/api.html#clone) API to create another instance of this app from a backup.
Response (200):
@@ -860,6 +862,20 @@ Response (200):
}
```
### Set members
PUT `/api/v1/groups/:groupId/members` <scope>admin</scope>
Sets the members of an existing group with id `groupId`. Note that this replaces the
existing users with the provided userIds.
Request:
```
{
userIds: [ <string>, ... ] // list of users to be part of this group
}
```
### List groups
GET `/api/v1/groups` <scope>admin</scope>
@@ -893,125 +909,6 @@ Response (204):
{}
```
## Mailboxes
Mailboxes allow users to receive mail using `IMAP`. Every user gets a mailbox with the same name as the username.
Users can receive email using IMAP (TLS) at `my.customdomain.com` and send mail using SMTP (STARTTLS) at
`my.customdomain.com`.
Apps can also receive email using the [recvmail addon](/references/addons.html#recvmail). App mailboxes are
named as `<subdomain>.app`. For this reason, the `.app` suffix is reserved for apps.
### Create mailbox
POST `/api/v1/mailboxes` <scope>admin</scope>
Creates a new mailbox.
`name` specifies the address at which email can be received and does not include the domain name. `name` can contain
only alphanumeric characters and a dot ('.') and must be lower case. The Cloudron mail server supports '+' based subaddressing (i.e. name+tag@domain.com will be delivered to this mailbox).
Mailboxes can only be accessed by users with the same username as the mailbox. A mailbox is automatically created
for every user on the Cloudron.
Request:
```
{
name: <string>
}
```
`name` must be at least 2 characters and must be alphanumeric.
Response (200):
```
{
id: <string>,
name: <string>
}
```
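The '+' subaddressing rule can be sketched as a plain address rewrite (an illustration of the delivery rule only, not the mail server's actual implementation):

```shell
# strip a "+tag" suffix from the local part to find the owning mailbox
normalize_address() {
    echo "$1" | sed 's/+[^@]*@/@/'
}

base=$(normalize_address "name+tag@domain.com")
echo "$base"
```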
### Get mailbox
GET `/api/v1/mailboxes/:mailboxName` <scope>admin</scope>
Gets an existing mailbox with name `mailboxName`.
Response (200):
```
{
id: <string>,
name: <string>,
aliases: [ <string>, ...]
}
```
### List mailboxes
GET `/api/v1/mailboxes` <scope>admin</scope>
Lists all mailboxes.
Response (200):
```
{
mailboxes: [
{
id: <string>,
name: <string>
},
...
]
}
```
### Set mailbox aliases
PUT `/api/v1/mailboxes/:mailboxName/aliases` <scope>admin</scope>
Sets aliases of an existing mailbox.
An alias is an alternate email address for a mailbox. Mails received at the alternate address are stored
in the mailbox. The email's `to` header should indicate the address that the original sender sent it to.
This also allows users to send emails with `From` address set to an alias.
Note that this call will replace the current aliases for this mailbox, it does **not** merge the provided ones with the current aliases.
Aliases have to be unique and cannot conflict with existing aliases or mailbox names. Any conflict will result in a 409.
Request:
```
{
aliases: [ <string>, ... ]
}
```
### Get mailbox aliases
GET `/api/v1/mailboxes/:mailboxName/aliases` <scope>admin</scope>
Gets aliases of an existing mailbox.
Response (200):
```
{
aliases: [ <string>, ... ]
}
```
### Delete mailbox
DELETE `/api/v1/mailboxes/:mailboxName` <scope>admin</scope>
Deletes an existing mailbox with name `mailboxName`.
Response (204):
```
{}
```
## Profile
### Get profile
@@ -1069,24 +966,6 @@ Response (204):
{}
```
### Tutorial
POST `/api/v1/profile/tutorial` <scope>profile</scope>
Toggles display of the tutorial when the token owner logs in.
Request:
```
{
showTutorial: <boolean>
}
```
Response (204):
```
{}
```
## Settings
### Get auto update pattern
@@ -1159,17 +1038,16 @@ GET `/api/v1/settings/backup_config` <scope>admin</scope> <scope>internal</scope
Gets the credentials used to upload backups.
This is currently internal API and is documented here for completeness.
Response(200):
```
{
"provider": <string>, // 'caas'
"key": <string>, // encryption key
"region": <string>, // s3 region
"bucket": <string>, // s3 bucket name
"prefix": <string>, // s3 bucket prefix
"token": <string> // caas specific token
"provider": <string>, // 'caas' or 's3' or 'filesystem'
"key": <string>, // encryption key
"region": <string>, // s3 region
"bucket": <string>, // s3 bucket name
"prefix": <string>, // s3 bucket prefix
"token": <string>, // 'caas' specific token
"backupFolder": <string> // 'filesystem' specific backup directory
}
```
@@ -1179,7 +1057,20 @@ POST `/api/v1/settings/backup_config` <scope>admin</scope> <scope>internal</scop
Sets the credentials used to upload backups.
This is currently internal API and is documented here for completeness.
Request:
```
{
"provider": "s3|filesystem",
"key": <string>, // backup encryption key
"bucket": <string>, // S3: bucket
"prefix": <string>, // S3: prefix in bucket
"accessKeyId": <string>, // S3: access key id
"secretAccessKey": <string>, // S3: secret access key
"backupFolder": <string> // filesystem: directory inside cloudron to store backups
}
```
### Get DNS Configuration
@@ -1192,8 +1083,7 @@ This is currently internal API and is documented here for completeness.
Response(200):
```
{
"provider": <string>, // 'caas'
"token": <string> // caas specific token
"provider": <string> // 'caas' or 'route53' or 'digitalocean' or 'noop' or 'manual'
}
```
+7 -9
@@ -1,6 +1,4 @@
# Architecture
## Introduction
# Introduction
The Cloudron platform is designed to easily install and run web applications.
The application architecture is designed to let the Cloudron take care of system
@@ -17,7 +15,7 @@ Web applications like blogs, wikis, password managers, code hosting, document ed
file syncers, notes, email, forums are a natural fit for the Cloudron. Decentralized "social"
networks are also good app candidates for the Cloudron.
## Image
# Image
Application images are created using [Docker](https://www.docker.io). Docker provides a way
to package (and containerize) the application as a filesystem which contains its code, system libraries
@@ -34,7 +32,7 @@ and packages that are independent of the host OS.
The [base image](/references/baseimage.html) is the parent of all app images.
## Cloudron Manifest
# Cloudron Manifest
Each app provides a `CloudronManifest.json` that specifies information required for the
`Cloudron Store` and for the installation of the image in the Cloudron.
@@ -53,7 +51,7 @@ Information required for the Cloudron Store includes:
See the [manifest reference](/references/manifest.html) for more information.
## Addons
# Addons
Addons are services like database, authentication, email, caching that are part of the
Cloudron. Setup, provisioning, scaling and maintenance of addons is taken care of by the
@@ -67,7 +65,7 @@ Addons are opt-in and must be specified in the Cloudron Manifest. When the app r
variables contain the necessary information to access the addon. See the
[addon reference](/references/addons.html) for more information.
## Authentication
# Authentication
The Cloudron provides a centralized dashboard to manage users, roles and permissions. Applications
do not create or manage user credentials on their own and instead use one of the various
@@ -79,12 +77,12 @@ Authentication strategies include OAuth 2.0, LDAP or Simple Auth. See the
Authorizing users is application specific and it is only authentication that is delegated to the
Cloudron.
## Cloudron Store
# Cloudron Store
Cloudron Store provides a market place to publish and optionally monetize your app. Submitting to the
Cloudron Store enables any Cloudron user to discover, purchase and install your application with
a few clicks.
## What next?
# What next?
* [Package an existing app for the Cloudron](/tutorials/packaging.html)
+9 -18
@@ -1,6 +1,4 @@
# Authentication
## Overview
# Overview
Cloudron provides a centralized dashboard to manage users, roles and permissions. Applications
do not create or manage user credentials on their own and instead use one of the various
@@ -10,7 +8,7 @@ Note that authentication only identifies a user and does not indicate if the use
to perform an action in the application. Authorizing users is application specific and must be
implemented by the application.
## Users & Admins
# Users & Admins
Cloudron user management is intentionally very simple. The owner (first user) of the
Cloudron is `admin` by default. The `admin` role allows one to install, uninstall and reconfigure
@@ -25,7 +23,7 @@ A Cloudron `admin` can give admin privileges to one or more Cloudron users.
Each Cloudron user has a unique `username` and an `email`.
## Strategies
# Strategies
Cloudron provides multiple authentication strategies.
@@ -33,7 +31,7 @@ Cloudron provides multiple authentication strategies.
* LDAP provided by the [LDAP addon](/references/addons.html#ldap)
* Simple Auth provided by [Simple Auth addon](/references/addons.html#simpleauth)
## Choosing a strategy
# Choosing a strategy
Applications can be broadly categorized based on their user management as follows:
@@ -46,13 +44,6 @@ Applications can be broadly categorized based on their user management as follow
* No user
* Such apps have no concept of logged-in user.
* The Cloudron provides a `website visibility` setting that allows a Cloudron admin to optionally
install an OAuth proxy in front of such applications. In such a case, a user visiting the website first
authenticates with the OAuth proxy and once authenticated is allowed into the application.
* When an OAuth proxy is installed, such applications can use the `X-Authenticated-User` header from the
[ICAP Extensions](https://tools.ietf.org/html/draft-stecher-icap-subid-00#section-3.4) de facto standard.
This value can be used for display purposes or creating meta data for a document.
* Single user
* Such apps only have a single user who is usually also the `admin`.
@@ -60,7 +51,7 @@ Applications can be broadly categorized based on their user management as follow
* Such apps _must_ set the `singleUser` property in the manifest which will restrict login to a single user
(configurable through the Cloudron's admin panel).
## Public and Private apps
# Public and Private apps
`Private` apps display content only when they have a signed-in user. These apps can choose one of the
authentication strategies listed above.
@@ -77,7 +68,7 @@ from a settings ui in the app, it's better to simply put some sensible defaults
the settings. In the case where such settings cannot be changed dynamically, it is best to simply publish two
separate apps in the Cloudron store each with a different configuration.
## External User Registration
# External User Registration
Some apps allow external users to register and create accounts. For example, a public company chat that
can invite anyone to join or a blog allowing registered commenters.
@@ -92,14 +83,14 @@ Naively handling user registration enables attacks of the following kind:
* When a user named `foo` logs in, the app cannot determine the correct `foo` anymore. Making separate login buttons for each
login source clears the confusion for both the user and the app.
## Userid
# Userid
The preferred approach to track users in an application is a uuid or the Cloudron `username`.
The `username` in Cloudron is unique and cannot be changed.
Tracking users using the `email` field is error-prone since it may be changed by the user at any time.
## Single Sign-on
# Single Sign-on
Single sign-on (SSO) is a property where a user logged into one application is automatically logged into
another application without having to re-enter their credentials. When applications implement the
@@ -108,7 +99,7 @@ OAuth, they will automatically log into any other app implementing OAuth.
Conversely, signing off from one app, logs them off from all the apps.
## Security
# Security
The LDAP and Simple Auth strategies require the user to provide their plain text passwords to the
application. This might be a cause of concern and app developers are thus highly encouraged to integrate
+11 -13
@@ -1,6 +1,4 @@
# Base Image
## Overview
# Overview
The application's Dockerfile must specify the FROM base image to be `cloudron/base:0.9.0`.
@@ -12,7 +10,7 @@ are not configured in any way and it's up to the application to configure them a
For example, while `apache` is installed, there are no meaningful site configurations that the
application can use.
## Packages
# Packages
The following packages are part of the base image. If you need another version, you will have to
install it yourself.
@@ -37,7 +35,7 @@ install it yourself.
* Supervisor 3.2.0
* uwsgi 2.0.12
## Inspecting the base image
# Inspecting the base image
The base image can be inspected by installing [Docker](https://docs.docker.com/installation/).
@@ -54,40 +52,40 @@ To inspect the base image:
*Note:* Please use `docker 1.9.0` or above to pull the base image. Doing otherwise results in a base
image with an incorrect image id. The image id of `cloudron/base:0.9.0` is `d038af182821`.
## The `cloudron` user
# The `cloudron` user
The base image contains a user named `cloudron` that apps can use to run their app.
It is good security practice to run apps as a non-privileged user.
## Env vars
# Env vars
The following environment variables are set as part of the application runtime.
### API_ORIGIN
## API_ORIGIN
API_ORIGIN is set to the HTTP(S) origin of this Cloudron's API. For example,
`https://my-girish.cloudron.us`.
### APP_DOMAIN
## APP_DOMAIN
APP_DOMAIN is set to the domain name of the application. For example, `app-girish.cloudron.us`.
### APP_ORIGIN
## APP_ORIGIN
APP_ORIGIN is set to the HTTP(S) origin of the application. This is the origin which the
user can use to reach the application. For example, `https://app-girish.cloudron.us`.
### CLOUDRON
## CLOUDRON
CLOUDRON is always set to '1'. This is useful to write Cloudron specific code.
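A minimal guard sketch for such Cloudron-specific code (the variable is set by the platform at runtime; it is forced here only so the snippet is self-contained):

```shell
CLOUDRON=1   # injected by the platform inside an app container; forced here for illustration

if [ "${CLOUDRON:-}" = "1" ]; then
    msg="running Cloudron-specific code path"
else
    msg="running outside Cloudron"
fi
echo "$msg"
```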
### WEBADMIN_ORIGIN
## WEBADMIN_ORIGIN
WEBADMIN_ORIGIN is set to the HTTP(S) origin of the Cloudron's web admin. For example,
`https://my-girish.cloudron.us`.
## Node.js
# Node.js
The base image comes pre-installed with various node.js versions.
+44 -59
@@ -1,6 +1,4 @@
# CloudronManifest
## Overview
# Overview
Every Cloudron Application contains a `CloudronManifest.json`.
@@ -36,13 +34,13 @@ Here is an example manifest:
"contactEmail": "support@cloudron.io",
"icon": "file://icon.png",
"tags": [ "test", "collaboration" ],
"mediaLinks": [ "www.youtube.com/watch?v=dQw4w9WgXcQ" ]
"mediaLinks": [ "https://images.rapgenius.com/fd0175ef780e2feefb30055be9f2e022.520x343x1.jpg" ]
}
```
## Fields
# Fields
### addons
## addons
Type: object
@@ -68,7 +66,7 @@ Example:
}
```
### author
## author
Type: string
@@ -78,14 +76,14 @@ The `author` field contains the name and email of the app developer (or company)
Example:
```
"author": "Cloudron Inc <girish@cloudron.io>"
"author": "Cloudron UG <girish@cloudron.io>"
```
### changelog
## changelog
Type: markdown string
Required: no
Required: no (required for submitting to the Cloudron Store)
The `changelog` field contains the changes in this version of the application. This string
can be a markdown style bulleted list.
@@ -95,7 +93,7 @@ Example:
"changelog": "* Add support for IE8 \n* New logo"
```
### configurePath
## configurePath
Type: path string
@@ -115,7 +113,7 @@ Example:
"configurePath": "/wp-admin"
```
### contactEmail
## contactEmail
Type: email
@@ -129,7 +127,7 @@ Example:
"contactEmail": "support@testapp.com"
```
### description
## description
Type: markdown string
@@ -152,21 +150,7 @@ Example:
"description:": "file://DESCRIPTION.md"
```
### developmentMode
Type: boolean
Required: no
Setting `developmentMode` to true disables readonly rootfs and the default memory limit. In addition,
the application *pauses* on start and can be started manually using `cloudron exec`. Note that you
cannot submit an app to the store with this field turned on.
This mode can be used to identify the files being modified by your application - often required to
debug situations where your app does not run on a readonly rootfs. Run your app using `cloudron exec`
and use `find / -mmin -30` to find files that have been changed or created in the last 30 minutes.
### healthCheckPath
## healthCheckPath
Type: url path
@@ -182,15 +166,16 @@ Example:
```
"healthCheckPath": "/"
```
### httpPort
## httpPort
Type: positive integer
Required: yes
The `httpPort` field contains the TCP port on which your app is listening for HTTP requests. This port
is exposed to the world via subdomain/location that the user chooses at installation time. While not
required, it is good practice to mark this port as `EXPOSE` in the Dockerfile.
The `httpPort` field contains the TCP port on which your app is listening for HTTP requests. This
is the HTTP port the Cloudron will use to access your app internally.
While not required, it is good practice to mark this port as `EXPOSE` in the Dockerfile.
Cloudron Apps are containerized and thus two applications can listen on the same port. In reality,
they are in different network namespaces and do not conflict with each other.
@@ -203,11 +188,11 @@ Example:
"httpPort": 8080
```
### icon
## icon
Type: local image filename
Required: no
Required: no (required for submitting to the Cloudron Store)
The `icon` field is used to display the application icon/logo in the Cloudron Store. Icons are expected
to be square of size 256x256.
@@ -216,7 +201,7 @@ to be square of size 256x256.
"icon": "file://icon.png"
```
### id
## id
Type: reverse domain string
@@ -232,7 +217,7 @@ the application if the id is already in use by another application.
"id": "io.cloudron.testapp"
```
### manifestVersion
## manifestVersion
Type: integer
@@ -244,25 +229,25 @@ Required: yes
"manifestVersion": 1
```
### mediaLinks
## mediaLinks
Type: array of urls
Required: no
Required: no (required for submitting to the Cloudron Store)
The `mediaLinks` field contains an array of links that the Cloudron Store uses to display a slide show of pictures
and videos of the application.
The `mediaLinks` field contains an array of links that the Cloudron Store uses to display a slide show of pictures of the application.
All links are preferably https.
They have to be publicly reachable via `https` and should have an aspect ratio of 3 to 1.
For example `600px by 200px` (width/height).
```
"mediaLinks": [
"www.youtube.com/watch?v=dQw4w9WgXcQ",
"https://s3.amazonaws.com/cloudron-app-screenshots/org.owncloud.cloudronapp/556f6a1d82d5e27a7c4fca427ebe6386d373304f/2.jpg",
"https://images.rapgenius.com/fd0175ef780e2feefb30055be9f2e022.520x343x1.jpg"
]
```
### memoryLimit
## memoryLimit
Type: bytes (integer)
@@ -277,7 +262,7 @@ By default, all apps have a memoryLimit of 256MB. For example, to have a limit o
"memoryLimit": 524288000
```
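The byte value in the example is simply 500MB expressed in bytes, which a quick shell computation confirms:

```shell
# 500 MB in bytes, as used for a 500MB memoryLimit
limit=$((500 * 1024 * 1024))
echo "$limit"
```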
### maxBoxVersion
## maxBoxVersion
Type: semver string
@@ -289,7 +274,7 @@ a box greater than `maxBoxVersion` will fail.
This is useful when a new box release introduces features which are incompatible with the app. This situation is quite
unlikely and it is recommended to leave this unset.
### minBoxVersion
## minBoxVersion
Type: semver string
@@ -301,7 +286,7 @@ a box lesser than `minBoxVersion` will fail.
This is useful when the app relies on features that are only available from a certain version of the box. If unset, the
default value is `0.0.1`.
### postInstallMessage
## postInstallMessage
Type: markdown string
@@ -313,22 +298,22 @@ The intended use of this field is to display some post installation steps that t
complete the installation. For example, displaying the default admin credentials and informing the user to
to change it.
### singleUser
## optionalSso
Type: boolean
Required: no
The `singleUser` field can be set to true for apps that are meant to be used by only a single user.
The `optionalSso` field can be set to true for apps that can be installed optionally without using the Cloudron user management.
When set, the Cloudron will display a user selection dialog at installation time. The selected user is the sole user
who can access the app.
This only applies if any Cloudron auth related addons are used. When set, the Cloudron will not inject the auth related addon environment variables.
Any app startup scripts have to be able to deal with missing env variables in this case.
### tagline
## tagline
Type: one-line string
Required: no
Required: no (required for submitting to the Cloudron Store)
The `tagline` is used by the Cloudron Store to display a single line short description of the application.
@@ -336,11 +321,11 @@ The `tagline` is used by the Cloudron Store to display a single line short descr
"tagline": "The very best note keeper"
```
### tags
## tags
Type: Array of strings
Required: no
Required: no (required for submitting to the Cloudron Store)
The `tags` are used by the Cloudron Store for filtering searches by keyword.
@@ -348,7 +333,7 @@ The `tags` are used by the Cloudron Store for filtering searches by keyword.
"tags": [ "git", "version control", "scm" ]
```
### targetBoxVersion
## targetBoxVersion
Type: semver string
@@ -364,7 +349,7 @@ and will disable the SELinux feature for the app.
If unspecified, this value defaults to `minBoxVersion`.
### tcpPorts
## tcpPorts
Type: object
@@ -429,7 +414,7 @@ In more detail:
it might be simpler to listen on `SSH_PORT` internally. In such cases, the app can omit the `containerPort` value and should
instead reconfigure itself to listen internally on `SSH_PORT` on each start up.
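As a sketch, a `tcpPorts` entry for the SSH example above might look like the fragment below; only `SSH_PORT` and `containerPort` appear in the text above, the other keys and values are assumptions for illustration:

```
"tcpPorts": {
    "SSH_PORT": {
        "title": "SSH port",
        "defaultValue": 29418,
        "containerPort": 22
    }
}
```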
### title
## title
Type: string
@@ -442,7 +427,7 @@ Example:
"title": "Gitlab"
```
### version
## version
Type: semver string
@@ -456,7 +441,7 @@ Example:
"version": "1.1.0"
```
### website
## website
Type: url
+275 -198
@@ -1,108 +1,118 @@
# Self host Cloudron
# Overview
The Cloudron platform can be installed on your own cloud server. The self hosted version comes with all the same features as the managed version.
The Cloudron platform can be installed on public cloud servers from EC2, Digital Ocean, Hetzner,
Linode, OVH, Scaleway, Vultr etc. Cloudron also runs well on a home server or company intranet.
## CLI Tool
If you run into any trouble following this guide, ask us at our [chat](https://chat.cloudron.io).
The [Cloudron tool](https://git.cloudron.io/cloudron/cloudron-cli) is used for managing a Cloudron. It has a `machine`
subcommand that can be used to create, update and maintain a self-hosted Cloudron.
# Understand
### Linux & OS X
Installing the CLI tool requires node.js and npm. The CLI tool can be installed using the following command:
Before installing the Cloudron, it is helpful to understand Cloudron's design. The Cloudron
intends to make self-hosting effortless. It takes care of updates, backups, firewall, DNS setup,
certificate management etc. All app and user configuration is carried out using the web interface.
This approach to self-hosting means that the Cloudron takes complete ownership of the server and
only tracks changes that were made via the web interface. Any external changes made to the server
(i.e. other than via the Cloudron web interface or API) may be lost across updates.
The Cloudron requires a domain name when it is installed. Apps are installed into subdomains.
The `my` subdomain is special and is the location of the Cloudron web interface. For this to
work, the Cloudron requires a way to programmatically configure the DNS entries of the domain.
Note that the Cloudron will never overwrite _existing_ DNS entries and will refuse to install
apps on existing subdomains.
# Cloud Server
DigitalOcean and EC2 (Amazon Web Services) are frequently tested by us.
Please use the below links to support us with referrals:
* [Amazon EC2](https://aws.amazon.com/ec2/)
* [DigitalOcean](https://m.do.co/c/933831d60a1e)
In addition to those, the Cloudron community has successfully installed the platform on these providers:
* [Amazon Lightsail](https://amazonlightsail.com/)
* [hosttech](https://www.hosttech.ch/?promocode=53619290)
* [Linode](https://www.linode.com/?r=f68d816692c49141e91dd4cef3305da457ac0f75)
* [OVH](https://www.ovh.com/)
* [Scaleway](https://www.scaleway.com/)
* [So you Start](https://www.soyoustart.com/)
* [Vultr](http://www.vultr.com/?ref=7063201)
Please let us know if any of them requires tweaks or adjustments.
# Installing
## Create server
Create an `Ubuntu 16.04 (Xenial)` server with at least 1GB RAM. Do not make any changes
to the vanilla Ubuntu install. Be sure to allocate a static IPv4 address for your server.
### Linode
Since Linode does not manage SSH keys, be sure to add the public key to
`/root/.ssh/authorized_keys`.
### Scaleway
Use the [boot script](https://github.com/scaleway-community/scaleway-docker/issues/2) to
enable memory accounting.
## Run setup
SSH into your server and run the following commands:
```
npm install -g cloudron
wget https://cloudron.io/cloudron-setup
chmod +x cloudron-setup
./cloudron-setup --provider <digitalocean|ec2|generic|scaleway>
```
Depending on your setup, you may need to run this as root.
The setup will take around 10-15 minutes.
On OS X, it is known to work with the `openssl` package from homebrew.
**cloudron-setup** takes the following arguments:
See [#14](https://git.cloudron.io/cloudron/cloudron-cli/issues/14) for more information.
* `--provider` is the name of your VPS provider. If the name is not on the list, simply
choose `generic`. In most cases, the `generic` provider will work fine.
If the Cloudron does not complete initialization, it may mean that
we have to add some vendor specific quirks. Please open a
[bug report](https://git.cloudron.io/cloudron/box/issues) in that case.
### Windows
Optional arguments for installation:
The CLI tool does not work on Windows.
* `--tls-provider` is the name of the SSL/TLS certificate backend. Defaults to Let's Encrypt.
Specifying `fallback` will set up the Cloudron to use the fallback wildcard certificate.
Initially a self-signed one is provided, which can be overwritten later in the admin interface.
This may be useful for non-public installations.
### Machine subcommands
Optional arguments used for update and restore:
You should now be able to run the `cloudron machine help` command in a shell.
* `--version` is the version of Cloudron to install. By default, the setup script installs
the latest version. You can set this to an older version when restoring a Cloudron from a backup.
```
create Creates a new Cloudron
restore Restores a Cloudron
migrate Migrates a Cloudron
update Upgrade or updates a Cloudron
eventlog Get Cloudron eventlog
logs Get Cloudron logs
ssh Get remote SSH connection
backup Manage Cloudron backups
```
* `--restore-url` is a backup URL to restore from.
## Domain setup
Once the setup script completes, the server will reboot. Then visit your server by its
IP address (`https://ip`) to complete the installation.
The setup website will show a certificate warning. Accept the self-signed certificate
and proceed to the domain setup.
Currently, only Second Level Domains are supported. For example, `example.com` and
`example.co.uk` will work fine. Choosing a domain name at any other level like
`cloudron.example.com` will not work.
### Route 53
Create root or IAM credentials and choose `Route 53` as the DNS provider.
## AWS EC2
### Requirements
To run the Cloudron on AWS, first sign up with [Amazon AWS](https://aws.amazon.com/).
The Cloudron uses the following AWS services:
* **EC2** for creating a virtual private server that runs the Cloudron code.
* **Route53** for DNS. The Cloudron will manage all app subdomains as well as the email related DNS records automatically.
* **S3** to store encrypted Cloudron backups.
The minimum requirements for a Cloudron depend on the apps installed. The absolute minimum required EC2 instance is `t2.small`.
The Cloudron runs best on instances which do not have a burst mode VCPU.
The system disk space usage of a Cloudron is around 15GB. This results in a minimum requirement of about 30GB to give some headroom for app installations and user data.
### Cost Estimation
Taking the minimal requirements of hosting on EC2, with a backup retention of 2 days, the cost estimation per month is as follows:
```
Route53: 0.90
EC2: 19.04
EBS: 3.00
S3: 1.81
-------------------------
Total: $ 24.75/mth
```
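As a quick sanity check, the line items above can be totalled in the shell (the figures are the estimates from the table; `awk` does the floating point arithmetic):

```
# Hypothetical monthly line items in USD, from the estimate above
route53=0.90
ec2=19.04
ebs=3.00
s3=1.81
total=$(awk "BEGIN { printf \"%.2f\", $route53 + $ec2 + $ebs + $s3 }")
echo "Total: \$${total}/mth"
```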
For custom cost estimation, please use the [AWS Cost Calculator](http://calculator.s3.amazonaws.com/index.html).
### Setup
Open the AWS console and create the required resources:
1. Create a Route53 zone for your domain. Be sure to set the Route53 nameservers for your domain in your name registrar. Note: Only Second Level Domains are supported.
For example, `example.com`, `example.co.uk` will work fine. Choosing a domain name at any other level like `cloudron.example.com` will not work.
2. Create an S3 bucket for backups. The bucket region **must** be the same region as where you intend to create your Cloudron (EC2).
When creating the S3 bucket, it is important to choose a region. Do **NOT** choose `US Standard`.
The supported regions are:
* US East (N. Virginia) us-east-1
* US West (N. California) us-west-1
* US West (Oregon) us-west-2
* Asia Pacific (Mumbai) ap-south-1
* Asia Pacific (Seoul) ap-northeast-2
* Asia Pacific (Sydney) ap-southeast-2
* Asia Pacific (Tokyo) ap-northeast-1
* EU (Frankfurt) eu-central-1
* EU (Ireland) eu-west-1
* South America (São Paulo) sa-east-1
3. Create a new SSH key or upload an existing SSH key in the target region (`Key Pairs` in the left pane of the EC2 console).
4. Create AWS credentials. You can either use root **or** IAM credentials.
* For root credentials:
* In AWS Console, under your name in the menu bar, click `Security Credentials`
* Click on `Access Keys` and create a key pair.
* For IAM credentials:
* You can use the following policy to create IAM credentials:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<your bucket name>",
                "arn:aws:s3:::<your bucket name>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "<ec2 region>"
                }
            }
        }
    ]
}
```
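If you prefer the command line over the console, creating the IAM user could look like this sketch (the `cloudron` user name, the `cloudron-policy` name and `policy.json`, containing the JSON above saved to a file, are assumptions; requires a configured AWS CLI):

```
aws iam create-user --user-name cloudron
aws iam put-user-policy --user-name cloudron \
    --policy-name cloudron-policy --policy-document file://policy.json
aws iam create-access-key --user-name cloudron   # note down the key pair
```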
### Digital Ocean
Create an API token with read+write access and choose `Digital Ocean` as the DNS provider.
### Other
If your domain *does not* use Route 53 or Digital Ocean, set up a wildcard (`*`) DNS `A` record that points to the
IP of the server created above. If your DNS provider has an API, please open an
[issue](https://git.cloudron.io/cloudron/box/issues) and we may be able to support it.
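Once the record is in place, it can be sanity checked from any machine with `dig` (a sketch; `example.com` is a placeholder for your domain):

```
# Any subdomain should resolve to your server's IPv4 address
# once the wildcard record has propagated
dig +short A some-app.example.com
```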
## Finish Setup
Once the domain setup is done, the Cloudron will configure the DNS and get an SSL certificate. It will automatically redirect to `https://my.<domain>`.
# Backups
The Cloudron creates encrypted backups once a day. Each app is backed up independently and these
backups have the prefix `app_`. The platform state is backed up independently with the
prefix `box_`.
By default, backups reside in `/var/backups`. Please note that keeping backups on the same
physical machine as the Cloudron server instance is dangerous, and this should be changed to
an external storage location like `S3` as soon as possible.
## Amazon S3
Provide S3 backup credentials in the `Settings` page and leave the endpoint field empty.
Create a bucket in S3 (you need an account at [AWS](https://aws.amazon.com/)). The bucket can be set up to periodically delete old backups by
adding a lifecycle rule using the AWS console. S3 supports both permanent deletion
and moving objects to the cheaper Glacier storage class based on an age attribute.
With the current daily backup schedule, a setting of two days should be sufficient
for most use-cases.
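Such a lifecycle rule can also be applied from the command line. A sketch of the rule as the AWS CLI accepts it (the bucket name, rule ID and two day retention are placeholders; requires a configured AWS CLI):

```
aws s3api put-bucket-lifecycle-configuration --bucket <your bucket name> \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "expire-old-cloudron-backups",
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": { "Days": 2 }
        }]
    }'
```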
* For root credentials:
* In AWS Console, under your name in the menu bar, click `Security Credentials`
* Click on `Access Keys` and create a key pair.
* For IAM credentials:
* You can use the following policy to create IAM credentials:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::<your bucket name>",
"arn:aws:s3:::<your bucket name>/*"
]
},
{
"Effect": "Allow",
"Action": "ec2:*",
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"ec2:Region": "<ec2 region>"
}
}
}
]
}
```
### Create the Cloudron
Create the Cloudron using the `cloudron machine` command:
```
cloudron machine create ec2 \
    --region <aws-region> \
    --type t2.small \
    --disk-size 30 \
    --ssh-key <ssh-key-name-or-filepath> \
    --access-key-id <aws-access-key-id> \
    --secret-access-key <aws-access-key-secret> \
    --backup-bucket <bucket-name> \
    --backup-key '<secret>' \
    --fqdn <domain>
```
The `--region` is the region where your Cloudron is to be created. For example, `us-west-1` for N. California and `eu-central-1` for Frankfurt. A complete list of available
regions is listed <a href="//docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions" target="_blank">here</a>.
The `--disk-size` parameter indicates the volume (hard disk) size to be allocated for the Cloudron.
The `--ssh-key` is the path to a PEM file or the private SSH key. If your key is located at `~/.ssh/id_rsa_<name>`, you can
also simply provide the `name` as the argument.
The `--backup-key '<secret>'` will be used to encrypt all backups prior to uploading to S3. Keep that secret in a safe place, as you need it to restore your Cloudron from a backup! You can generate a random key using `pwgen -1y 64`. Be sure to put single quotes
around the `secret` to prevent accidental shell expansion.
**NOTE**: The `cloudron machine create` subcommand will automatically create a corresponding VPC, subnet and security group for your Cloudron, unless `--subnet` and `--security-group` arguments are explicitly passed in. If you want to reuse existing resources, please ensure that the security group does not limit any traffic to the Cloudron, since the Cloudron manages its own firewall, and that the subnet has an internet gateway set up in the routing table.
## Minio S3
[Minio](https://minio.io/) is a distributed object storage server, providing the same API as Amazon S3.
Since the Cloudron supports S3, any API compatible solution should be supported as well. If this is not the case, let us know.
Minio can be set up by following the [installation instructions](https://docs.minio.io/) on any server which is reachable by the Cloudron.
Do not set up Minio on the same server as the Cloudron; this will inevitably result in data loss if backups are stored on the same instance.
Once set up, minio will print the necessary information, like login credentials, region and endpoints, in its logs:
```
$ ./minio server ./storage
Endpoint:  http://192.168.10.113:9000  http://127.0.0.1:9000
AccessKey: GFAWYNJEY7PUSLTHYHT6
SecretKey: /fEWk66E7GsPnzE1gohqKDovaytLcxhr0tNWnv3U
Region:    us-east-1
```
First create a new bucket for the backups, using the minio commandline tools or the webinterface. The bucket has to have **read and write** permissions.
The information to be copied to the Cloudron's backup settings form may look similar to:
<img src="/docs/img/minio_backup_config.png" class="shadow"><br/>
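If `pwgen` is not installed, a random backup key of the same length can be generated with `openssl`, which ships with most systems (a sketch; `-hex 32` yields 64 hexadecimal characters):

```
# 32 random bytes, hex encoded -> a 64 character key
backup_key=$(openssl rand -hex 32)
echo "${#backup_key}"
```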
## First time setup
Visit `https://my.<domain>` to do the first time setup of your Cloudron.
1. The website should already have a valid TLS certificate. If you see any certificate warnings, it means your Cloudron was not created correctly.
2. If you see a login screen instead of a setup screen, it means that someone else got to your Cloudron first and set it up
already! In this unlikely case, simply delete the EC2 instance and create a new Cloudron again.
Once the setup is done, you can access the admin page in the future at `https://my.<domain>`.
## Backups
In addition to the regularly scheduled daily backups, a backup is also created whenever you update the Cloudron or any of the apps (in that case, only the app in question gets backed up).
# Email
Cloudron has a built-in email server. By default, it only sends out email on behalf of apps
(for example, password reset or notification). You can enable the email server for sending
and receiving mail on the `Settings` page. This feature is only available if you have set up
a DNS provider like Digital Ocean or Route53.
Your server's IP plays a big role in how emails from your Cloudron get handled. Spammers
frequently abuse public IP addresses and as a result your Cloudron might possibly start
out with a bad reputation. The good news is that most IP based blacklisting services cool
down over time. The Cloudron sets up DNS entries for SPF, DKIM and DMARC automatically, and
reputation should be easy to get back.
## Checklist
* Once your Cloudron is ready, set up a Reverse DNS PTR record for the `my` subdomain.
    * AWS/EC2 - Fill the PTR [request form](https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request).
    * Digital Ocean - Digital Ocean sets up a PTR record based on the droplet's name. So, simply rename
your droplet to `my.<domain>`. Note that some new Digital Ocean accounts have [port 25 blocked](https://www.digitalocean.com/community/questions/port-25-smtp-external-access).
    * Scaleway - Edit your security group to allow email. You can also set a PTR record on the interface with your
`my.<domain>`.
* Check if your IP is listed in any DNSBL list [here](http://multirbl.valli.org/). In most cases,
you can apply for removal of your IP by filling out a form at the DNSBL manager site.
* When using wildcard or manual DNS backends, you have to set up the DMARC and MX records manually.
* Finally, check your spam score at [mail-tester.com](https://www.mail-tester.com/). The Cloudron
should get 100%; if not, please let us know.
# CLI Tool
The [Cloudron tool](https://git.cloudron.io/cloudron/cloudron-cli) is useful for managing
a Cloudron. <b class="text-danger">The Cloudron CLI tool has to be installed and run on a laptop or PC.</b>
Once installed, you can install, configure, list, backup and restore apps from the command line.
## Linux & OS X
Installing the CLI tool requires node.js and npm. The CLI tool can be installed using the following command:
```
npm install -g cloudron
```
Depending on your setup, you may need to run this as root.
On OS X, it is known to work with the `openssl` package from homebrew.
See [#14](https://git.cloudron.io/cloudron/cloudron-cli/issues/14) for more information.
## Windows
The CLI tool does not work on Windows. Please contact us on our [chat](https://chat.cloudron.io) if you want to help with Windows support.
## Listing backups
If your Cloudron is running, you can list backups using the following command:
```
cloudron machine backup list <domain>
```
Alternately, you can list the backups by querying S3 using the following command:
```
cloudron machine backup list --provider ec2 \
    --region <region> \
    --access-key-id <access-key-id> \
    --secret-access-key <secret-access-key> \
    --backup-bucket <s3 bucket name> \
    <domain>
```
# Updates
Apps installed from the Cloudron Store are automatically updated every night.
The Cloudron platform itself updates in two ways: update or upgrade.
### Update
An **update** is applied onto the running server instance. Such updates are performed
every night. You can also use the Cloudron UI to initiate an update immediately.
The Cloudron will always make a complete backup before attempting an update or upgrade. In the unlikely
case an update fails, it can be [restored](/references/selfhosting.html#restore).
### Upgrade
An **upgrade** requires a new OS image. This process involves creating a new server from scratch
with the latest code and restoring it from the last backup.
To upgrade, follow these steps closely:
* Create a new backup - `cloudron machine backup create`
* List the latest backup - `cloudron machine backup list`
* Make the backup available for the new Cloudron instance:
    * `S3` - When storing backups in S3, make the latest box backup public - files starting with `box_` (from v0.94.0) or `backup_`. This can be done from the AWS S3 console as seen here:
<img src="/docs/img/aws_backup_public.png" class="shadow haze"><br/>
Copy the new public URL of the latest backup for use as the `--restore-url` below.
<img src="/docs/img/aws_backup_link.png" class="shadow haze"><br/>
    * `File system` - When storing backups in `/var/backups`, you have to make the box and the app backups available in the new Cloudron instance's `/var/backups`. This can be achieved in a variety of ways depending on the situation: scp'ing the backup files to the machine before installation, mounting the external backup hard drive into the new Cloudron's `/var/backups`, OR downloading a copy of the backup using `cloudron machine backup download` and uploading it to the new machine. After doing so, pass `file:///var/backups/<path to box backup>` as the `--restore-url` below.
* Create a new Cloudron by following the [installing](/references/selfhosting.html#installing) section.
When running the setup script, pass in the `--encryption-key` and `--restore-url` flags.
The `--encryption-key` is the backup encryption key. It can be displayed with `cloudron machine info`.
* Finally, once you see the newest version displayed in your Cloudron webinterface, you can safely delete the old server instance.
Similar to the initial installation, a Cloudron upgrade looks like:
```
$ ssh root@newserverip
> wget https://cloudron.io/cloudron-setup
> chmod +x cloudron-setup
> ./cloudron-setup --provider <digitalocean|ec2|generic|scaleway> --encryption-key <key> --restore-url <publicS3Url>
```
On EC2, an upgrade can instead be performed using the CLI tool with the `cloudron machine update` command, which creates a new EC2 instance from the latest image and restores all the data and apps (you will get a notification in the UI when an upgrade is available):
```
cloudron machine update --ssh-key <ssh-key> <domain>
```
Once the upgrade is complete, you can safely terminate the old EC2 instance.
# Restore
To restore a Cloudron from a specific backup:
* Select the backup - `cloudron machine backup list`
* Make the backup available:
    * `S3` - Make the box backup publicly readable - files starting with `box_` (from v0.94.0) or `backup_`. This can be done from the AWS S3 console. Once the box has been restored, you can make it private again.
    * `File system` - When storing backups in `/var/backups`, make the box and the app backups available in the new Cloudron instance's `/var/backups` as described in the upgrade section above, and pass `file:///var/backups/<path to box backup>` as the `--restore-url` below.
* Create a new Cloudron by following the [installing](/references/selfhosting.html#installing) section.
When running the setup script, pass in the `--version`, `--encryption-key` and `--restore-url` flags.
The `--version` field is the version of the Cloudron that the backup corresponds to (it is embedded
in the backup file name).
* Make the box backup private again, once the restore is complete.
On EC2, the Cloudron can also restore itself from a backup using the following command:
```
cloudron machine create ec2 \
    --backup <backup-id> \
    --region <aws-region> \
    --type t2.small \
    --disk-size 30 \
    --ssh-key <ssh-key-name> \
    --access-key-id <aws-access-key-id> \
    --secret-access-key <aws-access-key-secret> \
    --backup-bucket <bucket-name> \
    --backup-key <secret> \
    --fqdn <domain>
```
The backup id can be obtained by [listing the backups](/references/selfhosting.html#backups). Other arguments are similar to [Cloudron creation](/references/selfhosting.html#create-the-cloudron). Once the new instance has completely restored, you can safely terminate the old Cloudron from the AWS console.
## SSH
If you want to SSH into your Cloudron, you can:
```
ssh -p 202 -i ~/.ssh/ssh_key_name root@my.<domain>
```
If you are unable to connect, verify the following:
* Be sure to use the **my.** subdomain (eg. my.foobar.com).
* The SSH key should be in PEM format. If you are using Putty PPK files, follow [this article](http://stackoverflow.com/questions/2224066/how-to-convert-ssh-keypairs-generated-using-puttygenwindows-into-key-pairs-use) to convert it to PEM format.
* The SSH key must have correct permissions (400) set (this is a requirement of the ssh client).
# Debug
To debug the Cloudron CLI tool:
* `DEBUG=* cloudron <cmd>`
You can also SSH into your Cloudron and collect logs:
* `journalctl -a -u box -u cloudron-installer` to get debug output of box related code.
* `docker ps` will give you the list of containers. The addon containers are named `mail`, `postgresql`,
`mysql` etc. If you want to get a specific container's log output, use `journalctl -a CONTAINER_ID=<container_id>`.
# Alerts
The Cloudron will notify the Cloudron administrator via email if apps go down, run out of memory, have updates
available etc.
You will have to set up a 3rd party service like [Cloud Watch](https://aws.amazon.com/cloudwatch/) or [UptimeRobot](http://uptimerobot.com/) to monitor the Cloudron itself. You can use `https://my.<domain>/api/v1/cloudron/status`
as the health check URL.
## Other Providers
Currently, we do not support other cloud server providers. Please let us know at [support@cloudron.io](mailto:support@cloudron.io) if you want to see other providers supported.
# Help
If you run into any problems, join us at our [chat](https://chat.cloudron.io) or [email us](mailto:support@cloudron.io).
# Introduction
The Cloudron is the best platform for self-hosting web applications on your server. You
can easily install apps on it, add users, manage access restrictions and keep your
If you want to learn more about the secret sauce that makes the Cloudron, please read our
[architecture overview](/references/architecture.html).
# Use cases
Here are some of the apps you can run on a Cloudron:
Our list of apps is growing everyday, so be sure to [follow us on twitter](https://twitter.com/cloudron_io).
# Activation
When you first create the Cloudron, the setup wizard will ask you to set up an administrator
account. Don't worry, a Cloudron administrator doesn't need to know anything about maintaining
The Cloudron administration page is located at the `my` subdomain. You might want to bookmark
this link!
# Apps
## Installation
You can install apps on the Cloudron by choosing the `App Store` menu item. Use the 'Search' bar
to search for apps.
* `Restrict to groups` - Only users in the groups can access the app.
## Updates
All your apps automatically update as and when the application author releases an update. The Cloudron
will attempt to update around midnight of your timezone.
<img src="/docs/img/app_update.png" class="shadow">
## Backups
<i>If you self-host, please refer to the [self-hosting documentation](/references/selfhosting.html#backups) for backups.</i>
All apps are automatically backed up every day. Backups are stored encrypted in Amazon S3. You don't have
to do anything about it. The [Cloudron CLI](https://git.cloudron.io/cloudron/cloudron-cli) tool can be used
to download application backups.
## Configuration
Apps can be reconfigured using the `Configure` button.
Changing an app's configuration has a small downtime (usually around a minute).
## Restore
Apps can be restored to a previous backup by clicking on the `Restore` button.
<img src="/docs/img/app_restore_button.png" class="shadow">
Note that restoring previous data might also restore the previous version of the software. For example, you might
be currently using Version 5 of the app. If you restore to a backup that was made with Version 3 of the app, then the restore
operation will install Version 3 of the app. This is because the latest version may not be able to handle old data.
## Uninstall
You can uninstall an app by clicking the `Uninstall` button.
<img src="/docs/img/app_uninstall_button.png" class="shadow">
Note that all data associated with the app will be immediately removed from the Cloudron. App data might still
persist in your old backups and the [CLI tool](https://git.cloudron.io/cloudron/cloudron-cli) provides a way to
restore from those old backups should it be required.
## Embedding Apps
It is possible to embed Cloudron apps into other websites. By default, this is disabled to prevent
[Clickjacking](https://cloudron.io/blog/2016-07-15-site-embedding.html).
You can set a website that is allowed to embed your Cloudron app using the app's [Configure dialog](#configuration).
Click on 'Show Advanced Settings...' and enter the embedder website name.
# Custom domain
When you create a Cloudron from cloudron.io, we provide a subdomain under `cloudron.me` like `girish.cloudron.me`.
Apps are available under that subdomain using a hyphenated name like `blog-girish.cloudron.me`.
Domain names are a thing of pride and the Cloudron makes it easy to make your apps accessible from memorable locations like `blog.girish.in`.
## Single app on a custom domain
This approach is applicable if you desire that only a single app be accessing from a custom
domain. For this, open the app's configure dialog and choose `External Domain` in the location dropdown.
This dialog will suggest you to add a `CNAME` record. Once you setup a CNAME record with your DNS provider,
the app will be accessible from that external domain.
## Entire Cloudron on a custom domain
This approach is applicable if you want all your apps to be accessible from subdomains of your custom domain.
For example, `blog.girish.in`, `notes.girish.in`, `owncloud.girish.in`, `mail.girish.in` and so on. This
Moving to a custom domain will retain all your apps and data and will take around 15 minutes. If you require assistance with another provider,
<a href="mailto:support@cloudron.io">just let us know</a>.
# User management
## Users
You can invite new users (friends, family, colleagues) with their email address from the `Users` menu. They will
receive an invite to sign up with your Cloudron. They can now access the apps that you have given them access
To remove a user, simply remove them from the list. Note that the removed user cannot access any app anymore.
## Administrators
A Cloudron administrator is a special right given to an existing Cloudron user allowing them to manage
apps and users. To make an existing user an administator, click the edit (pencil) button corresponding to
<img src="/docs/img/administrator.png" class="shadow">
## Groups
Groups provide a convenient way to group users. Their purpose is two-fold:
* You can assign one or more groups to apps to restrict who can access an app.
* Each group is a mailing list (forwarding address) consisting of its members.
You can create a group by using the `Groups` menu item.
<img src="/docs/img/groups.png" class="shadow">
<img src="/docs/img/app_access_control.png" class="shadow">
You can now send mails to `groupname@<domain>` to address all the group members.
# Login
## Cloudron admin
The Cloudron admin page is always located at the `my` subdomain of your Cloudron domain. For custom domains,
this will be like `my.girish.in`. For domains from cloudron.io, this will be like `my-girish.cloudron.me`.
## Apps (single sign-on)
An important feature of the Cloudron is Single Sign-On. You use the same username & password for logging in
to all your apps. No more having to manage separate set of credentials for each service!
## Single user apps
Some apps only work with a single user. For example, a notes app might allow only a single user to login and add
notes. For such apps, you will be prompted during installation to select the single user who can access the app.
If you want multiple users to use the app independently, simply install the app multiple times to different locations.
# Email
The Cloudron has a built-in email server. The primary email address is the same as the username. Emails can be sent
and received from `<username>@<domain>`. The Cloudron does not allow masquerading - one user cannot send email
pretending to be another user.
## Enabling Email
By default, Cloudron's email server only allows apps to send email. To enable users to send and receive email,
turn on the option under `Settings`. Turning on this option also allows apps to _receive_ email.
Once email is enabled, the Cloudron will keep the `MX` DNS record updated.
<img src="/docs/img/enable_email.png" class="shadow">
## Receiving email using IMAP
Use the following settings to receive email.
* Connection Security - TLS
* Username/password - Same as your Cloudron credentials
## Sending email using SMTP
Use the following settings to send email.
* Connection Security - STARTTLS
* Username/password - Same as your Cloudron credentials
## Email filters using Sieve
Use the following settings to set up email filtering via ManageSieve.
The [Rainloop](https://cloudron.io/appstore.html?app=net.rainloop.cloudronapp) and [Roundcube](https://cloudron.io/appstore.html?app=net.roundcube.cloudronapp)
apps are already pre-configured to use the above settings.
## Aliases
You can configure one or more aliases alongside the primary email address of each user. You can set aliases by editing the
user's settings, available behind the edit button in the user listing. Note that aliases cannot conflict with existing user names.
Currently, it is not possible to log in using an alias for the SMTP/IMAP/Sieve services. Instead, add the alias as an identity in
your mail client but log in using your Cloudron credentials.
### Subaddresses
## Subaddresses
Emails addressed to `<username>+tag@<domain>` will be delivered to the `username` mailbox. You can use this feature to hand out addresses of the form
`username+kayak@<domain>`, `username+aws@<domain>` and so on and have them all delivered to your mailbox.
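The tag handling can be pictured with a small sketch. The helper below is hypothetical (our own names, not Cloudron's actual code): everything after `+` in the local part is a tag, and delivery targets the mailbox before the `+`.

```javascript
// Hypothetical helper showing how a tagged address maps back to its mailbox.
function parseRecipient(address) {
    var parts = address.split('@');
    var localPart = parts[0], domain = parts[1];
    var plus = localPart.indexOf('+');
    if (plus === -1) return { mailbox: localPart, tag: null, domain: domain };
    return { mailbox: localPart.slice(0, plus), tag: localPart.slice(plus + 1), domain: domain };
}

// Both of these deliver to the "alice" mailbox:
parseRecipient('alice+kayak@example.com'); // { mailbox: 'alice', tag: 'kayak', domain: 'example.com' }
parseRecipient('alice+aws@example.com');   // { mailbox: 'alice', tag: 'aws', domain: 'example.com' }
```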
## Graphs
## Forwarding addresses
Each group on the Cloudron is also a forwarding address. Mail can be addressed to `group@<domain>` and it will
be delivered to each user who is part of the group.
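The fan-out can be sketched as a simple lookup (names and data structures here are illustrative, not Cloudron's implementation):

```javascript
// Illustrative group fan-out: mail to a group address is delivered to every
// member's mailbox; a non-group local part is treated as a regular mailbox.
var groups = { developers: ['alice', 'bob'] }; // hypothetical group listing

function resolveRecipients(localPart) {
    return groups[localPart] || [localPart];
}

resolveRecipients('developers'); // ['alice', 'bob']
resolveRecipients('alice');      // ['alice']
```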
## Marking Spam
The spam detection agent on the Cloudron requires training to identify spam. To do this, simply move your junk mails
to a pre-created folder named `Spam`. Most mail clients have a Junk or Spam button which does this automatically.
# Graphs
The Graphs view shows an overview of the disk and memory usage on your Cloudron.
@@ -298,32 +323,32 @@ on the graph to see the memory consumption over time in the chart below it.
The `System` Memory graph shows the overall memory consumption on the entire Cloudron. If free memory
frequently drops below 50MB, you should consider upgrading to a Cloudron with more memory.
## Activity log
# Activity log
The `Activity` view shows the activity on your Cloudron. It includes information about who is using
the apps on your Cloudron and also tracks configuration changes.
<img src="/docs/img/activity.png" class="shadow">
## Domains and SSL Certificates
# Domains and SSL Certificates
All apps on the Cloudron can only be reached over `https`. The Cloudron automatically installs and
renews certificates for your apps as needed. Should installation of a certificate fail for reasons
beyond its control, Cloudron admins will get a notification about it.
## API Access
# API Access
All the operations listed in this manual, such as installing apps and configuring users and groups, are
completely programmable with a [REST API](/references/api.html).
## Moving to a larger Cloudron
# Moving to a larger Cloudron
When using a Cloudron from cloudron.io, it is easy to migrate your apps and data to a bigger server.
In the `Settings` page, you can change the plan.
<insert picture>
## Command line tool
# Command line tool
If you are a software developer or a sysadmin, the Cloudron comes with a CLI tool that can be
used to develop custom apps for the Cloudron. Read more about it [here](https://git.cloudron.io/cloudron/cloudron-cli).
@@ -83,7 +83,7 @@ FROM cloudron/base:0.9.0
ADD server.js /app/code/server.js
CMD [ "/usr/local/node-4.2.1/bin/node", "/app/code/server.js" ]
CMD [ "/usr/local/node-4.4.7/bin/node", "/app/code/server.js" ]
```
The `FROM` command specifies that we want to start off with Cloudron's [base image](/references/baseimage.html).
@@ -94,12 +94,12 @@ The `ADD` command copies the source code of the app into the directory `/app/cod
about the `/app/code` directory and it is merely a convention we use to store the application code.
The `CMD` command specifies how to run the server. The base image already contains many different versions of
node.js. We use Node 4.2.1 here.
node.js. We use Node 4.4.7 here.
This Dockerfile can be built and run locally as:
```
docker build -t tutorial .
docker run -p 8000:8000 -ti tutorial
docker run -p 8000:8000 -t tutorial
```
## Manifest
@@ -271,14 +271,18 @@ You can also execute arbitrary commands:
$ cloudron exec env # display the env variables that your app is running with
```
### DevelopmentMode
### Debugging
When debugging complex startup scripts, one can specify `"developmentMode": true,` in the CloudronManifest.json.
This will ignore the `RUN` command, specified in the Dockerfile and allows the developer to interactively test
the startup scripts using `cloudron exec`.
An app can be placed in `debug` mode by passing `--debug` to `cloudron install` or `cloudron configure`.
Doing so runs the app with a writable rootfs and unlimited memory. By default, this will also ignore
the `RUN` command specified in the Dockerfile. The developer can then interactively test the app and
startup scripts using `cloudron exec`.
**Note:** An app running in this mode has full read/write access to the filesystem and all memory limits are lifted.
This mode can be used to identify the files being modified by your application - often required to
debug situations where your app does not run on a readonly rootfs. Run your app using `cloudron exec`
and use `find / -mmin -30` to find files that have been changed or created in the last 30 minutes.
You can turn off debugging mode using `cloudron configure --no-debug`.
# Addons
@@ -331,7 +335,7 @@ See https://git.cloudron.io/cloudron/tutorial-ldap for a simple example of how t
Apps that are single user can skip Single Sign-On support by setting `"singleUser": true`
in the manifest. By doing so, the Cloudron installer will show a dialog to choose a user.
For apps that have no user management at all, the Cloudron implements an `OAuth proxy` that
optionally lets the Cloudron admin make the app visible only for logged in users.
# Best practices
@@ -408,11 +412,20 @@ the `start.sh` script does the following:
The app's main process must handle SIGTERM and forward it as required to child processes. bash does not
automatically forward signals to child processes. For this reason, when using a startup shell script,
remember to use `exec <app>` as the last line. Doing so will replace bash with your program and allows
your program to handle signals as required.
# Beta Testing
## Metadata
Publishing to the Cloudron Store requires apps to have metadata specified in the `CloudronManifest.json`.
The `cloudron` tool will warn about any missing information prior to uploading.
See more information for each field [here](/references/manifest.html).
## Upload for Testing
Once your app is ready, you can upload it to the store for `beta testing` by
other Cloudron users. This can be done using:
@@ -420,9 +433,8 @@ other Cloudron users. This can be done using:
cloudron upload
```
The app should now be visible in the Store view of your cloudron under
the 'Testing' section. You can check if the icon, description and other details
appear correctly.
You should now be able to visit `/#/appstore/<appid>?version=<appversion>` on your
Cloudron to check if the icon, description and other details appear correctly.
Other Cloudron users can install your app on their Cloudrons using
`cloudron install --appstore-id <appid@version>`.
@@ -442,7 +454,7 @@ The cloudron.io team will review the app and publish the app to the store.
## Versioning
To create an update for an app, simply bump up the [semver version](/references/manifest.html#version) field in
the manifest and publish a new version to the store.
The Cloudron chooses the next app version to update to based on the following algorithm:
* Choose the maximum `patch` version matching the app's current `major` and `minor` version.
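Only the first rule is visible here, but it can be sketched as follows. This is a simplified illustration under our own assumptions, not the actual implementation, and a real implementation would use a proper semver library rather than naive string splitting:

```javascript
// Sketch of the patch-selection rule: among published versions, pick the
// highest patch that shares the app's current major.minor and is newer.
function nextPatchVersion(current, available) {
    var c = current.split('.').map(Number);
    var best = null;
    available.forEach(function (v) {
        var p = v.split('.').map(Number);
        if (p[0] === c[0] && p[1] === c[1] && p[2] > c[2] && (best === null || p[2] > best)) best = p[2];
    });
    return best === null ? null : c[0] + '.' + c[1] + '.' + best;
}

nextPatchVersion('1.2.0', ['1.2.1', '1.2.5', '1.3.0', '2.0.0']); // '1.2.5'
```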
@@ -461,7 +473,7 @@ The Cloudron admins get notified by email for any major or minor app releases.
## Failed updates
The Cloudron always makes a backup of the app before making an update. Should the
update fail, the user can restore to the backup (which will also restore the app's
code to the previous version).
# Cloudron Button
@@ -40,7 +40,16 @@ gulp.task('3rdparty', function () {
// JavaScript
// --------------
gulp.task('js', ['js-index', 'js-setup', 'js-update'], function () {});
if (argv.help || argv.h) {
console.log('Supported arguments for "gulp develop":');
console.log(' --client-id <clientId>');
console.log(' --client-secret <clientSecret>');
console.log(' --api-origin <cloudron api uri>');
process.exit(1);
}
gulp.task('js', ['js-index', 'js-setup', 'js-setupdns', 'js-update'], function () {});
var oauth = {
clientId: argv.clientId || 'cid-webadmin',
@@ -55,7 +64,14 @@ console.log(' ClientSecret: %s', oauth.clientSecret);
console.log(' Cloudron API: %s', oauth.apiOrigin || 'default');
console.log();
gulp.task('js-index', function () {
// needs special treatment for error handling
var uglifyer = uglify();
uglifyer.on('error', function (error) {
console.error(error);
});
gulp.src([
'webadmin/src/js/index.js',
'webadmin/src/js/client.js',
@@ -66,25 +82,53 @@ gulp.task('js-index', function () {
.pipe(ejs({ oauth: oauth }, { ext: '.js' }))
.pipe(sourcemaps.init())
.pipe(concat('index.js', { newLine: ';' }))
.pipe(uglify())
.pipe(uglifyer)
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-setup', function () {
// needs special treatment for error handling
var uglifyer = uglify();
uglifyer.on('error', function (error) {
console.error(error);
});
gulp.src(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'])
.pipe(ejs({ oauth: oauth }, { ext: '.js' }))
.pipe(sourcemaps.init())
.pipe(concat('setup.js', { newLine: ';' }))
.pipe(uglify())
.pipe(uglifyer)
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-setupdns', function () {
// needs special treatment for error handling
var uglifyer = uglify();
uglifyer.on('error', function (error) {
console.error(error);
});
gulp.src(['webadmin/src/js/setupdns.js', 'webadmin/src/js/client.js'])
.pipe(ejs({ oauth: oauth }, { ext: '.js' }))
.pipe(sourcemaps.init())
.pipe(concat('setupdns.js', { newLine: ';' }))
.pipe(uglifyer)
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-update', function () {
// needs special treatment for error handling
var uglifyer = uglify();
uglifyer.on('error', function (error) {
console.error(error);
});
gulp.src(['webadmin/src/js/update.js'])
.pipe(sourcemaps.init())
.pipe(uglify())
.pipe(uglifyer)
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'))
.pipe(gulp.dest('setup/splash/website/js'));
@@ -143,6 +187,7 @@ gulp.task('watch', ['default'], function () {
gulp.watch(['webadmin/src/templates/*.html'], ['html-templates']);
gulp.watch(['webadmin/src/js/update.js'], ['js-update']);
gulp.watch(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'], ['js-setup']);
gulp.watch(['webadmin/src/js/setupdns.js', 'webadmin/src/js/client.js'], ['js-setupdns']);
gulp.watch(['webadmin/src/js/index.js', 'webadmin/src/js/client.js', 'webadmin/src/js/appstore.js', 'webadmin/src/js/main.js', 'webadmin/src/views/*.js'], ['js-index']);
gulp.watch(['webadmin/src/3rdparty/**/*'], ['3rdparty']);
});
@@ -1,892 +0,0 @@
{
"name": "installer",
"version": "0.0.1",
"dependencies": {
"async": {
"version": "1.5.0",
"from": "https://registry.npmjs.org/async/-/async-1.5.0.tgz",
"resolved": "https://registry.npmjs.org/async/-/async-1.5.0.tgz"
},
"body-parser": {
"version": "1.14.1",
"from": "https://registry.npmjs.org/body-parser/-/body-parser-1.14.1.tgz",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.14.1.tgz",
"dependencies": {
"bytes": {
"version": "2.1.0",
"from": "https://registry.npmjs.org/bytes/-/bytes-2.1.0.tgz",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-2.1.0.tgz"
},
"content-type": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz"
},
"depd": {
"version": "1.1.0",
"from": "https://registry.npmjs.org/depd/-/depd-1.1.0.tgz",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.0.tgz"
},
"http-errors": {
"version": "1.3.1",
"from": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"dependencies": {
"inherits": {
"version": "2.0.1",
"from": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
},
"statuses": {
"version": "1.2.1",
"from": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz"
}
}
},
"iconv-lite": {
"version": "0.4.12",
"from": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.12.tgz",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.12.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"qs": {
"version": "5.1.0",
"from": "https://registry.npmjs.org/qs/-/qs-5.1.0.tgz",
"resolved": "https://registry.npmjs.org/qs/-/qs-5.1.0.tgz"
},
"raw-body": {
"version": "2.1.4",
"from": "https://registry.npmjs.org/raw-body/-/raw-body-2.1.4.tgz",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.1.4.tgz",
"dependencies": {
"unpipe": {
"version": "1.0.0",
"from": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz"
}
}
},
"type-is": {
"version": "1.6.9",
"from": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"dependencies": {
"media-typer": {
"version": "0.3.0",
"from": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz"
},
"mime-types": {
"version": "2.1.7",
"from": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
}
}
}
}
},
"connect-lastmile": {
"version": "0.0.13",
"from": "https://registry.npmjs.org/connect-lastmile/-/connect-lastmile-0.0.13.tgz",
"resolved": "https://registry.npmjs.org/connect-lastmile/-/connect-lastmile-0.0.13.tgz",
"dependencies": {
"debug": {
"version": "2.1.3",
"from": "https://registry.npmjs.org/debug/-/debug-2.1.3.tgz",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.1.3.tgz",
"dependencies": {
"ms": {
"version": "0.7.0",
"from": "http://registry.npmjs.org/ms/-/ms-0.7.0.tgz",
"resolved": "http://registry.npmjs.org/ms/-/ms-0.7.0.tgz"
}
}
}
}
},
"debug": {
"version": "2.2.0",
"from": "https://registry.npmjs.org/debug/-/debug-2.2.0.tgz",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.2.0.tgz",
"dependencies": {
"ms": {
"version": "0.7.1",
"from": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz",
"resolved": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz"
}
}
},
"express": {
"version": "4.13.3",
"from": "https://registry.npmjs.org/express/-/express-4.13.3.tgz",
"resolved": "https://registry.npmjs.org/express/-/express-4.13.3.tgz",
"dependencies": {
"accepts": {
"version": "1.2.13",
"from": "https://registry.npmjs.org/accepts/-/accepts-1.2.13.tgz",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.2.13.tgz",
"dependencies": {
"mime-types": {
"version": "2.1.7",
"from": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
},
"negotiator": {
"version": "0.5.3",
"from": "https://registry.npmjs.org/negotiator/-/negotiator-0.5.3.tgz",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.5.3.tgz"
}
}
},
"array-flatten": {
"version": "1.1.1",
"from": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz"
},
"content-disposition": {
"version": "0.5.0",
"from": "http://registry.npmjs.org/content-disposition/-/content-disposition-0.5.0.tgz",
"resolved": "http://registry.npmjs.org/content-disposition/-/content-disposition-0.5.0.tgz"
},
"content-type": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.1.tgz"
},
"cookie": {
"version": "0.1.3",
"from": "https://registry.npmjs.org/cookie/-/cookie-0.1.3.tgz",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.1.3.tgz"
},
"cookie-signature": {
"version": "1.0.6",
"from": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz"
},
"depd": {
"version": "1.0.1",
"from": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz",
"resolved": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz"
},
"escape-html": {
"version": "1.0.2",
"from": "http://registry.npmjs.org/escape-html/-/escape-html-1.0.2.tgz",
"resolved": "http://registry.npmjs.org/escape-html/-/escape-html-1.0.2.tgz"
},
"etag": {
"version": "1.7.0",
"from": "https://registry.npmjs.org/etag/-/etag-1.7.0.tgz",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.7.0.tgz"
},
"finalhandler": {
"version": "0.4.0",
"from": "http://registry.npmjs.org/finalhandler/-/finalhandler-0.4.0.tgz",
"resolved": "http://registry.npmjs.org/finalhandler/-/finalhandler-0.4.0.tgz",
"dependencies": {
"unpipe": {
"version": "1.0.0",
"from": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz"
}
}
},
"fresh": {
"version": "0.3.0",
"from": "https://registry.npmjs.org/fresh/-/fresh-0.3.0.tgz",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.3.0.tgz"
},
"merge-descriptors": {
"version": "1.0.0",
"from": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.0.tgz",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.0.tgz"
},
"methods": {
"version": "1.1.1",
"from": "https://registry.npmjs.org/methods/-/methods-1.1.1.tgz",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.1.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"parseurl": {
"version": "1.3.0",
"from": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.0.tgz",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.0.tgz"
},
"path-to-regexp": {
"version": "0.1.7",
"from": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz"
},
"proxy-addr": {
"version": "1.0.8",
"from": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-1.0.8.tgz",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-1.0.8.tgz",
"dependencies": {
"forwarded": {
"version": "0.1.0",
"from": "http://registry.npmjs.org/forwarded/-/forwarded-0.1.0.tgz",
"resolved": "http://registry.npmjs.org/forwarded/-/forwarded-0.1.0.tgz"
},
"ipaddr.js": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.0.1.tgz"
}
}
},
"qs": {
"version": "4.0.0",
"from": "https://registry.npmjs.org/qs/-/qs-4.0.0.tgz",
"resolved": "https://registry.npmjs.org/qs/-/qs-4.0.0.tgz"
},
"range-parser": {
"version": "1.0.3",
"from": "https://registry.npmjs.org/range-parser/-/range-parser-1.0.3.tgz",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.0.3.tgz"
},
"send": {
"version": "0.13.0",
"from": "http://registry.npmjs.org/send/-/send-0.13.0.tgz",
"resolved": "http://registry.npmjs.org/send/-/send-0.13.0.tgz",
"dependencies": {
"destroy": {
"version": "1.0.3",
"from": "http://registry.npmjs.org/destroy/-/destroy-1.0.3.tgz",
"resolved": "http://registry.npmjs.org/destroy/-/destroy-1.0.3.tgz"
},
"http-errors": {
"version": "1.3.1",
"from": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.3.1.tgz",
"dependencies": {
"inherits": {
"version": "2.0.1",
"from": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
}
}
},
"mime": {
"version": "1.3.4",
"from": "https://registry.npmjs.org/mime/-/mime-1.3.4.tgz",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.3.4.tgz"
},
"ms": {
"version": "0.7.1",
"from": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz",
"resolved": "https://registry.npmjs.org/ms/-/ms-0.7.1.tgz"
},
"statuses": {
"version": "1.2.1",
"from": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.2.1.tgz"
}
}
},
"serve-static": {
"version": "1.10.0",
"from": "http://registry.npmjs.org/serve-static/-/serve-static-1.10.0.tgz",
"resolved": "http://registry.npmjs.org/serve-static/-/serve-static-1.10.0.tgz"
},
"type-is": {
"version": "1.6.9",
"from": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.9.tgz",
"dependencies": {
"media-typer": {
"version": "0.3.0",
"from": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz"
},
"mime-types": {
"version": "2.1.7",
"from": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.7.tgz",
"dependencies": {
"mime-db": {
"version": "1.19.0",
"from": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.19.0.tgz"
}
}
}
}
},
"utils-merge": {
"version": "1.0.0",
"from": "http://registry.npmjs.org/utils-merge/-/utils-merge-1.0.0.tgz",
"resolved": "http://registry.npmjs.org/utils-merge/-/utils-merge-1.0.0.tgz"
},
"vary": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/vary/-/vary-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.0.1.tgz"
}
}
},
"json": {
"version": "9.0.3",
"from": "https://registry.npmjs.org/json/-/json-9.0.3.tgz",
"resolved": "https://registry.npmjs.org/json/-/json-9.0.3.tgz"
},
"morgan": {
"version": "1.6.1",
"from": "https://registry.npmjs.org/morgan/-/morgan-1.6.1.tgz",
"resolved": "https://registry.npmjs.org/morgan/-/morgan-1.6.1.tgz",
"dependencies": {
"basic-auth": {
"version": "1.0.3",
"from": "https://registry.npmjs.org/basic-auth/-/basic-auth-1.0.3.tgz",
"resolved": "https://registry.npmjs.org/basic-auth/-/basic-auth-1.0.3.tgz"
},
"depd": {
"version": "1.0.1",
"from": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz",
"resolved": "http://registry.npmjs.org/depd/-/depd-1.0.1.tgz"
},
"on-finished": {
"version": "2.3.0",
"from": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"dependencies": {
"ee-first": {
"version": "1.1.1",
"from": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz"
}
}
},
"on-headers": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.1.tgz"
}
}
},
"proxy-middleware": {
"version": "0.15.0",
"from": "https://registry.npmjs.org/proxy-middleware/-/proxy-middleware-0.15.0.tgz",
"resolved": "https://registry.npmjs.org/proxy-middleware/-/proxy-middleware-0.15.0.tgz"
},
"request": {
"version": "2.72.0",
"from": "request@*",
"resolved": "https://registry.npmjs.org/request/-/request-2.72.0.tgz",
"dependencies": {
"aws-sign2": {
"version": "0.6.0",
"from": "aws-sign2@>=0.6.0 <0.7.0",
"resolved": "https://registry.npmjs.org/aws-sign2/-/aws-sign2-0.6.0.tgz"
},
"aws4": {
"version": "1.4.1",
"from": "aws4@>=1.2.1 <2.0.0",
"resolved": "https://registry.npmjs.org/aws4/-/aws4-1.4.1.tgz"
},
"bl": {
"version": "1.1.2",
"from": "bl@>=1.1.2 <1.2.0",
"resolved": "https://registry.npmjs.org/bl/-/bl-1.1.2.tgz",
"dependencies": {
"readable-stream": {
"version": "2.0.6",
"from": "readable-stream@>=2.0.5 <2.1.0",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.0.6.tgz",
"dependencies": {
"core-util-is": {
"version": "1.0.2",
"from": "core-util-is@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz"
},
"inherits": {
"version": "2.0.1",
"from": "inherits@>=2.0.1 <2.1.0",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
},
"isarray": {
"version": "1.0.0",
"from": "isarray@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz"
},
"process-nextick-args": {
"version": "1.0.7",
"from": "process-nextick-args@>=1.0.6 <1.1.0",
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-1.0.7.tgz"
},
"string_decoder": {
"version": "0.10.31",
"from": "string_decoder@>=0.10.0 <0.11.0",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz"
},
"util-deprecate": {
"version": "1.0.2",
"from": "util-deprecate@>=1.0.1 <1.1.0",
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz"
}
}
}
}
},
"caseless": {
"version": "0.11.0",
"from": "caseless@>=0.11.0 <0.12.0",
"resolved": "https://registry.npmjs.org/caseless/-/caseless-0.11.0.tgz"
},
"combined-stream": {
"version": "1.0.5",
"from": "combined-stream@>=1.0.5 <1.1.0",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.5.tgz",
"dependencies": {
"delayed-stream": {
"version": "1.0.0",
"from": "delayed-stream@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz"
}
}
},
"extend": {
"version": "3.0.0",
"from": "extend@>=3.0.0 <3.1.0",
"resolved": "https://registry.npmjs.org/extend/-/extend-3.0.0.tgz"
},
"forever-agent": {
"version": "0.6.1",
"from": "forever-agent@>=0.6.1 <0.7.0",
"resolved": "https://registry.npmjs.org/forever-agent/-/forever-agent-0.6.1.tgz"
},
"form-data": {
"version": "1.0.0-rc4",
"from": "form-data@>=1.0.0-rc3 <1.1.0",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-1.0.0-rc4.tgz",
"dependencies": {
"async": {
"version": "1.5.2",
"from": "async@>=1.5.2 <2.0.0",
"resolved": "https://registry.npmjs.org/async/-/async-1.5.2.tgz"
}
}
},
"har-validator": {
"version": "2.0.6",
"from": "har-validator@>=2.0.6 <2.1.0",
"resolved": "https://registry.npmjs.org/har-validator/-/har-validator-2.0.6.tgz",
"dependencies": {
"chalk": {
"version": "1.1.3",
"from": "chalk@>=1.1.1 <2.0.0",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
"dependencies": {
"ansi-styles": {
"version": "2.2.1",
"from": "ansi-styles@>=2.2.1 <3.0.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz"
},
"escape-string-regexp": {
"version": "1.0.5",
"from": "escape-string-regexp@>=1.0.2 <2.0.0",
"resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz"
},
"has-ansi": {
"version": "2.0.0",
"from": "has-ansi@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/has-ansi/-/has-ansi-2.0.0.tgz",
"dependencies": {
"ansi-regex": {
"version": "2.0.0",
"from": "ansi-regex@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.0.0.tgz"
}
}
},
"strip-ansi": {
"version": "3.0.1",
"from": "strip-ansi@>=3.0.0 <4.0.0",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz",
"dependencies": {
"ansi-regex": {
"version": "2.0.0",
"from": "ansi-regex@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.0.0.tgz"
}
}
},
"supports-color": {
"version": "2.0.0",
"from": "supports-color@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz"
}
}
},
"commander": {
"version": "2.9.0",
"from": "commander@>=2.9.0 <3.0.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.9.0.tgz",
"dependencies": {
"graceful-readlink": {
"version": "1.0.1",
"from": "graceful-readlink@>=1.0.0",
"resolved": "https://registry.npmjs.org/graceful-readlink/-/graceful-readlink-1.0.1.tgz"
}
}
},
"is-my-json-valid": {
"version": "2.13.1",
"from": "is-my-json-valid@>=2.12.4 <3.0.0",
"resolved": "https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.13.1.tgz",
"dependencies": {
"generate-function": {
"version": "2.0.0",
"from": "generate-function@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/generate-function/-/generate-function-2.0.0.tgz"
},
"generate-object-property": {
"version": "1.2.0",
"from": "generate-object-property@>=1.1.0 <2.0.0",
"resolved": "https://registry.npmjs.org/generate-object-property/-/generate-object-property-1.2.0.tgz",
"dependencies": {
"is-property": {
"version": "1.0.2",
"from": "is-property@>=1.0.0 <2.0.0",
"resolved": "https://registry.npmjs.org/is-property/-/is-property-1.0.2.tgz"
}
}
},
"jsonpointer": {
"version": "2.0.0",
"from": "jsonpointer@2.0.0",
"resolved": "https://registry.npmjs.org/jsonpointer/-/jsonpointer-2.0.0.tgz"
},
"xtend": {
"version": "4.0.1",
"from": "xtend@>=4.0.0 <5.0.0",
"resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.1.tgz"
}
}
},
"pinkie-promise": {
"version": "2.0.1",
"from": "pinkie-promise@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/pinkie-promise/-/pinkie-promise-2.0.1.tgz",
"dependencies": {
"pinkie": {
"version": "2.0.4",
"from": "pinkie@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/pinkie/-/pinkie-2.0.4.tgz"
}
}
}
}
},
"hawk": {
"version": "3.1.3",
"from": "hawk@>=3.1.3 <3.2.0",
"resolved": "https://registry.npmjs.org/hawk/-/hawk-3.1.3.tgz",
"dependencies": {
"hoek": {
"version": "2.16.3",
"from": "hoek@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/hoek/-/hoek-2.16.3.tgz"
},
"boom": {
"version": "2.10.1",
"from": "boom@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/boom/-/boom-2.10.1.tgz"
},
"cryptiles": {
"version": "2.0.5",
"from": "cryptiles@>=2.0.0 <3.0.0",
"resolved": "https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz"
},
"sntp": {
"version": "1.0.9",
"from": "sntp@>=1.0.0 <2.0.0",
"resolved": "https://registry.npmjs.org/sntp/-/sntp-1.0.9.tgz"
}
}
},
"http-signature": {
"version": "1.1.1",
"from": "http-signature@>=1.1.0 <1.2.0",
"resolved": "https://registry.npmjs.org/http-signature/-/http-signature-1.1.1.tgz",
"dependencies": {
"assert-plus": {
"version": "0.2.0",
"from": "assert-plus@>=0.2.0 <0.3.0",
"resolved": "https://registry.npmjs.org/assert-plus/-/assert-plus-0.2.0.tgz"
},
"jsprim": {
"version": "1.2.2",
"from": "jsprim@>=1.2.2 <2.0.0",
"resolved": "https://registry.npmjs.org/jsprim/-/jsprim-1.2.2.tgz",
"dependencies": {
"extsprintf": {
"version": "1.0.2",
"from": "extsprintf@1.0.2",
"resolved": "https://registry.npmjs.org/extsprintf/-/extsprintf-1.0.2.tgz"
},
"json-schema": {
"version": "0.2.2",
"from": "json-schema@0.2.2",
"resolved": "https://registry.npmjs.org/json-schema/-/json-schema-0.2.2.tgz"
},
"verror": {
"version": "1.3.6",
"from": "verror@1.3.6",
"resolved": "https://registry.npmjs.org/verror/-/verror-1.3.6.tgz"
}
}
},
"sshpk": {
"version": "1.8.3",
"from": "sshpk@>=1.7.0 <2.0.0",
"resolved": "https://registry.npmjs.org/sshpk/-/sshpk-1.8.3.tgz",
"dependencies": {
"asn1": {
"version": "0.2.3",
"from": "asn1@>=0.2.3 <0.3.0",
"resolved": "https://registry.npmjs.org/asn1/-/asn1-0.2.3.tgz"
},
"assert-plus": {
"version": "1.0.0",
"from": "assert-plus@>=1.0.0 <2.0.0",
"resolved": "https://registry.npmjs.org/assert-plus/-/assert-plus-1.0.0.tgz"
},
"dashdash": {
"version": "1.14.0",
"from": "dashdash@>=1.12.0 <2.0.0",
"resolved": "https://registry.npmjs.org/dashdash/-/dashdash-1.14.0.tgz"
},
"getpass": {
"version": "0.1.6",
"from": "getpass@>=0.1.1 <0.2.0",
"resolved": "https://registry.npmjs.org/getpass/-/getpass-0.1.6.tgz"
},
"jsbn": {
"version": "0.1.0",
"from": "jsbn@>=0.1.0 <0.2.0",
"resolved": "https://registry.npmjs.org/jsbn/-/jsbn-0.1.0.tgz"
},
"tweetnacl": {
"version": "0.13.3",
"from": "tweetnacl@>=0.13.0 <0.14.0",
"resolved": "https://registry.npmjs.org/tweetnacl/-/tweetnacl-0.13.3.tgz"
},
"jodid25519": {
"version": "1.0.2",
"from": "jodid25519@>=1.0.0 <2.0.0",
"resolved": "https://registry.npmjs.org/jodid25519/-/jodid25519-1.0.2.tgz"
},
"ecc-jsbn": {
"version": "0.1.1",
"from": "ecc-jsbn@>=0.1.1 <0.2.0",
"resolved": "https://registry.npmjs.org/ecc-jsbn/-/ecc-jsbn-0.1.1.tgz"
}
}
}
}
},
"is-typedarray": {
"version": "1.0.0",
"from": "is-typedarray@>=1.0.0 <1.1.0",
"resolved": "https://registry.npmjs.org/is-typedarray/-/is-typedarray-1.0.0.tgz"
},
"isstream": {
"version": "0.1.2",
"from": "isstream@>=0.1.2 <0.2.0",
"resolved": "https://registry.npmjs.org/isstream/-/isstream-0.1.2.tgz"
},
"json-stringify-safe": {
"version": "5.0.1",
"from": "json-stringify-safe@>=5.0.1 <5.1.0",
"resolved": "https://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-5.0.1.tgz"
},
"mime-types": {
"version": "2.1.11",
"from": "mime-types@>=2.1.7 <2.2.0",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.11.tgz",
"dependencies": {
"mime-db": {
"version": "1.23.0",
"from": "mime-db@>=1.23.0 <1.24.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.23.0.tgz"
}
}
},
"node-uuid": {
"version": "1.4.7",
"from": "node-uuid@>=1.4.7 <1.5.0",
"resolved": "https://registry.npmjs.org/node-uuid/-/node-uuid-1.4.7.tgz"
},
"oauth-sign": {
"version": "0.8.2",
"from": "oauth-sign@>=0.8.1 <0.9.0",
"resolved": "https://registry.npmjs.org/oauth-sign/-/oauth-sign-0.8.2.tgz"
},
"qs": {
"version": "6.1.0",
"from": "qs@>=6.1.0 <6.2.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.1.0.tgz"
},
"stringstream": {
"version": "0.0.5",
"from": "stringstream@>=0.0.4 <0.1.0",
"resolved": "https://registry.npmjs.org/stringstream/-/stringstream-0.0.5.tgz"
},
"tough-cookie": {
"version": "2.2.2",
"from": "tough-cookie@>=2.2.0 <2.3.0",
"resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.2.tgz"
},
"tunnel-agent": {
"version": "0.4.3",
"from": "tunnel-agent@>=0.4.1 <0.5.0",
"resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.3.tgz"
}
}
},
"safetydance": {
"version": "0.0.19",
"from": "https://registry.npmjs.org/safetydance/-/safetydance-0.0.19.tgz",
"resolved": "https://registry.npmjs.org/safetydance/-/safetydance-0.0.19.tgz"
},
"semver": {
"version": "5.1.0",
"from": "https://registry.npmjs.org/semver/-/semver-5.1.0.tgz",
"resolved": "https://registry.npmjs.org/semver/-/semver-5.1.0.tgz"
},
"superagent": {
"version": "0.21.0",
"from": "https://registry.npmjs.org/superagent/-/superagent-0.21.0.tgz",
"resolved": "https://registry.npmjs.org/superagent/-/superagent-0.21.0.tgz",
"dependencies": {
"qs": {
"version": "1.2.0",
"from": "https://registry.npmjs.org/qs/-/qs-1.2.0.tgz",
"resolved": "https://registry.npmjs.org/qs/-/qs-1.2.0.tgz"
},
"formidable": {
"version": "1.0.14",
"from": "https://registry.npmjs.org/formidable/-/formidable-1.0.14.tgz",
"resolved": "https://registry.npmjs.org/formidable/-/formidable-1.0.14.tgz"
},
"mime": {
"version": "1.2.11",
"from": "https://registry.npmjs.org/mime/-/mime-1.2.11.tgz",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.2.11.tgz"
},
"component-emitter": {
"version": "1.1.2",
"from": "http://registry.npmjs.org/component-emitter/-/component-emitter-1.1.2.tgz",
"resolved": "http://registry.npmjs.org/component-emitter/-/component-emitter-1.1.2.tgz"
},
"methods": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/methods/-/methods-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.0.1.tgz"
},
"cookiejar": {
"version": "2.0.1",
"from": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.0.1.tgz",
"resolved": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.0.1.tgz"
},
"reduce-component": {
"version": "1.0.1",
"from": "http://registry.npmjs.org/reduce-component/-/reduce-component-1.0.1.tgz",
"resolved": "http://registry.npmjs.org/reduce-component/-/reduce-component-1.0.1.tgz"
},
"extend": {
"version": "1.2.1",
"from": "https://registry.npmjs.org/extend/-/extend-1.2.1.tgz",
"resolved": "https://registry.npmjs.org/extend/-/extend-1.2.1.tgz"
},
"form-data": {
"version": "0.1.3",
"from": "http://registry.npmjs.org/form-data/-/form-data-0.1.3.tgz",
"resolved": "http://registry.npmjs.org/form-data/-/form-data-0.1.3.tgz",
"dependencies": {
"combined-stream": {
"version": "0.0.7",
"from": "https://registry.npmjs.org/combined-stream/-/combined-stream-0.0.7.tgz",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-0.0.7.tgz",
"dependencies": {
"delayed-stream": {
"version": "0.0.5",
"from": "http://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz",
"resolved": "http://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz"
}
}
},
"async": {
"version": "0.9.2",
"from": "https://registry.npmjs.org/async/-/async-0.9.2.tgz",
"resolved": "https://registry.npmjs.org/async/-/async-0.9.2.tgz"
}
}
},
"readable-stream": {
"version": "1.0.27-1",
"from": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.27-1.tgz",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.27-1.tgz",
"dependencies": {
"core-util-is": {
"version": "1.0.1",
"from": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz"
},
"isarray": {
"version": "0.0.1",
"from": "https://registry.npmjs.org/isarray/-/isarray-0.0.1.tgz",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-0.0.1.tgz"
},
"string_decoder": {
"version": "0.10.31",
"from": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz"
},
"inherits": {
"version": "2.0.1",
"from": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz"
}
}
}
}
}
}
}
@@ -1,48 +0,0 @@
{
"name": "installer",
"description": "Cloudron Installer",
"version": "0.0.1",
"private": "true",
"author": {
"name": "Cloudron authors"
},
"repository": {
"type": "git"
},
"engines": [
"node >=4.0.0 <=4.1.1"
],
"dependencies": {
"async": "^1.5.0",
"body-parser": "^1.12.0",
"connect-lastmile": "0.0.13",
"debug": "^2.1.1",
"express": "^4.11.2",
"json": "^9.0.3",
"morgan": "^1.5.1",
"proxy-middleware": "^0.15.0",
"request": "^2.72.0",
"safetydance": "0.0.19",
"semver": "^5.1.0",
"superagent": "^0.21.0"
},
"devDependencies": {
"colors": "^1.1.2",
"commander": "^2.8.1",
"expect.js": "^0.3.1",
"istanbul": "^0.3.5",
"lodash": "^3.2.0",
"mocha": "^2.1.0",
"nock": "^0.59.1",
"sleep": "^3.0.0",
"superagent-sync": "^0.2.0",
"supererror": "^0.7.0",
"yesno": "0.0.1"
},
"scripts": {
"test": "NODE_ENV=test ./node_modules/istanbul/lib/cli.js test $1 ./node_modules/mocha/bin/_mocha -- -R spec ./src/test",
"precommit": "/bin/true",
"prepush": "npm test",
"postmerge": "/bin/true"
}
}
@@ -1,112 +0,0 @@
/* jslint node: true */
'use strict';
var assert = require('assert'),
child_process = require('child_process'),
debug = require('debug')('installer:installer'),
path = require('path'),
safe = require('safetydance'),
semver = require('semver'),
superagent = require('superagent'),
util = require('util');
exports = module.exports = {
InstallerError: InstallerError,
provision: provision,
_ensureVersion: ensureVersion
};
var INSTALLER_CMD = path.join(__dirname, 'scripts/installer.sh'),
SUDO = '/usr/bin/sudo';
function InstallerError(reason, info) {
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
this.message = !info ? reason : (typeof info === 'object' ? JSON.stringify(info) : info);
}
util.inherits(InstallerError, Error);
InstallerError.INTERNAL_ERROR = 1;
InstallerError.ALREADY_PROVISIONED = 2;
// systemd unit file has KillMode=control-group to bring down child processes
function spawn(tag, cmd, args, callback) {
assert.strictEqual(typeof tag, 'string');
assert.strictEqual(typeof cmd, 'string');
assert(util.isArray(args));
assert.strictEqual(typeof callback, 'function');
var cp = child_process.spawn(cmd, args, { timeout: 0 });
cp.stdout.setEncoding('utf8');
cp.stdout.on('data', function (data) { debug('%s (stdout): %s', tag, data); });
cp.stderr.setEncoding('utf8');
cp.stderr.on('data', function (data) { debug('%s (stderr): %s', tag, data); });
cp.on('error', function (error) {
debug('%s : child process errored %s', tag, error.message);
callback(error);
});
cp.on('exit', function (code, signal) {
debug('%s : child process exited. code: %d signal: %s', tag, code, signal);
if (signal) return callback(new Error('Exited with signal ' + signal));
if (code !== 0) return callback(new Error('Exited with code ' + code));
callback(null);
});
}
function ensureVersion(args, callback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof callback, 'function');
if (!args.data || !args.data.boxVersionsUrl) return callback(new Error('No boxVersionsUrl specified'));
if (args.sourceTarballUrl) return callback(null, args);
superagent.get(args.data.boxVersionsUrl).end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new Error(util.format('Bad status: %s %s', result.statusCode, result.text)));
var versions = safe.JSON.parse(result.text);
if (!versions || typeof versions !== 'object') return callback(new Error('versions is not in valid format:' + safe.error));
var latestVersion = Object.keys(versions).sort(semver.compare).pop();
debug('ensureVersion: Latest version is %s etag:%s', latestVersion, result.header['etag']);
if (!versions[latestVersion]) return callback(new Error('No version available'));
if (!versions[latestVersion].sourceTarballUrl) return callback(new Error('No sourceTarballUrl specified'));
args.sourceTarballUrl = versions[latestVersion].sourceTarballUrl;
args.data.version = latestVersion;
callback(null, args);
});
}
function provision(args, callback) {
assert.strictEqual(typeof args, 'object');
assert.strictEqual(typeof callback, 'function');
if (process.env.NODE_ENV === 'test') return callback(null);
ensureVersion(args, function (error, result) {
if (error) return callback(error);
var pargs = [ INSTALLER_CMD ];
pargs.push('--sourcetarballurl', result.sourceTarballUrl);
pargs.push('--data', JSON.stringify(result.data));
debug('provision: calling with args %j', pargs);
// sudo is required for update()
spawn('provision', SUDO, pargs, callback);
});
}
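The crux of `ensureVersion` above is that release keys are sorted with `semver.compare` before taking the last entry; a plain string sort would mis-order versions like `0.10.0` and `0.9.0`. A minimal sketch of why, using a hypothetical hand-rolled comparator (a stand-in for the `semver` package, kept dependency-free for illustration):

```javascript
'use strict';

// Hypothetical stand-in for semver.compare, sufficient for plain x.y.z
// strings, to show why the default lexicographic sort picks the wrong "latest".
function compareVersions(a, b) {
    var pa = a.split('.').map(Number), pb = b.split('.').map(Number);
    for (var i = 0; i < 3; i++) {
        if (pa[i] !== pb[i]) return pa[i] - pb[i]; // numeric, per component
    }
    return 0;
}

var versions = [ '0.9.0', '0.10.0', '0.2.0' ];

console.log(versions.slice().sort().pop());                // '0.9.0'  (string sort: wrong)
console.log(versions.slice().sort(compareVersions).pop()); // '0.10.0' (numeric: correct)
```

The same `sort(...).pop()` idiom appears in `ensureVersion`, just with the real `semver.compare`, which also understands prerelease tags.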
@@ -1,68 +0,0 @@
#!/bin/bash
set -eu -o pipefail
readonly BOX_SRC_DIR=/home/yellowtent/box
readonly DATA_DIR=/home/yellowtent/data
readonly CLOUDRON_CONF=/home/yellowtent/configs/cloudron.conf
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly json="${script_dir}/../../node_modules/.bin/json"
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 300"
readonly is_update=$([[ -f "${CLOUDRON_CONF}" ]] && echo "yes" || echo "no")
# create a provision file for testing. %q escapes args. %q is reused as much as necessary to satisfy $@
(echo -e "#!/bin/bash\n"; printf "%q " "${script_dir}/installer.sh" "$@") > /home/yellowtent/provision.sh
chmod +x /home/yellowtent/provision.sh
arg_source_tarball_url=""
arg_data=""
args=$(getopt -o "" -l "sourcetarballurl:,data:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--sourcetarballurl) arg_source_tarball_url="$2";;
--data) arg_data="$2";;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
shift 2
done
box_src_tmp_dir=$(mktemp -dt box-src-XXXXXX)
echo "Downloading box code from ${arg_source_tarball_url} to ${box_src_tmp_dir}"
while true; do
if $curl -L "${arg_source_tarball_url}" | tar -zxf - -C "${box_src_tmp_dir}"; then break; fi
echo "Failed to download source tarball, trying again"
sleep 5
done
while true; do
# for reasons unknown, the dtrace package will fail, but rebuilding a second time works
if cd "${box_src_tmp_dir}" && npm rebuild; then break; fi
echo "Failed to rebuild, trying again"
sleep 5
done
if [[ "${is_update}" == "yes" ]]; then
echo "Setting up update splash screen"
"${box_src_tmp_dir}/setup/splashpage.sh" --data "${arg_data}" # show splash from new code
${BOX_SRC_DIR}/setup/stop.sh # stop the old code
fi
# switch the codes
rm -rf "${BOX_SRC_DIR}"
mv "${box_src_tmp_dir}" "${BOX_SRC_DIR}"
chown -R yellowtent.yellowtent "${BOX_SRC_DIR}"
# create a start file for testing. %q escapes args
(echo -e "#!/bin/bash\n"; printf "%q " "${BOX_SRC_DIR}/setup/start.sh" --data "${arg_data}") > /home/yellowtent/setup_start.sh
chmod +x /home/yellowtent/setup_start.sh
echo "Calling box setup script"
"${BOX_SRC_DIR}/setup/start.sh" --data "${arg_data}"
@@ -1,182 +0,0 @@
#!/usr/bin/env node
/* jslint node: true */
'use strict';
var assert = require('assert'),
async = require('async'),
debug = require('debug')('installer:server'),
express = require('express'),
fs = require('fs'),
http = require('http'),
HttpError = require('connect-lastmile').HttpError,
HttpSuccess = require('connect-lastmile').HttpSuccess,
installer = require('./installer.js'),
json = require('body-parser').json,
lastMile = require('connect-lastmile'),
morgan = require('morgan'),
request = require('request'),
superagent = require('superagent');
exports = module.exports = {
start: start,
stop: stop
};
var PROVISION_CONFIG_FILE = '/root/provision.json';
var CLOUDRON_CONFIG_FILE = '/home/yellowtent/configs/cloudron.conf';
var gHttpServer = null; // update server; used for updates
function provisionDigitalOcean(callback) {
superagent.get('http://169.254.169.254/metadata/v1.json').end(function (error, result) {
if (error || result.statusCode !== 200) {
console.error('Error getting metadata', error);
return callback(new Error('Error getting metadata'));
}
callback(null, JSON.parse(result.body.user_data));
});
}
function provisionEC2(callback) {
// need to use request, since octet-stream data
request('http://169.254.169.254/latest/user-data', { timeout: 5000 }, function (error, response, body) {
if (error || response.statusCode !== 200) {
console.error('Error getting metadata', error);
return callback(new Error('Error getting metadata'));
}
callback(null, JSON.parse(body));
});
}
function provision(callback) {
if (fs.existsSync(CLOUDRON_CONFIG_FILE)) {
debug('provision: already provisioned');
return callback(null); // already provisioned
}
async.retry({ times: 5, interval: 30000 }, function (done) {
// try first digitalocean, then ec2
provisionDigitalOcean(function (error1, userData) {
if (!error1) return done(null, userData);
provisionEC2(function (error2, userData) {
if (!error2) return done(null, userData);
console.error('Unable to get metadata: ', error1.message + ' ' + error2.message);
done(new Error(error1.message + ' ' + error2.message));
});
});
}, function (error, userData) {
if (error) return callback(error);
installer.provision(userData, callback);
});
}
function provisionLocal(callback) {
if (fs.existsSync(CLOUDRON_CONFIG_FILE)) {
debug('provisionLocal: already provisioned');
return callback(null); // already provisioned
}
if (!fs.existsSync(PROVISION_CONFIG_FILE)) {
console.error('No provisioning data found at %s', PROVISION_CONFIG_FILE);
return callback(new Error('No provisioning data found'));
}
var userData = require(PROVISION_CONFIG_FILE);
installer.provision(userData, callback);
}
function update(req, res, next) {
assert.strictEqual(typeof req.body, 'object');
if (!req.body.sourceTarballUrl || typeof req.body.sourceTarballUrl !== 'string') return next(new HttpError(400, 'No sourceTarballUrl provided'));
if (!req.body.data || typeof req.body.data !== 'object') return next(new HttpError(400, 'No data provided'));
debug('provision: received from box %j', req.body);
installer.provision(req.body, function (error) {
if (error) console.error(error);
});
next(new HttpSuccess(202, { }));
}
function startUpdateServer(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Starting update server');
var app = express();
var router = new express.Router();
if (process.env.NODE_ENV !== 'test') app.use(morgan('dev', { immediate: false }));
app.use(json({ strict: true }))
.use(router)
.use(lastMile());
router.post('/api/v1/installer/update', update);
gHttpServer = http.createServer(app);
gHttpServer.on('error', console.error);
gHttpServer.listen(2020, '127.0.0.1', callback);
}
function stopUpdateServer(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Stopping update server');
if (!gHttpServer) return callback(null);
gHttpServer.close(callback);
gHttpServer = null;
}
function start(callback) {
assert.strictEqual(typeof callback, 'function');
var actions;
if (process.env.PROVISION === 'local') {
debug('Starting Installer in selfhost mode');
actions = [
startUpdateServer,
provisionLocal
];
} else { // current fallback, should be 'digitalocean' eventually, see initializeBaseUbuntuImage.sh
debug('Starting Installer in managed mode');
actions = [
startUpdateServer,
provision
];
}
async.series(actions, callback);
}
function stop(callback) {
assert.strictEqual(typeof callback, 'function');
async.series([
stopUpdateServer
], callback);
}
if (require.main === module) {
start(function (error) {
if (error) console.error(error);
});
}
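The `provision` flow above tries the DigitalOcean metadata service first and falls back to EC2, failing only when both error. The fallback chain can be sketched without the `async` library; `fetchDigitalOcean` and `fetchEC2` here are hypothetical stand-ins for the real HTTP probes, each invoking their callback with `(error, userData)`:

```javascript
'use strict';

// Sketch of the two-provider fallback: prefer the first source, fall back to
// the second, and surface both error messages when neither yields user data.
function fetchUserData(fetchDigitalOcean, fetchEC2, callback) {
    fetchDigitalOcean(function (error1, userData) {
        if (!error1) return callback(null, userData);
        fetchEC2(function (error2, userData) {
            if (!error2) return callback(null, userData);
            callback(new Error(error1.message + ' ' + error2.message));
        });
    });
}

// Example: DigitalOcean probe fails, EC2 probe succeeds.
fetchUserData(
    function (cb) { cb(new Error('no DO metadata')); },
    function (cb) { cb(null, { apiServerOrigin: 'http://appserver' }); },
    function (error, userData) {
        console.log(error, userData); // no error, EC2's user data wins
    }
);
```

In the real server this chain runs inside `async.retry`, so a transient failure of both probes is retried a few times before provisioning gives up.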
@@ -1,179 +0,0 @@
/* jslint node:true */
/* global it:false */
/* global describe:false */
/* global before:false */
/* global after:false */
'use strict';
var expect = require('expect.js'),
fs = require('fs'),
path = require('path'),
nock = require('nock'),
os = require('os'),
request = require('superagent'),
server = require('../server.js'),
installer = require('../installer.js'),
_ = require('lodash');
var EXTERNAL_SERVER_URL = 'https://localhost:4443';
var INTERNAL_SERVER_URL = 'http://localhost:2020';
var APPSERVER_ORIGIN = 'http://appserver';
var FQDN = os.hostname();
describe('Server', function () {
this.timeout(5000);
before(function (done) {
var user_data = JSON.stringify({ apiServerOrigin: APPSERVER_ORIGIN }); // user_data is a string
var scope = nock('http://169.254.169.254')
.persist()
.get('/metadata/v1.json')
.reply(200, JSON.stringify({ user_data: user_data }), { 'Content-Type': 'application/json' });
done();
});
after(function (done) {
nock.cleanAll();
done();
});
describe('starts and stop', function () {
it('starts', function (done) {
server.start(done);
});
it('stops', function (done) {
server.stop(done);
});
});
describe('update (internal server)', function () {
before(function (done) {
server.start(done);
});
after(function (done) {
server.stop(done);
});
it('does not respond to provision', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/provision').send({ }).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(404);
done();
});
});
it('does not respond to restore', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/restore').send({ }).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(404);
done();
});
});
var data = {
sourceTarballUrl: "https://foo.tar.gz",
data: {
token: 'sometoken',
apiServerOrigin: APPSERVER_ORIGIN,
webServerOrigin: 'https://somethingelse.com',
fqdn: 'www.something.com',
tlsKey: 'key',
tlsCert: 'cert',
boxVersionsUrl: 'https://versions.json',
version: '0.1'
}
};
Object.keys(data).forEach(function (key) {
it('fails due to missing ' + key, function (done) {
var dataCopy = _.merge({ }, data);
delete dataCopy[key];
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/update').send(dataCopy).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(400);
done();
});
});
});
it('succeeds', function (done) {
request.post(INTERNAL_SERVER_URL + '/api/v1/installer/update').send(data).end(function (error, result) {
expect(error).to.not.be.ok();
expect(result.statusCode).to.equal(202);
done();
});
});
});
describe('ensureVersion', function () {
before(function () {
process.env.NODE_ENV = undefined;
});
after(function () {
process.env.NODE_ENV = 'test';
});
it ('fails without data', function (done) {
installer._ensureVersion({}, function (error) {
expect(error).to.be.an(Error);
done();
});
});
it ('fails without boxVersionsUrl', function (done) {
installer._ensureVersion({ data: {}}, function (error) {
expect(error).to.be.an(Error);
done();
});
});
it ('succeeds with sourceTarballUrl', function (done) {
var data = {
sourceTarballUrl: 'sometarballurl',
data: {
boxVersionsUrl: 'http://foobar/versions.json'
}
};
installer._ensureVersion(data, function (error, result) {
expect(error).to.equal(null);
expect(result).to.eql(data);
done();
});
});
it ('succeeds without sourceTarballUrl', function (done) {
var versions = {
'0.1.0': {
sourceTarballUrl: 'sometarballurl1'
},
'0.2.0': {
sourceTarballUrl: 'sometarballurl2'
}
};
var scope = nock('http://foobar')
.get('/versions.json')
.reply(200, JSON.stringify(versions), { 'Content-Type': 'application/json' });
var data = {
data: {
boxVersionsUrl: 'http://foobar/versions.json'
}
};
installer._ensureVersion(data, function (error, result) {
expect(error).to.equal(null);
expect(result.sourceTarballUrl).to.equal(versions['0.2.0'].sourceTarballUrl);
expect(result.data.boxVersionsUrl).to.equal(data.data.boxVersionsUrl);
done();
});
});
});
});
@@ -0,0 +1,74 @@
'use strict';
var dbm = global.dbm || require('db-migrate');
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN ownerId VARCHAR(128)'),
db.runSql.bind(db, 'ALTER TABLE mailboxes ADD COLUMN ownerType VARCHAR(16)'),
db.runSql.bind(db, 'START TRANSACTION;'),
function addGroupMailboxes(done) {
console.log('Importing group mailboxes');
db.all('SELECT id, name FROM groups', function (error, results) {
if (error) return done(error);
async.eachSeries(results, function (g, next) {
db.runSql('INSERT INTO mailboxes (ownerId, ownerType, name) VALUES (?, ?, ?)', [ g.id, 'group', g.name ], function (error) {
if (error) console.error('Error importing group ' + JSON.stringify(g) + error);
next();
});
}, done);
});
},
function addAppMailboxes(done) {
console.log('Importing app mailboxes');
db.all('SELECT id, location, manifestJson FROM apps', function (error, results) {
if (error) return done(error);
async.eachSeries(results, function (a, next) {
var manifest = JSON.parse(a.manifestJson);
if (!manifest.addons['sendmail'] && !manifest.addons['recvmail']) return next();
var mailboxName = (a.location ? a.location : manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
db.runSql('INSERT INTO mailboxes (ownerId, ownerType, name) VALUES (?, ?, ?)', [ a.id, 'app', mailboxName ], function (error) {
if (error) console.error('Error importing app ' + JSON.stringify(a) + error);
next();
});
}, done);
});
},
function setUserMailboxOwnerIds(done) {
console.log('Setting owner id of user mailboxes and aliases');
db.all('SELECT id, username FROM users', function (error, results) {
if (error) return done(error);
async.eachSeries(results, function (u, next) {
if (!u.username) return next();
db.runSql('UPDATE mailboxes SET ownerId = ?, ownerType = ? WHERE name = ? OR aliasTarget = ?', [ u.id, 'user', u.username, u.username ], function (error) {
if (error) console.error('Error setting ownerid ' + JSON.stringify(u) + error);
next();
});
}, done);
});
},
db.runSql.bind(db, 'COMMIT'),
db.runSql.bind(db, 'ALTER TABLE mailboxes MODIFY ownerId VARCHAR(128) NOT NULL'),
db.runSql.bind(db, 'ALTER TABLE mailboxes MODIFY ownerType VARCHAR(128) NOT NULL'),
], callback);
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE mailboxes DROP COLUMN ownerId', function (error) {
if (error) console.error(error);
db.runSql('ALTER TABLE mailboxes DROP COLUMN ownerType', function (error) {
if (error) console.error(error);
callback(error);
});
});
};
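The mailbox name derived for each app in the migration above comes from one expression: the app's location if set, otherwise the manifest title stripped of non-alphanumerics, suffixed with `.app`. A small sketch of that logic as a hypothetical helper (mirroring, not taken from, the migration):

```javascript
'use strict';

// Mirrors the mailbox-name expression used when importing app mailboxes:
// prefer the location, else the title with non-alphanumerics removed, + '.app'.
function mailboxNameFor(app, manifest) {
    return (app.location ? app.location : manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
}

console.log(mailboxNameFor({ location: 'blog' }, { title: 'My Blog' }));  // 'blog.app'
console.log(mailboxNameFor({ location: '' },     { title: 'My Blog!' })); // 'MyBlog.app'
```

Note the fallback kicks in for apps installed on the bare domain (empty location), where the title is the only name available.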
@@ -0,0 +1,16 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN sso BOOLEAN DEFAULT 1', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN sso', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,16 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN oauthProxy', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN oauthProxy BOOLEAN DEFAULT 0', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,16 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN showTutorial', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN showTutorial BOOLEAN DEFAULT 0', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,15 @@
var dbm = global.dbm || require('db-migrate');
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN debugModeJson TEXT', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN debugModeJson ', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -19,12 +19,11 @@ CREATE TABLE IF NOT EXISTS users(
modifiedAt VARCHAR(512) NOT NULL,
admin INTEGER NOT NULL,
displayName VARCHAR(512) DEFAULT '',
showTutorial BOOLEAN DEFAULT 0,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS groups(
id VARCHAR(128) NOT NULL UNIQUE,
username VARCHAR(254) NOT NULL UNIQUE,
name VARCHAR(254) NOT NULL UNIQUE,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS groupMembers(
@@ -63,11 +62,12 @@ CREATE TABLE IF NOT EXISTS apps(
location VARCHAR(128) NOT NULL UNIQUE,
dnsRecordId VARCHAR(512),
accessRestrictionJson TEXT, // { users: [ ], groups: [ ] }
oauthProxy BOOLEAN DEFAULT 0,
createdAt TIMESTAMP(2) NOT NULL DEFAULT CURRENT_TIMESTAMP,
memoryLimit BIGINT DEFAULT 0,
altDomain VARCHAR(256),
xFrameOptions VARCHAR(512),
sso BOOLEAN DEFAULT 1, // whether user chose to enable SSO
debugModeJson TEXT, // options for development mode
lastBackupId VARCHAR(128), // tracks last valid backup, can be removed
@@ -125,7 +125,9 @@ CREATE TABLE IF NOT EXISTS eventlog(
*/
CREATE TABLE IF NOT EXISTS mailboxes(
name VARCHAR(128) NOT NULL,
ownerId VARCHAR(128) NOT NULL, /* app id or user id or group id */
ownerType VARCHAR(16) NOT NULL, /* 'app' or 'user' or 'group' */
aliasTarget VARCHAR(128), /* the target name type is an alias */
creationTime TIMESTAMP,
PRIMARY KEY (id));
PRIMARY KEY (name));
+5378 -2039 (file diff suppressed because it is too large)
@@ -16,7 +16,8 @@
"async": "^1.2.1",
"aws-sdk": "^2.1.46",
"body-parser": "^1.13.1",
"cloudron-manifestformat": "^2.4.3",
"checksum": "^0.1.1",
"cloudron-manifestformat": "^2.6.0",
"connect-ensure-login": "^0.1.1",
"connect-lastmile": "^0.1.0",
"connect-timeout": "^1.5.0",
@@ -30,11 +31,12 @@
"ejs": "^2.2.4",
"ejs-cli": "^1.2.0",
"express": "^4.12.4",
"express-rate-limit": "^2.6.0",
"express-session": "^1.11.3",
"gulp-sass": "^3.0.0",
"hat": "0.0.3",
"ini": "^1.3.4",
"json": "^9.0.3",
"ldapjs": "^0.7.1",
"ldapjs": "^1.0.0",
"mime": "^1.3.4",
"moment-timezone": "^0.5.5",
"morgan": "^1.7.0",
@@ -57,19 +59,17 @@
"proxy-middleware": "^0.13.0",
"safetydance": "^0.1.1",
"semver": "^4.3.6",
"showdown": "^1.6.0",
"split": "^1.0.0",
"superagent": "^1.8.3",
"supererror": "^0.7.1",
"tail-stream": "https://registry.npmjs.org/tail-stream/-/tail-stream-0.2.1.tgz",
"tldjs": "^1.6.2",
"underscore": "^1.7.0",
"ursa": "^0.9.3",
"valid-url": "^1.0.9",
"validator": "^4.9.0",
"x509": "^0.2.4"
},
"devDependencies": {
"apidoc": "*",
"bootstrap-sass": "^3.3.3",
"deep-extend": "^0.4.1",
"del": "^1.1.1",
@@ -79,7 +79,7 @@
"gulp-concat": "^2.4.3",
"gulp-cssnano": "^2.1.0",
"gulp-ejs": "^1.0.0",
"gulp-sass": "^2.0.1",
"gulp-sass": "^3.0.0",
"gulp-serve": "^1.0.0",
"gulp-sourcemaps": "^1.5.2",
"gulp-uglify": "^1.1.0",
@@ -87,10 +87,9 @@
"istanbul": "*",
"js2xmlparser": "^1.0.0",
"mocha": "*",
"nock": "^3.4.0",
"nock": "^9.0.2",
"node-sass": "^3.0.0-alpha.0",
"request": "^2.65.0",
"sinon": "^1.12.2",
"yargs": "^3.15.0"
},
"scripts": {
@@ -0,0 +1,243 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
if [[ $(lsb_release -rs) != "16.04" ]]; then
echo "Cloudron requires Ubuntu 16.04" > /dev/stderr
exit 1
fi
# change this to a hash when we make an upgrade release
readonly LOG_FILE="/var/log/cloudron-setup.log"
readonly MINIMUM_DISK_SIZE_GB="19" # this is the size of "/" and must fit the docker images; 19 is a safe bet given differing reports of the 20GB minimum
readonly MINIMUM_MEMORY="990" # this is mostly reported for 1GB main memory (DO 992, EC2 990)
# copied from cloudron-resize-fs.sh
readonly physical_memory=$(free -m | awk '/Mem:/ { print $2 }')
readonly disk_device="$(for d in $(find /dev -type b); do [ "$(mountpoint -d /)" = "$(mountpoint -x $d)" ] && echo $d && break; done)"
readonly disk_size_bytes=$(fdisk -l ${disk_device} | grep "Disk ${disk_device}" | awk '{ printf $5 }')
readonly disk_size_gb=$((${disk_size_bytes}/1024/1024/1024))
# verify the system has minimum requirements met
if [[ "${physical_memory}" -lt "${MINIMUM_MEMORY}" ]]; then
echo "Error: Cloudron requires at least 1GB physical memory"
exit 1
fi
if [[ "${disk_size_gb}" -lt "${MINIMUM_DISK_SIZE_GB}" ]]; then
echo "Error: Cloudron requires at least 20GB disk space (Disk space on ${disk_device} is ${disk_size_gb}GB)"
exit 1
fi
initBaseImage="true"
# provisioning data
domain=""
provider=""
encryptionKey=""
restoreUrl=""
dnsProvider="manual"
tlsProvider="le-prod"
versionsUrl="https://s3.amazonaws.com/prod-cloudron-releases/versions.json"
requestedVersion="latest"
apiServerOrigin="https://api.cloudron.io"
dataJson=""
prerelease=false
args=$(getopt -o "" -l "domain:,help,skip-baseimage-init,data:,provider:,encryption-key:,restore-url:,tls-provider:,version:,versions-url:,api-server:,dns-provider:,env:,prerelease" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--domain) domain="$2"; shift 2;;
--help) echo "See https://cloudron.io/references/selfhosting.html on how to install Cloudron"; exit 0;;
--provider) provider="$2"; shift 2;;
--encryption-key) encryptionKey="$2"; shift 2;;
--restore-url) restoreUrl="$2"; shift 2;;
--tls-provider) tlsProvider="$2"; shift 2;;
--dns-provider) dnsProvider="$2"; shift 2;;
--version) requestedVersion="$2"; shift 2;;
--env)
if [[ "$2" == "dev" ]]; then
apiServerOrigin="https://api.dev.cloudron.io"
versionsUrl="https://s3.amazonaws.com/dev-cloudron-releases/versions.json"
tlsProvider="le-staging"
prerelease="true"
elif [[ "$2" == "staging" ]]; then
apiServerOrigin="https://api.staging.cloudron.io"
versionsUrl="https://s3.amazonaws.com/staging-cloudron-releases/versions.json"
tlsProvider="le-staging"
prerelease="true"
fi
shift 2;;
--versions-url) versionsUrl="$2"; shift 2;;
--api-server) apiServerOrigin="$2"; shift 2;;
--skip-baseimage-init) initBaseImage="false"; shift;;
--data) dataJson="$2"; shift 2;;
--prerelease) prerelease="true"; shift;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
# validate arguments in the absence of data
if [[ -z "${dataJson}" ]]; then
if [[ -z "${provider}" ]]; then
echo "--provider is required (generic, scaleway, ec2, digitalocean)"
exit 1
elif [[ \
"${provider}" != "generic" && \
"${provider}" != "scaleway" && \
"${provider}" != "ec2" && \
"${provider}" != "digitalocean" \
]]; then
echo "--provider must be one of: generic, scaleway, ec2, digitalocean"
exit 1
fi
if [[ "${tlsProvider}" != "fallback" && "${tlsProvider}" != "le-prod" && "${tlsProvider}" != "le-staging" ]]; then
echo "--tls-provider must be one of: le-prod, le-staging, fallback"
exit 1
fi
if [[ -z "${dnsProvider}" ]]; then
echo "--dns-provider is required (noop, manual)"
exit 1
elif [[ "${dnsProvider}" != "noop" && "${dnsProvider}" != "manual" ]]; then
echo "--dns-provider must be one of: manual, noop"
exit 1
fi
fi
echo ""
echo "##############################################"
echo " Cloudron Setup (${requestedVersion}) "
echo "##############################################"
echo ""
echo " Follow setup logs in a second terminal with:"
echo " $ tail -f ${LOG_FILE}"
echo ""
echo " Join us at https://chat.cloudron.io for any questions."
echo ""
if [[ "${initBaseImage}" == "true" ]]; then
echo "=> Updating apt and installing script dependencies"
if ! apt-get update &>> "${LOG_FILE}"; then
echo "Could not update package repositories"
exit 1
fi
if ! apt-get install curl python3 ubuntu-standard -y &>> "${LOG_FILE}"; then
echo "Could not install setup dependencies (curl, python3, ubuntu-standard)"
exit 1
fi
fi
echo "=> Checking version"
releaseJson=$(curl -s "${versionsUrl}")
if [[ "$requestedVersion" == "latest" ]]; then
# substring used to filter versions out: exclude "-pre" releases unless prereleases were requested ("null" matches nothing)
pre=$([[ "${prerelease}" == "true" ]] && echo "null" || echo "-pre")
version=$(echo "${releaseJson}" | python3 -c "import json,sys,collections;obj=json.load(sys.stdin, object_pairs_hook=collections.OrderedDict);latest=list(v for v in obj if '${pre}' not in v)[-1];print(latest)")
else
version="${requestedVersion}"
fi
if ! sourceTarballUrl=$(echo "${releaseJson}" | python3 -c 'import json,sys;obj=json.load(sys.stdin);print(obj[sys.argv[1]]["sourceTarballUrl"])' "${version}"); then
echo "No source code for version ${version}"
exit 1
fi
# Build data
if [[ -z "${dataJson}" ]]; then
if [[ -z "${restoreUrl}" ]]; then
data=$(cat <<EOF
{
"boxVersionsUrl": "${versionsUrl}",
"fqdn": "${domain}",
"provider": "${provider}",
"apiServerOrigin": "${apiServerOrigin}",
"tlsConfig": {
"provider": "${tlsProvider}"
},
"dnsConfig": {
"provider": "${dnsProvider}"
},
"backupConfig" : {
"provider": "filesystem",
"backupFolder": "/var/backups",
"key": "${encryptionKey}"
},
"updateConfig": {
"prerelease": ${prerelease}
},
"version": "${version}"
}
EOF
)
else
data=$(cat <<EOF
{
"boxVersionsUrl": "${versionsUrl}",
"fqdn": "${domain}",
"provider": "${provider}",
"apiServerOrigin": "${apiServerOrigin}",
"restore": {
"url": "${restoreUrl}",
"key": "${encryptionKey}"
},
"version": "${version}"
}
EOF
)
fi
else
data="${dataJson}"
fi
echo "=> Downloading version ${version} ..."
box_src_tmp_dir=$(mktemp -dt box-src-XXXXXX)
if ! curl -sL "${sourceTarballUrl}" | tar -zxf - -C "${box_src_tmp_dir}"; then
echo "Could not download source tarball. See ${LOG_FILE} for details"
exit 1
fi
if [[ "${initBaseImage}" == "true" ]]; then
echo -n "=> Installing base dependencies and downloading docker images (this takes some time) ..."
if ! /bin/bash "${box_src_tmp_dir}/baseimage/initializeBaseUbuntuImage.sh" "${provider}" "../src" &>> "${LOG_FILE}"; then
echo "Init script failed. See ${LOG_FILE} for details"
exit 1
fi
echo ""
fi
echo "=> Installing version ${version} (this takes some time) ..."
if ! /bin/bash "${box_src_tmp_dir}/scripts/installer.sh" --data "${data}" &>> "${LOG_FILE}"; then
echo "Failed to install cloudron. See ${LOG_FILE} for details"
exit 1
fi
echo -n "=> Waiting for cloudron to be ready (this takes some time) ..."
while true; do
echo -n "."
if status=$(curl -q -f "http://localhost:3000/api/v1/cloudron/status" 2>/dev/null); then
[[ -z "$domain" ]] && break # with no domain, we are up and running
[[ "$status" == *"\"tls\": true"* ]] && break # with a domain, wait for the cert
fi
sleep 10
done
echo -e "\n\nRebooting this server now to let bootloader changes take effect.\n"
if [[ -n "${domain}" ]]; then
echo -e "Visit https://my.${domain} to finish setup once the server has rebooted.\n"
else
echo -e "Visit https://<IP> to finish setup once the server has rebooted.\n"
fi
if [[ "${initBaseImage}" == "true" ]]; then
systemctl reboot
fi
@@ -11,15 +11,13 @@ assertNotEmpty() {
[[ $(uname -s) == "Darwin" ]] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
args=$(${GNU_GETOPT} -o "" -l "revision:,output:,publish,no-upload" -n "$0" -- "$@")
args=$(${GNU_GETOPT} -o "" -l "revision:,output:,no-upload" -n "$0" -- "$@")
eval set -- "${args}"
readonly RELEASE_TOOL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../release" && pwd)"
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
delete_bundle="yes"
commitish="HEAD"
publish="no"
upload="yes"
bundle_file=""
@@ -28,29 +26,20 @@ while true; do
--revision) commitish="$2"; shift 2;;
--output) bundle_file="$2"; delete_bundle="no"; shift 2;;
--no-upload) upload="no"; shift;;
--publish) publish="yes"; shift;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
if [[ "${upload}" == "no" && "${publish}" == "yes" ]]; then
echo "Cannot publish without uploading"
exit 1
fi
readonly TMPDIR=${TMPDIR:-/tmp} # why is this not set on mint?
assertNotEmpty AWS_DEV_ACCESS_KEY
assertNotEmpty AWS_DEV_SECRET_KEY
if ! (cd "${SOURCE_DIR}" && git diff --exit-code >/dev/null); then
echo "You have local changes, stash or commit them to proceed"
exit 1
fi
if [[ "$(node --version)" != "v4.1.1" ]]; then
echo "This script requires node 4.1.1"
if [[ "$(node --version)" != "v6.9.2" ]]; then
echo "This script requires node 6.9.2"
exit 1
fi
@@ -103,16 +92,15 @@ rm -rf "${bundle_dir}"
if [[ "${upload}" == "yes" ]]; then
echo "Uploading bundle to S3"
assertNotEmpty AWS_DEV_ACCESS_KEY
assertNotEmpty AWS_DEV_SECRET_KEY
# That special header is needed to allow access with signed urls created with different aws credentials than the ones the file was uploaded with
s3cmd --multipart-chunk-size-mb=5 --ssl --acl-public --access_key="${AWS_DEV_ACCESS_KEY}" --secret_key="${AWS_DEV_SECRET_KEY}" --no-mime-magic put "${bundle_file}" "s3://dev-cloudron-releases/box-${version}.tar.gz"
versions_file_url="https://dev-cloudron-releases.s3.amazonaws.com/box-${version}.tar.gz"
echo "The URL for the versions file is: ${versions_file_url}"
if [[ "${publish}" == "yes" ]]; then
echo "Publishing to dev"
${RELEASE_TOOL_DIR}/release create --env dev --code "${versions_file_url}"
fi
fi
if [[ "${delete_bundle}" == "no" ]]; then
@@ -120,4 +108,3 @@ if [[ "${delete_bundle}" == "no" ]]; then
else
rm "${bundle_file}"
fi
@@ -0,0 +1,68 @@
#!/bin/bash
set -eu -o pipefail
if [[ ${EUID} -ne 0 ]]; then
echo "This script should be run as root." > /dev/stderr
exit 1
fi
readonly USER=yellowtent
readonly BOX_SRC_DIR=/home/${USER}/box
readonly CLOUDRON_CONF=/home/yellowtent/configs/cloudron.conf
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly box_src_tmp_dir="$(realpath ${script_dir}/..)"
readonly is_update=$([[ -f "${CLOUDRON_CONF}" ]] && echo "yes" || echo "no")
arg_data=""
args=$(getopt -o "" -l "data:,data-file:" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--data) arg_data="$2"; shift 2;;
--data-file) arg_data=$(cat "$2"); shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
for try in $(seq 1 10); do
# for reasons unknown, the dtrace package can fail, but rebuilding a second time works
# We need --unsafe-perm as we run as root and the folder is owned by root,
# however by default npm drops privileges for npm rebuild
# https://docs.npmjs.com/misc/config#unsafe-perm
if cd "${box_src_tmp_dir}" && npm rebuild --unsafe-perm; then break; fi
echo "Failed to rebuild, trying again"
sleep 5
done
if [[ ${try} -eq 10 ]]; then
echo "npm rebuild failed"
exit 4
fi
if ! id "${USER}" &>/dev/null; then
useradd "${USER}" -m
fi
if [[ "${is_update}" == "yes" ]]; then
echo "Setting up update splash screen"
"${box_src_tmp_dir}/setup/splashpage.sh" --data "${arg_data}" # show splash from new code
${BOX_SRC_DIR}/setup/stop.sh # stop the old code
fi
# ensure we are not inside the source directory, which we will remove now
cd /root
echo "==> installer: switching the box code"
rm -rf "${BOX_SRC_DIR}"
mv "${box_src_tmp_dir}" "${BOX_SRC_DIR}"
chown -R "${USER}:${USER}" "${BOX_SRC_DIR}"
echo "==> installer: calling box setup script"
"${BOX_SRC_DIR}/setup/start.sh" --data "${arg_data}"
@@ -1,57 +0,0 @@
This document gives the design of this setup code.
The box code should be delivered in the form of a (docker) container.
This is not the case currently, but we want to structure the code
in that spirit.
### container.sh
This contains code that essentially goes into a Dockerfile.
This file contains static configuration over a base image. Currently,
the yellowtent user is created in the installer base image but it
could very well be placed here.
The idea is that the installer would simply remove the old box container
and replace it with a new one for an update.
Because we do not package things as Docker yet, we should be careful
about the code here. We have to expect leftovers of older setup code.
For example, older systemd or nginx configs might be around.
The config directory is _part_ of the container and is not a VOLUME,
which is to say that the files will be nuked from one update to the next.
The data directory is a VOLUME. Contents of this directory are expected
to survive an update. This is a good place for config files that
are "dynamic" and need to survive restarts. For example, the infra
version (see below) or the mysql/postgresql data etc.
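The update semantics described above can be sketched with a tiny simulation (a hypothetical sketch with temporary paths; the real installer works on /home/yellowtent):

```shell
#!/bin/bash
set -eu

# Simulate the semantics described above: the config directory is
# recreated from scratch on every update, while the data VOLUME survives.
simulate_update() {
    local config_dir="$1" data_dir="$2"
    rm -rf "${config_dir}" && mkdir -p "${config_dir}"   # config is nuked
    mkdir -p "${data_dir}"                               # data is only ensured
}

root="$(mktemp -d)"
mkdir -p "${root}/configs" "${root}/data"
echo "old" > "${root}/configs/cloudron.conf"
echo "86" > "${root}/data/INFRA_VERSION"

simulate_update "${root}/configs" "${root}/data"

[[ ! -f "${root}/configs/cloudron.conf" ]] && echo "config was recreated"
[[ -f "${root}/data/INFRA_VERSION" ]] && echo "data survived"
rm -rf "${root}"
```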
### start.sh
* It is called in 3 modes - new, update, restore.
* The first thing it does is run the static container.sh setup.
* It then downloads any box restore data and restores the box db from the
backup.
* It then proceeds to call the db-migrate script.
* It then does dynamic configuration like setting up nginx, collectd.
* It then sets up the cloud infra (setup_infra.sh) and creates cloudron.conf.
* box services are then started
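The steps above can be condensed into a rough dispatcher (a hypothetical sketch with stand-in echo steps, not the actual start.sh):

```shell
#!/bin/bash
set -eu

# Rough shape of start.sh: the same sequence runs in all three modes,
# with the backup restore step only active in restore mode.
run_setup() {
    local mode="$1"   # new, update or restore

    echo "container.sh: static setup"
    if [[ "${mode}" == "restore" ]]; then
        echo "download backup and restore box db"
    fi
    echo "db-migrate: bring schema up to date"
    echo "dynamic config: nginx, collectd"
    echo "setup_infra.sh and cloudron.conf"
    echo "start box services"
}

run_setup "${1:-new}"
```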
### setup_infra.sh
This sets up the graphite, mail and addon containers.
Containers are relaunched based on the INFRA_VERSION. The script compares
the version here with the version in the file DATA_DIR/INFRA_VERSION.
If they match, the containers are not recreated and nothing needs to be done.
The nginx and collectd configs are already part of the data and the containers are running.
If they do not match, it deletes all containers (including app containers) and starts
them all afresh. The important thing here is that DATA_DIR is never removed across
updates. So, it is only the containers being recreated and not the data.
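The INFRA_VERSION comparison can be sketched like so (a minimal sketch; the version string and the recreate step are placeholders, not the real container teardown):

```shell
#!/bin/bash
set -eu

# Returns success (0) when the recorded infra version differs from the
# current one, i.e. when all containers would have to be recreated.
infra_needs_recreate() {
    local current="$1" data_dir="$2"
    local existing="none"
    if [[ -f "${data_dir}/INFRA_VERSION" ]]; then
        existing="$(cat "${data_dir}/INFRA_VERSION")"
    fi
    [[ "${existing}" != "${current}" ]]
}

data_dir="$(mktemp -d)"
if infra_needs_recreate "48.0.0" "${data_dir}"; then
    echo "recreating graphite, mail and addon containers (data is kept)"
    echo "48.0.0" > "${data_dir}/INFRA_VERSION"   # record the new version
fi
infra_needs_recreate "48.0.0" "${data_dir}" || echo "infra up to date; nothing to do"
rm -rf "${data_dir}"
```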
@@ -1,7 +1,7 @@
#!/bin/bash
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
json="${script_dir}/../node_modules/.bin/json"
source_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
json="${source_dir}/../node_modules/.bin/json"
# IMPORTANT: Fix cloudron.js:doUpdate if you add/remove any arg. keep these sorted for readability
arg_api_server_origin=""
@@ -41,12 +41,19 @@ while true; do
--data)
# these params must be valid in all cases
arg_fqdn=$(echo "$2" | $json fqdn)
arg_is_custom_domain=$(echo "$2" | $json isCustomDomain)
[[ "${arg_is_custom_domain}" == "" ]] && arg_is_custom_domain="true"
# only update/restore have this valid (but not migrate)
arg_api_server_origin=$(echo "$2" | $json apiServerOrigin)
[[ "${arg_api_server_origin}" == "" ]] && arg_api_server_origin="https://api.cloudron.io"
arg_web_server_origin=$(echo "$2" | $json webServerOrigin)
[[ "${arg_web_server_origin}" == "" ]] && arg_web_server_origin="https://cloudron.io"
arg_box_versions_url=$(echo "$2" | $json boxVersionsUrl)
[[ "${arg_box_versions_url}" == "" ]] && arg_box_versions_url="https://s3.amazonaws.com/prod-cloudron-releases/versions.json"
# TODO check if and where this is used
arg_version=$(echo "$2" | $json version)
# read possibly empty parameters here
@@ -59,7 +66,9 @@ while true; do
arg_tls_cert=$(echo "$2" | $json tlsCert)
arg_tls_key=$(echo "$2" | $json tlsKey)
arg_token=$(echo "$2" | $json token)
arg_provider=$(echo "$2" | $json provider)
[[ "${arg_provider}" == "" ]] && arg_provider="generic"
arg_tls_config=$(echo "$2" | $json tlsConfig)
[[ "${arg_tls_config}" == "null" ]] && arg_tls_config=""
@@ -1,44 +0,0 @@
#!/bin/bash
set -eu -o pipefail
# This file can be used in Dockerfile
readonly container_files="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/container"
readonly CONFIG_DIR="/home/yellowtent/configs"
readonly DATA_DIR="/home/yellowtent/data"
########## create config directory
rm -rf "${CONFIG_DIR}"
sudo -u yellowtent mkdir "${CONFIG_DIR}"
########## systemd
rm -f /etc/systemd/system/janitor.*
cp -r "${container_files}/systemd/." /etc/systemd/system/
systemctl daemon-reload
systemctl enable cloudron.target
########## sudoers
rm -f /etc/sudoers.d/yellowtent
cp "${container_files}/sudoers" /etc/sudoers.d/yellowtent
########## collectd
rm -rf /etc/collectd
ln -sfF "${DATA_DIR}/collectd" /etc/collectd
########## apparmor docker profile
cp "${container_files}/docker-cloudron-app.apparmor" /etc/apparmor.d/docker-cloudron-app
systemctl restart apparmor
########## nginx
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${DATA_DIR}/nginx" /etc/nginx
########## mysql
cp "${container_files}/mysql.cnf" /etc/mysql/mysql.cnf
########## Enable services
update-rc.d -f collectd defaults
@@ -5,7 +5,7 @@ set -eu -o pipefail
readonly SETUP_WEBSITE_DIR="/home/yellowtent/setup/website"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly BOX_SRC_DIR="/home/yellowtent/box"
readonly box_src_dir="$(realpath ${script_dir}/..)"
readonly DATA_DIR="/home/yellowtent/data"
readonly ADMIN_LOCATION="my" # keep this in sync with constants.js
@@ -28,11 +28,11 @@ existing_infra="none"
if [[ "${arg_retire_reason}" != "" || "${existing_infra}" != "${current_infra}" ]]; then
echo "Showing progress bar on all subdomains in retired mode or infra update. retire: ${arg_retire_reason} existing: ${existing_infra} current: ${current_infra}"
rm -f ${DATA_DIR}/nginx/applications/*
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
${box_src_dir}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
else
echo "Show progress bar only on admin domain for normal update"
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
${box_src_dir}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\", \"certFilePath\": \"cert/host.cert\", \"keyFilePath\": \"cert/host.key\", \"xFrameOptions\": \"SAMEORIGIN\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
fi
@@ -2,68 +2,222 @@
set -eu -o pipefail
echo "==== Cloudron Start ===="
echo "==> Cloudron Start"
readonly USER="yellowtent"
readonly BOX_SRC_DIR="/home/${USER}/box"
readonly DATA_DIR="/home/${USER}/data"
readonly CONFIG_DIR="/home/${USER}/configs"
readonly SETUP_PROGRESS_JSON="/home/yellowtent/setup/website/progress.json"
readonly ADMIN_LOCATION="my" # keep this in sync with constants.js
readonly DATA_FILE="/root/user_data.img"
readonly HOME_DIR="/home/${USER}"
readonly BOX_SRC_DIR="${HOME_DIR}/box"
readonly DATA_DIR="${HOME_DIR}/data" # app and platform data
readonly BOX_DATA_DIR="${HOME_DIR}/boxdata" # box data
readonly CONFIG_DIR="${HOME_DIR}/configs"
readonly SETUP_PROGRESS_JSON="${HOME_DIR}/setup/website/progress.json"
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${script_dir}/argparser.sh" "$@" # this injects the arg_* variables used below
# keep this in sync with config.js appFqdn()
admin_fqdn=$([[ "${arg_is_custom_domain}" == "true" ]] && echo "${ADMIN_LOCATION}.${arg_fqdn}" || echo "${ADMIN_LOCATION}-${arg_fqdn}")
admin_origin="https://${admin_fqdn}"
readonly is_update=$([[ -f "${CONFIG_DIR}/cloudron.conf" ]] && echo "true" || echo "false")
set_progress() {
local percent="$1"
local message="$2"
echo "==== ${percent} - ${message} ===="
echo "==> ${percent} - ${message}"
(echo "{ \"update\": { \"percent\": \"${percent}\", \"message\": \"${message}\" }, \"backup\": {} }" > "${SETUP_PROGRESS_JSON}") 2> /dev/null || true # as this will fail in non-update mode
}
set_progress "1" "Create container"
$script_dir/container.sh
set_progress "5" "Adjust system settings"
set_progress "20" "Configuring host"
sed -e 's/^#NTP=/NTP=0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org/' -i /etc/systemd/timesyncd.conf
timedatectl set-ntp 1
timedatectl set-timezone UTC
hostnamectl set-hostname "${arg_fqdn}"
set_progress "10" "Ensuring directories"
echo "==> Setting up firewall"
iptables -t filter -N CLOUDRON || true
iptables -t filter -F CLOUDRON # empty any existing rules
# NOTE: keep these in sync with src/apps.js validatePortBindings
# allow ssh, http, https, ping, dns
iptables -t filter -I CLOUDRON -m state --state RELATED,ESTABLISHED -j ACCEPT
# caas has ssh on port 202
if [[ "${arg_provider}" == "caas" ]]; then
iptables -A CLOUDRON -p tcp -m tcp -m multiport --dports 25,80,202,443,587,993,4190 -j ACCEPT
else
iptables -A CLOUDRON -p tcp -m tcp -m multiport --dports 25,80,22,443,587,993,4190 -j ACCEPT
fi
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-request -j ACCEPT
iptables -t filter -A CLOUDRON -p icmp --icmp-type echo-reply -j ACCEPT
iptables -t filter -A CLOUDRON -p udp --sport 53 -j ACCEPT
iptables -t filter -A CLOUDRON -s 172.18.0.0/16 -j ACCEPT # required to accept any connections from apps to our IP:<public port>
iptables -t filter -A CLOUDRON -i lo -j ACCEPT # required for localhost connections (mysql)
# log dropped incoming. keep this at the end of all the rules
iptables -t filter -A CLOUDRON -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
iptables -t filter -A CLOUDRON -j DROP
if ! iptables -t filter -C INPUT -j CLOUDRON 2>/dev/null; then
iptables -t filter -I INPUT -j CLOUDRON
fi
# so it gets restored across reboot
mkdir -p /etc/iptables && iptables-save > /etc/iptables/rules.v4
echo "==> Configuring docker"
cp "${script_dir}/start/docker-cloudron-app.apparmor" /etc/apparmor.d/docker-cloudron-app
systemctl enable apparmor
systemctl restart apparmor
usermod ${USER} -a -G docker
temp_file=$(mktemp)
# create systemd drop-in. some apps do not work with aufs
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/docker daemon -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=devicemapper --dns=172.18.0.1 --dns-search=." > "${temp_file}"
systemctl enable docker
# restart docker if options changed
if [[ ! -f /etc/systemd/system/docker.service.d/cloudron.conf ]] || ! diff -q /etc/systemd/system/docker.service.d/cloudron.conf "${temp_file}" >/dev/null; then
mkdir -p /etc/systemd/system/docker.service.d
mv "${temp_file}" /etc/systemd/system/docker.service.d/cloudron.conf
systemctl daemon-reload
systemctl restart docker
fi
docker network create --subnet=172.18.0.0/16 cloudron || true
# caas has ssh on port 202 and we disable password login
if [[ "${arg_provider}" == "caas" ]]; then
# https://stackoverflow.com/questions/4348166/using-with-sed on why ? must be escaped
sed -e 's/^#\?PermitRootLogin .*/PermitRootLogin without-password/g' \
-e 's/^#\?PermitEmptyPasswords .*/PermitEmptyPasswords no/g' \
-e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/g' \
-e 's/^#\?Port .*/Port 202/g' \
-i /etc/ssh/sshd_config
# required so we can connect to this machine since port 22 is blocked by iptables by now
systemctl reload sshd
fi
echo "==> Setup btrfs data"
if ! grep -q loop.ko "/lib/modules/$(uname -r)/modules.builtin"; then
# on scaleway loop is not built-in
echo "loop" >> /etc/modules
modprobe loop
fi
if [[ ! -d "${DATA_DIR}" ]]; then
echo "==> Mounting loopback btrfs"
truncate -s "8192m" "${DATA_FILE}" # 8gb start (this will get resized dynamically by cloudron-resize-fs.service)
mkfs.btrfs -L UserDataHome "${DATA_FILE}"
mkdir -p "${DATA_DIR}"
mount -t btrfs -o loop,nosuid "${DATA_FILE}" ${DATA_DIR}
fi
# keep these in sync with paths.js
[[ "${is_update}" == "false" ]] && btrfs subvolume create "${DATA_DIR}/box"
mkdir -p "${DATA_DIR}/box/appicons"
mkdir -p "${DATA_DIR}/box/certs"
mkdir -p "${DATA_DIR}/box/mail/dkim/${arg_fqdn}"
mkdir -p "${DATA_DIR}/box/acme" # acme keys
echo "==> Ensuring directories"
if ! btrfs subvolume show "${DATA_DIR}/mail" &> /dev/null; then
# Migrate mail data to new format
docker stop mail || true # otherwise the move below might fail if the mail container writes in the middle
rm -rf "${DATA_DIR}/mail" # this used to be mail container's run directory
btrfs subvolume create "${DATA_DIR}/mail"
[[ -d "${DATA_DIR}/box/mail" ]] && mv "${DATA_DIR}/box/mail/"* "${DATA_DIR}/mail"
rm -rf "${DATA_DIR}/box/mail"
fi
mkdir -p "${DATA_DIR}/graphite"
mkdir -p "${DATA_DIR}/mail/dkim"
mkdir -p "${DATA_DIR}/mysql"
mkdir -p "${DATA_DIR}/postgresql"
mkdir -p "${DATA_DIR}/mongodb"
mkdir -p "${DATA_DIR}/snapshots"
mkdir -p "${DATA_DIR}/addons"
mkdir -p "${DATA_DIR}/addons/mail"
mkdir -p "${DATA_DIR}/collectd/collectd.conf.d"
mkdir -p "${DATA_DIR}/acme" # acme challenges
mkdir -p "${DATA_DIR}/acme"
mkdir -p "${BOX_DATA_DIR}"
if btrfs subvolume show "${DATA_DIR}/box" &> /dev/null; then
# Migrate box data out of data volume
mv "${DATA_DIR}/box/"* "${BOX_DATA_DIR}"
btrfs subvolume delete "${DATA_DIR}/box"
fi
mkdir -p "${BOX_DATA_DIR}/appicons"
mkdir -p "${BOX_DATA_DIR}/certs"
mkdir -p "${BOX_DATA_DIR}/acme" # acme keys
echo "==> Configuring journald"
sed -e "s/^#SystemMaxUse=.*$/SystemMaxUse=100M/" \
-e "s/^#ForwardToSyslog=.*$/ForwardToSyslog=no/" \
-i /etc/systemd/journald.conf
# When rotating logs, systemd kills journald too soon sometimes
# See https://github.com/systemd/systemd/issues/1353 (this is upstream default)
sed -e "s/^WatchdogSec=.*$/WatchdogSec=3min/" \
-i /lib/systemd/system/systemd-journald.service
# Give user access to system logs
usermod -a -G systemd-journal ${USER}
mkdir -p /var/log/journal # in some images, this directory is not created, causing systemd to log to /run/systemd instead
chown root:systemd-journal /var/log/journal
systemctl daemon-reload
systemctl restart systemd-journald
setfacl -n -m u:${USER}:r /var/log/journal/*/system.journal
echo "==> Creating config directory"
rm -rf "${CONFIG_DIR}" && mkdir "${CONFIG_DIR}"
echo "==> Setting up unbound"
# DO uses Google nameservers by default. This causes RBL queries to fail (host 2.0.0.127.zen.spamhaus.org)
# We do not use dnsmasq because it is not a recursive resolver and defaults to the value in the interfaces file (which is Google DNS!)
# We listen on 0.0.0.0 because there is no way to control the ordering of docker (which creates the 172.18.0.0/16 network) and unbound
echo -e "server:\n\tinterface: 0.0.0.0\n\taccess-control: 127.0.0.1 allow\n\taccess-control: 172.18.0.1/16 allow" > /etc/unbound/unbound.conf.d/cloudron-network.conf
echo "==> Adding systemd services"
cp -r "${script_dir}/start/systemd/." /etc/systemd/system/
systemctl daemon-reload
systemctl enable unbound
systemctl enable cloudron.target
systemctl enable iptables-restore
# For logrotate
systemctl enable --now cron
# ensure unbound runs
systemctl restart unbound
echo "==> Configuring sudoers"
rm -f /etc/sudoers.d/${USER}
cp "${script_dir}/start/sudoers" /etc/sudoers.d/${USER}
echo "==> Configuring collectd"
rm -rf /etc/collectd
ln -sfF "${DATA_DIR}/collectd" /etc/collectd
cp "${script_dir}/start/collectd.conf" "${DATA_DIR}/collectd/collectd.conf"
systemctl restart collectd
echo "==> Configuring nginx"
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${DATA_DIR}/nginx" /etc/nginx
mkdir -p "${DATA_DIR}/nginx/applications"
mkdir -p "${DATA_DIR}/nginx/cert"
cp "${script_dir}/start/nginx/nginx.conf" "${DATA_DIR}/nginx/nginx.conf"
cp "${script_dir}/start/nginx/mime.types" "${DATA_DIR}/nginx/mime.types"
if ! grep -q "^Restart=" /etc/systemd/system/multi-user.target.wants/nginx.service; then
# default nginx service file does not restart on crash
echo -e "\n[Service]\nRestart=always\n" >> /etc/systemd/system/multi-user.target.wants/nginx.service
systemctl daemon-reload
fi
systemctl start nginx
# bookkeep the version as part of data
echo "{ \"version\": \"${arg_version}\", \"boxVersionsUrl\": \"${arg_box_versions_url}\" }" > "${DATA_DIR}/box/version"
echo "{ \"version\": \"${arg_version}\", \"boxVersionsUrl\": \"${arg_box_versions_url}\" }" > "${BOX_DATA_DIR}/version"
# remove old snapshots. if we do want to keep this around, we will have to fix the chown -R below
# which currently fails because these are readonly fs
echo "Cleaning up snapshots"
echo "==> Cleaning up snapshots"
find "${DATA_DIR}/snapshots" -mindepth 1 -maxdepth 1 | xargs --no-run-if-empty btrfs subvolume delete
# restart mysql to make sure it has latest config
# wait for all running mysql jobs
cp "${script_dir}/start/mysql.cnf" /etc/mysql/mysql.cnf
while true; do
if ! systemctl list-jobs | grep mysql; then break; fi
echo "Waiting for mysql jobs..."
@@ -76,70 +230,31 @@ mysqladmin -u root -ppassword password password # reset default root password
mysql -u root -p${mysql_root_password} -e 'CREATE DATABASE IF NOT EXISTS box'
if [[ -n "${arg_restore_url}" ]]; then
set_progress "15" "Downloading restore data"
set_progress "30" "Downloading restore data"
echo "Downloading backup: ${arg_restore_url} and key: ${arg_restore_key}"
echo "==> Downloading backup: ${arg_restore_url} and key: ${arg_restore_key}"
while true; do
if $curl -L "${arg_restore_url}" | openssl aes-256-cbc -d -pass "pass:${arg_restore_key}" | tar -zxf - -C "${DATA_DIR}/box"; then break; fi
if $curl -L "${arg_restore_url}" | openssl aes-256-cbc -d -pass "pass:${arg_restore_key}" \
| tar -zxf - --overwrite --transform="s,^box/\?,boxdata/," --transform="s,^mail/\?,data/mail/," --show-transformed-names -C "${HOME_DIR}"; then break; fi
echo "Failed to download data, trying again"
done
set_progress "21" "Setting up MySQL"
if [[ -f "${DATA_DIR}/box/box.mysqldump" ]]; then
echo "Importing existing database into MySQL"
mysql -u root -p${mysql_root_password} box < "${DATA_DIR}/box/box.mysqldump"
set_progress "35" "Setting up MySQL"
if [[ -f "${BOX_DATA_DIR}/box.mysqldump" ]]; then
echo "==> Importing existing database into MySQL"
mysql -u root -p${mysql_root_password} box < "${BOX_DATA_DIR}/box.mysqldump"
fi
fi
set_progress "25" "Migrating data"
set_progress "40" "Migrating data"
sudo -u "${USER}" -H bash <<EOF
set -eu
cd "${BOX_SRC_DIR}"
BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@localhost/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up
EOF
set_progress "28" "Setup collectd"
cp "${script_dir}/start/collectd.conf" "${DATA_DIR}/collectd/collectd.conf"
systemctl restart collectd
set_progress "30" "Setup nginx"
mkdir -p "${DATA_DIR}/nginx/applications"
cp "${script_dir}/start/nginx/nginx.conf" "${DATA_DIR}/nginx/nginx.conf"
cp "${script_dir}/start/nginx/mime.types" "${DATA_DIR}/nginx/mime.types"
# generate these for update code paths as well to overwrite splash
admin_cert_file="${DATA_DIR}/nginx/cert/host.cert"
admin_key_file="${DATA_DIR}/nginx/cert/host.key"
if [[ -f "${DATA_DIR}/box/certs/${admin_fqdn}.cert" && -f "${DATA_DIR}/box/certs/${admin_fqdn}.key" ]]; then
admin_cert_file="${DATA_DIR}/box/certs/${admin_fqdn}.cert"
admin_key_file="${DATA_DIR}/box/certs/${admin_fqdn}.key"
fi
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"admin\", \"sourceDir\": \"${BOX_SRC_DIR}\", \"certFilePath\": \"${admin_cert_file}\", \"keyFilePath\": \"${admin_key_file}\", \"xFrameOptions\": \"SAMEORIGIN\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
mkdir -p "${DATA_DIR}/nginx/cert"
if [[ -f "${DATA_DIR}/box/certs/host.cert" && -f "${DATA_DIR}/box/certs/host.key" ]]; then
cp "${DATA_DIR}/box/certs/host.cert" "${DATA_DIR}/nginx/cert/host.cert"
cp "${DATA_DIR}/box/certs/host.key" "${DATA_DIR}/nginx/cert/host.key"
else
echo "${arg_tls_cert}" > "${DATA_DIR}/nginx/cert/host.cert"
echo "${arg_tls_key}" > "${DATA_DIR}/nginx/cert/host.key"
fi
set_progress "33" "Changing ownership"
chown "${USER}:${USER}" -R "${DATA_DIR}/nginx" "${DATA_DIR}/collectd" "${DATA_DIR}/addons" "${DATA_DIR}/acme"
# during updates, do not trample mail ownership behind the mail container's back
find "${DATA_DIR}/box" -mindepth 1 -maxdepth 1 -not -path "${DATA_DIR}/box/mail" -print0 | xargs -0 chown -R "${USER}:${USER}"
chown "${USER}:${USER}" "${DATA_DIR}/box"
chown "${USER}:${USER}" -R "${DATA_DIR}/box/mail/dkim" # this is owned by box currently since it generates the keys
chown "${USER}:${USER}" "${DATA_DIR}/INFRA_VERSION" || true
chown "${USER}:${USER}" "${DATA_DIR}"
set_progress "65" "Creating cloudron.conf"
sudo -u yellowtent -H bash <<EOF
set -eu
echo "Creating cloudron.conf"
echo "==> Creating cloudron.conf"
cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
{
"version": "${arg_version}",
@@ -161,69 +276,51 @@ cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
"appBundle": ${arg_app_bundle}
}
CONF_END
# pass these out-of-band because they have new lines which interfere with json
if [[ -n "${arg_tls_cert}" && -n "${arg_tls_key}" ]]; then
echo "${arg_tls_cert}" > "${CONFIG_DIR}/host.cert"
echo "${arg_tls_key}" > "${CONFIG_DIR}/host.key"
fi
echo "Creating config.json for webadmin"
echo "==> Creating config.json for webadmin"
cat > "${BOX_SRC_DIR}/webadmin/dist/config.json" <<CONF_END
{
"webServerOrigin": "${arg_web_server_origin}"
}
CONF_END
EOF
# Add Backup Configuration
echo "==> Changing ownership"
chown "${USER}:${USER}" -R "${CONFIG_DIR}"
chown "${USER}:${USER}" -R "${DATA_DIR}/nginx" "${DATA_DIR}/collectd" "${DATA_DIR}/addons" "${DATA_DIR}/acme"
chown "${USER}:${USER}" -R "${BOX_DATA_DIR}"
chown "${USER}:${USER}" -R "${DATA_DIR}/mail/dkim" # this is owned by box currently since it generates the keys
chown "${USER}:${USER}" "${DATA_DIR}/INFRA_VERSION" 2>/dev/null || true
chown "${USER}:${USER}" "${DATA_DIR}"
echo "==> Adding automated configs"
if [[ ! -z "${arg_backup_config}" ]]; then
echo "Add Backup Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"backup_config\", '$arg_backup_config')" box
fi
# Add DNS Configuration
if [[ ! -z "${arg_dns_config}" ]]; then
echo "Add DNS Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"dns_config\", '$arg_dns_config')" box
fi
# Add Update Configuration
if [[ ! -z "${arg_update_config}" ]]; then
echo "Add Update Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"update_config\", '$arg_update_config')" box
fi
# Add TLS Configuration
if [[ ! -z "${arg_tls_config}" ]]; then
echo "Add TLS Config"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO settings (name, value) VALUES (\"tls_config\", '$arg_tls_config')" box
fi
# The domain might have changed, therefore we have to update the record
# !!! This needs to be in sync with the webadmin, specifically login_callback.js
echo "Add webadmin api client"
readonly ADMIN_SCOPES="cloudron,developer,profile,users,apps,settings"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (\"cid-webadmin\", \"Settings\", \"built-in\", \"secret-webadmin\", \"${admin_origin}\", \"${ADMIN_SCOPES}\")" box
echo "Add SDK api client"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (\"cid-sdk\", \"SDK\", \"built-in\", \"secret-sdk\", \"${admin_origin}\", \"*,roleSdk\")" box
echo "Add cli api client"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (\"cid-cli\", \"Cloudron Tool\", \"built-in\", \"secret-cli\", \"${admin_origin}\", \"*,roleSdk\")" box
set_progress "80" "Starting Cloudron"
set_progress "60" "Starting Cloudron"
systemctl start cloudron.target
sleep 2 # give systemd some time to start the processes
set_progress "85" "Reloading nginx"
nginx -s reload
set_progress "100" "Done"
set_progress "90" "Done"
@@ -7,34 +7,27 @@ readonly APPS_SWAP_FILE="/apps.swap"
readonly USER_DATA_FILE="/root/user_data.img"
readonly USER_DATA_DIR="/home/yellowtent/data"
# detect device
if [[ -b "/dev/vda1" ]]; then
disk_device="/dev/vda1"
fi
# detect device of rootfs (http://forums.fedoraforum.org/showthread.php?t=270316)
disk_device="$(for d in $(find /dev -type b); do [ "$(mountpoint -d /)" = "$(mountpoint -x $d)" ] && echo $d && break; done)"
if [[ -b "/dev/xvda1" ]]; then
disk_device="/dev/xvda1"
fi
# allow root access over ssh
sed -e 's/.* \(ssh-rsa.*\)/\1/' -i /root/.ssh/authorized_keys
existing_swap=$(awk '/SwapTotal/ { printf "%.0f", $2/1024 }' /proc/meminfo)
# all sizes are in mb
readonly physical_memory=$(free -m | awk '/Mem:/ { print $2 }')
readonly swap_size="${physical_memory}" # if you change this, fix enoughResourcesAvailable() in client.js
readonly swap_size=$((${physical_memory} - ${existing_swap})) # if you change this, fix enoughResourcesAvailable() in client.js
readonly app_count=$((${physical_memory} / 200)) # estimated app count
readonly disk_size_gb=$(fdisk -l ${disk_device} | grep "Disk ${disk_device}" | awk '{ print $3 }')
readonly disk_size=$((disk_size_gb * 1024))
readonly system_size=10240 # 10 gigs for system libs, apps images, installer, box code and tmp
readonly disk_size_bytes=$(fdisk -l ${disk_device} | grep "Disk ${disk_device}" | awk '{ printf $5 }') # can't rely on fdisk human readable units, using bytes instead
readonly disk_size=$((${disk_size_bytes}/1024/1024))
readonly system_size=10240 # 10 gigs for system libs, apps images, installer, box code, data and tmp
readonly ext4_reserved=$((disk_size * 5 / 100)) # this can be changed using tune2fs -m percent /dev/vda1
echo "Disk device: ${disk_device}"
echo "Physical memory: ${physical_memory}"
echo "Estimated app count: ${app_count}"
echo "Disk size: ${disk_size}"
echo "Disk size: ${disk_size}M"
# Allocate swap for general app usage
if [[ ! -f "${APPS_SWAP_FILE}" ]]; then
if [[ ! -f "${APPS_SWAP_FILE}" && ${swap_size} -gt 0 ]]; then
echo "Creating Apps swap file of size ${swap_size}M"
fallocate -l "${swap_size}m" "${APPS_SWAP_FILE}"
chmod 600 "${APPS_SWAP_FILE}"
@@ -45,6 +38,7 @@ else
echo "Apps Swap file already exists"
fi
# see start.sh for the initial default size of 8gb. On small disks the calculation might be lower than 8gb resulting in a failure to resize here.
echo "Resizing data volume"
home_data_size=$((disk_size - system_size - swap_size - ext4_reserved))
echo "Resizing up btrfs user data to size ${home_data_size}M"
@@ -54,4 +48,3 @@ umount "${USER_DATA_DIR}" || true
truncate -s "${home_data_size}m" "${USER_DATA_FILE}" # this will shrink it if the file had existed. this is useful when running this script on a live system
mount -t btrfs -o loop,nosuid "${USER_DATA_FILE}" ${USER_DATA_DIR}
btrfs filesystem resize max "${USER_DATA_DIR}"
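The sizing arithmetic above can be sketched in one place (all sizes in MB; function and option names are illustrative, mirroring the shell variables, not part of the repo):

```javascript
// Sketch of the resize script's arithmetic above (all sizes in MB).
// Names are illustrative; the constants mirror the shell script.
function computeSizes(opts) {
    // swap tops the existing swap up to the physical memory size
    const swapSize = opts.physicalMemoryMb - opts.existingSwapMb;
    // fdisk's human-readable units vary, so the script works from bytes
    const diskSizeMb = Math.floor(opts.diskSizeBytes / 1024 / 1024);
    const systemSize = 10240; // 10 gigs for system libs, app images, box code, data and tmp
    const ext4Reserved = Math.floor(diskSizeMb * 5 / 100); // default 5% reserved blocks
    const homeDataSize = diskSizeMb - systemSize - swapSize - ext4Reserved;
    return { swapSize: swapSize, diskSizeMb: diskSizeMb, homeDataSize: homeDataSize };
}
```

For example, a 4 GB RAM / 50 GB disk instance with no pre-existing swap ends up with roughly 33.5 GB for the btrfs user data volume.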
@@ -5,8 +5,12 @@ map $http_upgrade $connection_upgrade {
}
server {
<% if (vhost) { %>
listen 443;
server_name <%= vhost %>;
<% } else { %>
listen 443 default_server;
<% } %>
ssl on;
# paths are relative to prefix and not to this file
@@ -43,9 +47,10 @@ server {
proxy_set_header Connection $connection_upgrade;
# only serve up the status page if we get proxy gateway errors
error_page 502 503 504 @appstatus;
location @appstatus {
return 307 <%= adminOrigin %>/appstatus.html?referrer=https://$host$request_uri;
root <%= sourceDir %>/webadmin/dist;
error_page 502 503 504 /appstatus.html;
location /appstatus.html {
internal;
}
location / {
@@ -80,9 +85,6 @@ server {
index index.html index.htm;
}
<% } else if ( endpoint === 'oauthproxy' ) { %>
proxy_pass http://127.0.0.1:3003;
proxy_set_header X-Cloudron-Proxy-Port <%= port %>;
<% } else if ( endpoint === 'app' ) { %>
proxy_pass http://127.0.0.1:<%= port %>;
<% } else if ( endpoint === 'splash' ) { %>
@@ -57,35 +57,6 @@ http {
}
}
# This server handles the naked domain for custom domains.
# It can also be used for wildcard subdomain 404. This feature is not used by the Cloudron itself
# because box always sets up DNS records for app subdomains.
server {
listen 443 default_server;
ssl on;
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
error_page 404 = @fallback;
location @fallback {
internal;
root /home/yellowtent/box/webadmin/dist;
rewrite ^/$ /nakeddomain.html break;
}
location / {
internal;
root /home/yellowtent/box/webadmin/dist;
rewrite ^/$ /nakeddomain.html break;
}
# required for /api/v1/cloudron/avatar
location /api/ {
proxy_pass http://127.0.0.1:3000;
client_max_body_size 1m;
}
}
include applications/*.conf;
}
@@ -31,3 +31,8 @@ yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/collectlogs.sh
Defaults!/home/yellowtent/box/src/scripts/retire.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/retire.sh
Defaults!/home/yellowtent/box/src/scripts/rmbackup.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/rmbackup.sh
Defaults!/home/yellowtent/box/src/scripts/update.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/update.sh
@@ -4,6 +4,9 @@ OnFailure=crashnotifier@%n.service
StopWhenUnneeded=true
; journald crashes result in a EPIPE in node. Cannot ignore it as it results in loss of logs.
BindsTo=systemd-journald.service
After=mysql.service nginx.service
; As cloudron-resize-fs is a one-shot, the Wants= automatically ensures that the service *finishes*
Wants=cloudron-resize-fs.service
[Service]
Type=idle
@@ -0,0 +1,16 @@
# Allocate swap files
# https://bbs.archlinux.org/viewtopic.php?id=194792 ensures this runs after do-resize.service
# On ubuntu ec2 we use cloud-init https://wiki.archlinux.org/index.php/Cloud-init
[Unit]
Description=Cloudron FS Resizer
Before=docker.service collectd.service mysql.service sshd.service nginx.service
After=cloud-init.service
[Service]
Type=oneshot
ExecStart="/home/yellowtent/box/setup/start/cloudron-resize-fs.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -0,0 +1,11 @@
[Unit]
Description=IPTables Restore
Before=docker.service
[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
@@ -0,0 +1,14 @@
# The default ubuntu unbound service uses SysV fallback mode; we want a proper unit file so unbound gets restarted correctly
[Unit]
Description=Unbound DNS Resolver
After=network.target
[Service]
PIDFile=/run/unbound.pid
ExecStart=/usr/sbin/unbound -d
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
[Install]
WantedBy=multi-user.target
@@ -28,6 +28,7 @@ var appdb = require('./appdb.js'),
generatePassword = require('password-generator'),
hat = require('hat'),
infra = require('./infra_version.js'),
mailboxdb = require('./mailboxdb.js'),
once = require('once'),
path = require('path'),
paths = require('./paths.js'),
@@ -253,6 +254,8 @@ function setupOauth(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
if (!app.sso) return callback(null);
var appId = app.id;
var redirectURI = 'https://' + config.appFqdn(app.location);
var scope = 'profile';
@@ -295,6 +298,8 @@ function setupSimpleAuth(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
if (!app.sso) return callback(null);
var appId = app.id;
var scope = 'profile';
@@ -369,6 +374,8 @@ function setupLdap(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
if (!app.sso) return callback(null);
var env = [
'LDAP_SERVER=172.18.0.1',
'LDAP_PORT=' + config.get('ldapPort'),
@@ -399,14 +406,21 @@ function setupSendMail(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var from = (app.location ? app.location : app.manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
debugApp(app, 'Setting up SendMail');
var cmd = [ '/addons/mail/service.sh', 'add-send', from ];
docker.execContainer('mail', cmd, { bufferStdout: true }, function (error, stdout) {
mailboxdb.getByOwnerId(app.id, function (error, results) {
if (error) return callback(error);
var env = stdout.toString('utf8').split('\n').slice(0, -1); // remove trailing newline
var mailbox = results.filter(function (r) { return !r.aliasTarget; })[0];
var env = [
"MAIL_SMTP_SERVER=mail",
"MAIL_SMTP_PORT=2525",
"MAIL_SMTP_USERNAME=" + mailbox.name,
"MAIL_SMTP_PASSWORD=" + app.id,
"MAIL_FROM=" + mailbox.name + '@' + config.fqdn(),
"MAIL_DOMAIN=" + config.fqdn()
];
debugApp(app, 'Setting sendmail addon config to %j', env);
appdb.setAddonConfig(app.id, 'sendmail', env, callback);
});
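The new sendmail wiring above no longer shells into the mail container; it derives the addon environment from the app's mailbox rows, where the primary mailbox is the row without an `aliasTarget`. A minimal standalone sketch (the sample rows and fqdn are illustrative):

```javascript
// Sketch: build the sendmail addon env from mailboxdb rows, as in setupSendMail above.
function sendMailEnv(appId, mailboxRows, fqdn) {
    // the primary mailbox is the one that is not an alias
    var mailbox = mailboxRows.filter(function (r) { return !r.aliasTarget; })[0];
    return [
        'MAIL_SMTP_SERVER=mail',
        'MAIL_SMTP_PORT=2525',
        'MAIL_SMTP_USERNAME=' + mailbox.name,
        'MAIL_SMTP_PASSWORD=' + appId, // the app id doubles as the SMTP password
        'MAIL_FROM=' + mailbox.name + '@' + fqdn,
        'MAIL_DOMAIN=' + fqdn
    ];
}
```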
@@ -417,17 +431,9 @@ function teardownSendMail(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var from = (app.location ? app.location : app.manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
debugApp(app, 'Tearing down sendmail');
var cmd = [ '/addons/mail/service.sh', 'remove-send', from ];
debugApp(app, 'Tearing down sendmail : %j', cmd);
docker.execContainer('mail', cmd, { }, function (error) {
if (error) return callback(error);
appdb.unsetAddonConfig(app.id, 'sendmail', callback);
});
appdb.unsetAddonConfig(app.id, 'sendmail', callback);
}
function setupRecvMail(app, options, callback) {
@@ -437,15 +443,21 @@ function setupRecvMail(app, options, callback) {
debugApp(app, 'Setting up recvmail');
var to = (app.location ? app.location : app.manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
var cmd = [ '/addons/mail/service.sh', 'add-recv', to ];
docker.execContainer('mail', cmd, { bufferStdout: true }, function (error, stdout) {
mailboxdb.getByOwnerId(app.id, function (error, results) {
if (error) return callback(error);
var env = stdout.toString('utf8').split('\n').slice(0, -1); // remove trailing newline
debugApp(app, 'Setting recvmail addon config to %j', env);
var mailbox = results.filter(function (r) { return !r.aliasTarget; })[0];
var env = [
"MAIL_IMAP_SERVER=mail",
"MAIL_IMAP_PORT=9993",
"MAIL_IMAP_USERNAME=" + mailbox.name,
"MAIL_IMAP_PASSWORD=" + app.id,
"MAIL_TO=" + mailbox.name + '@' + config.fqdn(),
"MAIL_DOMAIN=" + config.fqdn()
];
debugApp(app, 'Setting recvmail addon config to %j', env);
appdb.setAddonConfig(app.id, 'recvmail', env, callback);
});
}
@@ -455,17 +467,9 @@ function teardownRecvMail(app, options, callback) {
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var to = (app.location ? app.location : app.manifest.title.replace(/[^a-zA-Z0-9]/g, '')) + '.app';
debugApp(app, 'Tearing down recvmail');
var cmd = [ '/addons/mail/service.sh', 'remove-recv', to ];
debugApp(app, 'Tearing down recvmail: %j', cmd);
docker.execContainer('mail', cmd, { }, function (error) {
if (error) return callback(error);
appdb.unsetAddonConfig(app.id, 'recvmail', callback);
});
appdb.unsetAddonConfig(app.id, 'recvmail', callback);
}
function setupMySql(app, options, callback) {
@@ -1,5 +1,3 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
@@ -60,7 +58,7 @@ var assert = require('assert'),
var APPS_FIELDS_PREFIXED = [ 'apps.id', 'apps.appStoreId', 'apps.installationState', 'apps.installationProgress', 'apps.runState',
'apps.health', 'apps.containerId', 'apps.manifestJson', 'apps.httpPort', 'apps.location', 'apps.dnsRecordId',
'apps.accessRestrictionJson', 'apps.lastBackupId', 'apps.oldConfigJson', 'apps.memoryLimit', 'apps.altDomain',
'apps.xFrameOptions', 'apps.oauthProxy' ].join(',');
'apps.xFrameOptions', 'apps.sso', 'apps.debugModeJson' ].join(',');
var PORT_BINDINGS_FIELDS = [ 'hostPort', 'environmentVariable', 'appId' ].join(',');
@@ -97,7 +95,11 @@ function postProcess(result) {
// TODO remove later once all apps have this attribute
result.xFrameOptions = result.xFrameOptions || 'SAMEORIGIN';
result.oauthProxy = !!result.oauthProxy; // make it bool
result.sso = !!result.sso; // make it bool
assert(result.debugModeJson === null || typeof result.debugModeJson === 'string');
result.debugMode = safe.JSON.parse(result.debugModeJson);
delete result.debugModeJson;
}
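The row normalization above (bool coercion of MySQL's 0/1 columns plus tolerant parsing of the `*Json` columns) can be sketched as a standalone function; the try/catch stands in for `safe.JSON.parse`:

```javascript
// Sketch of appdb's postProcess normalization for a row (field names as above).
function postProcess(result) {
    result.xFrameOptions = result.xFrameOptions || 'SAMEORIGIN';
    result.sso = !!result.sso; // MySQL returns 0/1; make it a real boolean
    try {
        result.debugMode = result.debugModeJson === null ? null : JSON.parse(result.debugModeJson);
    } catch (e) {
        result.debugMode = null; // tolerate bad JSON, like safe.JSON.parse
    }
    delete result.debugModeJson;
    return result;
}
```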
function get(id, callback) {
@@ -184,12 +186,13 @@ function add(id, appStoreId, manifest, location, portBindings, data, callback) {
var xFrameOptions = data.xFrameOptions || '';
var installationState = data.installationState || exports.ISTATE_PENDING_INSTALL;
var lastBackupId = data.lastBackupId || null; // used when cloning
var oauthProxy = data.oauthProxy || false;
var sso = 'sso' in data ? data.sso : null;
var debugModeJson = data.debugMode ? JSON.stringify(data.debugMode) : null;
var queries = [ ];
queries.push({
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestrictionJson, memoryLimit, altDomain, xFrameOptions, lastBackupId, oauthProxy) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, installationState, location, accessRestrictionJson, memoryLimit, altDomain, xFrameOptions, lastBackupId, oauthProxy ]
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestrictionJson, memoryLimit, altDomain, xFrameOptions, lastBackupId, sso, debugModeJson) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, installationState, location, accessRestrictionJson, memoryLimit, altDomain, xFrameOptions, lastBackupId, sso, debugModeJson ]
});
Object.keys(portBindings).forEach(function (env) {
@@ -299,6 +302,9 @@ function updateWithConstraints(id, app, constraints, callback) {
} else if (p === 'accessRestriction') {
fields.push('accessRestrictionJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'debugMode') {
fields.push('debugModeJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p !== 'portBindings') {
fields.push(p + ' = ?');
values.push(app[p]);
@@ -1,9 +1,9 @@
'use strict';
var appdb = require('./appdb.js'),
apps = require('./apps.js'),
assert = require('assert'),
async = require('async'),
config = require('./config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:apphealthmonitor'),
docker = require('./docker.js').connection,
@@ -50,7 +50,7 @@ function setHealth(app, health, callback) {
debugApp(app, 'marking as unhealthy since not seen for more than %s minutes', UNHEALTHY_THRESHOLD/(60 * 1000));
if (app.appStoreId !== '') mailer.appDied(app); // do not send mails for dev apps
if (app.debugMode) mailer.appDied(app); // do not send mails for dev apps
gHealthInfo[app.id].emailSent = true;
} else {
debugApp(app, 'waiting for some time to update the app health');
@@ -93,7 +93,7 @@ function checkAppHealth(app, callback) {
var healthCheckUrl = 'http://127.0.0.1:' + app.httpPort + manifest.healthCheckPath;
superagent
.get(healthCheckUrl)
.set('Host', config.appFqdn(app.location)) // required for some apache configs with rewrite rules
.set('Host', app.fqdn) // required for some apache configs with rewrite rules
.redirects(0)
.timeout(HEALTHCHECK_INTERVAL)
.end(function (error, res) {
@@ -111,13 +111,13 @@ function checkAppHealth(app, callback) {
}
function processApps(callback) {
appdb.getAll(function (error, apps) {
apps.getAll(function (error, result) {
if (error) return callback(error);
async.each(apps, checkAppHealth, function (error) {
async.each(result, checkAppHealth, function (error) {
if (error) console.error(error);
var alive = apps
var alive = result
.filter(function (a) { return a.installationState === appdb.ISTATE_INSTALLED && a.runState === appdb.RSTATE_RUNNING && a.health === appdb.HEALTH_HEALTHY; })
.map(function (a) { return (a.location || 'naked_domain') + '|' + a.manifest.id; }).join(', ');
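The "alive" summary above reports only apps that are installed, running, and healthy, labelling an app on the naked domain specially. A standalone sketch (the appdb state constants are inlined here as plain strings, which is an assumption about their values):

```javascript
// Sketch of the apphealthmonitor "alive" summary above (state constants inlined).
function aliveSummary(result) {
    return result
        .filter(function (a) { return a.installationState === 'installed' && a.runState === 'running' && a.health === 'healthy'; })
        .map(function (a) { return (a.location || 'naked_domain') + '|' + a.manifest.id; })
        .join(', ');
}
```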
@@ -166,8 +166,8 @@ function processDockerEvents() {
debug('OOM Context: %s', context);
// do not send mails for dev apps
if ((!app || app.appStoreId !== '') && (now - lastOomMailTime > OOM_MAIL_LIMIT)) {
mailer.unexpectedExit(program, context); // app can be null if it's an addon crash
if ((!app || !app.debugMode) && (now - lastOomMailTime > OOM_MAIL_LIMIT)) {
mailer.oomEvent(program, context); // app can be null if it's an addon crash
lastOomMailTime = now;
}
});
@@ -58,6 +58,7 @@ var addons = require('./addons.js'),
eventlog = require('./eventlog.js'),
fs = require('fs'),
groups = require('./groups.js'),
mailboxdb = require('./mailboxdb.js'),
manifestFormat = require('cloudron-manifestformat'),
path = require('path'),
paths = require('./paths.js'),
@@ -68,6 +69,7 @@ var addons = require('./addons.js'),
split = require('split'),
superagent = require('superagent'),
taskmanager = require('./taskmanager.js'),
updateChecker = require('./updatechecker.js'),
url = require('url'),
util = require('util'),
uuid = require('node-uuid'),
@@ -104,7 +106,6 @@ AppsError.PORT_RESERVED = 'Port Reserved';
AppsError.PORT_CONFLICT = 'Port Conflict';
AppsError.BILLING_REQUIRED = 'Billing Required';
AppsError.ACCESS_DENIED = 'Access denied';
AppsError.USER_REQUIRED = 'User required';
AppsError.BAD_CERTIFICATE = 'Invalid certificate';
// Hostname validation comes from RFC 1123 (section 2.1)
@@ -128,18 +129,21 @@ function validateHostname(location, fqdn) {
// validate the port bindings
function validatePortBindings(portBindings, tcpPorts) {
assert.strictEqual(typeof portBindings, 'object');
// keep the public ports in sync with firewall rules in scripts/initializeBaseUbuntuImage.sh
// these ports are reserved even if we listen only on 127.0.0.1 because we setup HostIp to be 127.0.0.1
// for custom tcp ports
var RESERVED_PORTS = [
22, /* ssh */
25, /* smtp */
53, /* dns */
80, /* http */
143, /* imap */
202, /* caas ssh */
443, /* https */
465, /* smtps */
587, /* submission */
919, /* ssh */
993, /* imaps */
2003, /* graphite (lo) */
2004, /* graphite (lo) */
@@ -148,7 +152,6 @@ function validatePortBindings(portBindings, tcpPorts) {
config.get('sysadminPort'), /* sysadmin app server (lo) */
config.get('smtpPort'), /* internal smtp port (lo) */
config.get('ldapPort'), /* ldap server (lo) */
config.get('oauthProxyPort'), /* oauth proxy server (lo) */
config.get('simpleAuthPort'), /* simple auth server (lo) */
3306, /* mysql (lo) */
4190, /* managesieve */
@@ -162,9 +165,9 @@ function validatePortBindings(portBindings, tcpPorts) {
if (!/^[a-zA-Z0-9_]+$/.test(env)) return new AppsError(AppsError.BAD_FIELD, env + ' is not valid environment variable');
if (!Number.isInteger(portBindings[env])) return new AppsError(AppsError.BAD_FIELD, portBindings[env] + ' is not an integer');
if (portBindings[env] <= 0 || portBindings[env] > 65535) return new AppsError(AppsError.BAD_FIELD, portBindings[env] + ' is out of range');
if (RESERVED_PORTS.indexOf(portBindings[env]) !== -1) return new AppsError(AppsError.PORT_RESERVED, String(portBindings[env]));
if (portBindings[env] <= 1023 || portBindings[env] > 65535) return new AppsError(AppsError.BAD_FIELD, portBindings[env] + ' is not in permitted range');
}
// it is OK if there is no 1-1 mapping between values in manifest.tcpPorts and portBindings. missing values implies
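The per-binding checks above can be sketched for a single entry; the reserved list is abbreviated here and the function name is illustrative:

```javascript
// Sketch of validatePortBindings' per-entry checks (reserved list abbreviated).
var RESERVED_PORTS = [ 25, 53, 80, 443, 2003, 2004, 3306, 4190 ]; // abbreviated
function validatePortBinding(env, port) {
    if (!/^[a-zA-Z0-9_]+$/.test(env)) return env + ' is not a valid environment variable';
    if (!Number.isInteger(port)) return port + ' is not an integer';
    if (port <= 1023 || port > 65535) return port + ' is not in the permitted range';
    if (RESERVED_PORTS.indexOf(port) !== -1) return String(port) + ' is reserved';
    return null; // valid
}
```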
@@ -207,6 +210,9 @@ function validateMemoryLimit(manifest, memoryLimit) {
// this is needed so an app update can change the value in the manifest, and if not set by the user, the new value should be used
if (memoryLimit === 0) return null;
// a special value that indicates unlimited memory
if (memoryLimit === -1) return null;
if (memoryLimit < min) return new AppsError(AppsError.BAD_FIELD, 'memoryLimit too small');
if (memoryLimit > max) return new AppsError(AppsError.BAD_FIELD, 'memoryLimit too large');
@@ -227,6 +233,16 @@ function validateXFrameOptions(xFrameOptions) {
return (uri.protocol === 'http:' || uri.protocol === 'https:') ? null : new AppsError(AppsError.BAD_FIELD, 'xFrameOptions ALLOW-FROM uri must be a valid http[s] uri' );
}
function validateDebugMode(debugMode) {
assert.strictEqual(typeof debugMode, 'object');
if (debugMode === null) return null;
if ('cmd' in debugMode && debugMode.cmd !== null && !Array.isArray(debugMode.cmd)) return new AppsError(AppsError.BAD_FIELD, 'debugMode.cmd must be an array or null' );
if ('readonlyRootfs' in debugMode && typeof debugMode.readonlyRootfs !== 'boolean') return new AppsError(AppsError.BAD_FIELD, 'debugMode.readonlyRootfs must be a boolean' );
return null;
}
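For illustration, a standalone copy of the `validateDebugMode` checks above, with the AppsError wrapping replaced by plain message strings:

```javascript
// Standalone copy of the validateDebugMode checks above (messages unwrapped).
function validateDebugMode(debugMode) {
    if (debugMode === null) return null;
    if ('cmd' in debugMode && debugMode.cmd !== null && !Array.isArray(debugMode.cmd)) return 'debugMode.cmd must be an array or null';
    if ('readonlyRootfs' in debugMode && typeof debugMode.readonlyRootfs !== 'boolean') return 'debugMode.readonlyRootfs must be a boolean';
    return null; // valid
}
```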
function getDuplicateErrorDetails(location, portBindings, error) {
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
@@ -262,7 +278,7 @@ function getAppConfig(app) {
}
function getIconUrlSync(app) {
var iconPath = paths.APPICONS_DIR + '/' + app.id + '.png';
var iconPath = paths.APP_ICONS_DIR + '/' + app.id + '.png';
return fs.existsSync(iconPath) ? '/api/v1/apps/' + app.id + '/icon' : null;
}
@@ -359,24 +375,41 @@ function purchase(appId, appstoreId, callback) {
if (appstoreId === '') return callback(null);
// Skip for caas at the moment
if (config.provider() === 'caas') return callback(null);
function purchaseWithAppstoreConfig(appstoreConfig) {
assert.strictEqual(typeof appstoreConfig.userId, 'string');
assert.strictEqual(typeof appstoreConfig.cloudronId, 'string');
assert.strictEqual(typeof appstoreConfig.token, 'string');
settings.getAppstoreConfig(function (error, result) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!result.token) return callback(new AppsError(AppsError.BILLING_REQUIRED));
var url = config.apiServerOrigin() + '/api/v1/users/' + result.userId + '/cloudrons/' + result.cloudronId + '/apps/' + appId;
var url = config.apiServerOrigin() + '/api/v1/users/' + appstoreConfig.userId + '/cloudrons/' + appstoreConfig.cloudronId + '/apps/' + appId;
var data = { appstoreId: appstoreId };
superagent.post(url).send(data).query({ accessToken: result.token }).timeout(30 * 1000).end(function (error, result) {
superagent.post(url).send(data).query({ accessToken: appstoreConfig.token }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error));
if (result.statusCode === 404) return callback(new AppsError(AppsError.NOT_FOUND));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new AppsError(AppsError.BILLING_REQUIRED));
if (result.statusCode !== 201 && result.statusCode !== 200) return callback(new AppsError(AppsError.EXTERNAL_ERROR, util.format('App purchase failed. %s %j', result.status, result.body)));
callback(null);
});
});
}
// Caas Cloudrons do not store appstore credentials in their local database
if (config.provider() === 'caas') {
var url = config.apiServerOrigin() + '/api/v1/exchangeBoxTokenWithUserToken';
superagent.post(url).query({ token: config.token() }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 201) return callback(new AppsError(AppsError.EXTERNAL_ERROR, util.format('App purchase failed. %s %j', result.status, result.body)));
purchaseWithAppstoreConfig(result.body);
});
} else {
settings.getAppstoreConfig(function (error, result) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!result.token) return callback(new AppsError(AppsError.BILLING_REQUIRED));
purchaseWithAppstoreConfig(result);
});
}
}
function unpurchase(appId, appstoreId, callback) {
@@ -386,12 +419,10 @@ function unpurchase(appId, appstoreId, callback) {
if (appstoreId === '') return callback(null);
// Skip for caas at the moment
if (config.provider() === 'caas') return callback(null);
settings.getAppstoreConfig(function (error, appstoreConfig) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!appstoreConfig.token) return callback(new AppsError(AppsError.BILLING_REQUIRED));
function unpurchaseWithAppstoreConfig(appstoreConfig) {
assert.strictEqual(typeof appstoreConfig.userId, 'string');
assert.strictEqual(typeof appstoreConfig.cloudronId, 'string');
assert.strictEqual(typeof appstoreConfig.token, 'string');
var url = config.apiServerOrigin() + '/api/v1/users/' + appstoreConfig.userId + '/cloudrons/' + appstoreConfig.cloudronId + '/apps/' + appId;
@@ -406,7 +437,25 @@ function unpurchase(appId, appstoreId, callback) {
callback(null);
});
});
});
}
// Caas Cloudrons do not store appstore credentials in their local database
if (config.provider() === 'caas') {
var url = config.apiServerOrigin() + '/api/v1/exchangeBoxTokenWithUserToken';
superagent.post(url).query({ token: config.token() }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 201) return callback(new AppsError(AppsError.EXTERNAL_ERROR, util.format('App unpurchase failed. %s %j', result.status, result.body)));
unpurchaseWithAppstoreConfig(result.body);
});
} else {
settings.getAppstoreConfig(function (error, result) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (!result.token) return callback(new AppsError(AppsError.BILLING_REQUIRED));
unpurchaseWithAppstoreConfig(result);
});
}
}
function downloadManifest(appStoreId, manifest, callback) {
@@ -443,7 +492,8 @@ function install(data, auditSource, callback) {
memoryLimit = data.memoryLimit || 0,
altDomain = data.altDomain || null,
xFrameOptions = data.xFrameOptions || 'SAMEORIGIN',
oauthProxy = data.oauthProxy === true;
sso = 'sso' in data ? data.sso : null,
debugMode = data.debugMode || null;
assert(data.appStoreId || data.manifest); // at least one of them is required
@@ -471,18 +521,21 @@ function install(data, auditSource, callback) {
error = validateXFrameOptions(xFrameOptions);
if (error) return callback(error);
if (altDomain !== null && !validator.isFQDN(altDomain)) return callback(new AppsError(AppsError.BAD_FIELD, 'Invalid alt domain'));
error = validateDebugMode(debugMode);
if (error) return callback(error);
// singleUser mode requires accessRestriction to contain exactly one user
if (manifest.singleUser && accessRestriction === null) return callback(new AppsError(AppsError.USER_REQUIRED));
if (manifest.singleUser && accessRestriction.users.length !== 1) return callback(new AppsError(AppsError.USER_REQUIRED));
if ('sso' in data && !('optionalSso' in manifest)) return callback(new AppsError(AppsError.BAD_FIELD, 'sso can only be specified for apps with optionalSso'));
// if sso was unspecified, enable it by default if possible
if (sso === null) sso = !!manifest.addons['simpleauth'] || !!manifest.addons['ldap'] || !!manifest.addons['oauth'];
if (altDomain !== null && !validator.isFQDN(altDomain)) return callback(new AppsError(AppsError.BAD_FIELD, 'Invalid alt domain'));
var appId = uuid.v4();
if (icon) {
if (!validator.isBase64(icon)) return callback(new AppsError(AppsError.BAD_FIELD, 'icon is not base64'));
if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, appId + '.png'), new Buffer(icon, 'base64'))) {
if (!safe.fs.writeFileSync(path.join(paths.APP_ICONS_DIR, appId + '.png'), new Buffer(icon, 'base64'))) {
return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving icon:' + safe.error.message));
}
}
@@ -500,24 +553,31 @@ function install(data, auditSource, callback) {
memoryLimit: memoryLimit,
altDomain: altDomain,
xFrameOptions: xFrameOptions,
oauthProxy: oauthProxy
sso: sso,
debugMode: debugMode
};
appdb.add(appId, appStoreId, manifest, location, portBindings, data, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
var from = (location ? location : manifest.title.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app';
mailboxdb.add(from, appId, mailboxdb.TYPE_APP, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(new AppsError(AppsError.ALREADY_EXISTS, 'Mailbox already exists'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
// save cert to data/box/certs
if (cert && key) {
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.user.cert'), cert)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving cert: ' + safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.user.key'), key)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving key: ' + safe.error.message));
}
appdb.add(appId, appStoreId, manifest, location, portBindings, data, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
// save cert to boxdata/certs
if (cert && key) {
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.user.cert'), cert)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving cert: ' + safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.APP_CERTS_DIR, config.appFqdn(location) + '.user.key'), key)) return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving key: ' + safe.error.message));
}
eventlog.add(eventlog.ACTION_APP_INSTALL, auditSource, { appId: appId, location: location, manifest: manifest });
taskmanager.restartAppTask(appId);
callback(null, { id : appId });
eventlog.add(eventlog.ACTION_APP_INSTALL, auditSource, { appId: appId, location: location, manifest: manifest });
callback(null, { id : appId });
});
});
});
});
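Install now also claims a mailbox for the app before inserting the app row; its name is the app's location, falling back to the manifest title lowercased and stripped of non-alphanumerics. A one-line sketch of that derivation:

```javascript
// Sketch of the mailbox name derivation used by install/configure above.
function mailboxNameFor(location, manifestTitle) {
    return (location ? location : manifestTitle.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app';
}
```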
@@ -573,11 +633,13 @@ function configure(appId, data, auditSource, callback) {
if (error) return callback(error);
}
if ('oauthProxy' in data) {
values.oauthProxy = data.oauthProxy;
if ('debugMode' in data) {
values.debugMode = data.debugMode;
error = validateDebugMode(values.debugMode);
if (error) return callback(error);
}
// save cert to data/box/certs. TODO: move this to apptask when we have a real task queue
// save cert to boxdata/certs. TODO: move this to apptask when we have a real task queue
if ('cert' in data && 'key' in data) {
if (data.cert && data.key) {
error = certificates.validateCertificate(data.cert, data.key, config.appFqdn(location));
@@ -595,16 +657,24 @@ function configure(appId, data, auditSource, callback) {
debug('Will configure app with id:%s values:%j', appId, values);
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_CONFIGURE, values, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
var oldName = (app.location ? app.location : app.manifest.title.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app';
var newName = (location ? location : app.manifest.title.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app';
mailboxdb.updateName(oldName, newName, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(new AppsError(AppsError.ALREADY_EXISTS, 'This mailbox is already taken'));
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_CONFIGURE, values, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId: appId });
taskmanager.restartAppTask(appId);
callback(null);
eventlog.add(eventlog.ACTION_APP_CONFIGURE, auditSource, { appId: appId });
callback(null);
});
});
});
}
@@ -640,11 +710,11 @@ function update(appId, data, auditSource, callback) {
if (data.icon) {
if (!validator.isBase64(data.icon)) return callback(new AppsError(AppsError.BAD_FIELD, 'icon is not base64'));
-if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, appId + '.png'), new Buffer(data.icon, 'base64'))) {
+if (!safe.fs.writeFileSync(path.join(paths.APP_ICONS_DIR, appId + '.png'), new Buffer(data.icon, 'base64'))) {
return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving icon:' + safe.error.message));
}
} else {
-safe.fs.unlinkSync(path.join(paths.APPICONS_DIR, appId + '.png'));
+safe.fs.unlinkSync(path.join(paths.APP_ICONS_DIR, appId + '.png'));
}
}
@@ -656,12 +726,16 @@ function update(appId, data, auditSource, callback) {
// this allows cloudron install -f --app <appid> for an app installed from the appStore
if (app.manifest.id !== values.manifest.id) {
if (!data.force) return callback(new AppsError(AppsError.BAD_FIELD, 'manifest id does not match. force to override'));
// clear appStoreId so that this app does not get updates anymore. this will mark it as a dev app
// clear appStoreId so that this app does not get updates anymore
values.appStoreId = '';
}
// do not update apps in debug mode
if (app.debugMode && !data.force) return callback(new AppsError(AppsError.BAD_STATE, 'debug mode enabled. force to override'));
// Ensure we update the memory limit in case the new app requires more memory as a minimum
-if (values.manifest.memoryLimit && app.memoryLimit < values.manifest.memoryLimit) {
+// 0 and -1 are special values for memory limit indicating unset and unlimited
+if (app.memoryLimit > 0 && values.manifest.memoryLimit && app.memoryLimit < values.manifest.memoryLimit) {
values.memoryLimit = values.manifest.memoryLimit;
}
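The widened condition above treats `0` (unset) and `-1` (unlimited) as values that must never be bumped to the manifest's minimum. A minimal sketch of that rule, with a hypothetical helper name:

```javascript
// Sketch of the memory limit update rule: 0 means "unset" and -1 means
// "unlimited"; only a concrete positive limit below the manifest's minimum
// gets raised. Helper name is illustrative, not from the codebase.
function effectiveMemoryLimit(appMemoryLimit, manifestMemoryLimit) {
    if (appMemoryLimit > 0 && manifestMemoryLimit && appMemoryLimit < manifestMemoryLimit) {
        return manifestMemoryLimit; // raise to the manifest's minimum
    }
    return appMemoryLimit; // keep unset (0), unlimited (-1), or an already sufficient limit
}
```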
@@ -676,6 +750,9 @@ function update(appId, data, auditSource, callback) {
eventlog.add(eventlog.ACTION_APP_UPDATE, auditSource, { appId: appId, toManifest: manifest, fromManifest: app.manifest, force: data.force });
// clear update indicator, if update fails, it will come back through the update checker
updateChecker.resetAppUpdateInfo(appId);
callback(null);
});
});
@@ -743,6 +820,7 @@ function restore(appId, data, auditSource, callback) {
var func = data.backupId ? backups.getRestoreConfig.bind(null, data.backupId) : function (next) { return next(null, { manifest: app.manifest }); };
func(function (error, restoreConfig) {
if (error && error.reason === BackupsError.NOT_FOUND) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error.message));
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error.message));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
@@ -819,18 +897,25 @@ function clone(appId, data, auditSource, callback) {
memoryLimit: app.memoryLimit,
accessRestriction: app.accessRestriction,
xFrameOptions: app.xFrameOptions,
-lastBackupId: backupId
+lastBackupId: backupId,
+sso: !!app.sso
};
appdb.add(newAppId, appStoreId, manifest, location, portBindings, data, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
var from = (location ? location : manifest.title.toLowerCase().replace(/[^a-zA-Z0-9]/g, '')) + '.app';
mailboxdb.add(from, newAppId, mailboxdb.TYPE_APP, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(new AppsError(AppsError.ALREADY_EXISTS, 'Mailbox already exists'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(newAppId);
appdb.add(newAppId, appStoreId, manifest, location, portBindings, data, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location, portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
eventlog.add(eventlog.ACTION_APP_CLONE, auditSource, { appId: newAppId, oldAppId: appId, backupId: backupId, location: location, manifest: manifest });
taskmanager.restartAppTask(newAppId);
callback(null, { id : newAppId });
eventlog.add(eventlog.ACTION_APP_CLONE, auditSource, { appId: newAppId, oldAppId: appId, backupId: backupId, location: location, manifest: manifest });
callback(null, { id : newAppId });
});
});
});
});
@@ -850,14 +935,18 @@ function uninstall(appId, auditSource, callback) {
unpurchase(appId, result.appStoreId, function (error) {
if (error) return callback(error);
taskmanager.stopAppTask(appId, function () {
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_UNINSTALL, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
mailboxdb.delByOwnerId(appId, function (error) {
if (error && error.reason !== DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
eventlog.add(eventlog.ACTION_APP_UNINSTALL, auditSource, { appId: appId });
taskmanager.stopAppTask(appId, function () {
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_UNINSTALL, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.startAppTask(appId, callback);
eventlog.add(eventlog.ACTION_APP_UNINSTALL, auditSource, { appId: appId });
taskmanager.startAppTask(appId, callback);
});
});
});
});
+14 -69
@@ -12,8 +12,6 @@ exports = module.exports = {
_unconfigureNginx: unconfigureNginx,
_createVolume: createVolume,
_deleteVolume: deleteVolume,
-_allocateOAuthProxyCredentials: allocateOAuthProxyCredentials,
-_removeOAuthProxyCredentials: removeOAuthProxyCredentials,
_verifyManifest: verifyManifest,
_registerSubdomain: registerSubdomain,
_unregisterSubdomain: unregisterSubdomain,
@@ -24,9 +22,8 @@ exports = module.exports = {
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
-require('debug').formatArgs = function formatArgs() {
-arguments[0] = this.namespace + ' ' + arguments[0];
-return arguments;
+require('debug').formatArgs = function formatArgs(args) {
+args[0] = this.namespace + ' ' + args[0];
};
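The change above tracks the newer `debug` API: `formatArgs` now receives the args array as a parameter and mutates it in place, where older versions expected `arguments` to be modified and returned. A standalone sketch of the new contract (no `debug` dependency):

```javascript
// Standalone sketch of the new formatArgs contract: the args array is
// mutated in place and nothing is returned. The debug instance supplies
// `namespace` via `this` when it invokes the formatter.
function formatArgs(args) {
    args[0] = this.namespace + ' ' + args[0];
}
```

Called as `formatArgs.call({ namespace: 'box:apptask' }, args)`, it prepends the namespace to the first format argument without adding a timestamp.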
var addons = require('./addons.js'),
@@ -36,8 +33,6 @@ var addons = require('./addons.js'),
async = require('async'),
backups = require('./backups.js'),
certificates = require('./certificates.js'),
-clients = require('./clients.js'),
-ClientsError = clients.ClientsError,
config = require('./config.js'),
database = require('./database.js'),
debug = require('debug')('box:apptask'),
@@ -56,7 +51,6 @@ var addons = require('./addons.js'),
superagent = require('superagent'),
sysinfo = require('./sysinfo.js'),
util = require('util'),
-waitForDns = require('./waitfordns.js'),
_ = require('underscore');
var COLLECTD_CONFIG_EJS = fs.readFileSync(__dirname + '/collectd.config.ejs', { encoding: 'utf8' }),
@@ -155,34 +149,6 @@ function deleteVolume(app, callback) {
shell.sudo('deleteVolume', [ RMAPPDIR_CMD, app.id ], callback);
}
-function allocateOAuthProxyCredentials(app, callback) {
-assert.strictEqual(typeof app, 'object');
-assert.strictEqual(typeof callback, 'function');
-if (!app.oauthProxy) return callback(null);
-debugApp(app, 'Creating oauth proxy credentials');
-var redirectURI = 'https://' + config.appFqdn(app.location);
-var scope = 'profile';
-clients.add(app.id, clients.TYPE_PROXY, redirectURI, scope, callback);
-}
-function removeOAuthProxyCredentials(app, callback) {
-assert.strictEqual(typeof app, 'object');
-assert.strictEqual(typeof callback, 'function');
-clients.delByAppIdAndType(app.id, clients.TYPE_PROXY, function (error) {
-if (error && error.reason !== ClientsError.NOT_FOUND) {
-debugApp(app, 'Error removing OAuth client id', error);
-return callback(error);
-}
-callback(null);
-});
-}
function addCollectdProfile(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
@@ -224,6 +190,9 @@ function downloadIcon(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
// nothing to download if we dont have an appStoreId
if (!app.appStoreId) return callback(null);
debugApp(app, 'Downloading icon of %s@%s', app.appStoreId, app.manifest.version);
var iconUrl = config.apiServerOrigin() + '/api/v1/apps/' + app.appStoreId + '/versions/' + app.manifest.version + '/icon';
@@ -237,7 +206,7 @@ function downloadIcon(app, callback) {
if (error && !error.response) return retryCallback(new Error('Network error downloading icon:' + error.message));
if (res.statusCode !== 200) return retryCallback(null); // ignore error. this can also happen for apps installed with cloudron-cli
-if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, app.id + '.png'), res.body)) return retryCallback(new Error('Error saving icon:' + safe.error.message));
+if (!safe.fs.writeFileSync(path.join(paths.APP_ICONS_DIR, app.id + '.png'), res.body)) return retryCallback(new Error('Error saving icon:' + safe.error.message));
retryCallback(null);
});
@@ -312,7 +281,7 @@ function removeIcon(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
-fs.unlink(path.join(paths.APPICONS_DIR, app.id + '.png'), function (error) {
+fs.unlink(path.join(paths.APP_ICONS_DIR, app.id + '.png'), function (error) {
if (error && error.code !== 'ENOENT') debugApp(app, 'cannot remove icon : %s', error);
callback(null);
});
@@ -327,17 +296,11 @@ function waitForDnsPropagation(app, callback) {
return callback(null);
}
-async.retry({ interval: 5000, times: 120 }, function checkStatus(retryCallback) {
-subdomains.status(app.dnsRecordId, function (error, result) {
-if (error) return retryCallback(new Error('Failed to get dns record status : ' + error.message));
-debugApp(app, 'waitForDnsPropagation: dnsRecordId:%s status:%s', app.dnsRecordId, result);
-if (result !== 'done') return retryCallback(new Error(util.format('app:%s not ready yet: %s', app.id, result)));
-retryCallback(null, result);
-});
-}, callback);
+sysinfo.getIp(function (error, ip) {
+if (error) return callback(error);
+subdomains.waitForDns(config.appFqdn(app.location), ip, 'A', { interval: 5000, times: 120 }, callback);
+});
}
function waitForAltDomainDnsPropagation(app, callback) {
@@ -345,7 +308,7 @@ function waitForAltDomainDnsPropagation(app, callback) {
// try for 10 minutes before giving up. this allows the user to "reconfigure" the app in the case where
// an app has an external domain and cloudron is migrated to custom domain.
-waitForDns(app.altDomain, config.appFqdn(app.location), 'CNAME', { interval: 10000, times: 60 }, callback);
+subdomains.waitForDns(app.altDomain, config.appFqdn(app.location), 'CNAME', { interval: 10000, times: 60 }, callback);
}
// updates the app object and the database
@@ -393,17 +356,12 @@ function install(app, callback) {
addons.teardownAddons.bind(null, app, app.manifest.addons),
deleteVolume.bind(null, app),
unregisterSubdomain.bind(null, app, app.location),
-removeOAuthProxyCredentials.bind(null, app),
// removeIcon.bind(null, app), // do not remove icon for non-appstore installs
reserveHttpPort.bind(null, app),
updateApp.bind(null, app, { installationProgress: '20, Downloading icon' }),
downloadIcon.bind(null, app),
-updateApp.bind(null, app, { installationProgress: '25, Creating OAuth proxy credentials' }),
-allocateOAuthProxyCredentials.bind(null, app),
updateApp.bind(null, app, { installationProgress: '30, Registering subdomain' }),
registerSubdomain.bind(null, app),
@@ -453,7 +411,7 @@ function backup(app, callback) {
async.series([
updateApp.bind(null, app, { installationProgress: '10, Backing up' }),
-backups.backupApp.bind(null, app, app.manifest),
+backups.backupApp.bind(null, app, app.manifest, 'appbackups' /* tag */),
// done!
function (callback) {
@@ -497,17 +455,12 @@ function restore(app, callback) {
docker.deleteImage(app.oldConfig.manifest, done);
},
-removeOAuthProxyCredentials.bind(null, app),
removeIcon.bind(null, app),
reserveHttpPort.bind(null, app),
updateApp.bind(null, app, { installationProgress: '40, Downloading icon' }),
downloadIcon.bind(null, app),
-updateApp.bind(null, app, { installationProgress: '50, Create OAuth proxy credentials' }),
-allocateOAuthProxyCredentials.bind(null, app),
updateApp.bind(null, app, { installationProgress: '55, Registering subdomain' }), // ip might change during upgrades
registerSubdomain.bind(null, app),
@@ -568,13 +521,9 @@ function configure(app, callback) {
if (!app.oldConfig || app.oldConfig.location === app.location) return next();
unregisterSubdomain(app, app.oldConfig.location, next);
},
-removeOAuthProxyCredentials.bind(null, app),
reserveHttpPort.bind(null, app),
-updateApp.bind(null, app, { installationProgress: '30, Create OAuth proxy credentials' }),
-allocateOAuthProxyCredentials.bind(null, app),
updateApp.bind(null, app, { installationProgress: '35, Registering subdomain' }),
registerSubdomain.bind(null, app),
@@ -644,14 +593,13 @@ function update(app, callback) {
docker.deleteImage(app.oldConfig.manifest, done);
},
// removeIcon.bind(null, app), // do not remove icon, otherwise the UI breaks for a short time...
function (next) {
if (app.installationState === appdb.ISTATE_PENDING_FORCE_UPDATE) return next(null);
async.series([
updateApp.bind(null, app, { installationProgress: '30, Backing up app' }),
-backups.backupApp.bind(null, app, app.oldConfig.manifest)
+backups.backupApp.bind(null, app, app.oldConfig.manifest, 'appbackups' /* tag */)
], next);
},
@@ -714,9 +662,6 @@ function uninstall(app, callback) {
updateApp.bind(null, app, { installationProgress: '60, Unregistering subdomain' }),
unregisterSubdomain.bind(null, app, app.location),
-updateApp.bind(null, app, { installationProgress: '70, Remove OAuth credentials' }),
-removeOAuthProxyCredentials.bind(null, app),
updateApp.bind(null, app, { installationProgress: '80, Cleanup icon' }),
removeIcon.bind(null, app),
+1 -1
@@ -32,7 +32,7 @@ function initialize(callback) {
user.get(userId, function (error, result) {
if (error) return callback(error);
-var md5 = crypto.createHash('md5').update(result.email.toLowerCase()).digest('hex');
+var md5 = crypto.createHash('md5').update(result.alternateEmail || result.email).digest('hex');
result.gravatar = 'https://www.gravatar.com/avatar/' + md5 + '.jpg?s=24&d=mm';
callback(null, result);
+2 -1
@@ -49,8 +49,9 @@ function getByAppIdPaged(page, perPage, appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
+// box versions (0.93.x and below) used to use appbackup_ prefix
database.query('SELECT ' + BACKUPS_FIELDS + ' FROM backups WHERE type = ? AND state = ? AND id LIKE ? ORDER BY creationTime DESC LIMIT ?,?',
-[ exports.BACKUP_TYPE_APP, exports.BACKUP_STATE_NORMAL, 'appbackup\\_' + appId + '\\_%', (page-1)*perPage, perPage ], function (error, results) {
+[ exports.BACKUP_TYPE_APP, exports.BACKUP_STATE_NORMAL, '%app%\\_' + appId + '\\_%', (page-1)*perPage, perPage ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results.forEach(function (result) { postProcess(result); });
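The widened LIKE pattern above (`%app%\_<appId>\_%` instead of `appbackup\_<appId>\_%`) matches both the old `appbackup_` ids and the new `<prefix>/app_<id>_...` ids; `\_` escapes the underscore so LIKE treats it literally instead of as a single-character wildcard. An illustrative sketch that emulates the LIKE semantics in JS to show what the pattern accepts (helper is hypothetical, not part of the codebase):

```javascript
// Illustrative only: emulate MySQL LIKE matching (with backslash-escaped
// underscores) as a JS regex, to show which backup ids the widened
// pattern '%app%\_<appId>\_%' accepts.
function likeMatch(pattern, value) {
    var regex = pattern
        .replace(/[.*+?^${}()|[\]]/g, '\\$&') // escape regex metacharacters
        .replace(/\\_/g, '\u0000')            // protect escaped (literal) underscores
        .replace(/%/g, '.*')                  // LIKE % -> regex .*
        .replace(/_/g, '.')                   // LIKE _ -> regex . (any one char)
        .replace(/\u0000/g, '_');             // restore literal underscores
    return new RegExp('^' + regex + '$').test(value);
}
```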
+139 -143
@@ -3,6 +3,8 @@
exports = module.exports = {
BackupsError: BackupsError,
testConfig: testConfig,
getPaged: getPaged,
getByAppIdPaged: getByAppIdPaged,
@@ -15,7 +17,11 @@ exports = module.exports = {
backupApp: backupApp,
restoreApp: restoreApp,
-backupBoxAndApps: backupBoxAndApps
+backupBoxAndApps: backupBoxAndApps,
+getLocalDownloadPath: getLocalDownloadPath,
+removeBackup: removeBackup
};
var addons = require('./addons.js'),
@@ -29,7 +35,9 @@ var addons = require('./addons.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:backups'),
eventlog = require('./eventlog.js'),
filesystem = require('./storage/filesystem.js'),
locker = require('./locker.js'),
mailer = require('./mailer.js'),
path = require('path'),
paths = require('./paths.js'),
progress = require('./progress.js'),
@@ -37,9 +45,8 @@ var addons = require('./addons.js'),
safe = require('safetydance'),
shell = require('./shell.js'),
settings = require('./settings.js'),
-superagent = require('superagent'),
-util = require('util'),
-webhooks = require('./webhooks.js');
+SettingsError = require('./settings.js').SettingsError,
+util = require('util');
var BACKUP_BOX_CMD = path.join(__dirname, 'scripts/backupbox.sh'),
BACKUP_APP_CMD = path.join(__dirname, 'scripts/backupapp.sh'),
@@ -76,6 +83,7 @@ util.inherits(BackupsError, Error);
BackupsError.EXTERNAL_ERROR = 'external error';
BackupsError.INTERNAL_ERROR = 'internal error';
BackupsError.BAD_STATE = 'bad state';
BackupsError.NOT_FOUND = 'not found';
BackupsError.MISSING_CREDENTIALS = 'missing credentials';
// choose which storage backend we use for test purpose we use s3
@@ -83,10 +91,21 @@ function api(provider) {
switch (provider) {
case 'caas': return caas;
case 's3': return s3;
case 'filesystem': return filesystem;
default: return null;
}
}
function testConfig(backupConfig, callback) {
assert.strictEqual(typeof backupConfig, 'object');
assert.strictEqual(typeof callback, 'function');
var func = api(backupConfig.provider);
if (!func) return callback(new SettingsError(SettingsError.BAD_FIELD, 'unknown storage provider'));
func.testConfig(backupConfig, callback);
}
function getPaged(page, perPage, callback) {
assert(typeof page === 'number' && page > 0);
assert(typeof perPage === 'number' && perPage > 0);
@@ -112,85 +131,22 @@ function getByAppIdPaged(page, perPage, appId, callback) {
});
}
function getBoxBackupCredentials(appBackupIds, callback) {
assert(util.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
var now = new Date();
var filebase = util.format('backup_%s-v%s', now.toISOString(), config.version());
var filename = filebase + '.tar.gz';
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).getBackupCredentials(backupConfig, function (error, result) {
if (error) return callback(error);
result.id = filename;
result.s3Url = 's3://' + backupConfig.bucket + '/' + backupConfig.prefix + '/' + filename;
result.backupKey = backupConfig.key;
debug('getBoxBackupCredentials: %j', result);
callback(null, result);
});
});
}
function getAppBackupCredentials(app, manifest, callback) {
assert.strictEqual(typeof app, 'object');
assert(manifest && typeof manifest === 'object');
assert.strictEqual(typeof callback, 'function');
var now = new Date();
var filebase = util.format('appbackup_%s_%s-v%s', app.id, now.toISOString(), manifest.version);
var configFilename = filebase + '.json', dataFilename = filebase + '.tar.gz';
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).getBackupCredentials(backupConfig, function (error, result) {
if (error) return callback(error);
result.id = dataFilename;
result.s3ConfigUrl = 's3://' + backupConfig.bucket + '/' + backupConfig.prefix + '/' + configFilename;
result.s3DataUrl = 's3://' + backupConfig.bucket + '/' + backupConfig.prefix + '/' + dataFilename;
result.backupKey = backupConfig.key;
debug('getAppBackupCredentials: %j', result);
callback(null, result);
});
});
}
// backupId is the s3 filename. appbackup_%s_%s-v%s.tar.gz
function getRestoreConfig(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
var configFile = backupId.replace(/\.tar\.gz$/, '.json');
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).getRestoreUrl(backupConfig, configFile, function (error, result) {
if (error) return callback(error);
api(backupConfig.provider).getAppRestoreConfig(backupConfig, backupId, function (error, result) {
if (error && error.reason === BackupsError.NOT_FOUND) return callback(error);
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
superagent.get(result.url).buffer(true).timeout(30 * 1000).end(function (error, response) {
if (error && !error.response) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
if (response.statusCode !== 200) return callback(new Error('Invalid response code when getting config.json : ' + response.statusCode));
var config = safe.JSON.parse(response.text);
if (!config) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, 'Error in config:' + safe.error.message));
return callback(null, config);
});
callback(null, result);
});
});
}
// backupId is the s3 filename. appbackup_%s_%s-v%s.tar.gz
function getRestoreUrl(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -204,25 +160,27 @@ function getRestoreUrl(backupId, callback) {
var obj = {
id: backupId,
url: result.url,
-backupKey: backupConfig.key
+backupKey: backupConfig.key,
+sha1: result.sha1 || null // not supported by all backends
};
-debug('getRestoreUrl: id:%s url:%s backupKey:%s', obj.id, obj.url, obj.backupKey);
+debug('getRestoreUrl: id:%s url:%s backupKey:%s sha1:%s', obj.id, obj.url, obj.backupKey, obj.sha1);
callback(null, obj);
});
});
}
-function copyLastBackup(app, manifest, callback) {
+function copyLastBackup(app, manifest, prefix, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof app.lastBackupId, 'string');
assert(manifest && typeof manifest === 'object');
+assert.strictEqual(typeof prefix, 'string');
assert.strictEqual(typeof callback, 'function');
-var now = new Date();
-var toFilenameArchive = util.format('appbackup_%s_%s-v%s.tar.gz', app.id, now.toISOString(), manifest.version);
-var toFilenameConfig = util.format('appbackup_%s_%s-v%s.json', app.id, now.toISOString(), manifest.version);
+var timestamp = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
+var toFilenameArchive = util.format('%s/app_%s_%s_v%s.tar.gz', prefix, app.id, timestamp, manifest.version);
+var toFilenameConfig = util.format('%s/app_%s_%s_v%s.json', prefix, app.id, timestamp, manifest.version);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
@@ -246,47 +204,40 @@ function copyLastBackup(app, manifest, callback) {
});
}
-function backupBoxWithAppBackupIds(appBackupIds, callback) {
+function backupBoxWithAppBackupIds(appBackupIds, prefix, callback) {
assert(util.isArray(appBackupIds));
+assert.strictEqual(typeof prefix, 'string');
getBoxBackupCredentials(appBackupIds, function (error, result) {
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error.message));
var timestamp = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
var filebase = util.format('%s/box_%s_v%s', prefix, timestamp, config.version());
var filename = filebase + '.tar.gz';
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
debug('backupBoxWithAppBackupIds: %j', result);
api(backupConfig.provider).getBoxBackupDetails(backupConfig, filename, function (error, result) {
if (error) return callback(error);
var args = [ result.s3Url, result.accessKeyId, result.secretAccessKey, result.region, result.backupKey ];
if (result.sessionToken) args.push(result.sessionToken);
debug('backupBoxWithAppBackupIds: backup details %j', result);
shell.sudo('backupBox', [ BACKUP_BOX_CMD ].concat(args), function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
debug('backupBoxWithAppBackupIds: success');
backupdb.add({ id: result.id, version: config.version(), type: backupdb.BACKUP_TYPE_BOX, dependsOn: appBackupIds }, function (error) {
shell.sudo('backupBox', [ BACKUP_BOX_CMD ].concat(result.backupScriptArguments), function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
webhooks.backupDone(result.id, null /* app */, appBackupIds, function (error) {
if (error) return callback(error);
callback(null, result.id);
debug('backupBoxWithAppBackupIds: success');
backupdb.add({ id: filename, version: config.version(), type: backupdb.BACKUP_TYPE_BOX, dependsOn: appBackupIds }, function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).backupDone(filename, null /* app */, appBackupIds, function (error) {
if (error) return callback(error);
callback(null, filename);
});
});
});
});
});
}
// this function expects you to have a lock
// function backupBox(callback) {
// apps.getAll(function (error, allApps) {
// if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
//
// var appBackupIds = allApps.map(function (app) { return app.lastBackupId; });
// appBackupIds = appBackupIds.filter(function (id) { return id !== null; }); // remove apps that were never backed up
//
// backupBoxWithAppBackupIds(appBackupIds, callback);
// });
// }
function canBackupApp(app) {
// only backup apps that are installed or pending configure or called from apptask. Rest of them are in some
// state not good for consistent backup (i.e addons may not have been setup completely)
@@ -296,47 +247,37 @@ function canBackupApp(app) {
app.installationState === appdb.ISTATE_PENDING_UPDATE; // called from apptask
}
-// set the 'creation' date of lastBackup so that the backup persists across time based archival rules
-// s3 does not allow changing creation time, so copying the last backup is easy way out for now
-function reuseOldAppBackup(app, manifest, callback) {
-assert.strictEqual(typeof app.lastBackupId, 'string');
-assert(manifest && typeof manifest === 'object');
-assert.strictEqual(typeof callback, 'function');
-copyLastBackup(app, manifest, function (error, newBackupId) {
-if (error) return callback(error);
-debugApp(app, 'reuseOldAppBackup: reused old backup %s as %s', app.lastBackupId, newBackupId);
-callback(null, newBackupId);
-});
-}
-function createNewAppBackup(app, manifest, callback) {
+function createNewAppBackup(app, manifest, prefix, callback) {
assert.strictEqual(typeof app, 'object');
assert(manifest && typeof manifest === 'object');
+assert.strictEqual(typeof prefix, 'string');
assert.strictEqual(typeof callback, 'function');
getAppBackupCredentials(app, manifest, function (error, result) {
if (error) return callback(error);
var timestamp = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
var filebase = util.format('%s/app_%s_%s_v%s', prefix, app.id, timestamp, manifest.version);
var configFilename = filebase + '.json', dataFilename = filebase + '.tar.gz';
debugApp(app, 'createNewAppBackup: backup url:%s backup config url:%s', result.s3DataUrl, result.s3ConfigUrl);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
var args = [ app.id, result.s3ConfigUrl, result.s3DataUrl, result.accessKeyId, result.secretAccessKey, result.region, result.backupKey ];
if (result.sessionToken) args.push(result.sessionToken);
api(backupConfig.provider).getAppBackupDetails(backupConfig, app.id, dataFilename, configFilename, function (error, result) {
if (error) return callback(error);
async.series([
addons.backupAddons.bind(null, app, manifest.addons),
shell.sudo.bind(null, 'backupApp', [ BACKUP_APP_CMD ].concat(args))
], function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
debug('createNewAppBackup: backup details %j', result);
debugApp(app, 'createNewAppBackup: %s done', result.id);
backupdb.add({ id: result.id, version: manifest.version, type: backupdb.BACKUP_TYPE_APP, dependsOn: [ ] }, function (error) {
async.series([
addons.backupAddons.bind(null, app, manifest.addons),
shell.sudo.bind(null, 'backupApp', [ BACKUP_APP_CMD ].concat(result.backupScriptArguments))
], function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
callback(null, result.id);
debugApp(app, 'createNewAppBackup: %s done', dataFilename);
backupdb.add({ id: dataFilename, version: manifest.version, type: backupdb.BACKUP_TYPE_APP, dependsOn: [ ] }, function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
callback(null, dataFilename);
});
});
});
});
@@ -355,9 +296,10 @@ function setRestorePoint(appId, lastBackupId, callback) {
});
}
-function backupApp(app, manifest, callback) {
+function backupApp(app, manifest, prefix, callback) {
assert.strictEqual(typeof app, 'object');
assert(manifest && typeof manifest === 'object');
+assert.strictEqual(typeof prefix, 'string');
assert.strictEqual(typeof callback, 'function');
var backupFunction;
@@ -368,11 +310,13 @@ function backupApp(app, manifest, callback) {
return callback(new BackupsError(BackupsError.BAD_STATE, 'App not healthy and never backed up previously'));
}
-backupFunction = reuseOldAppBackup.bind(null, app, manifest);
+// set the 'creation' date of lastBackup so that the backup persists across time based archival rules
+// s3 does not allow changing creation time, so copying the last backup is easy way out for now
+backupFunction = copyLastBackup.bind(null, app, manifest, prefix);
} else {
var appConfig = apps.getAppConfig(app);
appConfig.manifest = manifest;
-backupFunction = createNewAppBackup.bind(null, app, manifest);
+backupFunction = createNewAppBackup.bind(null, app, manifest, prefix);
if (!safe.fs.writeFileSync(path.join(paths.DATA_DIR, app.id + '/config.json'), JSON.stringify(appConfig), 'utf8')) {
return callback(safe.error);
@@ -398,6 +342,8 @@ function backupBoxAndApps(auditSource, callback) {
callback = callback || NOOP_CALLBACK;
var prefix = (new Date()).toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g,'');
eventlog.add(eventlog.ACTION_BACKUP_START, auditSource, { });
apps.getAll(function (error, allApps) {
@@ -406,18 +352,20 @@ function backupBoxAndApps(auditSource, callback) {
var processed = 0;
var step = 100/(allApps.length+1);
-progress.set(progress.BACKUP, processed, '');
+progress.set(progress.BACKUP, step * processed, '');
async.mapSeries(allApps, function iterator(app, iteratorCallback) {
progress.set(progress.BACKUP, step * processed, 'Backing up ' + (app.altDomain || config.appFqdn(app.location)));
++processed;
backupApp(app, app.manifest, function (error, backupId) {
backupApp(app, app.manifest, prefix, function (error, backupId) {
if (error && error.reason !== BackupsError.BAD_STATE) {
debugApp(app, 'Unable to backup', error);
return iteratorCallback(error);
}
progress.set(progress.BACKUP, step * processed, 'Backed up app at ' + app.location);
progress.set(progress.BACKUP, step * processed, 'Backed up ' + (app.altDomain || config.appFqdn(app.location)));
iteratorCallback(null, backupId || null); // clear backupId if app is in BAD_STATE and never backed up
});
@@ -429,7 +377,9 @@ function backupBoxAndApps(auditSource, callback) {
backupIds = backupIds.filter(function (id) { return id !== null; }); // remove apps in bad state that were never backed up
backupBoxWithAppBackupIds(backupIds, function (error, filename) {
progress.set(progress.BACKUP, step * processed, 'Backing up system data');
backupBoxWithAppBackupIds(backupIds, prefix, function (error, filename) {
progress.set(progress.BACKUP, 100, error ? error.message : '');
eventlog.add(eventlog.ACTION_BACKUP_FINISH, auditSource, { errorMessage: error ? error.message : null, filename: filename });
@@ -450,7 +400,10 @@ function backup(auditSource, callback) {
progress.set(progress.BACKUP, 0, 'Starting'); // ensure tools can 'wait' on progress
backupBoxAndApps(auditSource, function (error) { // start the backup operation in the background
if (error) debug('backup failed.', error);
if (error) {
debug('backup failed.', error);
mailer.backupFailed(error);
}
locker.unlock(locker.OP_FULL_BACKUP);
});
@@ -461,6 +414,8 @@ function backup(auditSource, callback) {
function ensureBackup(auditSource, callback) {
assert.strictEqual(typeof auditSource, 'object');
debug('ensureBackup: %j', auditSource);
getPaged(1, 1, function (error, backups) {
if (error) {
debug('Unable to list backups', error);
@@ -495,3 +450,44 @@ function restoreApp(app, addonsToRestore, backupId, callback) {
});
});
}
function getLocalDownloadPath(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).getLocalFilePath(backupConfig, backupId, function (error, result) {
if (error) return callback(error);
debug('getLocalDownloadPath: id:%s path:%s', backupId, result.filePath);
callback(null, result.filePath);
});
});
}
function removeBackup(backupId, appBackupIds, callback) {
assert.strictEqual(typeof backupId, 'string');
assert(util.isArray(appBackupIds));
assert.strictEqual(typeof callback, 'function');
debug('removeBackup: %s', backupId);
settings.getBackupConfig(function (error, backupConfig) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
api(backupConfig.provider).removeBackup(backupConfig, backupId, appBackupIds, function (error) {
if (error) return callback(error);
backupdb.del(backupId, function (error) {
if (error) return callback(new BackupsError(BackupsError.INTERNAL_ERROR, error));
debug('removeBackup: %s done', backupId);
callback(null);
});
});
});
}
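The `prefix` computed in `backupBoxAndApps` above flattens an ISO timestamp into a filesystem- and S3-safe string. A minimal sketch of the transformation (the helper name is assumed for illustration, not from the source):

```javascript
// Sketch of the prefix format used by backupBoxAndApps above.
// toISOString() yields e.g. '2017-01-30T15:21:04.123Z'; the first replace
// turns 'T' and '.' into '-', the second strips ':' and the trailing 'Z'.
function backupPrefix(date) {
    return date.toISOString().replace(/[T.]/g, '-').replace(/[:Z]/g, '');
}

backupPrefix(new Date(Date.UTC(2017, 0, 30, 15, 21, 4, 123)));
// yields '2017-01-30-152104-123'
```

Note the colons are removed rather than replaced, so the time portion runs together; the resulting prefix sorts lexicographically by creation time, which is what time-based archival rules rely on.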
+21 -16
@@ -4,13 +4,13 @@ var assert = require('assert'),
async = require('async'),
crypto = require('crypto'),
debug = require('debug')('box:cert/acme'),
execSync = require('safetydance').child_process.execSync,
fs = require('fs'),
parseLinks = require('parse-links'),
path = require('path'),
paths = require('../paths.js'),
safe = require('safetydance'),
superagent = require('superagent'),
ursa = require('ursa'),
util = require('util'),
_ = require('underscore');
@@ -81,23 +81,33 @@ function b64(str) {
return urlBase64Encode(buf.toString('base64'));
}
function getModulus(pem) {
assert(util.isBuffer(pem));
var stdout = execSync('openssl rsa -modulus -noout', { input: pem, encoding: 'utf8' });
if (!stdout) return null;
var match = stdout.match(/Modulus=([0-9a-fA-F]+)$/m);
if (!match) return null;
return Buffer.from(match[1], 'hex');
}
Acme.prototype.sendSignedRequest = function (url, payload, callback) {
assert.strictEqual(typeof url, 'string');
assert.strictEqual(typeof payload, 'string');
assert.strictEqual(typeof callback, 'function');
assert(util.isBuffer(this.accountKeyPem));
var privateKey = ursa.createPrivateKey(this.accountKeyPem);
var that = this;
var header = {
alg: 'RS256',
jwk: {
e: b64(privateKey.getExponent()),
e: b64(Buffer.from([0x01, 0x00, 0x01])), // exponent - 65537
kty: 'RSA',
n: b64(privateKey.getModulus())
n: b64(getModulus(this.accountKeyPem))
}
};
var payload64 = b64(payload);
this.getNonce(function (error, nonce) {
@@ -107,9 +117,9 @@ Acme.prototype.sendSignedRequest = function (url, payload, callback) {
var protected64 = b64(JSON.stringify(_.extend({ }, header, { nonce: nonce })));
var signer = ursa.createSigner('sha256');
var signer = crypto.createSign('RSA-SHA256');
signer.update(protected64 + '.' + payload64, 'utf8');
var signature64 = urlBase64Encode(signer.sign(privateKey, 'base64'));
var signature64 = urlBase64Encode(signer.sign(that.accountKeyPem, 'base64'));
var data = {
header: header,
@@ -207,12 +217,11 @@ Acme.prototype.prepareHttpChallenge = function (challenge, callback) {
var token = challenge.token;
assert(util.isBuffer(this.accountKeyPem));
var privateKey = ursa.createPrivateKey(this.accountKeyPem);
var jwk = {
e: b64(privateKey.getExponent()),
e: b64(Buffer.from([0x01, 0x00, 0x01])), // Exponent - 65537
kty: 'RSA',
n: b64(privateKey.getModulus())
n: b64(getModulus(this.accountKeyPem))
};
var shasum = crypto.createHash('sha256');
@@ -269,7 +278,7 @@ Acme.prototype.waitForChallenge = function (challenge, callback) {
return retryCallback(new AcmeError(AcmeError.EXTERNAL_ERROR, 'Bad response code:' + result.statusCode));
}
debug('waitForChallenge: status is "%s"', result.body.status);
debug('waitForChallenge: status is "%s" %j', result.body.status, result.body);
if (result.body.status === 'pending') return retryCallback(new AcmeError(AcmeError.NOT_COMPLETED));
else if (result.body.status === 'valid') return retryCallback();
@@ -318,7 +327,6 @@ Acme.prototype.createKeyAndCsr = function (domain, callback) {
var outdir = paths.APP_CERTS_DIR;
var csrFile = path.join(outdir, domain + '.csr');
var privateKeyFile = path.join(outdir, domain + '.key');
var execSync = safe.child_process.execSync;
if (safe.fs.existsSync(privateKeyFile)) {
// in some old releases, csr file was corrupt. so always regenerate it
@@ -345,7 +353,7 @@ Acme.prototype.downloadChain = function (linkHeader, callback) {
if (!linkHeader) return new AcmeError(AcmeError.EXTERNAL_ERROR, 'Empty link header when downloading certificate chain');
var linkInfo = parseLinks(linkHeader);
if (!linkInfo || !linkInfo.up) return new AcmeError(AcmeError.EXTERNAL_ERROR, 'Failed to parse link header when downloading certificate chain');
debug('downloadChain: downloading from %s', this.caOrigin + linkInfo.up);
@@ -358,8 +366,6 @@ Acme.prototype.downloadChain = function (linkHeader, callback) {
if (result.statusCode !== 200) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to get cert. Expecting 200, got %s %s', result.statusCode, result.text)));
var chainDer = result.text;
var execSync = safe.child_process.execSync;
var chainPem = execSync('openssl x509 -inform DER -outform PEM', { input: chainDer }); // this is really just base64 encoding with header
if (!chainPem) return callback(new AcmeError(AcmeError.INTERNAL_ERROR, safe.error));
@@ -385,7 +391,6 @@ Acme.prototype.downloadCertificate = function (domain, certUrl, callback) {
if (result.statusCode !== 200) return callback(new AcmeError(AcmeError.EXTERNAL_ERROR, util.format('Failed to get cert. Expecting 200, got %s %s', result.statusCode, result.text)));
var certificateDer = result.text;
var execSync = safe.child_process.execSync;
safe.fs.writeFileSync(path.join(outdir, domain + '.der'), certificateDer);
debug('downloadCertificate: cert der file for %s saved', domain);
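The `b64(Buffer.from([0x01, 0x00, 0x01]))` expression above encodes the RSA public exponent 65537 as the base64url string the JWK `e` field expects. A self-contained sketch (the `urlBase64Encode` body is an assumption matching the RFC 7515 convention, since only its call sites appear in the diff):

```javascript
// Assumed shape of the urlBase64Encode helper used throughout acme.js:
// standard base64 with '+' -> '-', '/' -> '_' and padding stripped (RFC 7515).
function urlBase64Encode(str) {
    return str.replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '');
}

// Sketch of b64 taking a Buffer for illustration
function b64(buf) {
    return urlBase64Encode(buf.toString('base64'));
}

b64(Buffer.from([0x01, 0x00, 0x01])); // 'AQAB', the JWK 'e' value for 65537
```

Hard-coding the exponent bytes is safe here because `openssl genrsa` (and essentially every RSA key generator) uses 65537.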
+4 -4
@@ -8,12 +8,12 @@ exports = module.exports = {
};
var assert = require('assert'),
debug = require('debug')('box:cert/caas.js');
function getCertificate(domain, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debug('getCertificate: using fallback certificate', domain);
+21
@@ -0,0 +1,21 @@
'use strict';
exports = module.exports = {
getCertificate: getCertificate,
// testing
_name: 'fallback'
};
var assert = require('assert'),
debug = require('debug')('box:cert/fallback.js');
function getCertificate(domain, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debug('getCertificate: using fallback certificate', domain);
return callback(null, 'cert/host.cert', 'cert/host.key');
}
+22
@@ -0,0 +1,22 @@
'use strict';
// -------------------------------------------
// This file just describes the interface
//
// New backends can start from here
// -------------------------------------------
exports = module.exports = {
getCertificate: getCertificate
};
var assert = require('assert');
function getCertificate(domain, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
return callback(new Error('Not implemented'));
}
+87 -35
@@ -1,14 +1,21 @@
'use strict';
exports = module.exports = {
installAdminCertificate: installAdminCertificate,
renewAll: renewAll,
setFallbackCertificate: setFallbackCertificate,
setAdminCertificate: setAdminCertificate,
CertificatesError: CertificatesError,
ensureFallbackCertificate: ensureFallbackCertificate,
setFallbackCertificate: setFallbackCertificate,
validateCertificate: validateCertificate,
ensureCertificate: ensureCertificate,
getAdminCertificatePath: getAdminCertificatePath,
setAdminCertificate: setAdminCertificate,
getAdminCertificate: getAdminCertificate,
renewAll: renewAll,
events: new (require('events').EventEmitter)(),
EVENT_CERT_CHANGED: 'cert_changed',
// exported for testing
_getApi: getApi
@@ -23,6 +30,7 @@ var acme = require('./cert/acme.js'),
constants = require('./constants.js'),
debug = require('debug')('box:src/certificates'),
eventlog = require('./eventlog.js'),
fallback = require('./cert/fallback.js'),
fs = require('fs'),
mailer = require('./mailer.js'),
nginx = require('./nginx.js'),
@@ -30,10 +38,8 @@ var acme = require('./cert/acme.js'),
paths = require('./paths.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
sysinfo = require('./sysinfo.js'),
user = require('./user.js'),
util = require('util'),
waitForDns = require('./waitfordns.js'),
x509 = require('x509');
function CertificatesError(reason, errorOrMessage) {
@@ -66,6 +72,8 @@ function getApi(app, callback) {
settings.getTlsConfig(function (error, tlsConfig) {
if (error) return callback(error);
if (tlsConfig.provider === 'fallback') return callback(null, fallback, {});
// use acme if we have altDomain or the tlsConfig is not caas
var api = (app.altDomain || tlsConfig.provider) !== 'caas' ? acme : caas;
@@ -81,36 +89,53 @@ function getApi(app, callback) {
// we simply update the account with the latest email we have each time when getting letsencrypt certs
// https://github.com/ietf-wg-acme/acme/issues/30
user.getOwner(function (error, owner) {
options.email = error ? 'support@cloudron.io' : owner.email; // can error if not activated yet
options.email = error ? 'support@cloudron.io' : (owner.alternateEmail || owner.email); // can error if not activated yet
callback(null, api, options);
});
});
}
function installAdminCertificate(callback) {
settings.getTlsConfig(function (error, tlsConfig) {
if (error) return callback(error);
function ensureFallbackCertificate(callback) {
// ensure a fallback certificate that much of our code requires
var certFilePath = path.join(paths.APP_CERTS_DIR, 'host.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, 'host.key');
if (tlsConfig.provider === 'caas') return callback();
var fallbackCertPath = path.join(paths.NGINX_CERT_DIR, 'host.cert');
var fallbackKeyPath = path.join(paths.NGINX_CERT_DIR, 'host.key');
sysinfo.getIp(function (error, ip) {
if (error) return callback(error);
if (fs.existsSync(certFilePath) && fs.existsSync(keyFilePath)) { // existing custom fallback certs (when restarting, restoring, updating)
debug('ensureFallbackCertificate: using fallback certs provided by user');
if (!safe.child_process.execSync('cp ' + certFilePath + ' ' + fallbackCertPath)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.child_process.execSync('cp ' + keyFilePath + ' ' + fallbackKeyPath)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
waitForDns(config.adminFqdn(), ip, 'A', { interval: 30000, times: 50000 }, function (error) {
if (error) return callback(error);
return callback();
}
ensureCertificate({ location: constants.ADMIN_LOCATION }, function (error, certFilePath, keyFilePath) {
if (error) { // currently, this can never happen
debug('Error obtaining certificate. Proceed anyway', error);
return callback();
}
if (config.tlsCert() && config.tlsKey()) {
// cert from CaaS or cloudron-setup. these files should _not_ be part of the backup
debug('ensureFallbackCertificate: using CaaS/cloudron-setup fallback certs');
if (!safe.fs.writeFileSync(fallbackCertPath, config.tlsCert())) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(fallbackKeyPath, config.tlsKey())) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
nginx.configureAdmin(certFilePath, keyFilePath, callback);
});
});
});
});
return callback();
}
// generate a self-signed cert. it's in backup dir so that we don't create a new cert across restarts
// FIXME: this cert does not cover the naked domain. needs SAN
if (config.fqdn()) {
debug('ensureFallbackCertificate: generating self-signed certificate');
var certCommand = util.format('openssl req -x509 -newkey rsa:2048 -keyout %s -out %s -days 3650 -subj /CN=*.%s -nodes', keyFilePath, certFilePath, config.fqdn());
safe.child_process.execSync(certCommand);
if (!safe.child_process.execSync('cp ' + certFilePath + ' ' + fallbackCertPath)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.child_process.execSync('cp ' + keyFilePath + ' ' + fallbackKeyPath)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
return callback();
} else {
debug('ensureFallbackCertificate: cannot generate fallback certificate without domain');
return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, 'No domain set'));
}
}
function isExpiringSync(certFilePath, hours) {
@@ -198,12 +223,14 @@ function renewAll(auditSource, callback) {
// reconfigure and reload nginx. this is required for the case where we got a renewed cert after fallback
var configureFunc = app.location === constants.ADMIN_LOCATION ?
nginx.configureAdmin.bind(null, certFilePath, keyFilePath)
nginx.configureAdmin.bind(null, certFilePath, keyFilePath, constants.NGINX_ADMIN_CONFIG_FILE_NAME, config.adminFqdn())
: nginx.configureApp.bind(null, app, certFilePath, keyFilePath);
configureFunc(function (ignoredError) {
if (ignoredError) debug('fallbackExpiredCertificates: error reconfiguring app', ignoredError);
exports.events.emit(exports.EVENT_CERT_CHANGED, domain);
iteratorCallback(); // move to next app
});
});
@@ -268,6 +295,8 @@ function setFallbackCertificate(cert, key, callback) {
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), cert)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), key)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
exports.events.emit(exports.EVENT_CERT_CHANGED, '*.' + config.fqdn());
nginx.reload(function (error) {
if (error) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, error));
@@ -282,15 +311,14 @@ function getFallbackCertificatePath(callback) {
callback(null, path.join(paths.NGINX_CERT_DIR, 'host.cert'), path.join(paths.NGINX_CERT_DIR, 'host.key'));
}
// FIXME: setting admin cert needs to restart the mail container because it uses admin cert
function setAdminCertificate(cert, key, callback) {
assert.strictEqual(typeof cert, 'string');
assert.strictEqual(typeof key, 'string');
assert.strictEqual(typeof callback, 'function');
var vhost = config.adminFqdn();
var certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.key');
var certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.user.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.user.key');
var error = validateCertificate(cert, key, vhost);
if (error) return callback(new CertificatesError(CertificatesError.INVALID_CERT, error.message));
@@ -299,21 +327,44 @@ function setAdminCertificate(cert, key, callback) {
if (!safe.fs.writeFileSync(certFilePath, cert)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
if (!safe.fs.writeFileSync(keyFilePath, key)) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error.message));
nginx.configureAdmin(certFilePath, keyFilePath, callback);
exports.events.emit(exports.EVENT_CERT_CHANGED, vhost);
nginx.configureAdmin(certFilePath, keyFilePath, constants.NGINX_ADMIN_CONFIG_FILE_NAME, config.adminFqdn(), callback);
}
function getAdminCertificatePath(callback) {
assert.strictEqual(typeof callback, 'function');
var vhost = config.adminFqdn();
var certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.key');
var certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.user.cert');
var keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.user.key');
if (fs.existsSync(certFilePath) && fs.existsSync(keyFilePath)) return callback(null, certFilePath, keyFilePath);
certFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.cert');
keyFilePath = path.join(paths.APP_CERTS_DIR, vhost + '.key');
if (fs.existsSync(certFilePath) && fs.existsSync(keyFilePath)) return callback(null, certFilePath, keyFilePath);
getFallbackCertificatePath(callback);
}
function getAdminCertificate(callback) {
assert.strictEqual(typeof callback, 'function');
getAdminCertificatePath(function (error, certFilePath, keyFilePath) {
if (error) return callback(error);
var cert = safe.fs.readFileSync(certFilePath);
if (!cert) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error));
var key = safe.fs.readFileSync(keyFilePath);
if (!key) return callback(new CertificatesError(CertificatesError.INTERNAL_ERROR, safe.error));
return callback(null, cert, key);
});
}
function ensureCertificate(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
@@ -335,10 +386,11 @@ function ensureCertificate(app, callback) {
debug('ensureCertificate: %s. certificate already exists at %s', domain, keyFilePath);
if (!isExpiringSync(certFilePath, 24 * 1)) return callback(null, certFilePath, keyFilePath);
debug('ensureCertificate: %s cert requires renewal', domain);
} else {
debug('ensureCertificate: %s cert does not exist', domain);
}
debug('ensureCertificate: %s cert requires renewal', domain);
getApi(app, function (error, api, apiOptions) {
if (error) return callback(error);
+31 -3
@@ -1,5 +1,3 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
@@ -12,13 +10,17 @@ exports = module.exports = {
getByAppId: getByAppId,
getByAppIdAndType: getByAppIdAndType,
upsert: upsert,
delByAppId: delByAppId,
delByAppIdAndType: delByAppIdAndType,
_clear: clear
_clear: clear,
_addDefaultClients: addDefaultClients
};
var assert = require('assert'),
async = require('async'),
database = require('./database.js'),
DatabaseError = require('./databaseerror.js');
@@ -112,6 +114,25 @@ function add(id, appId, type, clientSecret, redirectURI, scope, callback) {
});
}
function upsert(id, appId, type, clientSecret, redirectURI, scope, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof clientSecret, 'string');
assert.strictEqual(typeof redirectURI, 'string');
assert.strictEqual(typeof scope, 'string');
assert.strictEqual(typeof callback, 'function');
var data = [ id, appId, type, clientSecret, redirectURI, scope ];
database.query('REPLACE INTO clients (id, appId, type, clientSecret, redirectURI, scope) VALUES (?, ?, ?, ?, ?, ?)', data, function (error, result) {
if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
if (error || result.affectedRows === 0) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
}
function del(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -159,3 +180,10 @@ function clear(callback) {
});
}
function addDefaultClients(callback) {
async.series([
add.bind(null, 'cid-webadmin', 'Settings', 'built-in', 'secret-webadmin', 'https://admin-localhost', 'cloudron,profile,users,apps,settings'),
add.bind(null, 'cid-sdk', 'SDK', 'built-in', 'secret-sdk', 'https://admin-localhost', '*,roleSdk'),
add.bind(null, 'cid-cli', 'Cloudron Tool', 'built-in', 'secret-cli', 'https://admin-localhost', '*,roleSdk')
], callback);
}
+27 -6
@@ -14,6 +14,8 @@ exports = module.exports = {
addClientTokenByUserId: addClientTokenByUserId,
delToken: delToken,
addDefaultClients: addDefaultClients,
// keep this in sync with start.sh ADMIN_SCOPES that generates the cid-webadmin
SCOPE_APPS: 'apps',
SCOPE_DEVELOPER: 'developer',
@@ -34,14 +36,16 @@ exports = module.exports = {
TYPE_PROXY: 'addon-proxy'
};
var assert = require('assert'),
util = require('util'),
hat = require('hat'),
appdb = require('./appdb.js'),
tokendb = require('./tokendb.js'),
var appdb = require('./appdb.js'),
assert = require('assert'),
async = require('async'),
clientdb = require('./clientdb.js'),
config = require('./config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:clients'),
hat = require('hat'),
tokendb = require('./tokendb.js'),
util = require('util'),
uuid = require('node-uuid');
function ClientsError(reason, errorOrMessage) {
@@ -304,7 +308,7 @@ function delToken(clientId, tokenId, callback) {
assert.strictEqual(typeof tokenId, 'string');
assert.strictEqual(typeof callback, 'function');
get(clientId, function (error, result) {
get(clientId, function (error) {
if (error) return callback(error);
tokendb.del(tokenId, function (error) {
@@ -315,3 +319,20 @@ function delToken(clientId, tokenId, callback) {
});
});
}
function addDefaultClients(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Adding default clients');
// The domain might have changed, therefore we have to update the record
// !!! This needs to be in sync with the webadmin, specifically login_callback.js
const ADMIN_SCOPES="cloudron,developer,profile,users,apps,settings";
// id, appId, type, clientSecret, redirectURI, scope
async.series([
clientdb.upsert.bind(null, 'cid-webadmin', 'Settings', 'built-in', 'secret-webadmin', config.adminOrigin(), ADMIN_SCOPES),
clientdb.upsert.bind(null, 'cid-sdk', 'SDK', 'built-in', 'secret-sdk', config.adminOrigin(), '*,roleSdk'),
clientdb.upsert.bind(null, 'cid-cli', 'Cloudron Tool', 'built-in', 'secret-cli', config.adminOrigin(), '*,roleSdk')
], callback);
}
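The new `upsert` relies on MySQL's `REPLACE INTO`, which inserts the row when the primary key is absent and overwrites it otherwise; that is what lets `addDefaultClients` run on every startup and still pick up a changed `config.adminOrigin()`. Illustrative only, with a `Map` standing in for the clients table:

```javascript
'use strict';

// Illustration of the REPLACE INTO semantics used by clientdb.upsert above:
// a Map keyed by client id, where set() inserts or overwrites the whole record.
var clients = new Map();

function upsert(id, record) {
    clients.set(id, record); // insert when absent, overwrite when present
}

upsert('cid-webadmin', { redirectURI: 'https://old.example' });
upsert('cid-webadmin', { redirectURI: 'https://new.example' });
// clients.get('cid-webadmin').redirectURI is now 'https://new.example'
```

One caveat of real `REPLACE INTO`: it deletes and re-inserts the row, so any columns not listed are reset to their defaults rather than preserved.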
+338 -138
@@ -8,41 +8,48 @@ exports = module.exports = {
activate: activate,
getConfig: getConfig,
getStatus: getStatus,
dnsSetup: dnsSetup,
sendHeartbeat: sendHeartbeat,
sendAliveStatus: sendAliveStatus,
updateToLatest: updateToLatest,
reboot: reboot,
retire: retire,
migrate: migrate,
isConfiguredSync: isConfiguredSync,
getConfigStateSync: getConfigStateSync,
checkDiskSpace: checkDiskSpace,
events: new (require('events').EventEmitter)(),
readDkimPublicKeySync: readDkimPublicKeySync,
refreshDNS: refreshDNS,
EVENT_ACTIVATED: 'activated',
EVENT_CONFIGURED: 'configured'
events: new (require('events').EventEmitter)(),
EVENT_ACTIVATED: 'activated'
};
var apps = require('./apps.js'),
assert = require('assert'),
async = require('async'),
backups = require('./backups.js'),
certificates = require('./certificates.js'),
child_process = require('child_process'),
clients = require('./clients.js'),
config = require('./config.js'),
constants = require('./constants.js'),
cron = require('./cron.js'),
debug = require('debug')('box:cloudron'),
df = require('node-df'),
eventlog = require('./eventlog.js'),
fs = require('fs'),
locker = require('./locker.js'),
mailer = require('./mailer.js'),
nginx = require('./nginx.js'),
os = require('os'),
path = require('path'),
paths = require('./paths.js'),
platform = require('./platform.js'),
progress = require('./progress.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
@@ -51,6 +58,7 @@ var apps = require('./apps.js'),
subdomains = require('./subdomains.js'),
superagent = require('superagent'),
sysinfo = require('./sysinfo.js'),
taskmanager = require('./taskmanager.js'),
tokendb = require('./tokendb.js'),
updateChecker = require('./updatechecker.js'),
user = require('./user.js'),
@@ -60,7 +68,7 @@ var apps = require('./apps.js'),
_ = require('underscore');
var REBOOT_CMD = path.join(__dirname, 'scripts/reboot.sh'),
INSTALLER_UPDATE_URL = 'http://127.0.0.1:2020/api/v1/installer/update',
UPDATE_CMD = path.join(__dirname, 'scripts/update.sh'),
RETIRE_CMD = path.join(__dirname, 'scripts/retire.sh');
var NOOP_CALLBACK = function (error) { if (error) debug(error); };
@@ -80,7 +88,7 @@ const BOX_AND_USER_TEMPLATE = {
var gUpdatingDns = false, // flag for dns update reentrancy
gBoxAndUserDetails = null, // cached cloudron details like region,size...
gIsConfigured = null; // cached configured state so that return value is synchronous. null means we are not initialized yet
gConfigState = { dns: false, tls: false, configured: false };
function CloudronError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
@@ -105,6 +113,7 @@ CloudronError.BAD_FIELD = 'Field error';
CloudronError.INTERNAL_ERROR = 'Internal Error';
CloudronError.EXTERNAL_ERROR = 'External Error';
CloudronError.ALREADY_PROVISIONED = 'Already Provisioned';
CloudronError.ALREADY_SETUP = 'Already Setup';
CloudronError.BAD_STATE = 'Bad state';
CloudronError.ALREADY_UPTODATE = 'No Update Available';
CloudronError.NOT_FOUND = 'Not found';
@@ -113,67 +122,149 @@ CloudronError.SELF_UPGRADE_NOT_SUPPORTED = 'Self upgrade not supported';
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
ensureDkimKeySync();
exports.events.on(exports.EVENT_CONFIGURED, addDnsRecords);
if (!fs.existsSync(paths.FIRST_RUN_FILE)) {
debug('initialize: installing app bundle on first run');
process.nextTick(installAppBundle);
fs.writeFileSync(paths.FIRST_RUN_FILE, 'been there, done that', 'utf8');
}
syncConfigState(callback);
async.series([
installAppBundle,
checkConfigState,
configureDefaultServer
], callback);
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
exports.events.removeListener(exports.EVENT_CONFIGURED, addDnsRecords);
exports.events.removeListener(exports.EVENT_FIRST_RUN, installAppBundle);
platform.events.removeListener(platform.EVENT_READY, onPlatformReady);
callback(null);
async.series([
cron.uninitialize,
taskmanager.pauseTasks,
mailer.stop,
platform.uninitialize
], callback);
}
function isConfiguredSync() {
return gIsConfigured === true;
function onConfigured(callback) {
callback = callback || NOOP_CALLBACK;
// if we hit here, the domain has to be set; this is a logic issue if it isn't
assert(config.fqdn());
debug('onConfigured: current state: %j', gConfigState);
if (gConfigState.configured) return callback(); // re-entrancy flag
gConfigState.configured = true;
platform.events.on(platform.EVENT_READY, onPlatformReady);
async.series([
clients.addDefaultClients,
cron.initialize,
certificates.ensureFallbackCertificate,
platform.initialize, // requires fallback certs for mail container
addDnsRecords,
configureAdmin,
mailer.start
], callback);
}
function isConfigured(callback) {
// set of rules to see if we have the configs required for cloudron to function
// note this checks for missing configs and not invalid configs
function onPlatformReady(callback) {
callback = callback || NOOP_CALLBACK;
settings.getDnsConfig(function (error, dnsConfig) {
if (error) return callback(error);
debug('onPlatformReady');
if (!dnsConfig) return callback(null, false);
async.series([
taskmanager.resumeTasks
], callback);
}
var isConfigured = (config.isCustomDomain() && dnsConfig.provider === 'route53') ||
(!config.isCustomDomain() && dnsConfig.provider === 'caas');
function getConfigStateSync() {
return gConfigState;
}
callback(null, isConfigured);
function checkConfigState(callback) {
callback = callback || NOOP_CALLBACK;
if (!config.fqdn()) {
settings.events.once(settings.DNS_CONFIG_KEY, function () { checkConfigState(); }); // check again later
return callback(null);
}
debug('checkConfigState: configured');
onConfigured(callback);
}
function dnsSetup(dnsConfig, domain, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof callback, 'function');
if (config.fqdn()) return callback(new CloudronError(CloudronError.ALREADY_SETUP));
settings.setDnsConfig(dnsConfig, domain, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return callback(new CloudronError(CloudronError.BAD_FIELD, error.message));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
config.set('fqdn', domain); // set fqdn only after dns config is valid, otherwise cannot re-setup if we failed
onConfigured(); // do not block
callback();
});
}
function syncConfigState(callback) {
assert(!gIsConfigured);
function configureDefaultServer(callback) {
callback = callback || NOOP_CALLBACK;
isConfigured(function (error, configured) {
debug('configureDefaultServer: domain %s', config.fqdn());
if (process.env.BOX_ENV === 'test') return callback();
var certFilePath = path.join(paths.NGINX_CERT_DIR, 'default.cert');
var keyFilePath = path.join(paths.NGINX_CERT_DIR, 'default.key');
if (!fs.existsSync(certFilePath) || !fs.existsSync(keyFilePath)) {
debug('configureDefaultServer: create new cert');
var certCommand = util.format('openssl req -x509 -newkey rsa:2048 -keyout %s -out %s -days 3650 -subj /CN=%s -nodes', keyFilePath, certFilePath, 'localhost');
safe.child_process.execSync(certCommand);
}
nginx.configureAdmin(certFilePath, keyFilePath, 'default.conf', '', function (error) {
if (error) return callback(error);
debug('syncConfigState: configured = %s', configured);
debug('configureDefaultServer: done');
if (configured) {
exports.events.emit(exports.EVENT_CONFIGURED);
} else {
settings.events.once(settings.DNS_CONFIG_KEY, function () { syncConfigState(); }); // check again later
}
callback(null);
});
}
gIsConfigured = configured;
function configureAdmin(callback) {
callback = callback || NOOP_CALLBACK;
callback();
if (process.env.BOX_ENV === 'test') return callback();
debug('configureAdmin');
sysinfo.getIp(function (error, ip) {
if (error) return callback(error);
subdomains.waitForDns(config.adminFqdn(), ip, 'A', { interval: 30000, times: 50000 }, function (error) {
if (error) return callback(error);
gConfigState.dns = true;
certificates.ensureCertificate({ location: constants.ADMIN_LOCATION }, function (error, certFilePath, keyFilePath) {
if (error) { // currently, this can never happen
debug('Error obtaining certificate. Proceed anyway', error);
return callback();
}
gConfigState.tls = true;
nginx.configureAdmin(certFilePath, keyFilePath, constants.NGINX_ADMIN_CONFIG_FILE_NAME, config.adminFqdn(), callback);
});
});
});
}
@@ -234,11 +325,10 @@ function activate(username, password, email, displayName, ip, auditSource, callb
tokendb.add(token, userObject.id, result.id, expires, '*', function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
// EE API is sync. do not keep the REST API response waiting
process.nextTick(function () { exports.events.emit(exports.EVENT_ACTIVATED); });
eventlog.add(eventlog.ACTION_ACTIVATE, auditSource, { });
exports.events.emit(exports.EVENT_ACTIVATED);
callback(null, { token: token, expires: expires });
});
});
@@ -260,7 +350,9 @@ function getStatus(callback) {
boxVersionsUrl: config.get('boxVersionsUrl'),
apiServerOrigin: config.apiServerOrigin(), // used by CaaS tool
provider: config.provider(),
cloudronName: cloudronName
cloudronName: cloudronName,
adminFqdn: config.fqdn() ? config.adminFqdn() : null,
configState: gConfigState
});
});
});
@@ -302,30 +394,25 @@ function getConfig(callback) {
settings.getDeveloperMode(function (error, developerMode) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
sysinfo.getIp(function (error, ip) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, {
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
isDev: config.isDev(),
fqdn: config.fqdn(),
ip: ip,
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
isCustomDomain: config.isCustomDomain(),
isDemo: config.isDemo(),
developerMode: developerMode,
region: result.box.region,
size: result.box.size,
billing: !!result.user.billing,
plan: result.box.plan,
currency: result.user.currency,
memory: os.totalmem(),
provider: config.provider(),
cloudronName: cloudronName
});
callback(null, {
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
isDev: config.isDev(),
fqdn: config.fqdn(),
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
isCustomDomain: config.isCustomDomain(),
isDemo: config.isDemo(),
developerMode: developerMode,
region: result.box.region,
size: result.box.size,
billing: !!result.user.billing,
plan: result.box.plan,
currency: result.user.currency,
memory: os.totalmem(),
provider: config.provider(),
cloudronName: cloudronName
});
});
});
@@ -333,7 +420,7 @@ function getConfig(callback) {
}
function sendHeartbeat() {
if (!config.token()) return;
if (config.provider() !== 'caas') return;
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/heartbeat';
superagent.post(url).query({ token: config.token(), version: config.version() }).timeout(30 * 1000).end(function (error, result) {
@@ -343,23 +430,104 @@ function sendHeartbeat() {
});
}
function ensureDkimKeySync() {
var dkimPrivateKeyFile = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn() + '/private');
var dkimPublicKeyFile = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn() + '/public');
if (fs.existsSync(dkimPrivateKeyFile) && fs.existsSync(dkimPublicKeyFile)) {
debug('DKIM keys already present');
return;
function sendAliveStatus(callback) {
if (typeof callback !== 'function') {
callback = function (error) {
if (error && error.reason !== CloudronError.INTERNAL_ERROR) console.error(error);
else if (error) debug(error);
};
}
debug('Generating new DKIM keys');
function sendAliveStatusWithAppstoreConfig(backendSettings, appstoreConfig) {
assert.strictEqual(typeof backendSettings, 'object');
assert.strictEqual(typeof appstoreConfig.userId, 'string');
assert.strictEqual(typeof appstoreConfig.cloudronId, 'string');
assert.strictEqual(typeof appstoreConfig.token, 'string');
child_process.execSync('openssl genrsa -out ' + dkimPrivateKeyFile + ' 1024');
child_process.execSync('openssl rsa -in ' + dkimPrivateKeyFile + ' -out ' + dkimPublicKeyFile + ' -pubout -outform PEM');
var url = config.apiServerOrigin() + '/api/v1/users/' + appstoreConfig.userId + '/cloudrons/' + appstoreConfig.cloudronId;
var data = {
domain: config.fqdn(),
version: config.version(),
provider: config.provider(),
backendSettings: backendSettings
};
superagent.post(url).send(data).query({ accessToken: appstoreConfig.token }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, error));
if (result.statusCode === 404) return callback(new CloudronError(CloudronError.NOT_FOUND));
if (result.statusCode !== 201) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('Sending alive status failed. %s %j', result.status, result.body)));
callback(null);
});
}
settings.getAll(function (error, result) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
var backendSettings = {
dnsConfig: {
provider: result[settings.DNS_CONFIG_KEY].provider,
wildcard: result[settings.DNS_CONFIG_KEY].provider === 'manual' ? result[settings.DNS_CONFIG_KEY].wildcard : undefined
},
tlsConfig: {
provider: result[settings.TLS_CONFIG_KEY].provider
},
backupConfig: {
provider: result[settings.BACKUP_CONFIG_KEY].provider
},
mailConfig: {
enabled: result[settings.MAIL_CONFIG_KEY].enabled
}
};
// Caas Cloudrons do not store appstore credentials in their local database
if (config.provider() === 'caas') {
var url = config.apiServerOrigin() + '/api/v1/exchangeBoxTokenWithUserToken';
superagent.post(url).query({ token: config.token() }).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, error));
if (result.statusCode !== 201) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('App purchase failed. %s %j', result.status, result.body)));
sendAliveStatusWithAppstoreConfig(backendSettings, result.body);
});
} else {
settings.getAppstoreConfig(function (error, result) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
if (!result.token) {
debug('sendAliveStatus: Cloudron not yet registered');
return callback(null);
}
sendAliveStatusWithAppstoreConfig(backendSettings, result);
});
}
});
}
function readDkimPublicKeySync() {
var dkimPublicKeyFile = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn() + '/public');
if (!config.fqdn()) {
debug('Cannot read dkim public key without a domain.');
return null;
}
var dkimPath = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn());
var dkimPrivateKeyFile = path.join(dkimPath, 'private');
var dkimPublicKeyFile = path.join(dkimPath, 'public');
if (!fs.existsSync(dkimPrivateKeyFile) || !fs.existsSync(dkimPublicKeyFile)) {
debug('Generating new DKIM keys');
if (!safe.fs.mkdirSync(dkimPath) && safe.error.code !== 'EEXIST') {
debug('Error creating dkim.', safe.error);
return null;
}
child_process.execSync('openssl genrsa -out ' + dkimPrivateKeyFile + ' 1024');
child_process.execSync('openssl rsa -in ' + dkimPrivateKeyFile + ' -out ' + dkimPublicKeyFile + ' -pubout -outform PEM');
} else {
debug('DKIM keys already present');
}
var publicKey = safe.fs.readFileSync(dkimPublicKeyFile, 'utf8');
if (publicKey === null) {
@@ -403,8 +571,8 @@ function txtRecordsWithSpf(callback) {
});
}
function addDnsRecords() {
var callback = NOOP_CALLBACK;
function addDnsRecords(callback) {
callback = callback || NOOP_CALLBACK;
if (process.env.BOX_ENV === 'test') return callback();
@@ -414,9 +582,6 @@ function addDnsRecords() {
}
gUpdatingDns = true;
var DKIM_SELECTOR = 'cloudron';
var DMARC_REPORT_EMAIL = 'dmarc-report@cloudron.io';
var dkimKey = readDkimPublicKeySync();
if (!dkimKey) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, new Error('Failed to read dkim public key')));
@@ -425,22 +590,19 @@ function addDnsRecords() {
var webadminRecord = { subdomain: constants.ADMIN_LOCATION, type: 'A', values: [ ip ] };
// t=s limits the domainkey to this domain and not its subdomains
var dkimRecord = { subdomain: DKIM_SELECTOR + '._domainkey', type: 'TXT', values: [ '"v=DKIM1; t=s; p=' + dkimKey + '"' ] };
// DMARC requires special setup if report email id is in different domain
var dmarcRecord = { subdomain: '_dmarc', type: 'TXT', values: [ '"v=DMARC1; p=none; pct=100; rua=mailto:' + DMARC_REPORT_EMAIL + '; ruf=' + DMARC_REPORT_EMAIL + '"' ] };
var dkimRecord = { subdomain: constants.DKIM_SELECTOR + '._domainkey', type: 'TXT', values: [ '"v=DKIM1; t=s; p=' + dkimKey + '"' ] };
var records = [ ];
if (config.isCustomDomain()) {
records.push(webadminRecord);
records.push(dkimRecord);
} else {
// for non-custom domains, we show a nakeddomain.html page
// for non-custom domains, we show a noapp.html page
var nakedDomainRecord = { subdomain: '', type: 'A', values: [ ip ] };
records.push(nakedDomainRecord);
records.push(webadminRecord);
records.push(dkimRecord);
records.push(dmarcRecord);
}
debug('addDnsRecords: %j', records);
@@ -492,12 +654,7 @@ function update(boxUpdateInfo, auditSource, callback) {
progress.set(progress.UPDATE, 0, 'Starting');
// initiate the update/upgrade but do not wait for it
if (config.version().match(/[-+]/) !== null && config.version().replace(/[-+].*/, '') === boxUpdateInfo.version) {
doShortCircuitUpdate(boxUpdateInfo, function (error) {
if (error) debug('Short-circuit update failed', error);
locker.unlock(locker.OP_BOX_UPDATE);
});
} else if (boxUpdateInfo.upgrade) {
if (boxUpdateInfo.upgrade) {
debug('Starting upgrade');
doUpgrade(boxUpdateInfo, function (error) {
if (error) {
@@ -526,6 +683,15 @@ function updateToLatest(auditSource, callback) {
var boxUpdateInfo = updateChecker.getUpdateInfo().box;
if (!boxUpdateInfo) return callback(new CloudronError(CloudronError.ALREADY_UPTODATE, 'No update available'));
// check if this is just a version number change
if (config.version().match(/[-+]/) !== null && config.version().replace(/[-+].*/, '') === boxUpdateInfo.version) {
doShortCircuitUpdate(boxUpdateInfo, function (error) {
if (error) debug('Short-circuit update failed', error);
});
return callback(null);
}
if (boxUpdateInfo.upgrade && config.provider() !== 'caas') return callback(new CloudronError(CloudronError.SELF_UPGRADE_NOT_SUPPORTED));
update(boxUpdateInfo, auditSource, callback);
@@ -584,60 +750,53 @@ function doUpdate(boxUpdateInfo, callback) {
backups.backupBoxAndApps({ userId: null, username: 'updater' }, function (error) {
if (error) return updateError(error);
// NOTE: the args here are tied to the installer revision, box code and appstore provisioning logic
var args = {
sourceTarballUrl: boxUpdateInfo.sourceTarballUrl,
// NOTE: this data is opaque and will be passed through the installer.sh
var data = {
provider: config.provider(),
token: config.token(),
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
fqdn: config.fqdn(),
tlsCert: config.tlsCert(),
tlsKey: config.tlsKey(),
isCustomDomain: config.isCustomDomain(),
isDemo: config.isDemo(),
// this data is opaque to the installer
data: {
provider: config.provider(),
appstore: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin()
},
caas: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
fqdn: config.fqdn(),
tlsCert: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), 'utf8'),
tlsKey: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), 'utf8'),
isCustomDomain: config.isCustomDomain(),
webServerOrigin: config.webServerOrigin()
},
appstore: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin()
},
caas: {
token: config.token(),
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin()
},
version: boxUpdateInfo.version,
boxVersionsUrl: config.get('boxVersionsUrl')
}
version: boxUpdateInfo.version,
boxVersionsUrl: config.get('boxVersionsUrl')
};
debug('updating box %j', args);
debug('updating box %s %j', boxUpdateInfo.sourceTarballUrl, data);
superagent.post(INSTALLER_UPDATE_URL).send(args).timeout(30 * 1000).end(function (error, result) {
if (error && !error.response) return updateError(error);
if (result.statusCode !== 202) return updateError(new Error('Error initiating update: ' + JSON.stringify(result.body)));
progress.set(progress.UPDATE, 5, 'Downloading and extracting new version');
progress.set(progress.UPDATE, 10, 'Updating cloudron software');
shell.sudo('update', [ UPDATE_CMD, boxUpdateInfo.sourceTarballUrl, JSON.stringify(data) ], function (error) {
if (error) return updateError(error);
callback(null);
// Do not add any code here. The installer script will stop the box code any instant
});
// Do not add any code here. The installer script will stop the box code any instant
});
}
function installAppBundle(callback) {
callback = callback || NOOP_CALLBACK;
assert.strictEqual(typeof callback, 'function');
if (fs.existsSync(paths.FIRST_RUN_FILE)) return callback();
var bundle = config.get('appBundle');
debug('initialize: installing app bundle on first run: %j', bundle);
if (!bundle || bundle.length === 0) {
debug('installAppBundle: no bundle set');
return callback();
}
if (!bundle || bundle.length === 0) return callback();
async.eachSeries(bundle, function (appInfo, iteratorCallback) {
debug('autoInstall: installing %s at %s', appInfo.appstoreId, appInfo.location);
@@ -653,6 +812,8 @@ function installAppBundle(callback) {
}, function (error) {
if (error) debug('autoInstallApps: ', error);
fs.writeFileSync(paths.FIRST_RUN_FILE, 'been there, done that', 'utf8');
callback();
});
}
@@ -746,12 +907,51 @@ function migrate(options, callback) {
if (!options.domain) return doMigrate(options, callback);
var dnsConfig = _.pick(options, 'domain', 'provider', 'accessKeyId', 'secretAccessKey', 'region', 'endpoint');
var dnsConfig = _.pick(options, 'domain', 'provider', 'accessKeyId', 'secretAccessKey', 'region', 'endpoint', 'token');
settings.setDnsConfig(dnsConfig, function (error) {
settings.setDnsConfig(dnsConfig, options.domain, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return callback(new CloudronError(CloudronError.BAD_FIELD, error.message));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
// TODO: should probably rollback dns config if migrate fails
doMigrate(options, callback);
});
}
function refreshDNS(callback) {
callback = callback || NOOP_CALLBACK;
sysinfo.getIp(function (error, ip) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
debug('refreshDNS: current ip %s', ip);
addDnsRecords(function (error) {
if (error) return callback(error);
debug('refreshDNS: done for system records');
apps.getAll(function (error, result) {
if (error) return callback(error);
async.each(result, function (app, callback) {
// get the current record before updating it
subdomains.get(app.location, 'A', function (error, values) {
if (error) return callback(error);
// refuse to update any existing DNS record for custom domains that we did not create
if (values.length !== 0 && !app.dnsRecordId) return callback(null, new Error('DNS Record already exists'));
subdomains.upsert(app.location, 'A', [ ip ], callback);
});
}, function (error) {
if (error) return callback(error);
debug('refreshDNS: done for apps');
callback();
});
});
});
});
}
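The short-circuit update path above (in `update`/`updateToLatest`) only fires when the running version differs from the target version by a prerelease/build suffix alone. A minimal standalone sketch of that version check, using a hypothetical helper name (`isShortCircuitUpdate` is not in the source):

```javascript
'use strict';

// Hypothetical helper mirroring the short-circuit check in updateToLatest():
// an update is "short-circuit" when the current version is a prerelease/build
// (has a '-' or '+' suffix) of the exact target version, e.g. 1.2.3-0 -> 1.2.3.
function isShortCircuitUpdate(currentVersion, updateVersion) {
    if (currentVersion.match(/[-+]/) === null) return false; // not a prerelease/build
    return currentVersion.replace(/[-+].*/, '') === updateVersion; // strip the suffix and compare
}
```

Such an update skips the full download/backup cycle; only the stored version number changes.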
+14 -3
@@ -35,6 +35,9 @@ exports = module.exports = {
isDev: isDev,
isDemo: isDemo,
tlsCert: tlsCert,
tlsKey: tlsKey,
// for testing resets to defaults
_reset: _reset
};
@@ -81,7 +84,6 @@ function initConfig() {
data.smtpPort = 2525; // this value comes from mail container
data.sysadminPort = 3001;
data.ldapPort = 3002;
data.oauthProxyPort = 3003;
data.simpleAuthPort = 3004;
data.provider = 'caas';
data.appBundle = [ ];
@@ -215,6 +217,15 @@ function isDemo() {
}
function provider() {
// FIXME this fallback is only there because old Cloudrons do not have the provider set till the next upgrade
return get('provider') || 'caas';
return get('provider');
}
function tlsCert() {
var certFile = path.join(baseDir(), 'configs/host.cert');
return safe.fs.readFileSync(certFile, 'utf8');
}
function tlsKey() {
var keyFile = path.join(baseDir(), 'configs/host.key');
return safe.fs.readFileSync(keyFile, 'utf8');
}
+19 -1
@@ -9,17 +9,35 @@ exports = module.exports = {
MAIL_LOCATION: 'my', // not a typo! should be same as admin location until we figure out certificates
POSTMAN_LOCATION: 'postman', // used in dovecot bounces
// These are combined into one array because users and groups become mailboxes
RESERVED_NAMES: [
// Reserved usernames
// https://github.com/gogits/gogs/blob/52c8f691630548fe091d30bcfe8164545a05d3d5/models/repo.go#L393
'admin', 'no-reply', 'postmaster', 'mailer-daemon', // apps like wordpress, gogs don't like these
// Reserved groups
'admins', 'users' // ldap code uses 'users' pseudo group
],
ADMIN_NAME: 'Settings',
ADMIN_CLIENT_ID: 'webadmin', // oauth client id
ADMIN_APPID: 'admin', // admin appid (settingsdb)
ADMIN_GROUP_ID: 'admin',
NGINX_ADMIN_CONFIG_FILE_NAME: 'admin.conf',
GHOST_USER_FILE: '/tmp/cloudron_ghost.json',
DEFAULT_TOKEN_EXPIRATION: 7 * 24 * 60 * 60 * 1000, // 1 week
DEFAULT_MEMORY_LIMIT: (256 * 1024 * 1024), // see also client.js
DEMO_USERNAME: 'cloudron'
DEMO_USERNAME: 'cloudron',
DKIM_SELECTOR: 'cloudron',
AUTOUPDATE_PATTERN_NEVER: 'never'
};
+137 -92
@@ -11,6 +11,7 @@ var apps = require('./apps.js'),
certificates = require('./certificates.js'),
cloudron = require('./cloudron.js'),
config = require('./config.js'),
constants = require('./constants.js'),
CronJob = require('cron').CronJob,
debug = require('debug')('box:cron'),
eventlog = require('./eventlog.js'),
@@ -22,14 +23,17 @@ var apps = require('./apps.js'),
var gAutoupdaterJob = null,
gBoxUpdateCheckerJob = null,
gAppUpdateCheckerJob = null,
gHeartbeatJob = null,
gHeartbeatJob = null, // for CaaS health check
gAliveJob = null, // send periodic stats
gBackupJob = null,
gCleanupTokensJob = null,
gCleanupBackupsJob = null,
gDockerVolumeCleanerJob = null,
gSchedulerSyncJob = null,
gCertificateRenewJob = null,
gCheckDiskSpaceJob = null,
gCleanupEventlogJob = null;
gCleanupEventlogJob = null,
gDynamicDNSJob = null;
var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
var AUDIT_SOURCE = { userId: null, username: 'cron' };
@@ -52,100 +56,115 @@ function initialize(callback) {
});
cloudron.sendHeartbeat(); // latest unpublished version of CronJob has runOnInit
if (cloudron.isConfiguredSync()) {
recreateJobs(callback);
} else {
cloudron.events.on(cloudron.EVENT_ACTIVATED, recreateJobs);
callback();
}
}
var randomHourMinute = Math.floor(60*Math.random());
gAliveJob = new CronJob({
cronTime: '00 ' + randomHourMinute + ' * * * *', // every hour on a random minute
onTick: cloudron.sendAliveStatus,
start: true
});
function recreateJobs(unusedTimeZone, callback) {
if (typeof unusedTimeZone === 'function') callback = unusedTimeZone;
settings.events.on(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.on(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
settings.events.on(settings.DYNAMIC_DNS_KEY, dynamicDNSChanged);
settings.getAll(function (error, allSettings) {
debug('Creating jobs with timezone %s', allSettings[settings.TIME_ZONE_KEY]);
if (error) return callback(error);
if (gBackupJob) gBackupJob.stop();
gBackupJob = new CronJob({
cronTime: '00 00 */4 * * *', // every 4 hours. backups.ensureBackup() will only trigger a backup once per day
onTick: backups.ensureBackup.bind(null, AUDIT_SOURCE, NOOP_CALLBACK),
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCheckDiskSpaceJob) gCheckDiskSpaceJob.stop();
gCheckDiskSpaceJob = new CronJob({
cronTime: '00 30 */4 * * *', // every 4 hours
onTick: cloudron.checkDiskSpace,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gBoxUpdateCheckerJob) gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = new CronJob({
cronTime: '00 */10 * * * *', // every 10 minutes
onTick: updateChecker.checkBoxUpdates,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gAppUpdateCheckerJob) gAppUpdateCheckerJob.stop();
gAppUpdateCheckerJob = new CronJob({
cronTime: '00 */10 * * * *', // every 10 minutes
onTick: updateChecker.checkAppUpdates,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCleanupTokensJob) gCleanupTokensJob.stop();
gCleanupTokensJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: janitor.cleanupTokens,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCleanupEventlogJob) gCleanupEventlogJob.stop();
gCleanupEventlogJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: eventlog.cleanup,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gDockerVolumeCleanerJob) gDockerVolumeCleanerJob.stop();
gDockerVolumeCleanerJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: janitor.cleanupDockerVolumes,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gSchedulerSyncJob) gSchedulerSyncJob.stop();
gSchedulerSyncJob = new CronJob({
cronTime: config.TEST ? '*/10 * * * * *' : '00 */1 * * * *', // every minute
onTick: scheduler.sync,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gCertificateRenewJob) gCertificateRenewJob.stop();
gCertificateRenewJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: certificates.renewAll.bind(null, AUDIT_SOURCE, NOOP_CALLBACK),
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
settings.events.removeListener(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
settings.events.on(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
recreateJobs(allSettings[settings.TIME_ZONE_KEY]);
autoupdatePatternChanged(allSettings[settings.AUTOUPDATE_PATTERN_KEY]);
dynamicDNSChanged(allSettings[settings.DYNAMIC_DNS_KEY]);
settings.events.removeListener(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.on(settings.TIME_ZONE_KEY, recreateJobs);
callback();
});
}
if (callback) callback();
function recreateJobs(tz) {
assert.strictEqual(typeof tz, 'string');
debug('Creating jobs with timezone %s', tz);
if (gBackupJob) gBackupJob.stop();
gBackupJob = new CronJob({
cronTime: '00 00 */4 * * *', // every 4 hours. backups.ensureBackup() will only trigger a backup once per day
onTick: backups.ensureBackup.bind(null, AUDIT_SOURCE, NOOP_CALLBACK),
start: true,
timeZone: tz
});
if (gCheckDiskSpaceJob) gCheckDiskSpaceJob.stop();
gCheckDiskSpaceJob = new CronJob({
cronTime: '00 30 */4 * * *', // every 4 hours
onTick: cloudron.checkDiskSpace,
start: true,
timeZone: tz
});
// randomized pattern per cloudron every 10 min
var randomMinute = Math.floor(10*Math.random());
var random10MinPattern = [0,1,2,3,4,5].map(function (n) { return n*10+randomMinute; }).join(',');
if (gBoxUpdateCheckerJob) gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = new CronJob({
cronTime: '00 ' + random10MinPattern + ' * * * *', // every 10 minutes
onTick: updateChecker.checkBoxUpdates,
start: true,
timeZone: tz
});
if (gAppUpdateCheckerJob) gAppUpdateCheckerJob.stop();
gAppUpdateCheckerJob = new CronJob({
cronTime: '00 ' + random10MinPattern + ' * * * *', // every 10 minutes
onTick: updateChecker.checkAppUpdates,
start: true,
timeZone: tz
});
if (gCleanupTokensJob) gCleanupTokensJob.stop();
gCleanupTokensJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: janitor.cleanupTokens,
start: true,
timeZone: tz
});
if (gCleanupBackupsJob) gCleanupBackupsJob.stop();
gCleanupBackupsJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: janitor.cleanupBackups,
start: true,
timeZone: tz
});
if (gCleanupEventlogJob) gCleanupEventlogJob.stop();
gCleanupEventlogJob = new CronJob({
cronTime: '00 */30 * * * *', // every 30 minutes
onTick: eventlog.cleanup,
start: true,
timeZone: tz
});
if (gDockerVolumeCleanerJob) gDockerVolumeCleanerJob.stop();
gDockerVolumeCleanerJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: janitor.cleanupDockerVolumes,
start: true,
timeZone: tz
});
if (gSchedulerSyncJob) gSchedulerSyncJob.stop();
gSchedulerSyncJob = new CronJob({
cronTime: config.TEST ? '*/10 * * * * *' : '00 */1 * * * *', // every minute
onTick: scheduler.sync,
start: true,
timeZone: tz
});
if (gCertificateRenewJob) gCertificateRenewJob.stop();
gCertificateRenewJob = new CronJob({
cronTime: '00 00 */12 * * *', // every 12 hours
onTick: certificates.renewAll.bind(null, AUDIT_SOURCE, NOOP_CALLBACK),
start: true,
timeZone: tz
});
}
@@ -157,7 +176,7 @@ function autoupdatePatternChanged(pattern) {
if (gAutoupdaterJob) gAutoupdaterJob.stop();
if (pattern === 'never') return;
if (pattern === constants.AUTOUPDATE_PATTERN_NEVER) return;
gAutoupdaterJob = new CronJob({
cronTime: pattern,
@@ -178,11 +197,28 @@ function autoupdatePatternChanged(pattern) {
});
}
function dynamicDNSChanged(enabled) {
assert.strictEqual(typeof enabled, 'boolean');
assert(gBoxUpdateCheckerJob);
debug('Dynamic DNS setting changed to %s', enabled);
if (enabled) {
gDynamicDNSJob = new CronJob({
cronTime: '00 */10 * * * *',
onTick: cloudron.refreshDNS,
start: true,
timeZone: gBoxUpdateCheckerJob.cronTime.zone // hack
});
} else {
if (gDynamicDNSJob) gDynamicDNSJob.stop();
gDynamicDNSJob = null;
}
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
cloudron.events.removeListener(cloudron.EVENT_ACTIVATED, recreateJobs);
settings.events.removeListener(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.removeListener(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
@@ -198,12 +234,18 @@ function uninitialize(callback) {
if (gHeartbeatJob) gHeartbeatJob.stop();
gHeartbeatJob = null;
if (gAliveJob) gAliveJob.stop();
gAliveJob = null;
if (gBackupJob) gBackupJob.stop();
gBackupJob = null;
if (gCleanupTokensJob) gCleanupTokensJob.stop();
gCleanupTokensJob = null;
if (gCleanupBackupsJob) gCleanupBackupsJob.stop();
gCleanupBackupsJob = null;
if (gCleanupEventlogJob) gCleanupEventlogJob.stop();
gCleanupEventlogJob = null;
@@ -216,5 +258,8 @@ function uninitialize(callback) {
if (gCertificateRenewJob) gCertificateRenewJob.stop();
gCertificateRenewJob = null;
if (gDynamicDNSJob) gDynamicDNSJob.stop();
gDynamicDNSJob = null;
callback();
}
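The reworked `recreateJobs` above spreads update checks across Cloudrons by picking one random minute offset per instance and repeating it every 10 minutes. A standalone sketch of that pattern computation (`random10MinPattern` as a named helper is an assumption; the source builds it inline):

```javascript
'use strict';

// Sketch of the randomized 10-minute cron pattern in recreateJobs(): given a
// random minute offset in [0, 10), produce the minute field "m,m+10,...,m+50"
// so every Cloudron checks for updates every 10 minutes, but not all at once.
function random10MinPattern(randomMinute) {
    return [0, 1, 2, 3, 4, 5].map(function (n) { return n * 10 + randomMinute; }).join(',');
}
```

The result is spliced into a 6-field cron expression, e.g. `'00 ' + random10MinPattern(7) + ' * * * *'`.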
+9 -12
@@ -15,9 +15,10 @@ exports = module.exports = {
var assert = require('assert'),
async = require('async'),
once = require('once'),
child_process = require('child_process'),
config = require('./config.js'),
mysql = require('mysql'),
once = require('once'),
util = require('util');
var gConnectionPool = null,
@@ -93,18 +94,14 @@ function reconnect(callback) {
function clear(callback) {
assert.strictEqual(typeof callback, 'function');
// the clear funcs don't completely clear the db, they leave the migration code defaults
var cmd = util.format('mysql --host=%s --user="%s" --password="%s" -Nse "SHOW TABLES" %s | grep -v "^migrations$" | while read table; do mysql --host=%s --user="%s" --password="%s" -e "SET FOREIGN_KEY_CHECKS = 0; TRUNCATE TABLE $table" %s; done',
config.database().hostname, config.database().username, config.database().password, config.database().name,
config.database().hostname, config.database().username, config.database().password, config.database().name);
async.series([
require('./appdb.js')._clear,
require('./authcodedb.js')._clear,
require('./backupdb.js')._clear,
require('./clientdb.js')._clear,
require('./tokendb.js')._clear,
require('./groupdb.js')._clear,
require('./userdb.js')._clear,
require('./settingsdb.js')._clear,
require('./eventlogdb.js')._clear,
require('./mailboxdb.js')._clear
child_process.exec.bind(null, cmd),
require('./clientdb.js')._addDefaultClients,
require('./groupdb.js')._addDefaultGroups
], callback);
}
+11 -17
@@ -1,10 +1,11 @@
'use strict';
exports = module.exports = {
del: del,
upsert: upsert,
getChangeStatus: getChangeStatus,
get: get
get: get,
del: del,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
@@ -111,22 +112,15 @@ function del(dnsConfig, zoneName, subdomain, type, values, callback) {
});
}
function getChangeStatus(dnsConfig, changeId, callback) {
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
superagent
.get(config.apiServerOrigin() + '/api/v1/domains/' + config.fqdn() + '/status/' + changeId)
.query({ token: dnsConfig.token })
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null, result.body.status);
});
var credentials = {
provider: dnsConfig.provider
};
return callback(null, credentials);
}
+201
@@ -0,0 +1,201 @@
'use strict';
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
async = require('async'),
debug = require('debug')('box:dns/digitalocean'),
dns = require('native-dns'),
SubdomainError = require('../subdomains.js').SubdomainError,
superagent = require('superagent'),
util = require('util');
var DIGITALOCEAN_ENDPOINT = 'https://api.digitalocean.com';
function getInternal(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
superagent.get(DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records')
.set('Authorization', 'Bearer ' + dnsConfig.token)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 404) return callback(new SubdomainError(SubdomainError.NOT_FOUND, util.format('%s %j', result.statusCode, result.body)));
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, util.format('%s %j', result.statusCode, result.body)));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
var tmp = result.body.domain_records.filter(function (record) {
return (record.type === type && record.name === subdomain);
});
debug('getInternal: %j', tmp);
return callback(null, tmp);
});
}
function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
subdomain = subdomain || '@';
debug('upsert: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
getInternal(dnsConfig, zoneName, subdomain, type, function (error, result) {
if (error) return callback(error);
// used to track available records to update instead of create
var i = 0;
async.eachSeries(values, function (value, callback) {
var priority = null;
if (type === 'MX') {
priority = value.split(' ')[0];
value = value.split(' ')[1];
}
var data = {
type: type,
name: subdomain,
data: value,
priority: priority
};
if (i >= result.length) {
superagent.post(DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records')
.set('Authorization', 'Bearer ' + dnsConfig.token)
.send(data)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, util.format('%s %j', result.statusCode, result.body)));
if (result.statusCode !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null);
});
} else {
superagent.put(DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records/' + result[i].id)
.set('Authorization', 'Bearer ' + dnsConfig.token)
.send(data)
.timeout(30 * 1000)
.end(function (error, result) {
// increment, as we have consumed the record
++i;
if (error && !error.response) return callback(error);
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, util.format('%s %j', result.statusCode, result.body)));
if (result.statusCode !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
return callback(null);
});
}
}, function (error) {
if (error) return callback(error);
callback(null, 'unused');
});
});
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
subdomain = subdomain || '@';
getInternal(dnsConfig, zoneName, subdomain, type, function (error, result) {
if (error) return callback(error);
// We only return the value string
var tmp = result.map(function (record) { return record.data; });
debug('get: %j', tmp);
return callback(null, tmp);
});
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
subdomain = subdomain || '@';
getInternal(dnsConfig, zoneName, subdomain, type, function (error, result) {
if (error) return callback(error);
if (result.length === 0) return callback(null);
var tmp = result.filter(function (record) { return values.some(function (value) { return value === record.data; }); });
debug('del: %j', tmp);
if (tmp.length === 0) return callback(null);
// FIXME we only handle the first one currently
superagent.del(DIGITALOCEAN_ENDPOINT + '/v2/domains/' + zoneName + '/records/' + tmp[0].id)
.set('Authorization', 'Bearer ' + dnsConfig.token)
.timeout(30 * 1000)
.end(function (error, result) {
if (error && !error.response) return callback(error);
if (result.statusCode === 404) return callback(null);
if (result.statusCode === 403 || result.statusCode === 401) return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, util.format('%s %j', result.statusCode, result.body)));
if (result.statusCode !== 204) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.statusCode, result.body)));
debug('del: done');
return callback(null);
});
});
}
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
var credentials = {
provider: dnsConfig.provider,
token: dnsConfig.token
};
if (process.env.BOX_ENV === 'test') return callback(null, credentials); // this shouldn't be here
dns.resolveNs(domain, function (error, nameservers) {
if (error && error.code === 'ENOTFOUND') return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Unable to resolve nameservers for this domain'));
if (error || !nameservers) return callback(new SubdomainError(SubdomainError.BAD_FIELD, error ? error.message : 'Unable to get nameservers'));
upsert(credentials, domain, 'my', 'A', [ ip ], function (error, changeId) {
if (error) return callback(error);
debug('verifyDnsConfig: A record added with change id %s', changeId);
callback(null, credentials);
});
});
}
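For reference, the `upsert()` above encodes MX values as a single `"priority value"` string and splits them before building the DigitalOcean record payload. A minimal sketch of that convention (the helper name `toRecordData` is hypothetical, not part of the module):

```javascript
'use strict';

// Sketch: how upsert() maps a value string into the DigitalOcean record payload.
// toRecordData is a hypothetical helper for illustration only.
function toRecordData(type, name, value) {
    var priority = null;
    if (type === 'MX') {
        var parts = value.split(' '); // "10 mail.example.com" -> priority, host
        priority = parts[0];
        value = parts[1];
    }
    return { type: type, name: name, data: value, priority: priority };
}
```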
+68
@@ -0,0 +1,68 @@
'use strict';
// -------------------------------------------
// This file just describes the interface
//
// New backends can start from here
// -------------------------------------------
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
SubdomainError = require('../subdomains.js').SubdomainError,
util = require('util');
function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
// Result: backend specific change id, to be passed into getChangeStatus()
callback(new Error('not implemented'));
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
// Result: Array of matching DNS records in string format
callback(new Error('not implemented'));
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
// Result: none
callback(new Error('not implemented'));
}
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
// Result: dnsConfig object
callback(new Error('not implemented'));
}
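As a hedged illustration of the callback contract this interface describes (not part of the codebase), a trivial in-memory backend could satisfy it like so:

```javascript
'use strict';

// Toy in-memory backend implementing the interface above, for illustration.
// dnsConfig is ignored; records are keyed by zone/subdomain/type.
var records = {};

function key(zoneName, subdomain, type) {
    return zoneName + '/' + subdomain + '/' + type;
}

function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
    records[key(zoneName, subdomain, type)] = values.slice();
    callback(null, 'memory-change-id'); // backend specific change id
}

function get(dnsConfig, zoneName, subdomain, type, callback) {
    callback(null, records[key(zoneName, subdomain, type)] || []);
}

function del(dnsConfig, zoneName, subdomain, type, values, callback) {
    var k = key(zoneName, subdomain, type);
    records[k] = (records[k] || []).filter(function (v) { return values.indexOf(v) === -1; });
    callback(null);
}
```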
+120
@@ -0,0 +1,120 @@
'use strict';
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
async = require('async'),
debug = require('debug')('box:dns/noop'),
dns = require('native-dns'),
SubdomainError = require('../subdomains.js').SubdomainError,
util = require('util');
function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
debug('upsert: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
return callback(null, 'noop-record-id');
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
callback(null, [ ]); // returning ip confuses apptask into thinking the entry already exists
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
return callback();
}
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
var adminDomain = 'my.' + domain;
dns.resolveNs(domain, function (error, nameservers) {
if (error || !nameservers) return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Unable to get nameservers'));
// async.every only reports bools
var stashedError = null;
async.every(nameservers, function (nameserver, callback) {
// ns records cannot have cname
dns.resolve4(nameserver, function (error, nsIps) {
if (error || !nsIps || nsIps.length === 0) {
stashedError = new SubdomainError(SubdomainError.BAD_FIELD, 'Unable to resolve nameservers for this domain');
return callback(false);
}
async.every(nsIps, function (nsIp, callback) {
var req = dns.Request({
question: dns.Question({ name: adminDomain, type: 'A' }),
server: { address: nsIp },
timeout: 5000
});
req.on('timeout', function () {
debug('nameserver %s (%s) timed out when trying to resolve %s', nameserver, nsIp, adminDomain);
return callback(true); // should be ok if dns server is down
});
req.on('message', function (error, message) {
if (error) {
debug('nameserver %s (%s) returned error trying to resolve %s: %s', nameserver, nsIp, adminDomain, error);
return callback(false);
}
var answer = message.answer;
if (!answer || answer.length === 0) {
debug('bad answer from nameserver %s (%s) resolving %s (%s): %j', nameserver, nsIp, adminDomain, 'A', message);
return callback(false);
}
debug('verifyDnsConfig: ns: %s (%s), name:%s Actual:%j Expecting:%s', nameserver, nsIp, adminDomain, answer, ip);
var match = answer.some(function (a) {
return a.address === ip;
});
if (match) return callback(true); // done!
callback(false);
});
req.send();
}, callback);
});
}, function (success) {
if (stashedError) return callback(stashedError);
if (!success) return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'The domain ' + adminDomain + ' does not resolve to the server\'s IP ' + ip));
callback(null, { provider: dnsConfig.provider, wildcard: !!dnsConfig.wildcard });
});
});
}
+70
@@ -0,0 +1,70 @@
'use strict';
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
waitForDns: waitForDns,
verifyDnsConfig: verifyDnsConfig
};
var assert = require('assert'),
debug = require('debug')('box:dns/noop'),
util = require('util');
function upsert(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
debug('upsert: %s for zone %s of type %s with values %j', subdomain, zoneName, type, values);
return callback(null, 'noop-record-id');
}
function get(dnsConfig, zoneName, subdomain, type, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof callback, 'function');
callback(null, [ ]); // returning ip confuses apptask into thinking the entry already exists
}
function del(dnsConfig, zoneName, subdomain, type, values, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert(util.isArray(values));
assert.strictEqual(typeof callback, 'function');
return callback();
}
function waitForDns(domain, value, type, options, callback) {
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof value, 'string');
assert(type === 'A' || type === 'CNAME');
assert(options && typeof options === 'object'); // { interval: 5000, times: 50000 }
assert.strictEqual(typeof callback, 'function');
callback();
}
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
var credentials = {
provider: dnsConfig.provider
};
return callback(null, credentials);
}
+38 -13
@@ -1,10 +1,11 @@
'use strict';
exports = module.exports = {
upsert: upsert,
get: get,
del: del,
upsert: upsert,
getChangeStatus: getChangeStatus,
waitForDns: require('./waitfordns.js'),
verifyDnsConfig: verifyDnsConfig,
// not part of "dns" interface
getHostedZone: getHostedZone
@@ -13,8 +14,10 @@ exports = module.exports = {
var assert = require('assert'),
AWS = require('aws-sdk'),
debug = require('debug')('box:dns/route53'),
dns = require('native-dns'),
SubdomainError = require('../subdomains.js').SubdomainError,
util = require('util');
util = require('util'),
_ = require('underscore');
function getDnsCredentials(dnsConfig) {
assert.strictEqual(typeof dnsConfig, 'object');
@@ -54,7 +57,7 @@ function getHostedZone(dnsConfig, zoneName, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof callback, 'function');
getZoneByName(dnsConfig, zoneName, function (error, zone) {
if (error) return callback(error);
@@ -209,19 +212,41 @@ function del(dnsConfig, zoneName, subdomain, type, values, callback) {
});
}
function getChangeStatus(dnsConfig, changeId, callback) {
function verifyDnsConfig(dnsConfig, domain, ip, callback) {
assert.strictEqual(typeof dnsConfig, 'object');
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof domain, 'string');
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
var credentials = {
provider: dnsConfig.provider,
accessKeyId: dnsConfig.accessKeyId,
secretAccessKey: dnsConfig.secretAccessKey,
region: dnsConfig.region || 'us-east-1',
endpoint: dnsConfig.endpoint || null
};
var route53 = new AWS.Route53(getDnsCredentials(dnsConfig));
route53.getChange({ Id: changeId }, function (error, result) {
if (error && error.code === 'AccessDenied') return callback(new SubdomainError(SubdomainError.ACCESS_DENIED, error.message));
if (error) return callback(error);
if (process.env.BOX_ENV === 'test') return callback(null, credentials); // this shouldn't be here
callback(null, result.ChangeInfo.Status);
dns.resolveNs(domain, function (error, nameservers) {
if (error && error.code === 'ENOTFOUND') return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Unable to resolve nameservers for this domain'));
if (error || !nameservers) return callback(new SubdomainError(SubdomainError.BAD_FIELD, error ? error.message : 'Unable to get nameservers'));
getHostedZone(credentials, domain, function (error, zone) {
if (error) return callback(error);
if (!_.isEqual(zone.DelegationSet.NameServers.sort(), nameservers.sort())) {
debug('verifyDnsConfig: %j and %j do not match', nameservers, zone.DelegationSet.NameServers);
return callback(new SubdomainError(SubdomainError.BAD_FIELD, 'Domain nameservers are not set to Route53'));
}
upsert(credentials, domain, 'my', 'A', [ ip ], function (error, changeId) {
if (error) return callback(new SubdomainError(SubdomainError.INTERNAL_ERROR, error));
debug('verifyDnsConfig: A record added with change id %s', changeId);
callback(null, credentials);
});
});
});
}
+9 -7
@@ -4,8 +4,9 @@ exports = module.exports = waitForDns;
var assert = require('assert'),
async = require('async'),
debug = require('debug')('box:src/waitfordns'),
debug = require('debug')('box:dns/waitfordns'),
dns = require('native-dns'),
SubdomainError = require('../subdomains.js').SubdomainError,
tld = require('tldjs');
// the first arg to callback is not an error argument; this is required for async.every
@@ -50,10 +51,11 @@ function isChangeSynced(domain, value, type, nameserver, callback) {
debug('isChangeSynced: ns: %s (%s), name:%s Actual:%j Expecting:%s', nameserver, nsIp, domain, answer, value);
if ((type === 'A' && answer[0].address === value) ||
(type === 'CNAME' && answer[0].data === value)) {
return iteratorCallback(true); // done!
}
var match = answer.some(function (a) {
return ((type === 'A' && a.address === value) || (type === 'CNAME' && a.data === value));
});
if (match) return iteratorCallback(true); // done!
iteratorCallback(false);
});
@@ -79,12 +81,12 @@ function waitForDns(domain, value, type, options, callback) {
debug('waitForDNS: %s attempt %s.', domain, attempt++);
dns.resolveNs(zoneName, function (error, nameservers) {
if (error || !nameservers) return retryCallback(error || new Error('Unable to get nameservers'));
if (error || !nameservers) return retryCallback(error || new SubdomainError(SubdomainError.EXTERNAL_ERROR, 'Unable to get nameservers'));
async.every(nameservers, isChangeSynced.bind(null, domain, value, type), function (synced) {
debug('waitForIp: %s %s ns: %j', domain, synced ? 'done' : 'not done', nameservers);
retryCallback(synced ? null : new Error('ETRYAGAIN'));
retryCallback(synced ? null : new SubdomainError(SubdomainError.EXTERNAL_ERROR, 'ETRYAGAIN'));
});
});
}, function retryDone(error) {
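The change in this hunk replaces a first-answer-only comparison with `answer.some()`, so any record in the answer set can satisfy the check. That predicate can be sketched standalone (names are illustrative):

```javascript
'use strict';

// The A/CNAME matching logic from isChangeSynced, as a pure predicate.
// 'answer' is an array of native-dns answer objects: A records carry
// 'address', CNAME records carry 'data'.
function answerMatches(type, answer, value) {
    return answer.some(function (a) {
        return (type === 'A' && a.address === value) ||
               (type === 'CNAME' && a.data === value);
    });
}
```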
+9 -16
@@ -54,19 +54,11 @@ function debugApp(app, args) {
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function targetBoxVersion(manifest) {
if ('targetBoxVersion' in manifest) return manifest.targetBoxVersion;
if ('minBoxVersion' in manifest) return manifest.minBoxVersion;
return '99999.99999.99999'; // compatible with the latest version
}
function pullImage(manifest, callback) {
var docker = exports.connection;
docker.pull(manifest.dockerImage, function (err, stream) {
if (err) return callback(new Error('Error connecting to docker. statusCode: %s' + err.statusCode));
if (err) return callback(new Error('Error connecting to docker. statusCode: ' + err.statusCode));
// https://github.com/dotcloud/docker/issues/1074 says each status message
// is emitted as a chunk
@@ -91,7 +83,7 @@ function pullImage(manifest, callback) {
if (!data || !data.Config) return callback(new Error('Missing Config in image:' + JSON.stringify(data, null, 4)));
if (!data.Config.Entrypoint && !data.Config.Cmd) return callback(new Error('Only images with entry point are allowed'));
debug('This image of %s exposes ports: %j', manifest.id, data.Config.ExposedPorts);
if (data.Config.ExposedPorts) debug('This image of %s exposes ports: %j', manifest.id, data.Config.ExposedPorts);
callback(null);
});
@@ -135,7 +127,6 @@ function createSubcontainer(app, name, cmd, options, callback) {
isAppContainer = !cmd; // non app-containers are like scheduler containers
var manifest = app.manifest;
var developmentMode = !!manifest.developmentMode;
var exposedPorts = {}, dockerPortBindings = { };
var domain = app.altDomain || config.appFqdn(app.location);
var stdEnv = [
@@ -165,13 +156,15 @@ function createSubcontainer(app, name, cmd, options, callback) {
// first check db record, then manifest
var memoryLimit = app.memoryLimit || manifest.memoryLimit || 0;
if (developmentMode) {
// developerMode does not restrict memory usage
if (memoryLimit === -1) { // unrestricted
memoryLimit = 0;
} else if (memoryLimit === 0 || memoryLimit < constants.DEFAULT_MEMORY_LIMIT) { // ensure we never go below minimum (in case we change the default)
memoryLimit = constants.DEFAULT_MEMORY_LIMIT;
}
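The memory limit resolution above (db record first, then manifest; `-1` means unrestricted, anything below the default is clamped up) can be sketched as a pure function. The default value here is illustrative, not the actual `constants.DEFAULT_MEMORY_LIMIT`:

```javascript
'use strict';

// Sketch of the memory limit resolution above. The default below is an
// assumed placeholder, not Cloudron's actual constant.
var DEFAULT_MEMORY_LIMIT = 256 * 1024 * 1024;

function resolveMemoryLimit(appLimit, manifestLimit) {
    var memoryLimit = appLimit || manifestLimit || 0;
    if (memoryLimit === -1) return 0; // unrestricted
    // never go below the minimum (in case the default changes)
    if (memoryLimit === 0 || memoryLimit < DEFAULT_MEMORY_LIMIT) return DEFAULT_MEMORY_LIMIT;
    return memoryLimit;
}
```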
// apparmor is disabled on few servers
var enableSecurityOpt = config.CLOUDRON && safe(function () { return child_process.spawnSync('aa-enabled').status === 0; }, false);
addons.getEnvironment(app, function (error, addonEnv) {
if (error) return callback(new Error('Error getting addon environment : ' + error));
@@ -184,7 +177,7 @@ function createSubcontainer(app, name, cmd, options, callback) {
name: name, // used for filtering logs
Tty: isAppContainer,
Image: app.manifest.dockerImage,
Cmd: (isAppContainer && developmentMode) ? [ '/bin/bash', '-c', 'echo "Development mode. Use cloudron exec to debug. Sleeping" && sleep infinity' ] : cmd,
Cmd: (isAppContainer && app.debugMode && app.debugMode.cmd) ? app.debugMode.cmd : cmd,
Env: stdEnv.concat(addonEnv).concat(portEnv),
ExposedPorts: isAppContainer ? exposedPorts : { },
Volumes: { // see also ReadonlyRootfs
@@ -202,7 +195,7 @@ function createSubcontainer(app, name, cmd, options, callback) {
MemorySwap: memoryLimit, // Memory + Swap
PortBindings: isAppContainer ? dockerPortBindings : { },
PublishAllPorts: false,
ReadonlyRootfs: !developmentMode, // see also Volumes in startContainer
ReadonlyRootfs: app.debugMode ? !!app.debugMode.readonlyRootfs : true,
RestartPolicy: {
"Name": isAppContainer ? "always" : "no",
"MaximumRetryCount": 0
@@ -210,7 +203,7 @@ function createSubcontainer(app, name, cmd, options, callback) {
CpuShares: 512, // relative to 1024 for system processes
VolumesFrom: isAppContainer ? null : [ app.containerId + ":rw" ],
NetworkMode: isAppContainer ? 'cloudron' : ('container:' + app.containerId), // share network namespace with parent
SecurityOpt: config.CLOUDRON ? [ "apparmor:docker-cloudron-app" ] : null // profile available only on cloudron
SecurityOpt: enableSecurityOpt ? [ "apparmor:docker-cloudron-app" ] : null // profile available only on cloudron
}
};
containerOptions = _.extend(containerOptions, options);
+10 -3
@@ -105,10 +105,17 @@ function getAllPaged(action, search, page, perPage, callback) {
function cleanup(callback) {
callback = callback || NOOP_CALLBACK;
var d = new Date();
d.setDate(d.getDate() - 7); // 7 days ago
var d = new Date();
d.setDate(d.getDate() - 7); // 7 days ago
eventlogdb.delByCreationTime(d, function (error) {
// only cleanup high frequency events
var actions = [
exports.ACTION_USER_LOGIN,
exports.ACTION_BACKUP_START,
exports.ACTION_BACKUP_FINISH
];
eventlogdb.delByCreationTime(d, actions, function (error) {
if (error) return callback(new EventLogError(EventLogError.INTERNAL_ERROR, error));
callback(null);
+7 -3
@@ -49,7 +49,7 @@ function getAllPaged(action, search, page, perPage, callback) {
var query = 'SELECT ' + EVENTLOGS_FIELDS + ' FROM eventlog';
if (action || search) query += ' WHERE';
if (search) query += ' data LIKE ' + mysql.escape('%' + search + '%');
if (search) query += ' (source LIKE ' + mysql.escape('%' + search + '%') + ' OR data LIKE ' + mysql.escape('%' + search + '%') + ')';
if (action && search) query += ' AND ';
if (action) {
@@ -104,11 +104,15 @@ function clear(callback) {
});
}
function delByCreationTime(creationTime, callback) {
function delByCreationTime(creationTime, actions, callback) {
assert(util.isDate(creationTime));
assert(Array.isArray(actions));
assert.strictEqual(typeof callback, 'function');
database.query('DELETE FROM eventlog WHERE creationTime < ?', [ creationTime ], function (error) {
var query = 'DELETE FROM eventlog WHERE creationTime < ? ';
if (actions.length) query += ' AND ( ' + actions.map(function () { return 'action != ?'; }).join(' AND ') + ' ) ';
database.query(query, [ creationTime ].concat(actions), function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(error);
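The query assembly in `delByCreationTime` above appends one `action != ?` clause per excluded action. Isolating just the string-building step (hypothetical helper name) shows the generated SQL:

```javascript
'use strict';

// Sketch: the DELETE query string delByCreationTime assembles for a given
// actions list. buildDeleteQuery is an illustrative helper, not in the module.
function buildDeleteQuery(actions) {
    var query = 'DELETE FROM eventlog WHERE creationTime < ? ';
    if (actions.length) query += ' AND ( ' + actions.map(function () { return 'action != ?'; }).join(' AND ') + ' ) ';
    return query;
}
```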
+28 -2
@@ -12,15 +12,18 @@ exports = module.exports = {
getMembers: getMembers,
addMember: addMember,
removeMember: removeMember,
setMembers: setMembers,
isMember: isMember,
getGroups: getGroups,
setGroups: setGroups,
_clear: clear
_clear: clear,
_addDefaultGroups: addDefaultGroups
};
var assert = require('assert'),
constants = require('./constants.js'),
database = require('./database.js'),
DatabaseError = require('./databaseerror');
@@ -30,7 +33,7 @@ function get(groupId, callback) {
assert.strictEqual(typeof groupId, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + GROUPS_FIELDS + ' FROM groups WHERE id = ?', [ groupId ], function (error, result) {
database.query('SELECT ' + GROUPS_FIELDS + ' FROM groups WHERE id = ? ORDER BY name', [ groupId ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
@@ -145,6 +148,25 @@ function getMembers(groupId, callback) {
});
}
function setMembers(groupId, userIds, callback) {
assert.strictEqual(typeof groupId, 'string');
assert(Array.isArray(userIds));
assert.strictEqual(typeof callback, 'function');
var queries = [];
queries.push({ query: 'DELETE FROM groupMembers WHERE groupId = ?', args: [ groupId ] });
for (var i = 0; i < userIds.length; i++) {
queries.push({ query: 'INSERT INTO groupMembers (groupId, userId) VALUES (?, ?)', args: [ groupId, userIds[i] ] });
}
database.transaction(queries, function (error) {
if (error && error.code === 'ER_NO_REFERENCED_ROW_2') return callback(new DatabaseError(DatabaseError.NOT_FOUND));
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(error);
});
}
function getGroups(userId, callback) {
assert.strictEqual(typeof userId, 'string');
assert.strictEqual(typeof callback, 'function');
@@ -214,3 +236,7 @@ function isMember(groupId, userId, callback) {
callback(null, result.length !== 0);
});
}
function addDefaultGroups(callback) {
add(constants.ADMIN_GROUP_ID, 'admin', callback);
}
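The new `setMembers` replaces a group's membership atomically: one DELETE followed by one INSERT per user, all in a single transaction. The queries array it builds can be sketched on its own (hypothetical helper name):

```javascript
'use strict';

// Sketch: the transaction queries setMembers builds. buildSetMembersQueries
// is an illustrative helper, not part of groupdb.js.
function buildSetMembersQueries(groupId, userIds) {
    var queries = [ { query: 'DELETE FROM groupMembers WHERE groupId = ?', args: [ groupId ] } ];
    userIds.forEach(function (userId) {
        queries.push({ query: 'INSERT INTO groupMembers (groupId, userId) VALUES (?, ?)', args: [ groupId, userId ] });
    });
    return queries;
}
```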
+46 -15
@@ -1,5 +1,3 @@
/* jshint node:true */
'use strict';
exports = module.exports = {
@@ -14,19 +12,21 @@ exports = module.exports = {
getMembers: getMembers,
addMember: addMember,
setMembers: setMembers,
removeMember: removeMember,
isMember: isMember,
getGroups: getGroups,
setGroups: setGroups,
ADMIN_GROUP_ID: 'admin' // see db migration code and groupdb._clear
setGroups: setGroups
};
var assert = require('assert'),
constants = require('./constants.js'),
DatabaseError = require('./databaseerror.js'),
groupdb = require('./groupdb.js'),
util = require('util');
mailboxdb = require('./mailboxdb.js'),
util = require('util'),
uuid = require('node-uuid');
// http://dustinsenos.com/articles/customErrorsInNode
// http://code.google.com/p/v8/wiki/JavaScriptStackTraceApi
@@ -56,16 +56,20 @@ GroupError.BAD_FIELD = 'Field error';
GroupError.NOT_EMPTY = 'Not Empty';
GroupError.NOT_ALLOWED = 'Not Allowed';
// keep this in sync with validateUsername
function validateGroupname(name) {
assert.strictEqual(typeof name, 'string');
var RESERVED = [ 'admins', 'users' ]; // ldap code uses 'users' pseudo group
if (name.length <= 2) return new GroupError(GroupError.BAD_FIELD, 'name must be at least 2 chars');
if (name.length < 2) return new GroupError(GroupError.BAD_FIELD, 'name must be at least 2 chars');
if (name.length >= 200) return new GroupError(GroupError.BAD_FIELD, 'name too long');
if (!/^[A-Za-z0-9_-]*$/.test(name)) return new GroupError(GroupError.BAD_FIELD, 'name can only have A-Za-z0-9_-');
if (constants.RESERVED_NAMES.indexOf(name) !== -1) return new GroupError(GroupError.BAD_FIELD, 'name is reserved');
if (RESERVED.indexOf(name) !== -1) return new GroupError(GroupError.BAD_FIELD, 'name is reserved');
// +/- can be tricky in emails. also need to consider valid LDAP characters here (e.g '+' is reserved)
if (/[^a-zA-Z0-9.]/.test(name)) return new GroupError(GroupError.BAD_FIELD, 'name can only contain alphanumerics and dot');
// app emails are sent using the .app suffix
if (name.indexOf('.app') !== -1) return new GroupError(GroupError.BAD_FIELD, 'name pattern is reserved for apps');
return null;
}
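After this change, a valid group name is 2-199 characters of alphanumerics and dots, not a reserved name, and not using the `.app` suffix pattern reserved for app mailboxes. A sketch of the new rules (returning an error string instead of a `GroupError`):

```javascript
'use strict';

// Sketch of the new group name rules above; returns an error string or null.
var RESERVED = [ 'admins', 'users' ]; // ldap code uses 'users' pseudo group

function checkGroupName(name) {
    if (name.length < 2) return 'name must be at least 2 chars';
    if (name.length >= 200) return 'name too long';
    if (RESERVED.indexOf(name) !== -1) return 'name is reserved';
    // +/- can be tricky in emails; '+' is also reserved in LDAP
    if (/[^a-zA-Z0-9.]/.test(name)) return 'name can only contain alphanumerics and dot';
    // app emails are sent using the .app suffix
    if (name.indexOf('.app') !== -1) return 'name pattern is reserved for apps';
    return null;
}
```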
@@ -74,14 +78,23 @@ function create(name, callback) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof callback, 'function');
// we store names in lowercase
name = name.toLowerCase();
var error = validateGroupname(name);
if (error) return callback(error);
groupdb.add(name /* id */, name, function (error) {
var id = 'gid-' + uuid.v4();
mailboxdb.add(name, id /* owner */, mailboxdb.TYPE_GROUP, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(new GroupError(GroupError.ALREADY_EXISTS));
if (error) return callback(new GroupError(GroupError.INTERNAL_ERROR, error));
callback(null, { id: name, name: name });
groupdb.add(id, name, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(new GroupError(GroupError.ALREADY_EXISTS));
if (error) return callback(new GroupError(GroupError.INTERNAL_ERROR, error));
callback(null, { id: id, name: name });
});
});
}
@@ -90,13 +103,18 @@ function remove(id, callback) {
assert.strictEqual(typeof callback, 'function');
// never allow admin group to be deleted
if (id === exports.ADMIN_GROUP_ID) return callback(new GroupError(GroupError.NOT_ALLOWED));
if (id === constants.ADMIN_GROUP_ID) return callback(new GroupError(GroupError.NOT_ALLOWED));
groupdb.del(id, function (error) {
mailboxdb.delByOwnerId(id, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new GroupError(GroupError.NOT_FOUND));
if (error) return callback(new GroupError(GroupError.INTERNAL_ERROR, error));
callback(null);
groupdb.del(id, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new GroupError(GroupError.NOT_FOUND));
if (error) return callback(new GroupError(GroupError.INTERNAL_ERROR, error));
callback(null);
});
});
}
@@ -194,6 +212,19 @@ function addMember(groupId, userId, callback) {
});
}
function setMembers(groupId, userIds, callback) {
assert.strictEqual(typeof groupId, 'string');
assert(Array.isArray(userIds));
assert.strictEqual(typeof callback, 'function');
groupdb.setMembers(groupId, userIds, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new GroupError(GroupError.NOT_FOUND, 'Invalid group or user id'));
if (error) return callback(new GroupError(GroupError.INTERNAL_ERROR, error));
return callback(null);
});
}
function removeMember(groupId, userId, callback) {
assert.strictEqual(typeof groupId, 'string');
assert.strictEqual(typeof userId, 'string');
+4 -4
@@ -6,18 +6,18 @@
exports = module.exports = {
// a version bump means that all containers (apps and addons) are recreated
'version': 40,
'version': 45,
'baseImages': [ 'cloudron/base:0.8.1', 'cloudron/base:0.9.0' ],
'baseImages': [ 'cloudron/base:0.9.0' ],
// Note that if any of the databases include an upgrade, bump the infra version above
// This is because we upgrade using dumps instead of mysql_upgrade, pg_upgrade etc
'images': {
'mysql': { repo: 'cloudron/mysql', tag: 'cloudron/mysql:0.13.0' },
'postgresql': { repo: 'cloudron/postgresql', tag: 'cloudron/postgresql:0.13.0' },
'postgresql': { repo: 'cloudron/postgresql', tag: 'cloudron/postgresql:0.15.0' },
'mongodb': { repo: 'cloudron/mongodb', tag: 'cloudron/mongodb:0.11.0' },
'redis': { repo: 'cloudron/redis', tag: 'cloudron/redis:0.10.0' },
'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:0.20.0' },
'mail': { repo: 'cloudron/mail', tag: 'cloudron/mail:0.29.0' },
'graphite': { repo: 'cloudron/graphite', tag: 'cloudron/graphite:0.10.0' }
}
};
+38 -1
@@ -3,13 +3,16 @@
 var assert = require('assert'),
     async = require('async'),
     authcodedb = require('./authcodedb.js'),
+    backups = require('./backups.js'),
     debug = require('debug')('box:src/janitor'),
     docker = require('./docker.js').connection,
+    settings = require('./settings.js'),
     tokendb = require('./tokendb.js');

 exports = module.exports = {
     cleanupTokens: cleanupTokens,
-    cleanupDockerVolumes: cleanupDockerVolumes
+    cleanupDockerVolumes: cleanupDockerVolumes,
+    cleanupBackups: cleanupBackups
 };

 var NOOP_CALLBACK = function () { };
@@ -101,3 +104,37 @@ function cleanupDockerVolumes(callback) {
         }, callback);
     });
 }
+
+function cleanupBackups(callback) {
+    assert(!callback || typeof callback === 'function'); // callback is null when called from cronjob
+
+    callback = callback || NOOP_CALLBACK;
+
+    debug('Cleaning backups');
+
+    settings.getBackupConfig(function (error, backupConfig) {
+        if (error) return callback(error);
+
+        // nothing to do here
+        if (backupConfig.provider !== 'filesystem') return callback();
+
+        backups.getPaged(1, 1000, function (error, result) {
+            if (error) return callback(error);
+
+            // sort with latest backups first in the array and slice 2
+            var toCleanup = result.sort(function (a, b) { return b.creationTime.getTime() - a.creationTime.getTime(); }).slice(2);
+
+            debug('cleanupBackups: about to clean: ', toCleanup);
+
+            async.each(toCleanup, function (backup, callback) {
+                backups.removeBackup(backup.id, backup.dependsOn, function (error) {
+                    if (error) console.error(error);
+
+                    debug('cleanupBackups: %s, %s done', backup.id, backup.dependsOn.join(', '));
+                    callback();
+                });
+            }, callback);
+        });
+    });
+}
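The retention rule in `cleanupBackups` above sorts backups newest-first and removes everything past the two most recent. The same logic, extracted as a standalone sketch (function name and sample data are illustrative; `creationTime` is a Date, as in the diff):

```javascript
// Sketch of the retention rule: sort newest-first, keep two, return the
// rest as candidates for deletion. Copies the array to avoid mutating input.
function backupsToCleanup(allBackups) {
    return allBackups
        .slice()
        .sort(function (a, b) { return b.creationTime.getTime() - a.creationTime.getTime(); })
        .slice(2);
}

var sampleBackups = [
    { id: 'a', creationTime: new Date('2017-01-28') },
    { id: 'b', creationTime: new Date('2017-01-30') },
    { id: 'c', creationTime: new Date('2017-01-29') },
    { id: 'd', creationTime: new Date('2017-01-27') }
];

// keeps 'b' (Jan 30) and 'c' (Jan 29); 'a' and 'd' are cleaned up
console.log(backupsToCleanup(sampleBackups).map(function (b) { return b.id; })); // [ 'a', 'd' ]
```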
+138 -56
@@ -8,27 +8,20 @@ exports = module.exports = {
 var assert = require('assert'),
     apps = require('./apps.js'),
     config = require('./config.js'),
+    DatabaseError = require('./databaseerror.js'),
     debug = require('debug')('box:ldap'),
     eventlog = require('./eventlog.js'),
     user = require('./user.js'),
     UserError = user.UserError,
     ldap = require('ldapjs'),
-    mailboxes = require('./mailboxes.js'),
-    MailboxError = mailboxes.MailboxError;
+    mailboxdb = require('./mailboxdb.js'),
+    safe = require('safetydance'),
+    util = require('util');

 var gServer = null;

 var NOOP = function () {};

-var gLogger = {
-    trace: NOOP,
-    debug: NOOP,
-    info: debug,
-    warn: debug,
-    error: console.error,
-    fatal: console.error
-};

 var GROUP_USERS_DN = 'cn=users,ou=groups,dc=cloudron';
 var GROUP_ADMINS_DN = 'cn=admins,ou=groups,dc=cloudron';
@@ -71,8 +64,7 @@ function userSearch(req, res, next) {
             cn: entry.id,
             uid: entry.id,
             mail: entry.email,
-            // TODO: check mailboxes before we send this
-            mailAlternateAddress: entry.username + '@' + config.fqdn(),
+            mailAlternateAddress: entry.alternateEmail,
             displayname: displayName,
             givenName: firstName,
             username: entry.username,
@@ -86,7 +78,8 @@ function userSearch(req, res, next) {
         if (lastName.length !== 0) obj.attributes.sn = lastName;

         // ensure all filter values are also lowercase
-        var lowerCaseFilter = ldap.parseFilter(req.filter.toString().toLowerCase());
+        var lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
+        if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));

         if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
             res.send(obj);
@@ -125,7 +118,8 @@ function groupSearch(req, res, next) {
     };

     // ensure all filter values are also lowercase
-    var lowerCaseFilter = ldap.parseFilter(req.filter.toString().toLowerCase());
+    var lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
+    if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));

     if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
         res.send(obj);
@@ -139,31 +133,95 @@ function groupSearch(req, res, next) {
 function mailboxSearch(req, res, next) {
     debug('mailbox search: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);

-    mailboxes.getAll(function (error, result) {
+    if (!req.dn.rdns[0].attrs.cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+
+    var name = req.dn.rdns[0].attrs.cn.value.toLowerCase();
+
+    // allow login via email
+    var parts = name.split('@');
+    if (parts[1] === config.fqdn()) {
+        name = parts[0];
+    }
+
+    mailboxdb.getMailbox(name, function (error, mailbox) {
+        if (error && error.reason === DatabaseError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
         if (error) return next(new ldap.OperationsError(error.toString()));

-        result.forEach(function (entry) {
-            var dn = ldap.parseDN('cn=' + entry.name + ',ou=mailboxes,dc=cloudron');
-
-            // TODO: send aliases
-            var obj = {
-                dn: dn.toString(),
-                attributes: {
-                    objectclass: ['mailbox'],
-                    objectcategory: 'mailbox',
-                    cn: entry.name,
-                    uid: entry.name,
-                    mail: entry.name + '@' + config.fqdn()
-                }
-            };
-
-            // ensure all filter values are also lowercase
-            var lowerCaseFilter = ldap.parseFilter(req.filter.toString().toLowerCase());
-
-            if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && lowerCaseFilter.matches(obj.attributes)) {
-                res.send(obj);
-            }
-        });
+        var obj = {
+            dn: req.dn.toString(),
+            attributes: {
+                objectclass: ['mailbox'],
+                objectcategory: 'mailbox',
+                cn: mailbox.name,
+                uid: mailbox.name,
+                mail: mailbox.name + '@' + config.fqdn(),
+                ownerType: mailbox.ownerType
+            }
+        };
+
+        // ensure all filter values are also lowercase
+        var lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
+        if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));
+
+        if (lowerCaseFilter.matches(obj.attributes)) res.send(obj);

         res.end();
     });
 }
+
+function mailAliasSearch(req, res, next) {
+    debug('mail alias get: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
+
+    if (!req.dn.rdns[0].attrs.cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+
+    mailboxdb.getAlias(req.dn.rdns[0].attrs.cn.value.toLowerCase(), function (error, alias) {
+        if (error && error.reason === DatabaseError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+        if (error) return next(new ldap.OperationsError(error.toString()));
+
+        // https://wiki.debian.org/LDAP/MigrationTools/Examples
+        // https://docs.oracle.com/cd/E19455-01/806-5580/6jej518pp/index.html
+        var obj = {
+            dn: req.dn.toString(),
+            attributes: {
+                objectclass: ['nisMailAlias'],
+                objectcategory: 'nisMailAlias',
+                cn: alias.name,
+                rfc822MailMember: alias.aliasTarget
+            }
+        };
+
+        // ensure all filter values are also lowercase
+        var lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
+        if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));
+
+        if (lowerCaseFilter.matches(obj.attributes)) res.send(obj);
+
+        res.end();
+    });
+}
+
+function mailingListSearch(req, res, next) {
+    debug('mailing list get: dn %s, scope %s, filter %s (from %s)', req.dn.toString(), req.scope, req.filter.toString(), req.connection.ldap.id);
+
+    if (!req.dn.rdns[0].attrs.cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+
+    mailboxdb.getGroup(req.dn.rdns[0].attrs.cn.value.toLowerCase(), function (error, group) {
+        if (error && error.reason === DatabaseError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+        if (error) return next(new ldap.OperationsError(error.toString()));

+        // http://ldapwiki.willeke.com/wiki/Original%20Mailgroup%20Schema%20From%20Netscape
+        var obj = {
+            dn: req.dn.toString(),
+            attributes: {
+                objectclass: ['mailGroup'],
+                objectcategory: 'mailGroup',
+                cn: group.name,
+                mail: group.name + '@' + config.fqdn(),
+                mgrpRFC822MailMember: group.members
+            }
+        };
+
+        // ensure all filter values are also lowercase
+        var lowerCaseFilter = safe(function () { return ldap.parseFilter(req.filter.toString().toLowerCase()); }, null);
+        if (!lowerCaseFilter) return next(new ldap.OperationsError(safe.error.toString()));
+
+        if (lowerCaseFilter.matches(obj.attributes)) res.send(obj);
+
+        res.end();
+    });
+}
@@ -173,21 +231,15 @@ function authenticateUser(req, res, next) {
     debug('user bind: %s (from %s)', req.dn.toString(), req.connection.ldap.id);

     // extract the common name which might have different attribute names
-    var attributeName = Object.keys(req.dn.rdns[0])[0];
-    var commonName = req.dn.rdns[0][attributeName];
+    var attributeName = Object.keys(req.dn.rdns[0].attrs)[0];
+    var commonName = req.dn.rdns[0].attrs[attributeName].value;
     if (!commonName) return next(new ldap.NoSuchObjectError(req.dn.toString()));

     var api;
     if (attributeName === 'mail') {
         api = user.verifyWithEmail;
     } else if (commonName.indexOf('@') !== -1) { // if mail is specified, enforce mail check
-        var parts = commonName.split('@');
-        if (parts[1] === config.fqdn()) { // internal email, verify with username
-            commonName = parts[0];
-            api = user.verifyWithUsername;
-        } else { // external email
-            api = user.verifyWithEmail;
-        }
+        api = user.verifyWithEmail;
     } else if (commonName.indexOf('uid-') === 0) {
         api = user.verify;
     } else {
@@ -224,31 +276,61 @@ function authorizeUserForApp(req, res, next) {
     });
 }

-function authorizeUserForMailbox(req, res, next) {
-    assert(req.user);
+function authenticateMailbox(req, res, next) {
+    if (!req.dn.rdns[0].attrs.cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));

-    // We simply authorize the user to access a mailbox by his own name
-    mailboxes.get(req.user.username, function (error) {
-        if (error && error.reason === MailboxError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+    var name = req.dn.rdns[0].attrs.cn.value.toLowerCase();
+
+    // allow login via email
+    var parts = name.split('@');
+    if (parts[1] === config.fqdn()) {
+        name = parts[0];
+    }
+
+    mailboxdb.getMailbox(name, function (error, mailbox) {
+        if (error && error.reason === DatabaseError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
         if (error) return next(new ldap.OperationsError(error.message));

-        eventlog.add(eventlog.ACTION_USER_LOGIN, { authType: 'ldap', mailboxId: req.user.username }, { userId: req.user.username });
+        if (mailbox.ownerType === mailboxdb.TYPE_APP) {
+            if (req.credentials !== mailbox.ownerId) return next(new ldap.NoSuchObjectError(req.dn.toString()));
+
+            eventlog.add(eventlog.ACTION_USER_LOGIN, { authType: 'ldap', mailboxId: name }, { appId: mailbox.ownerId });
+            return res.end();
+        }

-        res.end();
+        assert.strictEqual(mailbox.ownerType, mailboxdb.TYPE_USER);
+
+        authenticateUser(req, res, function (error) {
+            if (error) return next(error);
+
+            eventlog.add(eventlog.ACTION_USER_LOGIN, { authType: 'ldap', mailboxId: name }, { userId: req.user.username });
+            res.end();
+        });
     });
 }

 function start(callback) {
     assert.strictEqual(typeof callback, 'function');

-    gServer = ldap.createServer({ log: gLogger });
+    var logger = {
+        trace: NOOP,
+        debug: NOOP,
+        info: debug,
+        warn: debug,
+        error: console.error,
+        fatal: console.error
+    };
+
+    gServer = ldap.createServer({ log: logger });

     gServer.search('ou=users,dc=cloudron', userSearch);
     gServer.search('ou=groups,dc=cloudron', groupSearch);
     gServer.bind('ou=users,dc=cloudron', authenticateUser, authorizeUserForApp);

     // http://www.ietf.org/proceedings/43/I-D/draft-srivastava-ldap-mail-00.txt
     gServer.search('ou=mailboxes,dc=cloudron', mailboxSearch);
-    gServer.bind('ou=mailboxes,dc=cloudron', authenticateUser, authorizeUserForMailbox);
+    gServer.search('ou=mailaliases,dc=cloudron', mailAliasSearch);
+    gServer.search('ou=mailinglists,dc=cloudron', mailingListSearch);
+    gServer.bind('ou=mailboxes,dc=cloudron', authenticateMailbox);

     // this is the bind for addons (after bind, they might search and authenticate)
     gServer.bind('ou=addons,dc=cloudron', function(req, res, next) {
@@ -269,7 +351,7 @@ function start(callback) {
 function stop(callback) {
     assert.strictEqual(typeof callback, 'function');

-    gServer.close();
+    if (gServer) gServer.close();

     callback();
 }
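Several hunks above wrap `ldap.parseFilter` in safetydance's `safe()` so that a malformed client filter yields `null` (with the exception recorded on `safe.error`) instead of throwing and taking down the LDAP server. The pattern can be sketched without the dependency; names here are illustrative, and `JSON.parse` stands in for any parser that may throw:

```javascript
// Sketch of the safe-call pattern: run a function that may throw and
// return a fallback value instead of propagating the exception.
function safeCall(fn, fallback) {
    try {
        return fn();
    } catch (error) {
        safeCall.error = error; // mimic safetydance's safe.error
        return fallback;
    }
}

var parsed = safeCall(function () { return JSON.parse('{"cn":"admin"}'); }, null);
console.log(parsed.cn); // admin

var bad = safeCall(function () { return JSON.parse('not json'); }, null);
console.log(bad); // null (instead of an uncaught SyntaxError)
```

The caller then checks for the fallback value and reports an operational error, exactly as the `if (!lowerCaseFilter) return next(new ldap.OperationsError(...))` lines do in the diff.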
Executable → Regular
+42 -3
@@ -12,26 +12,65 @@ var assert = require('assert'),
 var COLLECT_LOGS_CMD = path.join(__dirname, 'scripts/collectlogs.sh');
 var CRASH_LOG_TIMESTAMP_OFFSET = 1000 * 60 * 60; // 60 min
 var CRASH_LOG_TIMESTAMP_FILE = '/tmp/crashlog.timestamp';
+var CRASH_LOG_STASH_FILE = '/tmp/crashlog';
+var CRASH_LOG_FILE_LIMIT = 2 * 1024 * 1024; // 2mb

 function collectLogs(unitName, callback) {
     assert.strictEqual(typeof unitName, 'string');
     assert.strictEqual(typeof callback, 'function');

     var logs = safe.child_process.execSync('sudo ' + COLLECT_LOGS_CMD + ' ' + unitName, { encoding: 'utf8' });
     logs = logs + '\n\n=====================================\n\n';
     callback(null, logs);
 }

+function stashLogs(logs) {
+    var stat = safe.fs.statSync(CRASH_LOG_STASH_FILE);
+    if (stat && (stat.size > CRASH_LOG_FILE_LIMIT)) {
+        console.error('Dropping logs since crash file has become too big');
+        return;
+    }
+
+    // append here
+    safe.fs.writeFileSync(CRASH_LOG_STASH_FILE, logs, { flag: 'a' });
+}
+
 function sendFailureLogs(processName, options) {
     assert.strictEqual(typeof processName, 'string');
     assert.strictEqual(typeof options, 'object');

-    collectLogs(options.unit || processName, function (error, result) {
+    collectLogs(options.unit || processName, function (error, newLogs) {
         if (error) {
             console.error('Failed to collect logs.', error);
-            result = util.format('Failed to collect logs.', error);
+            newLogs = util.format('Failed to collect logs.', error);
         }

         console.log('Sending failure logs for', processName);

-        mailer.unexpectedExit(processName, result);
+        var timestamp = safe.fs.readFileSync(CRASH_LOG_TIMESTAMP_FILE, 'utf8');
+
+        // check if we already sent a mail in the last CRASH_LOG_TIME_OFFSET window
+        if (timestamp && (parseInt(timestamp) + CRASH_LOG_TIMESTAMP_OFFSET) > Date.now()) {
+            console.log('Crash log already sent within window. Stashing logs.');
+            return stashLogs(newLogs);
+        }
+
+        var stashedLogs = safe.fs.readFileSync(CRASH_LOG_STASH_FILE, 'utf8');
+        var compiledLogs = stashedLogs ? (stashedLogs + newLogs) : newLogs;
+        var mailSubject = processName + (stashedLogs ? ' and others' : '');
+
+        mailer.unexpectedExit(mailSubject, compiledLogs, function (error) {
+            if (error) {
+                console.log('Error sending crashlog. Stashing logs.');
+                return stashLogs(newLogs);
+            }
+
+            // write the new timestamp file and delete stash file
+            safe.fs.writeFileSync(CRASH_LOG_TIMESTAMP_FILE, String(Date.now()));
+            safe.fs.unlinkSync(CRASH_LOG_STASH_FILE);
+        });
     });
 }
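The reworked `sendFailureLogs` above sends at most one crash mail per 60-minute window; crashes inside the window are stashed and batched into the next mail. The window check can be sketched in isolation (illustrative helper; the comparison matches the `parseInt(timestamp) + CRASH_LOG_TIMESTAMP_OFFSET > Date.now()` guard in the diff):

```javascript
// Sketch of the rate-limit check: a mail goes out only if no timestamp
// file exists or the last send is older than the window.
var WINDOW_MS = 1000 * 60 * 60; // 60 min, matching CRASH_LOG_TIMESTAMP_OFFSET

function shouldSendMail(lastSentTimestamp, now) {
    // lastSentTimestamp mimics the file contents: a stringified epoch, or null
    if (!lastSentTimestamp) return true;
    return (parseInt(lastSentTimestamp, 10) + WINDOW_MS) <= now;
}

var now = Date.now();
console.log(shouldSendMail(null, now));                         // true (no prior mail)
console.log(shouldSendMail(String(now - 10 * 60 * 1000), now)); // false (sent 10 min ago)
console.log(shouldSendMail(String(now - 2 * WINDOW_MS), now));  // true (sent 2h ago)
```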
+10 -7
@@ -1,18 +1,21 @@
 <%if (format === 'text') { %>
-Dear Admin,
+Dear Cloudron Admin,

-The application titled '<%= title %>' that you installed at <%= appFqdn %>
-is not responding.
+The application '<%= title %>' installed at <%= appFqdn %> is not responding.

 This is most likely a problem in the application.
+You are receiving this email because you are an Admin of the Cloudron at <%= fqdn %>.

 To resolve this, you can try the following:
 * Restart the app in the app configuration dialog
 * Restore the app to the latest backup
 * Contact us via support@cloudron.io or https://chat.cloudron.io

-Thank you,
-Application WatchDog
+Powered by https://cloudron.io
+Sent at: <%= new Date().toUTCString() %>
 <% } else { %>
 <% } %>
+6 -5
@@ -1,18 +1,19 @@
 <%if (format === 'text') { %>
-Dear Admin,
+Dear Cloudron Admin,

-A new version <%= updateInfo.manifest.version %> of the app '<%= app.manifest.title %>' installed at <%= app.fqdn %> is available!
+a new version <%= updateInfo.manifest.version %> of the app '<%= app.manifest.title %>' installed at <%= app.fqdn %> is available!

 The app will update automatically tonight. Alternately, update immediately at <%= webadminUrl %>.

 Changes:
 <%= updateInfo.manifest.changelog %>

-Thank you,
-your Cloudron
+Powered by https://cloudron.io
+Sent at: <%= new Date().toUTCString() %>
 <% } else { %>
 <% } %>
+20
@@ -0,0 +1,20 @@
+<%if (format === 'text') { %>
+Dear Cloudron Admin,
+
+creating a backup of <%= fqdn %> has failed.
+
+-------------------------------------
+<%- message %>
+-------------------------------------
+
+Powered by https://cloudron.io
+Sent at: <%= new Date().toUTCString() %>
+<% } else { %>
+<% } %>
+37 -2
@@ -1,8 +1,8 @@
 <%if (format === 'text') { %>
-Dear Admin,
+Dear <%= cloudronName %> Admin,

-Version <%= newBoxVersion %> of Cloudron <%= fqdn %> is now available!
+Version <%= newBoxVersion %> for Cloudron <%= fqdn %> is now available!

 Your Cloudron will update automatically tonight. Alternately, update immediately at <%= webadminUrl %>.

@@ -16,5 +16,40 @@ your Cloudron
 <% } else { %>
+<center>
+    <img src="<%= cloudronAvatarUrl %>" width="128px" height="128px"/>
+
+    <h3>Dear <%= cloudronName %> Admin,</h3>
+
+    <div style="width: 650px; text-align: left;">
+        <p>
+            Version <b><%= newBoxVersion %></b> for Cloudron <%= fqdn %> is now available!
+        </p>
+        <p>
+            Your Cloudron will update automatically tonight.<br/>
+            Alternately, update immediately <a href="<%= webadminUrl %>">here</a>.
+        </p>
+
+        <h5>Changelog:</h5>
+        <ul>
+            <% for (var i = 0; i < changelogHTML.length; i++) { %>
+                <li><%- changelogHTML[i] %></li>
+            <% } %>
+        </ul>
+        <br/>
+        <br/>
+    </div>
+
+    <div style="font-size: 10px; color: #333333; background: #ffffff;">
+        Powered by <a href="https://cloudron.io">Cloudron</a>.
+    </div>
+</center>
+<img src="https://analytics.cloudron.io/piwik.php?idsite=2&rec=1&e_c=CloudronEmail&e_a=update" style="border:0" alt="" />
 <% } %>
@@ -1,13 +1,20 @@
 <%if (format === 'text') { %>
-Dear Cloudron Team,
-
-<%= domain %> was not renewed.
+Dear Cloudron Admin,
+
+The certificate for <%= domain %> could not be renewed.

 -------------------------------------
 <%- message %>
-
-Thank you,
-Your Cloudron
+-------------------------------------
+
+Powered by https://cloudron.io
+Sent at: <%= new Date().toUTCString() %>
 <% } else { %>
 <% } %>

Some files were not shown because too many files have changed in this diff.