Compare commits


408 Commits

Author SHA1 Message Date
Girish Ramakrishnan c95778178f make rootfs readonly based on targetBoxVersion 2015-10-08 11:48:33 -07:00
Girish Ramakrishnan 04870313b7 Launch apps with readonly rootfs
We explicitly mark /tmp, /run and /var/log as writable volumes.
Docker creates such volumes in its own volumes directory. Note
that these volumes are separate from host binds (/app/data).

When removing the container the docker created volumes are
removed (but not host binds).

Fixes #196
2015-10-08 11:33:17 -07:00
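The readonly-rootfs setup described above corresponds roughly to container create options like the following dockerode-style sketch (helper name and host path are hypothetical, not the actual Cloudron code):

```javascript
// Sketch of the options behind the readonly rootfs commits: the rootfs is
// readonly, /tmp, /run and /var/log become docker-managed writable volumes
// (removed with the container), while /app/data is a host bind that survives.
function containerOptions(image) {
    return {
        Image: image,
        Volumes: { '/tmp': {}, '/run': {}, '/var/log': {} }, // writable volumes
        HostConfig: {
            ReadonlyRootfs: true, // everything else is readonly
            Binds: [ '/home/yellowtent/appdata:/app/data:rw' ] // hypothetical host path
        }
    };
}
```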
Girish Ramakrishnan 6ca040149c run addons as readonly 2015-10-08 11:07:28 -07:00
Girish Ramakrishnan e487b9d46b update mail image 2015-10-08 11:06:29 -07:00
Girish Ramakrishnan 1375e16ad2 mongodb: readonly rootfs 2015-10-08 10:24:15 -07:00
Girish Ramakrishnan 312f1f0085 mysql: readonly rootfs 2015-10-08 09:43:05 -07:00
Girish Ramakrishnan 721900fc47 postgresql: readonly rootfs 2015-10-08 09:20:25 -07:00
Girish Ramakrishnan 2d815a92a3 redis: use readonly rootfs 2015-10-08 09:00:43 -07:00
Girish Ramakrishnan 1c192b7c11 pass options param in setup call 2015-10-08 02:08:27 -07:00
Girish Ramakrishnan 4a887336bc Do not send app down mails for dev mode apps
Fixes #501
2015-10-07 18:46:48 -07:00
Girish Ramakrishnan 8f6521f942 pass addon options to all functions 2015-10-07 16:10:08 -07:00
Girish Ramakrishnan fbdfaa4dc7 rename setup and teardown functions of oauth addon 2015-10-07 15:55:57 -07:00
Girish Ramakrishnan bf4290db3e remove token addon, it's a relic of the past 2015-10-07 15:44:55 -07:00
Johannes Zellner 94ad633128 Also unset the returnTo after login 2015-10-01 16:26:17 +02:00
Johannes Zellner c552917991 Reset the target url after oauth login
This is required for the cloudron button to work for users
which are not logged in
2015-10-01 16:16:29 +02:00
Johannes Zellner a7ee8c853e Keep checkInstall in sync 2015-09-30 16:12:51 +02:00
Girish Ramakrishnan 29e4879451 fix test image version 2015-09-29 20:22:38 -07:00
Girish Ramakrishnan 8b92344808 redirect stderr 2015-09-29 19:23:39 -07:00
Girish Ramakrishnan 0877cec2e6 Fix EE leak warning 2015-09-29 14:40:23 -07:00
Girish Ramakrishnan b1ca577be7 use newer test image that dies immediately on stop/term 2015-09-29 14:33:07 -07:00
Girish Ramakrishnan 9b484f5ac9 new version of mysql prints error with -p 2015-09-29 14:13:58 -07:00
Girish Ramakrishnan b6a9fd81da refactor our test docker image details 2015-09-29 13:59:17 -07:00
Girish Ramakrishnan f19113f88e rename test image under cloudron/ 2015-09-29 12:52:54 -07:00
Girish Ramakrishnan 3837bee51f retry pulling image
fixes #497
2015-09-29 12:47:03 -07:00
Girish Ramakrishnan 89c3296632 debug the status code as well 2015-09-28 23:18:50 -07:00
Girish Ramakrishnan db55f0696e stringify object when appending to string 2015-09-28 23:10:09 -07:00
Girish Ramakrishnan 03d4ae9058 new base image 0.4.0 2015-09-28 19:33:58 -07:00
Girish Ramakrishnan f8b41b703c Use fqdn to generate domain name of txt records 2015-09-28 17:20:59 -07:00
Girish Ramakrishnan 2a989e455c Ensure TXT records are added as dotted domains
Fixes #498
2015-09-28 16:35:58 -07:00
Girish Ramakrishnan cd24decca0 Send dns status requests in series
And abort status checking after the first one fails. Otherwise, this
bombards the appstore unnecessarily and checks the status of other
records needlessly.
2015-09-28 16:23:39 -07:00
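The series-with-early-abort behaviour described above can be sketched with plain callbacks (function names are hypothetical, not the actual Cloudron code):

```javascript
// Check each dns record in series; stop at the first failure so we neither
// bombard the appstore nor keep checking records that cannot matter anymore.
function checkDnsStatusInSeries(records, checkOne, callback) {
    var i = 0;
    (function next(error) {
        if (error) return callback(error); // abort on first failure
        if (i === records.length) return callback(null); // all in sync
        checkOne(records[i++], next);
    })();
}
```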
Girish Ramakrishnan f39842a001 ldap: allow non-anonymous searches
Add LDAP_BIND_DN and LDAP_BIND_PASSWORD that allow
apps to bind before a search. There appear to be two kinds of
ldap flows:

1. App simply binds using cn=<username>,$LDAP_USERS_BASE_DN. This
   works swimmingly today.

2. App searches the username under a "bind_dn" using some admin
   credentials. It takes the result and uses the first dn in the
   result as the user dn. It then binds as step 1.

This commit tries to help out the case 2) apps. These apps really
insist on having some credentials for searching.
2015-09-25 21:28:47 -07:00
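The two flows above can be sketched as follows; the helper names and the synchronous client shape are hypothetical placeholders, not a real ldap client API:

```javascript
// Flow 1: the app constructs the user dn itself and binds directly.
function userDn(username, usersBaseDn) {
    return 'cn=' + username + ',' + usersBaseDn;
}

// Flow 2 (what this commit enables): bind with LDAP_BIND_DN and
// LDAP_BIND_PASSWORD, search for the username, then use the first dn
// in the result as the user dn (the app re-binds as that dn afterwards).
function searchThenBind(client, username, baseDn) {
    client.bind(process.env.LDAP_BIND_DN, process.env.LDAP_BIND_PASSWORD);
    var entries = client.search(baseDn, '(cn=' + username + ')');
    return entries[0].dn;
}
```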
Girish Ramakrishnan 2a39526a4c Remove old app ids from updatechecker state
Fixes #472
2015-09-22 22:46:14 -07:00
Girish Ramakrishnan ded5d4c98b debug message when notification is skipped 2015-09-22 22:41:42 -07:00
Girish Ramakrishnan a0ca59c3f2 Fix typo 2015-09-22 20:22:17 -07:00
Girish Ramakrishnan 53cfc49807 Save version instead of boolean so we get notified when version changes
part of #472
2015-09-22 16:11:15 -07:00
Girish Ramakrishnan 942eb579e4 save/restore notification state of updatechecker
part of #472
2015-09-22 16:11:04 -07:00
Girish Ramakrishnan 5819cfe412 Fix progress message 2015-09-22 13:02:09 -07:00
Johannes Zellner 5cb62ca412 Remove start/stop buttons in webadmin
Fixes #495
2015-09-22 22:00:42 +02:00
Johannes Zellner df10c245de app.js is no more 2015-09-22 22:00:42 +02:00
Girish Ramakrishnan 4a804dc52b Do a complete backup for updates
The backup cron job ensures backups every 4 hours based on the 'box'
backup listing. If we do only a 'box' backup during an update, this
cron job skips doing a backup and thus the apps are not backed up.

This results in the janitor on the CaaS side complaining that the
app backups are too old.

Since we don't stop apps anymore during updates, it makes sense
to simply back up everything for updates as well. This is probably
what the user wants anyway.
2015-09-22 12:51:58 -07:00
Girish Ramakrishnan ed2f25a998 better debugs 2015-09-21 16:02:58 -07:00
Girish Ramakrishnan 7510c9fe29 Fix typo 2015-09-21 15:57:06 -07:00
Girish Ramakrishnan 78a1d53728 copy old backup as failed/errored apps
This ensures that
a) we don't get emails from janitor about bad app backups
b) that the backups are persisted over the s3 lifecycle

Fixes #493
2015-09-21 15:03:10 -07:00
Girish Ramakrishnan e9b078cd58 add backups.copyLastBackup 2015-09-21 14:14:43 -07:00
Girish Ramakrishnan dd8b928684 aws: add copyObject 2015-09-21 14:02:00 -07:00
Girish Ramakrishnan 185b574bdc Add custom apparmor profile for cloudron apps
Docker generates an apparmor profile on the fly under /etc/apparmor.d/docker.
This profile gets overwritten on every docker daemon start.

This profile allows processes to ptrace themselves. This is required by
circus (python process manager) for reasons unknown to me. It floods the logs
with
    audit[7623]: <audit-1400> apparmor="DENIED" operation="ptrace" profile="docker-default" pid=7623 comm="python3.4" requested_mask="trace" denied_mask="trace" peer="docker-default"

This is easily tested using:
    docker run -it cloudron/base:0.3.3 /bin/bash
        a) now do ps
        b) journalctl should show error log as above

    docker run --security-opt=apparmor:docker-cloudron-app -it cloudron/base:0.3.3 /bin/bash
        a) now do ps
        b) no error!

Note that despite this, the process may not have the ability to ptrace since it
does not have CAP_SYS_PTRACE. Also, the value given to security-opt is the profile
name (inside the apparmor config file) and not the filename.

References:
    https://groups.google.com/forum/#!topic/docker-user/xvxpaceTCyw
    https://github.com/docker/docker/issues/7276
    https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1320869

This is an infra update because we need to recreate containers to get the right profile.

Fixes #492
2015-09-21 11:01:44 -07:00
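A profile like the one this commit adds might look roughly as below; this is a hedged sketch of apparmor profile syntax, not the actual file shipped by the commit:

```
# Hypothetical sketch only. Installed under /etc/apparmor.d/, loaded with
# apparmor_parser, and referenced by profile name (not filename) via
# --security-opt=apparmor:docker-cloudron-app.
#include <tunables/global>

profile docker-cloudron-app flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network,
  capability,
  file,

  # the deviation from docker-default this commit needs: let processes
  # in the container ptrace their peers (circus does this)
  ptrace (trace,read) peer=docker-cloudron-app,
}
```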
Girish Ramakrishnan a89726a8c6 Add custom debug.formatArgs to remove timestamp prefix in logs
Fixes #490

See also:
https://github.com/visionmedia/debug/issues/216
2015-09-21 09:05:14 -07:00
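The idea of the formatArgs override is roughly the following; this is a standalone sketch of the hook's job, not the exact debug-module API surface:

```javascript
// Sketch: a formatArgs-style hook that prefixes the namespace but, unlike
// debug's default, omits the timestamp (journald already timestamps lines).
function formatArgs(namespace, args) {
    args[0] = namespace + ' ' + args[0];
    return args;
}
```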
Girish Ramakrishnan c80aca27e6 remove unnecessary supererror call 2015-09-21 09:04:16 -07:00
Girish Ramakrishnan 029acab333 use correct timezone in updater
fixes #491
2015-09-18 14:46:44 -07:00
Girish Ramakrishnan 4f9f10e130 timezone detection is intentionally based on browser location/ip and not cloudron region 2015-09-18 13:40:22 -07:00
Girish Ramakrishnan 9ba11d2e14 print body on failure 2015-09-18 12:03:48 -07:00
Girish Ramakrishnan 23a5a1f79f timezone is already determined automatically during activation 2015-09-18 12:02:36 -07:00
Girish Ramakrishnan e8dc617d40 print tz for debugging 2015-09-18 10:51:52 -07:00
Girish Ramakrishnan d56794e846 clear backup progress when initiating backup
this ensures that tools can do:
1. backup
2. wait_for_backup

without the synchronous clear, we might get the progress state of
an earlier backup.
2015-09-17 21:17:59 -07:00
Girish Ramakrishnan 2663ec7da0 cloudron.backup does not wait for backup to complete 2015-09-17 16:35:59 -07:00
Girish Ramakrishnan eec4ae98cd add comment on the purpose of the internal server 2015-09-17 16:27:46 -07:00
Girish Ramakrishnan c31a0f4e09 Store dates as iso strings in database
ideally, the database column type should be TIMESTAMP
2015-09-17 13:51:55 -07:00
Girish Ramakrishnan 739db23514 Use the default timezone in settings
Fixes #485
2015-09-16 16:36:08 -07:00
Girish Ramakrishnan 8598fb444b store timezone in config.js (part of provision data) 2015-09-16 15:54:56 -07:00
Girish Ramakrishnan 0b630ff504 Remove debug that is flooding the logs 2015-09-16 10:50:15 -07:00
Girish Ramakrishnan 84169dea3d Do not set process.env.NODE_TLS_REJECT_UNAUTHORIZED
Doing so affects all https requests, which is dangerous.

We have these options to solve this:
1. Use superagent.ca(). Appstore already provides wildcard certs
   for dev and staging signed with appstore_ca. But we then need to
   send the appstore_ca cert across in the provision call.
   This is a bit of work.

2. Convert superagent into https.request calls and use the
   rejectUnauthorized option.

3. Simply use http. This is what is done in this commit.

Fixes #488
2015-09-16 10:36:03 -07:00
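Options 1 and 2 above scope the trust decision to a single request instead of the whole process; a sketch of what that looks like (path and names are hypothetical):

```javascript
// Sketch: per-request TLS options instead of the process-wide
// NODE_TLS_REJECT_UNAUTHORIZED switch. Passing `ca` trusts only the
// appstore's CA; rejectUnauthorized stays on for everything else.
function appstoreRequestOptions(hostname, appstoreCa) {
    return {
        hostname: hostname,
        port: 443,
        path: '/api/v1/heartbeat', // hypothetical path
        ca: appstoreCa ? [ appstoreCa ] : undefined,
        rejectUnauthorized: true
    };
}
```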
Girish Ramakrishnan d83b5de47a reserve the ldap and oauthproxy port 2015-09-16 10:12:59 -07:00
Girish Ramakrishnan 2719c4240f Get oauth proxy port from the configs 2015-09-16 10:06:34 -07:00
Johannes Zellner d749756b53 Do not show the update action button in non mobile view 2015-09-16 09:36:46 +02:00
Johannes Zellner 0401c61c15 Add tooltip text for the app action icons 2015-09-16 09:36:22 +02:00
Johannes Zellner 34f45da2de Show indicator when app update is available
Fixes #489
2015-09-16 09:28:43 +02:00
Girish Ramakrishnan baecbf783c journalctl seems to barf on this debug 2015-09-15 20:50:22 -07:00
Girish Ramakrishnan 2f141cd6e0 Make the times absurdly high but that is how long it takes 2015-09-15 18:56:25 -07:00
Girish Ramakrishnan 1296299d02 error is undefined 2015-09-15 18:27:09 -07:00
Girish Ramakrishnan 998ac74d32 oldConfig.location can be null
If we had an update, location is not part of oldConfig. If we now do
an infra update, location is undefined.
2015-09-15 18:08:29 -07:00
Girish Ramakrishnan b4a34e6432 Explicitly debug the fields
for some reason, journalctl barfs on this line
2015-09-15 14:55:20 -07:00
Girish Ramakrishnan e70c9d55db apptask: retry for external error as well 2015-09-14 21:45:27 -07:00
Girish Ramakrishnan 268aee6265 Return busy code for 420 response 2015-09-14 21:44:44 -07:00
Girish Ramakrishnan 1ba7b0e0fb context is raw text 2015-09-14 17:25:27 -07:00
Girish Ramakrishnan 72788fdb11 add note on how to test the oom 2015-09-14 17:20:30 -07:00
Girish Ramakrishnan 435afec13c Print OOM context 2015-09-14 17:18:11 -07:00
Girish Ramakrishnan 2cb1877669 Do not reconnect for now 2015-09-14 17:10:49 -07:00
Girish Ramakrishnan edd672cba7 fix typo 2015-09-14 17:07:44 -07:00
Girish Ramakrishnan 991f37fe05 Provide app information if possible 2015-09-14 17:06:04 -07:00
Girish Ramakrishnan c147d8004b Add appdb.getByContainerId 2015-09-14 17:01:04 -07:00
Girish Ramakrishnan cdcc4dfda8 Get notification on app oom
currently, oom events arrive a little late:
https://github.com/docker/docker/issues/16074

fixes #489
2015-09-14 16:51:32 -07:00
Girish Ramakrishnan 2eaba686fb apphealthmonitor.js is not executable 2015-09-14 16:51:32 -07:00
Girish Ramakrishnan 236032b4a6 Remove supererror setup in oauthproxy and apphealthmonitor 2015-09-14 16:49:10 -07:00
Girish Ramakrishnan 5fcba59b3e set memory limits for addons
mysql, postgresql, mongodb - 100m each
mail, graphite, redis (each instance) - 75m

For reference, in yellowtent:
mongo - 5m
postgresql - 33m
mysql - 3.5m
mail: 26m
graphite - 26m
redis - 32m
2015-09-14 13:47:45 -07:00
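The limits listed above have to reach docker as byte values; a hedged sketch of that conversion (the helper and object shape are hypothetical):

```javascript
// Sketch: turn human-readable limits like '100m' into the byte values a
// docker container create call expects in HostConfig.Memory.
function toBytes(limit) {
    return parseInt(limit, 10) * 1024 * 1024; // '100m' -> 104857600
}

var addonMemoryLimits = {
    mysql: toBytes('100m'), postgresql: toBytes('100m'), mongodb: toBytes('100m'),
    mail: toBytes('75m'), graphite: toBytes('75m'), redis: toBytes('75m')
};
```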
Girish Ramakrishnan 6efd8fddeb fix require paths 2015-09-14 13:00:03 -07:00
Girish Ramakrishnan 8aff2b9e74 remove oauthproxy systemd configs 2015-09-14 12:02:38 -07:00
Girish Ramakrishnan fbae432b98 merge oauthproxy server into box server 2015-09-14 11:58:28 -07:00
Girish Ramakrishnan 9cad7773ff refactor code to prepare for merge into box server 2015-09-14 11:28:49 -07:00
Girish Ramakrishnan 4adf122486 oauthproxy: refactor for readability 2015-09-14 11:22:33 -07:00
Girish Ramakrishnan ea47c26d3f apphealthmonitor is not a executable anymore 2015-09-14 11:09:58 -07:00
Girish Ramakrishnan f57aae9545 Fix typo in assert 2015-09-14 11:09:41 -07:00
Girish Ramakrishnan cdeb830706 Add apphealthmonitor.stop 2015-09-14 11:02:06 -07:00
Girish Ramakrishnan 0c9618f19a Add ldap.stop 2015-09-14 11:01:35 -07:00
Girish Ramakrishnan 1cd9d07d8c Merge apphealthtask into box server
We used to run this as a separate process but no amount of node/v8 tweaking
makes it run standalone within 50M RSS.

Three solutions were considered for the memory issue:
1. Use a systemd timer. apphealthtask needs to run quite frequently (every 10 sec)
   for the ui to get the app health update immediately after install.

2. Merge into box server (this commit)

3. Increase memory to 80M. This seems to make apphealthtask run as-is.
2015-09-14 10:52:11 -07:00
Girish Ramakrishnan f028649582 Rename app.js to box.js 2015-09-14 10:43:47 -07:00
Johannes Zellner d57236959a choose aws subdomain backend for test purpose 2015-09-13 22:02:04 +02:00
Johannes Zellner ebe975f463 Also send data with the domain deletion 2015-09-13 22:02:04 +02:00
Johannes Zellner a94267fc98 Use caas.js for subdomain business 2015-09-13 22:02:04 +02:00
Johannes Zellner f186ea7cc3 Add initial caas.js 2015-09-13 22:02:04 +02:00
Girish Ramakrishnan 29e05b1caa make janitor a systemd timer
one less process
2015-09-11 18:43:51 -07:00
Girish Ramakrishnan 6945a712df limit node memory usage
node needs to be told how much space it can use, otherwise it keeps
allocating and we cannot keep it under 50M. Keeping old space at 30M
lets the memory hover around 40M.

there are many options to v8 but I haven't explored them all:
--expose_gc - allows scripts to call gc()
--max_old_space_size=30 --max_semi_space_size=2048 (old/new space)
    node first allocates new objects in new space. if these objects stay in use
    for some time, it moves them to old space. the idea here is that it
    runs gc aggressively on new space since new objects die more often than old ones.

    the new space is split into two halves of equal size called semi spaces.

--gc_interval=100 --optimize_for_size --max_executable_size=5 --gc_global --stack_size=1024

http://erikcorry.blogspot.com/2012/11/memory-management-flags-in-v8.html
http://jayconrod.com/posts/55/a-tour-of-v8-garbage-collection
https://code.google.com/p/chromium/issues/detail?id=280984
http://stackoverflow.com/questions/30252905/nodejs-decrease-v8-garbage-collector-memory-usage
http://www.appfruits.com/2014/08/running-node-js-on-arduino-yun/

note: these flags are not part of the shebang because the linux shebang does not
support multiple args, so we cannot pass node args as part of the shebang.
2015-09-10 21:24:36 -07:00
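Since the shebang cannot carry the flags, they have to travel on the invocation itself; a hypothetical systemd unit sketch using the values from the commit message (paths are placeholders):

```
# Hypothetical unit fragment, not the actual shipped file. The v8 flags
# ride on ExecStart because a linux shebang cannot pass multiple args.
[Service]
ExecStart=/usr/bin/node --max_old_space_size=30 --max_semi_space_size=2048 /home/yellowtent/box/box.js
MemoryLimit=50M
```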
Girish Ramakrishnan 03048d7d2f set memorylimit for crashnotifier as well 2015-09-10 14:19:44 -07:00
Girish Ramakrishnan 28b768b146 Fix app autoupdater logic
The main issue was that app.portBindings is never null but { }
2015-09-10 11:39:29 -07:00
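The portBindings pitfall above means a null check alone never fires; a sketch of the corrected guard (function name hypothetical):

```javascript
// app.portBindings is never null, only possibly an empty object { },
// so the autoupdater has to check for emptiness, not nullness.
function hasPortBindings(app) {
    return Object.keys(app.portBindings || {}).length > 0;
}
```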
Girish Ramakrishnan c1e4dceb01 ssh is now on port 919 2015-09-10 10:08:40 -07:00
Johannes Zellner 954d14cd66 Warn the user when performing an upgrade instead of an update
Fixes #481
2015-09-10 14:33:00 +02:00
Johannes Zellner 2f5e9e2e26 We do have a global rest error handler which takes care of re-login 2015-09-10 14:16:59 +02:00
Johannes Zellner b3c058593f Force reload page if version has changed
Fixes #480
2015-09-10 13:58:27 +02:00
Johannes Zellner 3e47e11992 Ensure the stylesheets are in correct order
Fixes #484
2015-09-10 13:32:33 +02:00
Girish Ramakrishnan 8c7dfdcef2 Wait up to 3 seconds for the app to quit
Otherwise systemd will kill us and we get crash emails.

Fixes #483
2015-09-09 16:57:43 -07:00
Girish Ramakrishnan c88591489d make apps test work 2015-09-09 15:51:56 -07:00
Girish Ramakrishnan 719404b6cf lint 2015-09-09 15:03:43 -07:00
Girish Ramakrishnan f2c27489c8 test: make unregister subdomain test work 2015-09-09 14:36:09 -07:00
Girish Ramakrishnan d6a0c93f2f test: make register subdomain work 2015-09-09 14:32:05 -07:00
Girish Ramakrishnan c64d5fd2e3 error is already Error 2015-09-09 14:26:53 -07:00
Girish Ramakrishnan 5b62aeb73a make aws endpoint configurable for tests 2015-09-09 12:03:47 -07:00
Girish Ramakrishnan 7e83f2dd4a intercept delete calls to test image 2015-09-09 11:32:09 -07:00
Girish Ramakrishnan ed48f84355 give taskmanager couple of seconds to kill all processes 2015-09-09 10:39:38 -07:00
Girish Ramakrishnan f3d15cd4a5 fix initialization of apps-test 2015-09-09 10:22:17 -07:00
Girish Ramakrishnan 8c270269db remove dead code 2015-09-09 09:28:06 -07:00
Johannes Zellner bea605310a Use memoryLimit from manifest for graphs if specified 2015-09-09 17:11:54 +02:00
Johannes Zellner 8184894563 Remove upgrade view altogether 2015-09-09 16:47:13 +02:00
Johannes Zellner 47a87cc298 Remove upgrade link in the menu 2015-09-09 16:46:28 +02:00
Johannes Zellner 553a6347e6 Actually hand the backupKey over in an update 2015-09-09 12:37:09 +02:00
Girish Ramakrishnan a35ebd57f9 call iteratorDone when finished 2015-09-09 00:43:42 -07:00
Girish Ramakrishnan 97174d7af0 make cloudron-test pass 2015-09-08 22:13:50 -07:00
Girish Ramakrishnan 659268c04a provide default backupPrefix for tests 2015-09-08 21:16:50 -07:00
Girish Ramakrishnan 67d06c5efa better debug messages 2015-09-08 21:11:46 -07:00
Girish Ramakrishnan 6e6d8c0bc5 awscredentials is now POST 2015-09-08 21:02:21 -07:00
Girish Ramakrishnan 658af3edcf disable failing subdomains test
This needs an aws mock
2015-09-08 20:38:52 -07:00
Girish Ramakrishnan 9753d9dc7e removeUser takes a userId and not a username 2015-09-08 16:38:02 -07:00
Girish Ramakrishnan 4e331cfb35 retry registering and unregistering subdomain 2015-09-08 12:51:25 -07:00
Girish Ramakrishnan a1fa94707b Remove unused error codes 2015-09-08 11:28:29 -07:00
Girish Ramakrishnan 88f1107ed6 Remove unused AWSError 2015-09-08 11:26:35 -07:00
Girish Ramakrishnan e97b9fcc60 Do not start apptask for apps that are installed and running 2015-09-08 10:24:39 -07:00
Girish Ramakrishnan 71fe643099 Check if we have reached concurrency limit before locking 2015-09-08 10:20:34 -07:00
Johannes Zellner 74874a459d Remove ... for labels while showing the progress bar 2015-09-08 15:49:10 +02:00
Johannes Zellner 7c5fc17500 Cleanup linter issues in updatechecker.js 2015-09-08 10:03:37 +02:00
Girish Ramakrishnan 26aefadfba systemd: fix crashnotifier 2015-09-07 21:40:01 -07:00
Girish Ramakrishnan 51a28842cf systemd: pass the instance name as argument 2015-09-07 21:16:22 -07:00
Girish Ramakrishnan 210c2f3cc1 Output some logs in crashnotifier 2015-09-07 21:10:00 -07:00
Girish Ramakrishnan 773c326eb7 systemd: just wait for 5 seconds for box to die 2015-09-07 20:58:14 -07:00
Girish Ramakrishnan cb2fb026c5 systemd: do not restart crashnotifier 2015-09-07 20:54:58 -07:00
Girish Ramakrishnan a4731ad054 200m is a more sane memory limit 2015-09-07 20:48:29 -07:00
Girish Ramakrishnan aa33938fb5 systemd: fix config files 2015-09-07 20:46:32 -07:00
Girish Ramakrishnan edfe8f1ad0 disable pager when collecting logs 2015-09-07 20:27:27 -07:00
Girish Ramakrishnan 41399a2593 Make crashnotifier.js executable 2015-09-07 20:15:13 -07:00
Girish Ramakrishnan 2a4c467ab8 systemd: Fix crashnotifier 2015-09-07 20:14:37 -07:00
Girish Ramakrishnan 6be6092c0e Add memory limits on services 2015-09-07 19:16:34 -07:00
Girish Ramakrishnan e76584b0da Move from supervisor to systemd
This removes logrotate as well since we use systemd logging
2015-09-07 14:31:25 -07:00
Girish Ramakrishnan b3816615db run up to 5 apptasks in parallel
fixes #482
2015-09-05 09:17:46 -07:00
Johannes Zellner 212d0bd55a Revert "Add hack for broken app backup tarballs"
This reverts commit 9723951bfc.
2015-08-31 21:44:24 -07:00
Girish Ramakrishnan 712ada940e Add hack for broken app backup tarballs 2015-08-31 18:58:38 -07:00
Johannes Zellner ba690c6346 Add missing records argument 2015-08-30 23:00:01 -07:00
Johannes Zellner e910e19f57 Fix debug tag 2015-08-30 22:54:52 -07:00
Johannes Zellner 0c2532b0b5 Give default value to config.dnsInSync 2015-08-30 22:35:44 -07:00
Johannes Zellner 9c9b17a5f0 Remove cloudron.config prior to every test run 2015-08-30 22:35:44 -07:00
Johannes Zellner 816dea91ec Assert for dns record values 2015-08-30 22:35:44 -07:00
Johannes Zellner c228f8d4d5 Merge admin dns and mail dns setup
This now also checks if the mail records are in sync
2015-08-30 22:35:43 -07:00
Johannes Zellner 05bb99fad4 give dns record changeIds as a result for addMany() 2015-08-30 22:35:43 -07:00
Johannes Zellner 51b2457b3d Setup webadmin domain on the box side 2015-08-30 22:35:43 -07:00
Girish Ramakrishnan ed71fca23e Fix css 2015-08-30 22:25:18 -07:00
Girish Ramakrishnan 20e8e72ac2 reserved blocks are used 2015-08-30 22:24:57 -07:00
Girish Ramakrishnan 13fe0eb882 Only display one donut for memory usage 2015-08-30 22:13:01 -07:00
Girish Ramakrishnan e0476c9030 Reboot is a post route 2015-08-30 21:38:54 -07:00
Girish Ramakrishnan fca82fd775 Display up to 600MB for apps 2015-08-30 17:21:44 -07:00
Johannes Zellner 37c8ba8ddd Reduce logging for aws credentials 2015-08-30 17:03:10 -07:00
Johannes Zellner f87011b5c2 Also always check for dns propagation 2015-08-30 17:00:23 -07:00
Johannes Zellner 7f149700f8 Remove wrong optimization for subdomain records 2015-08-30 16:54:33 -07:00
Johannes Zellner 78ba9070fc use config.appFqdn() to handle custom domains 2015-08-30 16:29:09 -07:00
Johannes Zellner e31e5e1f69 Reuse dnsRecordId for record status id 2015-08-30 15:58:54 -07:00
Johannes Zellner 31d9027677 Query dns status with aws statusId 2015-08-30 15:51:33 -07:00
Johannes Zellner debcd6f353 aws provides uppercase properties 2015-08-30 15:47:08 -07:00
Johannes Zellner 5cb1681922 Fixup the zonename comparison 2015-08-30 15:37:18 -07:00
Johannes Zellner 9074bccea0 Move subdomain management from appstore to box 2015-08-30 15:29:14 -07:00
Girish Ramakrishnan 291798f574 Pass along aws config for updates 2015-08-27 22:45:04 -07:00
Girish Ramakrishnan b104843ae1 Add missing quotes to cloudron.conf 2015-08-27 20:15:04 -07:00
Girish Ramakrishnan dd062c656f Fix failing test 2015-08-27 11:43:36 -07:00
Girish Ramakrishnan ae2eb718c6 check if response has credentials object 2015-08-27 11:43:02 -07:00
Girish Ramakrishnan 7ac26bb653 Fix backup response 2015-08-27 11:19:40 -07:00
Girish Ramakrishnan 41a726e8a7 Fix backup test 2015-08-27 11:17:36 -07:00
Girish Ramakrishnan 4b69216548 bash: quote the array expansion 2015-08-27 10:13:05 -07:00
Girish Ramakrishnan 99395ddf5a bash: quoting array expansion because that's how it is 2015-08-27 09:49:44 -07:00
Girish Ramakrishnan 5f9fa5c352 bash: empty array expansion barfs with set -u 2015-08-27 09:33:40 -07:00
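The three bash commits above deal with one pitfall: under `set -u`, older bash treats an empty array expansion as an unbound variable. A hedged sketch of the problem and the usual guard (script and function names are hypothetical):

```shell
#!/bin/bash
# With bash < 4.4, "${args[@]}" on an empty array trips `set -u`.
# The ${args[@]+...} guard expands to nothing safely on every version.
set -eu

args=()

count_args() {
    echo "argc=$#"
}

count_args ${args[@]+"${args[@]}"}   # -> argc=0
```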
Girish Ramakrishnan 9013331917 Fix coding style 2015-08-27 09:30:32 -07:00
Girish Ramakrishnan 3a8f80477b getSignedDownloadUrl must return an object with url and sessionToken 2015-08-27 09:26:19 -07:00
Johannes Zellner 813c680ed0 pass full box data to the update 2015-08-26 10:59:17 -07:00
Johannes Zellner a0eccd615f Send new version to update to to the installer 2015-08-26 09:42:48 -07:00
Johannes Zellner 59be539ecd make restoreapp.sh support aws session tokens 2015-08-26 09:14:15 -07:00
Johannes Zellner a04740114c Generate app restore urls locally 2015-08-26 09:11:28 -07:00
Johannes Zellner 60b5d71c74 appBackupIds are not needed for backup url generation 2015-08-26 09:06:45 -07:00
Johannes Zellner 0a8b4b0c43 Load our style sheet as early as possible 2015-08-25 21:59:01 -07:00
Johannes Zellner ec21105c47 use backupKey from userData 2015-08-25 18:44:52 -07:00
Girish Ramakrishnan 444258e7ee backupKey is a function 2015-08-25 18:37:51 -07:00
Johannes Zellner e6fd05c2bd Support optional aws related userData 2015-08-25 17:52:01 -07:00
Johannes Zellner 9fdcd452d0 Use locally generate signed urls for app backup 2015-08-25 17:52:01 -07:00
Johannes Zellner f39b9d5618 Support session tokens in backupapp.sh 2015-08-25 17:52:00 -07:00
Johannes Zellner 76e4c4919d Only federated tokens need session token 2015-08-25 17:52:00 -07:00
Johannes Zellner d1f159cdb4 Also send the restoreKey for the backup done webhook 2015-08-25 17:52:00 -07:00
Johannes Zellner c63065e460 Also send the sessionToken when using the pre-signed url 2015-08-25 17:52:00 -07:00
Johannes Zellner 124c1d94a4 Translate the federated credentials 2015-08-25 17:52:00 -07:00
Johannes Zellner e9161b726a AWS credential creation returns 201 2015-08-25 17:52:00 -07:00
Johannes Zellner fd0d27b192 AWS credentials are now dealt with a level down 2015-08-25 17:52:00 -07:00
Johannes Zellner 50064a40fe Use dev bucket for now as a default 2015-08-25 17:52:00 -07:00
Johannes Zellner c9bc5fc38e Use signed urls for upload on the box side 2015-08-25 17:52:00 -07:00
Johannes Zellner 58f533fe50 Add config.aws().backupPrefix 2015-08-25 17:52:00 -07:00
Johannes Zellner efcdffd8ff Add getSignedUploadUrl() to aws.js 2015-08-25 17:52:00 -07:00
Johannes Zellner 22793c3886 move aws-sdk from dev to normal dependencies 2015-08-25 17:52:00 -07:00
Johannes Zellner 797ddbacc0 Return aws credentials from config.js 2015-08-25 17:52:00 -07:00
Johannes Zellner e011962469 refactor backupBoxWithAppBackupIds() 2015-08-25 17:52:00 -07:00
Johannes Zellner b376ad9815 Add webhooks.js 2015-08-25 17:51:59 -07:00
Johannes Zellner 77248fe65c Construct backupUrl locally 2015-08-25 17:51:59 -07:00
Johannes Zellner 1dad115203 Add initial aws object to config.js 2015-08-25 17:51:59 -07:00
Johannes Zellner 8812d58031 Add backupKey to config 2015-08-25 17:51:59 -07:00
Johannes Zellner fff7568f7e Add aws.js 2015-08-25 17:51:59 -07:00
Johannes Zellner ff6662579d Fix typo in backupapp.sh help output 2015-08-25 17:51:59 -07:00
Girish Ramakrishnan 0cf9fbd909 Merge data into args 2015-08-25 15:55:52 -07:00
Girish Ramakrishnan 848b745fcb Fix boolean logic 2015-08-25 12:24:02 -07:00
Girish Ramakrishnan 9a35c40b24 Add force argument
This fixes crash when auto-updating apps
2015-08-25 10:01:20 -07:00
Girish Ramakrishnan 1f1e6124cd oldConfig can be null during a restore/upgrade 2015-08-25 09:59:44 -07:00
Girish Ramakrishnan 033df970ad Update manifestformat@1.7.0 2015-08-24 22:56:02 -07:00
Girish Ramakrishnan dd80a795a0 Read memoryLimit from manifest 2015-08-24 22:44:35 -07:00
Girish Ramakrishnan 1eec6a39c6 Show up to 200MB 2015-08-24 22:39:06 -07:00
Girish Ramakrishnan dd6b8face9 Set app memory limit to 200MB (includes 100 MB swap) 2015-08-24 21:58:19 -07:00
Girish Ramakrishnan 288de7e03a Add RSTATE_ERROR 2015-08-24 21:58:19 -07:00
Girish Ramakrishnan a760ef4d22 Rebase addons to use base image 0.3.3 2015-08-24 10:19:18 -07:00
Johannes Zellner 0dd745bce4 Fix form submit with enter for update form 2015-08-22 17:21:25 -07:00
Johannes Zellner d4d5d371ac Use POST heartbeat route instead of GET 2015-08-22 16:51:56 -07:00
Johannes Zellner 205bf4ddbd Offset the footer in apps view 2015-08-20 23:50:52 -07:00
Girish Ramakrishnan 4ab84d42c6 Delete image only if it changed
This optimization won't work if we have two dockerImages with the same
image id.
2015-08-19 14:24:32 -07:00
Girish Ramakrishnan ee74badf3a Check for dockerImage in manifest in install/update/restore routes 2015-08-19 11:08:45 -07:00
Girish Ramakrishnan aa173ff74c restore without a backup is the same as re-install 2015-08-19 11:00:00 -07:00
Girish Ramakrishnan b584fc33f5 CN of admin group is admins 2015-08-18 16:35:52 -07:00
Girish Ramakrishnan 15c9d8682e Base image is now 0.3.3 2015-08-18 15:43:50 -07:00
Girish Ramakrishnan 361be8c26b containerId can be null 2015-08-18 15:43:50 -07:00
Girish Ramakrishnan 4db9a5edd6 Clean up the old image and not the current one 2015-08-18 10:01:15 -07:00
Johannes Zellner bcc878da43 Hide update input fields and update button if it is blocked by apps 2015-08-18 16:59:36 +02:00
Johannes Zellner 79f179fed4 Add note, why sendError() is required 2015-08-18 16:53:29 +02:00
Johannes Zellner a924a9a627 Revert "remove obsolete sendError() function"
This reverts commit 5d9b122dd5.
2015-08-18 16:49:53 +02:00
Girish Ramakrishnan 45d444df0e leave a note about force_update 2015-08-17 21:30:56 -07:00
Girish Ramakrishnan 92461a3366 Remove unused require 2015-08-17 21:23:32 -07:00
Girish Ramakrishnan 032a430c51 Fix debug message 2015-08-17 21:23:27 -07:00
Girish Ramakrishnan a6a3855e79 Do not remove icon for non-appstore installs
Fixes #466
2015-08-17 19:37:51 -07:00
Girish Ramakrishnan 2386545814 Add a note why oldConfig can be null 2015-08-17 10:05:07 -07:00
Johannes Zellner 2059152dd3 remove obsolete sendError() function 2015-08-17 14:55:56 +02:00
Johannes Zellner 32d2c260ab Move appstore badges out of the way for the app titles 2015-08-17 11:50:31 +02:00
Johannes Zellner 384c7873aa Correctly mark apps pending for approval
Fixes #339
2015-08-17 11:50:08 +02:00
Girish Ramakrishnan 9266302c4c Print graphite container id 2015-08-13 15:57:36 -07:00
Girish Ramakrishnan 755dce7bc4 fix graph issue finally 2015-08-13 15:54:27 -07:00
Girish Ramakrishnan dd3e38ae55 Use latest graphite 2015-08-13 15:53:36 -07:00
Girish Ramakrishnan 9dfaa2d20f Create symlink in start.sh (and not container setup) 2015-08-13 15:36:21 -07:00
Girish Ramakrishnan d6a4ff23e2 restart mysql in start.sh and not container setup 2015-08-13 15:16:01 -07:00
Girish Ramakrishnan c2ab7e2c1f restart collectd 2015-08-13 15:04:57 -07:00
Girish Ramakrishnan b9e4662dbb fix graphs again 2015-08-13 15:03:44 -07:00
Girish Ramakrishnan 10df0a527f Fix typo
remove thead_cache_size. it's dynamic anyway
2015-08-13 14:53:05 -07:00
Girish Ramakrishnan 9aad3688e1 Revert "Add hack to make graphs work with latest collectd"
This reverts commit a959418544.
2015-08-13 14:42:47 -07:00
Girish Ramakrishnan e78dbcb5d4 limit threads and max connections 2015-08-13 14:42:36 -07:00
Girish Ramakrishnan 5e8cd09f51 Bump infra version 2015-08-13 14:22:39 -07:00
Girish Ramakrishnan 22f65a9364 Add hack to make graphs work with latest collectd
For some reason df-vda1 is not being collected by carbon. I have tried
all sorts of things and nothing works. This is a hack to get it working.
2015-08-13 13:47:44 -07:00
Girish Ramakrishnan 81b7432044 Turn off performance_schema in mysql 5.6 2015-08-13 13:47:44 -07:00
Girish Ramakrishnan d49b90d9f2 Remove unused nodejs-disks 2015-08-13 10:34:06 -07:00
Girish Ramakrishnan 9face9cf35 systemd has moved around the cgroup hierarchy
https://github.com/docker/docker/issues/9902

There is some rationale here:
https://libvirt.org/cgroups.html
2015-08-13 10:21:33 -07:00
Girish Ramakrishnan 33ac34296e CpuShares is part of HostConfig 2015-08-12 23:47:35 -07:00
Girish Ramakrishnan 670ffcd489 Add warning 2015-08-12 19:52:23 -07:00
Girish Ramakrishnan ec7b365c31 Use BASE_IMAGE as well 2015-08-12 19:51:44 -07:00
Girish Ramakrishnan 433d78c7ff Fix graphite version 2015-08-12 19:51:08 -07:00
Girish Ramakrishnan ed041fdca6 Put image names in one place 2015-08-12 19:38:44 -07:00
Girish Ramakrishnan b8e4ed2369 Use latest images 2015-08-12 19:19:58 -07:00
Johannes Zellner d12f260d12 Prevent accessing oldConfig if it does not exist 2015-08-12 21:17:52 +02:00
Johannes Zellner ba7989b57b Add ldap 'users' group 2015-08-12 17:38:31 +02:00
Johannes Zellner 88df410f5b Add ldap search unit tests 2015-08-12 15:31:54 +02:00
Johannes Zellner 2436db3b1f Add ldap memberof attribute 2015-08-12 15:31:44 +02:00
Johannes Zellner d15874df63 Add initial ldap unit tests 2015-08-12 15:00:38 +02:00
Johannes Zellner 8fb90254cd Ensure the focus is properly set when restoring 2015-08-12 14:35:51 +02:00
Johannes Zellner cbd712c20e Better integrate the progress bar 2015-08-12 14:32:20 +02:00
Johannes Zellner 8c004798f2 Improve login form layout 2015-08-12 14:23:13 +02:00
Johannes Zellner c1b0cbe78d Give appstore hover a different color 2015-08-12 14:07:40 +02:00
Johannes Zellner 5ee72c8e98 Make webadmin pages a bit more streamlined with padding 2015-08-12 13:48:55 +02:00
Girish Ramakrishnan c125cc17dc Apps should get 50% less cpu than system processes when there is contention for cpu 2015-08-11 17:00:48 -07:00
Johannes Zellner 18feff1bfb Increase installed app title 2015-08-11 15:22:30 +02:00
Johannes Zellner f74f713bbd Hide geeky toolbar in apps icons 2015-08-11 13:04:50 +02:00
Girish Ramakrishnan 0ea14db172 Fix redis installation on 1.7 2015-08-10 23:00:24 -07:00
Girish Ramakrishnan 74785a40d5 r -> ro (docker 1.7) 2015-08-10 21:14:28 -07:00
Girish Ramakrishnan dcfcd5be84 Create docker volume directories since docker 1.7 does not create them 2015-08-10 21:00:56 -07:00
Girish Ramakrishnan 814674eac5 addons can be null in apps.backupApp
addons.backup already takes care of null.

a future commit will give defaults for all non-default manifest fields
at some point and document them as such
2015-08-10 13:47:51 -07:00
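The null guard described in the commit above can be sketched as follows (a minimal sketch with a hypothetical helper name; the commit only notes that addons.backup already tolerates null):

```javascript
'use strict';

// Minimal sketch (hypothetical helper, not from the repo): until all optional
// manifest fields get defaults, callers guard against a missing addons object.
function getAddons(manifest) {
    // manifest.addons may be null or undefined for apps that use no addons
    return manifest.addons || { };
}

var manifest = { id: 'io.cloudron.testapp', version: '0.0.1' }; // no addons key
console.log(Object.keys(getAddons(manifest)).length); // 0
```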
Girish Ramakrishnan 1a7fff9867 Keep linter happy 2015-08-10 13:42:04 -07:00
Johannes Zellner 30b248a0f6 Allow non-published versions to be shown if explicitly requested
Fixes #468
2015-08-10 16:16:40 +02:00
Johannes Zellner 7168455de3 Do not use table layout for login view
Fixes #458
2015-08-10 15:26:45 +02:00
Johannes Zellner 085f63e3c7 Show cloudron name in login screen 2015-08-10 15:04:12 +02:00
Johannes Zellner 015be64923 Show cloudron avatar in login screen 2015-08-10 15:01:58 +02:00
Johannes Zellner 2c2471811d Restructure the login page 2015-08-10 14:51:04 +02:00
Johannes Zellner 1025249e93 Since addons are optional, ensure we have a valid empty object in the db 2015-08-10 10:37:55 +02:00
Johannes Zellner 41ffc4bcf3 If we have an empty app search show modal dialog link 2015-08-09 15:19:21 +02:00
Johannes Zellner 2739d54cc1 Make appstore feedback form a modal dialog 2015-08-09 14:48:00 +02:00
Girish Ramakrishnan c4c463cbc2 collect logs using a sudo script
docker logs can only be read by root
2015-08-08 19:04:59 -07:00
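The sudo-script pattern from the commit above can be sketched like this (the script path and helper name are assumptions; the idea is to keep the privileged call to one allowlisted script, since docker's log files are readable only by root):

```javascript
'use strict';

// Assumed script location; a sudoers entry would permit just this one script,
// so the unprivileged box user never gets broad docker access.
var COLLECT_LOGS_CMD = '/home/yellowtent/box/src/scripts/collectlogs.sh';

// Hypothetical helper: build the argv rather than executing it here, so the
// privileged invocation stays in one auditable place.
function buildCollectLogsCommand(program) {
    return [ '/usr/bin/sudo', '-S', COLLECT_LOGS_CMD, program ];
}

console.log(buildCollectLogsCommand('box').join(' '));
```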
Girish Ramakrishnan 8cd13bd43f Update safetydance 2015-08-08 18:53:16 -07:00
Girish Ramakrishnan e4ef279759 Update safetydance and lastmile 2015-08-06 13:54:15 -07:00
Girish Ramakrishnan cf7fecb57b bump cloudron-manifestformat 2015-08-06 13:50:27 -07:00
Girish Ramakrishnan 226041dcb1 Display settings path
Fixes #465
2015-08-06 13:44:09 -07:00
Johannes Zellner 7548025561 If an app search is empty, show hint to give feedback 2015-08-06 18:35:08 +02:00
Johannes Zellner fdbee427ee Show app feedback form in appstore
Fixes #461
2015-08-06 18:30:49 +02:00
Johannes Zellner d861d6d6e4 Properly offset the footer in support view 2015-08-06 18:30:25 +02:00
Johannes Zellner 0a648edcaa Add app feedback category 2015-08-06 17:34:40 +02:00
Johannes Zellner 18850c1fba Cloudron prices are in cents 2015-08-06 16:24:19 +02:00
Girish Ramakrishnan f6df4cab67 Remove ADMIN_ORIGIN 2015-08-05 17:27:55 -07:00
Johannes Zellner 019d29c5b7 Use assert.strictEqual() to see the values 2015-08-05 17:49:19 +02:00
Johannes Zellner 0b4256a992 Unify feedback and ticket forms 2015-08-05 14:27:04 +02:00
Johannes Zellner 7d58d69389 Fix setup step on ng-enter 2015-08-04 22:17:58 +02:00
Johannes Zellner 864dd5bf26 New shrinkwrap for ldapjs without dtrace-provider
We have to install ldapjs with --no-optional

Fixes #460
2015-08-04 20:43:36 +02:00
Johannes Zellner abdde7a950 Put the correct faq and docs links 2015-08-04 19:36:05 +02:00
Johannes Zellner 8bcbd860be Add unit tests for feedback route and fix the route 2015-08-04 16:59:35 +02:00
Johannes Zellner be61c42fe8 Send feedback and tickets to support@cloudron.io 2015-08-04 16:05:20 +02:00
Johannes Zellner 6d5afc2d75 Give support form headers more space 2015-08-04 16:04:44 +02:00
Johannes Zellner 88d905e8cc Add support form feedback 2015-08-04 16:01:50 +02:00
Johannes Zellner d8ccc766b9 Add text-bold class 2015-08-04 16:01:33 +02:00
Johannes Zellner d22e0f0483 mailer functions only enqueue, respond immediately 2015-08-04 15:39:14 +02:00
Johannes Zellner c8f6973312 Do not send adminEmail for feedback mails 2015-08-04 14:56:43 +02:00
Johannes Zellner 3f0f0048bc add missing email format 2015-08-04 14:52:40 +02:00
Johannes Zellner 88643f0875 Add missing %> 2015-08-04 14:49:43 +02:00
Johannes Zellner e11bb10bb8 The requested function is in mailer 2015-08-04 14:45:42 +02:00
Johannes Zellner 7b9930c7f0 Do the feedback and ticket form plumbing 2015-08-04 14:44:39 +02:00
Johannes Zellner da48e32bcc Add feedback route 2015-08-04 14:31:40 +02:00
Johannes Zellner 57e2803bd2 Add feedback email template 2015-08-04 14:31:33 +02:00
Johannes Zellner 0d1ba01d65 Add initial support view 2015-08-04 11:33:36 +02:00
Girish Ramakrishnan 95cbec19af Copy the manifest because changes are made to it
Because of that, manifest verification fails (isNew property appears in manifest)
2015-08-03 21:31:15 -07:00
Girish Ramakrishnan cc97654b23 Fix text 2015-08-02 19:02:45 -07:00
Girish Ramakrishnan 5bb983f175 Send docker log in crash email 2015-08-01 21:42:34 -07:00
Johannes Zellner 7cb6434de1 Move avatar name below the selected avatar preview 2015-07-30 16:38:10 +02:00
Johannes Zellner cb1b495da2 Revert "Actually remove dtrace dep"
This reverts commit 2b9bf6d019.
2015-07-30 14:53:53 +02:00
Girish Ramakrishnan e134136d59 previewAvatar seems to be defined in step1 and step2 2015-07-29 18:10:25 -07:00
Girish Ramakrishnan 85a681e330 There is no step4 2015-07-29 17:09:05 -07:00
Girish Ramakrishnan dc5c0fd830 setPreviewAvatar only in avatar selection step 2015-07-29 16:30:32 -07:00
Girish Ramakrishnan e7bf8452ab randomize default avatar 2015-07-29 16:11:37 -07:00
Girish Ramakrishnan 157f972b20 Decrease size of image preview 2015-07-29 16:11:20 -07:00
Girish Ramakrishnan b36028dc11 Pick -> Choose 2015-07-29 15:55:41 -07:00
Girish Ramakrishnan 70092ec559 Ensure image got loaded before setting the preview 2015-07-29 15:53:58 -07:00
Girish Ramakrishnan 56d740d597 Merge welcome step and step2 2015-07-29 15:11:34 -07:00
Girish Ramakrishnan ed55e52363 Actually remove dtrace dep
Use --no-optional when installing dtrace
2015-07-29 10:15:25 -07:00
Johannes Zellner 89c36ae6a9 Do not show the update page if update failed 2015-07-29 14:19:15 +02:00
Johannes Zellner 3027c119fe Use angular in update dialog and show errors 2015-07-29 14:02:31 +02:00
Johannes Zellner 4f129102a8 Use -1 for progress to indicate an error 2015-07-29 13:53:36 +02:00
Johannes Zellner 2dd6bb0c67 Rename upgradeError to updateError in update 2015-07-29 13:52:59 +02:00
Johannes Zellner b928b08a4c Reset update progress on update failure 2015-07-29 12:41:19 +02:00
Johannes Zellner 9dcc6e68a4 Use new avatar set
Fixes #456
2015-07-29 11:13:59 +02:00
Girish Ramakrishnan 452e67be54 This is probably obvious 2015-07-28 23:12:53 -07:00
Girish Ramakrishnan 9e0611f6d8 Improve wording of wizard 2015-07-28 23:09:06 -07:00
Girish Ramakrishnan ad3392ef2e model is queried from appstore 2015-07-28 17:08:32 -07:00
Girish Ramakrishnan 71e8abf081 define adminOrigin in splashpage.sh 2015-07-28 16:52:27 -07:00
Girish Ramakrishnan 46172e76c6 Keep updater arguments sorted for readability 2015-07-28 16:03:32 -07:00
Girish Ramakrishnan 7e639bd0e2 Release update/upgrade lock only on error 2015-07-28 15:28:10 -07:00
Girish Ramakrishnan 7a9af5373b Check percent value before redirecting to update.html 2015-07-28 14:43:49 -07:00
Girish Ramakrishnan 3ea7a11d97 Set progress completion error messages 2015-07-28 14:40:22 -07:00
Girish Ramakrishnan f582ba1ba7 console.error any backup error message for now 2015-07-28 14:30:40 -07:00
Girish Ramakrishnan b96fc2bc56 initialize percent 2015-07-28 14:28:53 -07:00
Girish Ramakrishnan 48c16277f0 Create error object properly 2015-07-28 14:22:34 -07:00
Girish Ramakrishnan 4ad4ff0b10 Use progress.set in upgrade/update code paths 2015-07-28 14:22:08 -07:00
Girish Ramakrishnan 25f05e5abd Add missing ; 2015-07-28 13:09:24 -07:00
Girish Ramakrishnan 7c214a9181 log update and upgrade errors 2015-07-28 10:03:52 -07:00
Johannes Zellner d66b1eef59 Better support for active directory clients 2015-07-28 18:39:16 +02:00
Girish Ramakrishnan 58f52b90f8 better debug on what is being autoupdated 2015-07-28 09:37:46 -07:00
Girish Ramakrishnan edb67db4ea Remove unnecessary debug making logs very verbose 2015-07-28 09:32:19 -07:00
Johannes Zellner 733014d8d9 No need to guess the apiOrigin anymore, we redirect now
Fixes #436
2015-07-28 14:03:48 +02:00
Johannes Zellner 4980f79688 Show link to referrer in appstatus 2015-07-28 14:01:51 +02:00
Johannes Zellner 3d8b90f5c8 Redirect on app error to webadmin appstatus page
Part of #436
2015-07-28 13:46:58 +02:00
Johannes Zellner eea547411b Show testing badges in appstore view 2015-07-28 13:21:23 +02:00
Johannes Zellner af682e5bb1 Fix the app icons in the install app grid 2015-07-28 13:06:55 +02:00
Johannes Zellner 739dcfde8b Show version and author in install dialog 2015-07-28 12:53:33 +02:00
Johannes Zellner 1db58dd78d Support ?version in direct appstore URLs
Fixes #454
2015-07-28 11:49:04 +02:00
Johannes Zellner 947137b3f9 Ensure we have a fallback avatar 2015-07-28 11:28:06 +02:00
Johannes Zellner 509e2caa83 Also show avatar in nakeddomain error page 2015-07-28 11:19:13 +02:00
Johannes Zellner a0e67daa52 Use avatar in error page 2015-07-28 11:18:55 +02:00
Johannes Zellner 32584f3a90 Fix long lasting navbar padding issue 2015-07-28 10:57:48 +02:00
Johannes Zellner 3513f321fb Reload webadmin in case the avatar changes
Fixes #452
2015-07-28 10:50:33 +02:00
Johannes Zellner 8aaccbba55 Show avatar in navbar 2015-07-28 10:49:56 +02:00
Johannes Zellner 31ab86a97f Show avatar as favicon 2015-07-28 10:40:10 +02:00
Girish Ramakrishnan 2c0786eb37 Use ldapjs from github directly
The 0.7.x ldapjs is over a year old and uses dtrace as a dep which
causes issues when rebuilding.
2015-07-27 13:06:30 -07:00
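Pinning a dependency straight to a GitHub repository in package.json looks roughly like this (a sketch; the exact repository and ref used are assumptions not shown in the log):

```json
{
  "dependencies": {
    "ldapjs": "mcavage/node-ldapjs#master"
  }
}
```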
Johannes Zellner 3db8ebf97f Ensure the appstore ui can operate always on manifest.tags 2015-07-27 19:29:25 +02:00
Johannes Zellner 804105ce2b Add testing section in appstore and mark testing apps
This is not the final design to indicate which app is in testing
but the logistics are there, mainly css for now

Fixes #451
2015-07-27 17:09:59 +02:00
Johannes Zellner c4bb56dc95 Show non published apps in webadmin 2015-07-27 16:34:37 +02:00
Johannes Zellner 87c76a3eb3 Read apps from actual response body 2015-07-27 16:27:50 +02:00
Johannes Zellner 6bceff14ec Add proxy api to get non-approved app listings 2015-07-27 14:00:44 +02:00
Girish Ramakrishnan 6b62561706 Add mandatory addons object 2015-07-24 06:59:34 -07:00
Girish Ramakrishnan d558c06803 Add missing semicolon 2015-07-24 06:53:07 -07:00
Girish Ramakrishnan ef9508ccc5 Use BOX_ENV instead of NODE_ENV
Let NODE_ENV be used by node modules and always be set to production

Fixes #453
2015-07-24 01:42:28 -07:00
Girish Ramakrishnan ec8342c2ce Better progress messages 2015-07-23 22:50:58 -07:00
Girish Ramakrishnan 6839f47f99 Fix typo 2015-07-23 14:30:15 -07:00
Girish Ramakrishnan d32990d0e5 Set server_names_hash_bucket_size
e2e tests fail like so when the hostnames are long

Thu, 23 Jul 2015 20:40:23 GMT box:apptask test8629 writing config to /home/yellowtent/data/nginx/applications/a3822f18-2f95-4b73-b8e9-2983dfcaae31.conf
Thu, 23 Jul 2015 20:40:23 GMT box:shell.js reloadNginx execFile: /usr/bin/sudo -S /home/yellowtent/box/src/scripts/reloadnginx.sh
Thu, 23 Jul 2015 20:40:24 GMT box:shell.js reloadNginx (stderr): nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 64

Thu, 23 Jul 2015 20:40:24 GMT box:shell.js reloadNginx code: 1, signal: null
Thu, 23 Jul 2015 20:40:24 GMT box:apptask test8629 error installing app: Error: Exited with error 1 signal null
Thu, 23 Jul 2015 20:40:24 GMT box:apptask test8629 installationState: pending_install progress: 15, Configure nginx
ERROR Exited with error 1 signal null [ /home/yellowtent/box/src/apptask.js:909:32 ]
stack:
  """
    Error: Exited with error 1 signal null
        at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:38:53)
        at ChildProcess.emit (events.js:110:17)
        at Process.ChildProcess._handle.onexit (child_process.js:1074:12)
  """
message: Exited with error 1 signal null
2015-07-23 13:55:46 -07:00
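The fix named in the commit above raises the bucket size in the nginx http block; a typical snippet looks like this (the value 128 is illustrative — nginx only requires a power of two large enough for the longest server name):

```nginx
http {
    # default is 32/64 depending on platform; long generated hostnames
    # like a3822f18-2f95-4b73-b8e9-2983dfcaae31-localhost overflow it
    server_names_hash_bucket_size 128;
}
```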
Johannes Zellner 71dbe21fc3 Set no-cache for the avatar 2015-07-23 16:34:44 +02:00
Johannes Zellner f36616abbb Remove developerMode from update provisioning data
Finally fixes #442
2015-07-23 13:31:39 +02:00
Johannes Zellner db6d6d565f Remove developerMode from config.js 2015-07-23 13:26:30 +02:00
Johannes Zellner 5f3fc68b5e Fixup developers test with new developer mode setting 2015-07-23 13:19:51 +02:00
Johannes Zellner bdca5e343b Fixup clients test with new developer mode setting 2015-07-23 13:16:36 +02:00
Johannes Zellner 58cf712e71 Fix apps-test to use settings developerMode 2015-07-23 12:59:47 +02:00
Johannes Zellner ca7e67ea4f Use developerMode from settings instead of config 2015-07-23 12:52:04 +02:00
Johannes Zellner b202043019 Add developerMode to settings
Part of #442
2015-07-23 12:42:56 +02:00
Johannes Zellner 19fef4c337 Add missing appId key to access app updateInfo 2015-07-23 07:21:05 +02:00
Johannes Zellner 7b864fed04 Only log error if NOOP_CALLBACK got an error 2015-07-23 07:17:30 +02:00
Johannes Zellner 553667557c Add polyfill for chrome for canvas.toBlob()
Fixes #448
2015-07-21 15:29:43 +02:00
Girish Ramakrishnan 3f732abbb3 Add debugs 2015-07-20 11:05:30 -07:00
Girish Ramakrishnan 1af3397898 Disable removeIcon in apptask for now 2015-07-20 11:01:52 -07:00
Girish Ramakrishnan 0d89612769 unusedAddons must be an object, not an array 2015-07-20 10:50:44 -07:00
Girish Ramakrishnan d71073ca6a Add text for force update 2015-07-20 10:43:19 -07:00
Girish Ramakrishnan 38c2c78633 Restoring app will lose all content if no backup
Maybe we should allow the user to force update if there is no backup?
We can add that as the need arises...
2015-07-20 10:39:30 -07:00
Girish Ramakrishnan 17b1f469d7 Handle forced updates 2015-07-20 10:09:02 -07:00
Girish Ramakrishnan 1e67241049 Return error on unknown installation command 2015-07-20 10:03:55 -07:00
Girish Ramakrishnan 173efa6920 Leave note on when lastBackupId can be null 2015-07-20 09:54:17 -07:00
Girish Ramakrishnan 0285562133 Revert the manifest and portBindings on a failed update
Fixes #443
2015-07-20 09:48:31 -07:00
Girish Ramakrishnan 26fbace897 During an update backup the old addons
Fixes #444
2015-07-20 00:50:36 -07:00
Girish Ramakrishnan df9d321ac3 app.portBindings and newManifest.tcpPorts may be null 2015-07-20 00:10:36 -07:00
489 changed files with 27862 additions and 174027 deletions
-4
@@ -1,10 +1,6 @@
# Skip files when using git archive
.gitattributes export-ignore
.gitignore export-ignore
/release export-ignore
/scripts export-ignore
test export-ignore
/webadmin/src export-ignore
/webadmin/deploymentConfig.json export-ignore
/gulpfile.json export-ignore
+3 -14
@@ -1,20 +1,9 @@
node_modules/
coverage/
docs/
webadmin/dist/
setup/splash/website/
# vim swam files
# vim swap files
*.swp
# supervisor
supervisord.pid
supervisord.log
# nginx
nginx/*.log
nginx/*.pid
nginx/naked_domain.conf
nginx/applications/
# release files
release/versions-dev.json
-7
@@ -4,14 +4,7 @@ The Box
Development setup
-----------------
* sudo useradd -m yellowtent
** This dummy user is required for supervisor 'box' configs
** Add admin-localhost as 127.0.0.1 in /etc/hosts
** All apps will be installed as hyphened-subdomains of localhost. You should add
hyphened-subdomains of your apps into /etc/hosts
Running
-------
* `./run.sh` - this starts up nginx to serve up the webadmin
* `DEBUG=box:* ./app.js` - this runs the main box code.
* Navigate to https://admin-localhost
-35
@@ -1,35 +0,0 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
var server = require('./src/server.js'),
config = require('./config.js');
console.log();
console.log('==========================================');
console.log(' Cloudron will use the following settings ');
console.log('==========================================');
console.log();
console.log(' Environment: ', config.CLOUDRON ? 'CLOUDRON' : (config.LOCAL ? 'LOCAL' : 'TEST'));
console.log(' Admin Origin: ', config.adminOrigin());
console.log(' Appstore token: ', config.token());
console.log(' Appstore server origin: ', config.appServerUrl());
console.log();
console.log('==========================================');
console.log();
server.start(function (err) {
if (err) {
console.error('Error starting server', err);
process.exit(1);
}
console.log('Server listening on port ' + config.get('port'));
});
var NOOP_CALLBACK = function () { };
process.on('SIGINT', function () { server.stop(NOOP_CALLBACK); });
process.on('SIGTERM', function () { server.stop(NOOP_CALLBACK); });
-135
@@ -1,135 +0,0 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
var appdb = require('./src/appdb.js'),
assert = require('assert'),
async = require('async'),
database = require('./src/database.js'),
DatabaseError = require('./src/databaseerror.js'),
debug = require('debug')('box:apphealthtask'),
docker = require('./src/docker.js'),
mailer = require('./src/mailer.js'),
os = require('os'),
superagent = require('superagent');
exports = module.exports = {
initialize: initialize,
run: run
};
var FATAL_CALLBACK = function (error) {
if (!error) return;
console.error(error);
process.exit(2);
};
var HEALTHCHECK_INTERVAL = 30000;
var gLastSeen = { }; // { time, emailSent }
function initialize(callback) {
async.series([
database.initialize,
mailer.initialize
], callback);
}
function setHealth(app, alive, runState, callback) {
assert(typeof app === 'object');
assert(typeof alive === 'boolean');
assert(typeof runState === 'string');
assert(typeof callback === 'function');
var healthy = true; // app is unhealthy if not alive for 2 mins
var now = new Date();
if (alive || !(app.id in gLastSeen)) { // give never seen apps 2 mins to come up
gLastSeen[app.id] = { time: now, emailSent: false };
} else if (Math.abs(now - gLastSeen[app.id].time) > 120 * 1000) { // not seen for 2 mins
debug('app %s not seen for more than 2 mins, marking as unhealthy', app.id);
healthy = false;
}
if (!healthy && !gLastSeen[app.id].emailSent) {
gLastSeen[app.id].emailSent = true;
mailer.appDied(app);
}
appdb.setHealth(app.id, healthy, runState, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null); // app uninstalled?
if (error) return callback(error);
app.healthy = healthy;
app.runState = runState;
callback(null);
});
}
// # TODO should probably poll from the outside network instead of the docker network?
// callback is called with error for fatal errors and not if health check failed
function checkAppHealth(app, callback) {
// only check status of installed apps. we could possibly optimize more by checking runState as well
if (app.installationState !== appdb.ISTATE_INSTALLED) return callback(null);
var container = docker.getContainer(app.containerId),
manifest = app.manifest;
container.inspect(function (err, data) {
if (err || !data || !data.State) {
debug('Error inspecting container');
return setHealth(app, false, appdb.RSTATE_ERROR, callback);
}
if (data.State.Running !== true) {
debug('app %s has exited', app.id);
return setHealth(app, false, appdb.RSTATE_DEAD, callback);
}
var healthCheckUrl = 'http://127.0.0.1:' + app.httpPort + manifest.healthCheckPath;
superagent
.get(healthCheckUrl)
.timeout(HEALTHCHECK_INTERVAL)
.end(function (error, res) {
if (error || res.status !== 200) {
debug('app %s is not alive ', app.id);
setHealth(app, false, appdb.RSTATE_RUNNING, callback);
} else {
debug('app %s is alive', app.id);
setHealth(app, true, appdb.RSTATE_RUNNING, callback);
}
});
});
}
function processApps(callback) {
appdb.getAll(function (error, apps) {
if (error) return callback(error);
async.each(apps, checkAppHealth, function (error) {
if (error) console.error(error);
callback(null);
});
});
}
function run(callback) {
processApps(function (error) {
if (error) return callback(error);
setTimeout(run.bind(null, callback), HEALTHCHECK_INTERVAL);
});
}
if (require.main === module) {
initialize();
run(function (error) {
console.error('apphealth task exiting with error.', error);
process.exit(error ? 1 : 0);
});
}
BIN
Binary image changed, not shown (before: 6.4 KiB, after: 5.4 KiB)
Executable
+61
@@ -0,0 +1,61 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
require('debug').formatArgs = function formatArgs() {
arguments[0] = this.namespace + ' ' + arguments[0];
return arguments;
};
var appHealthMonitor = require('./src/apphealthmonitor.js'),
async = require('async'),
config = require('./src/config.js'),
ldap = require('./src/ldap.js'),
oauthproxy = require('./src/oauthproxy.js'),
server = require('./src/server.js');
console.log();
console.log('==========================================');
console.log(' Cloudron will use the following settings ');
console.log('==========================================');
console.log();
console.log(' Environment: ', config.CLOUDRON ? 'CLOUDRON' : 'TEST');
console.log(' Version: ', config.version());
console.log(' Admin Origin: ', config.adminOrigin());
console.log(' Appstore token: ', config.token());
console.log(' Appstore API server origin: ', config.apiServerOrigin());
console.log(' Appstore Web server origin: ', config.webServerOrigin());
console.log();
console.log('==========================================');
console.log();
async.series([
server.start,
ldap.start,
appHealthMonitor.start,
oauthproxy.start
], function (error) {
if (error) {
console.error('Error starting server', error);
process.exit(1);
}
});
var NOOP_CALLBACK = function () { };
process.on('SIGINT', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
process.on('SIGTERM', function () {
server.stop(NOOP_CALLBACK);
ldap.stop(NOOP_CALLBACK);
oauthproxy.stop(NOOP_CALLBACK);
setTimeout(process.exit.bind(process), 3000);
});
-143
@@ -1,143 +0,0 @@
/* jslint node: true */
'use strict';
var path = require('path'),
fs = require('fs'),
safe = require('safetydance'),
assert = require('assert'),
_ = require('underscore'),
path = require('path'),
mkdirp = require('mkdirp');
exports = module.exports = {
baseDir: baseDir,
get: get,
set: set,
// ifdefs to check environment
CLOUDRON: process.env.NODE_ENV === 'cloudron',
TEST: process.env.NODE_ENV === 'test',
LOCAL: process.env.NODE_ENV === 'local' || !process.env.NODE_ENV,
// convenience getters
appServerUrl: appServerUrl,
fqdn: fqdn,
token: token,
version: version,
isCustomDomain: isCustomDomain,
// these values are derived
adminOrigin: adminOrigin,
appFqdn: appFqdn,
zoneName: zoneName
};
var homeDir = process.env.HOME || process.env.HOMEPATH || process.env.USERPROFILE;
var data = { };
function baseDir() {
if (exports.CLOUDRON) return homeDir;
if (exports.TEST) return path.join(homeDir, '.yellowtenttest');
if (exports.LOCAL) return path.join(homeDir, '.yellowtent');
}
var cloudronConfigFileName = path.join(baseDir(), 'configs/cloudron.conf');
function saveSync() {
fs.writeFileSync(cloudronConfigFileName, JSON.stringify(data, null, 4)); // functions are ignored by JSON.stringify
}
(function initConfig() {
// setup defaults
if (exports.CLOUDRON) {
data.port = 3000;
data.appServerUrl = process.env.APP_SERVER_URL || null; // APP_SERVER_URL is set during bootstrap in the box's supervisor manifest
} else if (exports.TEST) {
data.port = 5454;
data.appServerUrl = 'http://localhost:6060'; // hock doesn't support https
} else if (exports.LOCAL) {
data.port = 3000;
data.appServerUrl = 'http://localhost:5050';
} else {
assert(false, 'Unknown environment. This should not happen!');
}
data.fqdn = 'localhost';
data.token = null;
data.mailServer = null;
data.mailUsername = null;
data.mailDnsRecordIds = [ ];
data.boxVersionsUrl = null;
data.version = null;
data.isCustomDomain = false;
if (safe.fs.existsSync(cloudronConfigFileName)) {
var existingData = safe.JSON.parse(safe.fs.readFileSync(cloudronConfigFileName, 'utf8'));
_.extend(data, existingData); // overwrite defaults with saved config
return;
}
mkdirp.sync(path.dirname(cloudronConfigFileName));
saveSync();
})();
// set(obj) or set(key, value)
function set(key, value) {
if (typeof key === 'object') {
var obj = key;
for (var k in obj) {
assert(k in data, 'config.js is missing key "' + k + '"');
data[k] = obj[k];
}
} else {
assert(key in data, 'config.js is missing key "' + key + '"');
data[key] = value;
}
saveSync();
}
function get(key) {
assert(typeof key === 'string');
return safe.query(data, key);
}
function appServerUrl() {
return get('appServerUrl');
}
function fqdn() {
return get('fqdn');
}
function appFqdn(location) {
assert(typeof location === 'string');
return isCustomDomain() ? location + '.' + fqdn() : location + '-' + fqdn();
}
function adminOrigin() {
return 'https://' + appFqdn('admin');
}
function token() {
return get('token');
}
function version() {
return get('version');
}
function isCustomDomain() {
return get('isCustomDomain');
}
function zoneName() {
if (isCustomDomain()) return fqdn(); // the appstore sets up the custom domain as a zone
// for shared domain name, strip out the hostname
return fqdn().substr(fqdn().indexOf('.') + 1);
}
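The appFqdn()/zoneName() rules in the config.js above can be illustrated with a small standalone sketch (re-implemented here for demonstration, not imported from the module; domain names are examples):

```javascript
'use strict';

// Standalone re-implementation of the naming rules from config.js, for
// illustration only.
function appFqdn(location, fqdn, isCustomDomain) {
    // custom domains get real subdomains; shared domains use hyphened prefixes
    return isCustomDomain ? location + '.' + fqdn : location + '-' + fqdn;
}

function zoneName(fqdn, isCustomDomain) {
    if (isCustomDomain) return fqdn; // a custom domain is itself a DNS zone
    return fqdn.substr(fqdn.indexOf('.') + 1); // strip the hostname part
}

console.log(appFqdn('admin', 'example.com', true));  // admin.example.com
console.log(appFqdn('admin', 'selfhost.io', false)); // admin-selfhost.io
console.log(zoneName('girish.selfhost.io', false));  // selfhost.io
```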
+47
@@ -0,0 +1,47 @@
#!/usr/bin/env node
'use strict';
var assert = require('assert'),
mailer = require('./src/mailer.js'),
safe = require('safetydance'),
path = require('path'),
util = require('util');
var COLLECT_LOGS_CMD = path.join(__dirname, 'src/scripts/collectlogs.sh');
function collectLogs(program, callback) {
assert.strictEqual(typeof program, 'string');
assert.strictEqual(typeof callback, 'function');
var logs = safe.child_process.execSync('sudo ' + COLLECT_LOGS_CMD + ' ' + program, { encoding: 'utf8' });
callback(null, logs);
}
function sendCrashNotification(processName) {
collectLogs(processName, function (error, result) {
if (error) {
console.error('Failed to collect logs.', error);
result = util.format('Failed to collect logs.', error);
}
console.log('Sending crash notification email for', processName);
mailer.sendCrashNotification(processName, result);
});
}
function main() {
if (process.argv.length !== 3) return console.error('Usage: crashnotifier.js <processName>');
var processName = process.argv[2];
console.log('Started crash notifier for', processName);
mailer.initialize(function (error) {
if (error) return console.error(error);
sendCrashNotification(processName);
});
}
main();
+125 -40
@@ -2,73 +2,158 @@
'use strict';
var _ejs = require('ejs'),
ejs = require('gulp-ejs'),
var ejs = require('gulp-ejs'),
gulp = require('gulp'),
del = require('del'),
path = require('path'),
concat = require('gulp-concat'),
uglify = require('gulp-uglify'),
serve = require('gulp-serve'),
sass = require('gulp-sass'),
sourcemaps = require('gulp-sourcemaps'),
fs = require('fs');
_ejs.filters.basename = function (obj) {
return path.basename(obj);
};
minifyCSS = require('gulp-minify-css'),
autoprefixer = require('gulp-autoprefixer'),
argv = require('yargs').argv;
gulp.task('3rdparty', function () {
return gulp.src([
'webadmin/src/3rdparty/**/*.js',
'webadmin/src/3rdparty/**/*.css',
'webadmin/src/3rdparty/**/*.otf',
'webadmin/src/3rdparty/**/*.eot',
'webadmin/src/3rdparty/**/*.svg',
'webadmin/src/3rdparty/**/*.ttf',
'webadmin/src/3rdparty/**/*.woff',
'webadmin/src/3rdparty/**/*.js'
gulp.src([
'webadmin/src/3rdparty/**/*.js',
'webadmin/src/3rdparty/**/*.map',
'webadmin/src/3rdparty/**/*.css',
'webadmin/src/3rdparty/**/*.otf',
'webadmin/src/3rdparty/**/*.eot',
'webadmin/src/3rdparty/**/*.svg',
'webadmin/src/3rdparty/**/*.ttf',
'webadmin/src/3rdparty/**/*.woff',
'webadmin/src/3rdparty/**/*.woff2'
])
.pipe(gulp.dest('webadmin/dist/3rdparty/'));
.pipe(gulp.dest('webadmin/dist/3rdparty/'))
.pipe(gulp.dest('setup/splash/website/3rdparty'));
gulp.src('node_modules/bootstrap-sass/assets/javascripts/bootstrap.min.js')
.pipe(gulp.dest('webadmin/dist/3rdparty/js'))
.pipe(gulp.dest('setup/splash/website/3rdparty/js'));
});
// --------------
// JavaScript
// --------------
gulp.task('js', ['js-index', 'js-setup', 'js-update', 'js-error'], function () {});
var oauth = {
clientId: argv.clientId || 'cid-webadmin',
clientSecret: argv.clientSecret || 'unused',
apiOrigin: argv.apiOrigin || ''
};
console.log();
console.log('Using OAuth credentials:');
console.log(' ClientId: %s', oauth.clientId);
console.log(' ClientSecret: %s', oauth.clientSecret);
console.log(' Cloudron API: %s', oauth.apiOrigin || 'default');
console.log();
gulp.task('js-index', function () {
return gulp.src(['webadmin/src/js/index.js', 'webadmin/src/js/client.js', 'webadmin/src/js/appstore.js', 'webadmin/src/js/main.js', 'webadmin/src/views/*.js'])
gulp.src([
'webadmin/src/js/index.js',
'webadmin/src/js/client.js',
'webadmin/src/js/appstore.js',
'webadmin/src/js/main.js',
'webadmin/src/views/*.js'
])
.pipe(ejs({ oauth: oauth }, { ext: '.js' }))
.pipe(sourcemaps.init())
.pipe(concat('index.js'))
.pipe(concat('index.js', { newLine: ';' }))
.pipe(uglify())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js-setup', function () {
return gulp.src(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'])
gulp.src(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'])
.pipe(ejs({ oauth: oauth }, { ext: '.js' }))
.pipe(sourcemaps.init())
.pipe(concat('setup.js'))
.pipe(concat('setup.js', { newLine: ';' }))
.pipe(uglify())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('js', ['js-index', 'js-setup'], function () {});
gulp.task('htmlViews', function () {
return gulp.src('webadmin/src/views/*.html')
.pipe(gulp.dest('webadmin/dist/views'));
gulp.task('js-error', function () {
gulp.src(['webadmin/src/js/error.js'])
.pipe(sourcemaps.init())
.pipe(uglify())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'));
});
gulp.task('html_templates', function () {
var config = JSON.parse(fs.readFileSync('./webadmin/deploymentConfig.json'));
return gulp.src('webadmin/src/*.ejs')
.pipe(ejs(config, { ext: '.html' }))
.pipe(gulp.dest('webadmin/dist'));
gulp.task('js-update', function () {
gulp.src(['webadmin/src/js/update.js'])
.pipe(sourcemaps.init())
.pipe(uglify())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist/js'))
.pipe(gulp.dest('setup/splash/website/js'));
});
gulp.task('html', ['html_templates', 'htmlViews'], function () {
return gulp.src('webadmin/src/*.html')
.pipe(gulp.dest('webadmin/dist'));
// --------------
// HTML
// --------------
gulp.task('html', ['html-views', 'html-update'], function () {
return gulp.src('webadmin/src/*.html').pipe(gulp.dest('webadmin/dist'));
});
gulp.task('clean', function (callback) {
del(['webadmin/dist'], callback);
gulp.task('html-update', function () {
return gulp.src(['webadmin/src/update.html']).pipe(gulp.dest('setup/splash/website'));
});
gulp.task('default', ['clean'], function () {
gulp.start('html', 'js', '3rdparty');
gulp.task('html-views', function () {
return gulp.src('webadmin/src/views/**/*.html').pipe(gulp.dest('webadmin/dist/views'));
});
// --------------
// CSS
// --------------
gulp.task('css', function () {
return gulp.src('webadmin/src/*.scss')
.pipe(sourcemaps.init())
.pipe(sass({ includePaths: ['node_modules/bootstrap-sass/assets/stylesheets/'] }).on('error', sass.logError))
.pipe(autoprefixer())
.pipe(minifyCSS())
.pipe(sourcemaps.write())
.pipe(gulp.dest('webadmin/dist'))
.pipe(gulp.dest('setup/splash/website'));
});
gulp.task('images', function () {
return gulp.src('webadmin/src/img/**')
.pipe(gulp.dest('webadmin/dist/img'));
});
// --------------
// Utilities
// --------------
gulp.task('watch', ['default'], function () {
gulp.watch(['webadmin/src/*.scss'], ['css']);
gulp.watch(['webadmin/src/img/*'], ['images']);
gulp.watch(['webadmin/src/**/*.html'], ['html']);
gulp.watch(['webadmin/src/views/*.html'], ['html-views']);
gulp.watch(['webadmin/src/js/update.js'], ['js-update']);
gulp.watch(['webadmin/src/js/error.js'], ['js-error']);
gulp.watch(['webadmin/src/js/setup.js', 'webadmin/src/js/client.js'], ['js-setup']);
gulp.watch(['webadmin/src/js/index.js', 'webadmin/src/js/client.js', 'webadmin/src/js/appstore.js', 'webadmin/src/js/main.js', 'webadmin/src/views/*.js'], ['js-index']);
gulp.watch(['webadmin/src/3rdparty/**/*'], ['3rdparty']);
});
gulp.task('clean', function () {
del.sync(['webadmin/dist', 'setup/splash/website']);
});
gulp.task('default', ['clean', 'html', 'js', '3rdparty', 'images', 'css'], function () {});
gulp.task('develop', ['watch'], serve({ root: 'webadmin/dist', port: 4000 }));
Executable
+74
@@ -0,0 +1,74 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
// remove timestamp from debug() based output
require('debug').formatArgs = function formatArgs() {
arguments[0] = this.namespace + ' ' + arguments[0];
return arguments;
};
var assert = require('assert'),
debug = require('debug')('box:janitor'),
async = require('async'),
tokendb = require('./src/tokendb.js'),
authcodedb = require('./src/authcodedb.js'),
database = require('./src/database.js');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
async.series([
database.initialize
], callback);
}
function cleanupExpiredTokens(callback) {
assert.strictEqual(typeof callback, 'function');
tokendb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired tokens.', result);
callback(null);
});
}
function cleanupExpiredAuthCodes(callback) {
assert.strictEqual(typeof callback, 'function');
authcodedb.delExpired(function (error, result) {
if (error) return callback(error);
debug('Cleaned up %s expired authcodes.', result);
callback(null);
});
}
function run() {
cleanupExpiredTokens(function (error) {
if (error) console.error(error);
cleanupExpiredAuthCodes(function (error) {
if (error) console.error(error);
process.exit(0);
});
});
}
if (require.main === module) {
initialize(function (error) {
if (error) {
console.error('janitor task exiting with error', error);
process.exit(1);
}
run();
});
}
+5 -1
@@ -1,8 +1,12 @@
var dbm = require('db-migrate');
var type = dbm.dataType;
var url = require('url');
exports.up = function(db, callback) {
    var dbName = url.parse(process.env.DATABASE_URL).path.substr(1); // remove slash
// by default, mysql collates case insensitively. 'utf8_general_cs' is not available
db.runSql('ALTER DATABASE ' + dbName + ' DEFAULT CHARACTER SET=utf8 DEFAULT COLLATE utf8_bin', callback);
};
exports.down = function(db, callback) {
@@ -1,21 +0,0 @@
var dbm = require('db-migrate');
var type = dbm.dataType;
var uuid = require('node-uuid');
exports.up = function(db, callback) {
var scopes = 'root,profile,users,apps,settings,roleAdmin';
var adminOrigin = 'https://admin-localhost';
// postinstall.sh creates the webadmin entry in production mode
if (process.env.NODE_ENV !== 'test') return callback(null);
db.runSql('INSERT INTO clients (id, appId, clientId, clientSecret, name, redirectURI, scope) ' +
'VALUES (?, ?, ?, ?, ?, ?, ?)', [ uuid.v4(), 'webadmin', 'cid-webadmin', 'unused', 'WebAdmin', adminOrigin, scopes ],
callback);
};
exports.down = function(db, callback) {
// not sure what is meaningful here
callback(null);
};
@@ -1,10 +0,0 @@
var dbm = require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('INSERT INTO settings (key, value) VALUES (?, ?)', [ 'naked_domain', null ], callback);
};
exports.down = function(db, callback) {
db.runSql('DELETE FROM settings WHERE key=?', [ 'naked_domain' ], callback);
};
@@ -1,15 +0,0 @@
var dbm = require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('CREATE TABLE appAddonConfigs(' +
' appId VARCHAR(512) NOT NULL,' +
' addonId VARCHAR(32) NOT NULL,' +
' value VARCHAR(512) NOT NULL,' +
' FOREIGN KEY(appId) REFERENCES apps(id))', callback);
};
exports.down = function(db, callback) {
db.runSql('DROP TABLE appAddonConfigs', callback);
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE users ADD COLUMN resetToken VARCHAR(128) DEFAULT ""', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE users DROP COLUMN resetToken', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,20 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('DELETE FROM tokens', [], function (error) {
if (error) console.error(error);
db.runSql('ALTER TABLE tokens MODIFY expires BIGINT', [], function (error) {
if (error) console.error(error);
callback(error);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE tokens MODIFY expires VARCHAR(512)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,16 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE authcodes ADD COLUMN expiresAt BIGINT NOT NULL', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE authcodes DROP COLUMN expiresAt', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE appPortBindings ADD COLUMN environmentVariable VARCHAR(128) NOT NULL', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE appPortBindings DROP COLUMN environmentVariable', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE appPortBindings DROP COLUMN containerPort', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE appPortBindings ADD COLUMN containerPort VARCHAR(5) NOT NULL', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,20 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('DELETE FROM tokens', [], function (error) {
if (error) console.error(error);
db.runSql('ALTER TABLE tokens CHANGE userId identifier VARCHAR(128) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE tokens CHANGE identifier userId VARCHAR(128) NOT NULL', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN version', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN version VARCHAR(32)', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,16 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN healthy, ADD COLUMN health VARCHAR(128)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN health, ADD COLUMN healthy INTEGER', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN lastBackupId VARCHAR(128)', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN lastBackupId', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN createdAt TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN createdAt', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,12 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
// everyday at 1am
db.runSql('INSERT settings (name, value) VALUES("autoupdate_pattern", ?)', [ '00 00 1 * * *' ], callback);
};
exports.down = function(db, callback) {
db.runSql('DELETE FROM settings WHERE name="autoupdate_pattern"', [ ], callback);
};
@@ -0,0 +1,15 @@
dbm = dbm || require('db-migrate');
var safe = require('safetydance');
var type = dbm.dataType;
exports.up = function(db, callback) {
var tz = safe.fs.readFileSync('/etc/timezone', 'utf8');
tz = tz ? tz.trim() : 'America/Los_Angeles';
db.runSql('INSERT settings (name, value) VALUES("time_zone", ?)', [ tz ], callback);
};
exports.down = function(db, callback) {
db.runSql('DELETE FROM settings WHERE name="time_zone"', [ ], callback);
};
@@ -0,0 +1,24 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
var async = require('async');
exports.up = function(db, callback) {
// http://stackoverflow.com/questions/386294/what-is-the-maximum-length-of-a-valid-email-address
async.series([
db.runSql.bind(db, 'ALTER TABLE users MODIFY username VARCHAR(254)'),
db.runSql.bind(db, 'ALTER TABLE users ADD CONSTRAINT users_username UNIQUE (username)'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY email VARCHAR(254)'),
db.runSql.bind(db, 'ALTER TABLE users ADD CONSTRAINT users_email UNIQUE (email)'),
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE users DROP INDEX users_username'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY username VARCHAR(512)'),
db.runSql.bind(db, 'ALTER TABLE users DROP INDEX users_email'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY email VARCHAR(512)'),
], callback);
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
var async = require('async');
exports.up = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE users MODIFY username VARCHAR(254) NOT NULL'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY email VARCHAR(254) NOT NULL'),
], callback);
};
exports.down = function(db, callback) {
async.series([
db.runSql.bind(db, 'ALTER TABLE users MODIFY username VARCHAR(254)'),
db.runSql.bind(db, 'ALTER TABLE users MODIFY email VARCHAR(254)'),
], callback);
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN lastManifestJson VARCHAR(2048)', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN lastManifestJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE lastManifestJson lastBackupConfigJson VARCHAR(2048)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps CHANGE lastBackupConfigJson lastManifestJson VARCHAR(2048)', [], function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,17 @@
dbm = dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('ALTER TABLE apps ADD COLUMN oldConfigJson VARCHAR(2048)', function (error) {
if (error) console.error(error);
callback(error);
});
};
exports.down = function(db, callback) {
db.runSql('ALTER TABLE apps DROP COLUMN oldConfigJson', function (error) {
if (error) console.error(error);
callback(error);
});
};
@@ -0,0 +1,10 @@
var dbm = global.dbm || require('db-migrate');
var type = dbm.dataType;
exports.up = function(db, callback) {
db.runSql('DELETE FROM settings', [ ], callback);
};
exports.down = function(db, callback) {
callback();
};
+24 -22
@@ -2,64 +2,66 @@ CREATE TABLE IF NOT EXISTS users(
id VARCHAR(128) NOT NULL UNIQUE,
username VARCHAR(512) NOT NULL,
email VARCHAR(512) NOT NULL,
_password VARCHAR(512) NOT NULL,
publicPem VARCHAR(2048) NOT NULL,
_privatePemCipher VARCHAR(2048) NOT NULL,
_salt VARCHAR(512) NOT NULL,
password VARCHAR(1024) NOT NULL,
salt VARCHAR(512) NOT NULL,
createdAt VARCHAR(512) NOT NULL,
modifiedAt VARCHAR(512) NOT NULL,
admin INTEGER NOT NULL,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS tokens(
accessToken VARCHAR(512) NOT NULL UNIQUE,
userId VARCHAR(512) NOT NULL,
clientId VARCHAR(512),
accessToken VARCHAR(128) NOT NULL UNIQUE,
userId VARCHAR(128) NOT NULL,
clientId VARCHAR(128),
scope VARCHAR(512) NOT NULL,
expires VARCHAR(512) NOT NULL,
PRIMARY KEY(accessToken));
CREATE TABLE IF NOT EXISTS clients(
id VARCHAR(512) NOT NULL UNIQUE,
appId VARCHAR(512) NOT NULL,
clientId VARCHAR(512) NOT NULL,
id VARCHAR(128) NOT NULL UNIQUE,
appId VARCHAR(128) NOT NULL,
clientSecret VARCHAR(512) NOT NULL,
name VARCHAR(512) NOT NULL,
redirectURI VARCHAR(512) NOT NULL,
scope VARCHAR(512) NOT NULL,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS apps(
id VARCHAR(512) NOT NULL UNIQUE,
appStoreId VARCHAR(512) NOT NULL,
id VARCHAR(128) NOT NULL UNIQUE,
appStoreId VARCHAR(128) NOT NULL,
version VARCHAR(32),
installationState VARCHAR(512) NOT NULL,
installationProgress VARCHAR(512),
runState VARCHAR(512),
healthy INTEGER,
containerId VARCHAR(128),
manifestJson VARCHAR,
manifestJson VARCHAR(2048),
httpPort INTEGER,
location VARCHAR(512) NOT NULL UNIQUE,
location VARCHAR(128) NOT NULL UNIQUE,
dnsRecordId VARCHAR(512),
accessRestriction VARCHAR(512),
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS appPortBindings(
hostPort VARCHAR(5) NOT NULL UNIQUE,
hostPort INTEGER NOT NULL UNIQUE,
containerPort VARCHAR(5) NOT NULL,
appId VARCHAR(512) NOT NULL,
appId VARCHAR(128) NOT NULL,
FOREIGN KEY(appId) REFERENCES apps(id),
PRIMARY KEY(hostPort));
CREATE TABLE IF NOT EXISTS authcodes(
authCode VARCHAR(512) NOT NULL UNIQUE,
userId VARCHAR(512) NOT NULL,
clientId VARCHAR(512) NOT NULL,
authCode VARCHAR(128) NOT NULL UNIQUE,
userId VARCHAR(128) NOT NULL,
clientId VARCHAR(128) NOT NULL,
PRIMARY KEY(authCode));
CREATE TABLE IF NOT EXISTS settings(
key VARCHAR(512) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL UNIQUE,
value VARCHAR(512),
PRIMARY KEY(key));
PRIMARY KEY(name));
CREATE TABLE IF NOT EXISTS appAddonConfigs(
appId VARCHAR(128) NOT NULL,
addonId VARCHAR(32) NOT NULL,
value VARCHAR(512) NOT NULL,
FOREIGN KEY(appId) REFERENCES apps(id));
+43 -35
@@ -1,74 +1,82 @@
#### WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
#### This file is not used by any code and is here to document the latest schema
#### General ideas
#### Default char set is utf8 and DEFAULT COLLATE is utf8_bin. Collate affects comparisons in WHERE and ORDER
#### Strict mode is enabled
#### VARCHAR - stored as part of table row (use for strings)
#### TEXT - stored offline from table row (use for strings)
#### BLOB - stored offline from table row (use for binary data)
#### https://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
CREATE TABLE IF NOT EXISTS users(
id VARCHAR(128) NOT NULL UNIQUE,
username VARCHAR(512) NOT NULL,
email VARCHAR(512) NOT NULL,
_password VARCHAR(512) NOT NULL,
publicPem VARCHAR(2048) NOT NULL,
_privatePemCipher VARCHAR(2048) NOT NULL,
_salt VARCHAR(512) NOT NULL,
username VARCHAR(254) NOT NULL UNIQUE,
email VARCHAR(254) NOT NULL UNIQUE,
password VARCHAR(1024) NOT NULL,
salt VARCHAR(512) NOT NULL,
createdAt VARCHAR(512) NOT NULL,
modifiedAt VARCHAR(512) NOT NULL,
admin INTEGER NOT NULL,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS tokens(
accessToken VARCHAR(512) NOT NULL UNIQUE,
userId VARCHAR(512) NOT NULL,
clientId VARCHAR(512),
accessToken VARCHAR(128) NOT NULL UNIQUE,
identifier VARCHAR(128) NOT NULL,
clientId VARCHAR(128),
scope VARCHAR(512) NOT NULL,
expires VARCHAR(512) NOT NULL,
expires BIGINT NOT NULL,
PRIMARY KEY(accessToken));
CREATE TABLE IF NOT EXISTS clients(
id VARCHAR(512) NOT NULL UNIQUE,
appId VARCHAR(512) NOT NULL,
clientId VARCHAR(512) NOT NULL,
id VARCHAR(128) NOT NULL UNIQUE,
appId VARCHAR(128) NOT NULL,
clientSecret VARCHAR(512) NOT NULL,
name VARCHAR(512) NOT NULL,
redirectURI VARCHAR(512) NOT NULL,
scope VARCHAR(512) NOT NULL,
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS apps(
id VARCHAR(512) NOT NULL UNIQUE,
appStoreId VARCHAR(512) NOT NULL,
version VARCHAR(32),
id VARCHAR(128) NOT NULL UNIQUE,
appStoreId VARCHAR(128) NOT NULL,
installationState VARCHAR(512) NOT NULL,
installationProgress VARCHAR(512),
runState VARCHAR(512),
healthy INTEGER,
health VARCHAR(128),
containerId VARCHAR(128),
manifestJson VARCHAR,
httpPort INTEGER,
location VARCHAR(512) NOT NULL UNIQUE,
manifestJson VARCHAR(2048),
httpPort INTEGER, -- this is the nginx proxy port and not manifest.httpPort
location VARCHAR(128) NOT NULL UNIQUE,
dnsRecordId VARCHAR(512),
accessRestriction VARCHAR(512),
createdAt TIMESTAMP(2) NOT NULL DEFAULT CURRENT_TIMESTAMP,
lastBackupId VARCHAR(128),
lastBackupConfigJson VARCHAR(2048), -- used for appstore and non-appstore installs. it's here so it's easy to do REST validation
PRIMARY KEY(id));
CREATE TABLE IF NOT EXISTS appPortBindings(
hostPort VARCHAR(5) NOT NULL UNIQUE,
containerPort VARCHAR(5) NOT NULL,
appId VARCHAR(512) NOT NULL,
hostPort INTEGER NOT NULL UNIQUE,
environmentVariable VARCHAR(128) NOT NULL,
appId VARCHAR(128) NOT NULL,
FOREIGN KEY(appId) REFERENCES apps(id),
PRIMARY KEY(hostPort));
CREATE TABLE IF NOT EXISTS appAddonConfigs(
appId VARCHAR(512) NOT NULL,
addonId VARCHAR(32) NOT NULL,
value VARCHAR(512),
FOREIGN KEY(appId) REFERENCES apps(id));
CREATE TABLE IF NOT EXISTS authcodes(
authCode VARCHAR(512) NOT NULL UNIQUE,
userId VARCHAR(512) NOT NULL,
clientId VARCHAR(512) NOT NULL,
authCode VARCHAR(128) NOT NULL UNIQUE,
userId VARCHAR(128) NOT NULL,
clientId VARCHAR(128) NOT NULL,
expiresAt BIGINT NOT NULL,
PRIMARY KEY(authCode));
CREATE TABLE IF NOT EXISTS settings(
key VARCHAR(512) NOT NULL UNIQUE,
name VARCHAR(128) NOT NULL UNIQUE,
value VARCHAR(512),
PRIMARY KEY(key));
PRIMARY KEY(name));
CREATE TABLE IF NOT EXISTS appAddonConfigs(
appId VARCHAR(128) NOT NULL,
addonId VARCHAR(32) NOT NULL,
value VARCHAR(512) NOT NULL,
FOREIGN KEY(appId) REFERENCES apps(id));
+1955 -981
File diff suppressed because it is too large
-119
@@ -1,119 +0,0 @@
#!/usr/bin/env node
'use strict';
require('supererror')({ splatchError: true });
var express = require('express'),
url = require('url'),
async = require('async'),
assert = require('assert'),
debug = require('debug')('box:proxy'),
proxy = require('proxy-middleware'),
session = require('cookie-session'),
database = require('./src/database.js'),
appdb = require('./src/appdb.js'),
clientdb = require('./src/clientdb.js'),
config = require('./config.js'),
http = require('http');
var gSessions = {};
var gProxyMiddlewareCache = {};
var gApp = express();
var gHttpServer = http.createServer(gApp);
var CALLBACK_URI = '/callback';
var PORT = 4000;
function startServer(callback) {
assert(typeof callback === 'function');
gHttpServer.on('error', console.error);
gApp.use(session({
keys: ['blue', 'cheese', 'is', 'something']
}));
gApp.use(function (req, res, next) {
if (req.session && gSessions[req.session.sessid]) return next();
if (req.path === CALLBACK_URI) {
// FIXME we need to exchange the authCode and verify it
req.session.sessid = req.query.authCode;
// this is a simple in memory auth store
gSessions[req.session.sessid] = 'ok';
debug('user verified.');
// now redirect to the actual initially requested URL
res.redirect(req.session.returnTo);
} else {
var port = parseInt(req.headers['x-cloudron-proxy-port'], 10);
if (!Number.isFinite(port)) {
console.error('Failed to parse nginx proxy header to get app port.');
return res.send(500, 'Routing error. No forwarded port.');
}
debug('begin verifying user for app on port %s.', port);
appdb.getByHttpPort(port, function (error, result) {
if (error) {
console.error('Unknown app.', error);
return res.send(500, 'Unknown app.');
}
clientdb.getByAppId('proxy-' + result.id, function (error, result) {
if (error) {
console.error('Unknown OAuth client.', error);
return res.send(500, 'Unknown OAuth client.');
}
req.session.port = port;
req.session.returnTo = result.redirectURI + req.path;
var callbackURL = result.redirectURI + CALLBACK_URI;
var scope = 'profile,roleUser';
var clientId = result.clientId;
var oauthLogin = config.adminOrigin() + '/api/v1/oauth/dialog/authorize?response_type=code&client_id=' + clientId + '&redirect_uri=' + callbackURL + '&scope=' + scope;
debug('begin OAuth flow for client %s.', result.name);
// begin the OAuth flow
res.redirect(oauthLogin);
});
});
}
});
gApp.use(function (req, res, next) {
var port = req.session.port;
debug('proxy request for port %s with path %s.', port, req.path);
var proxyMiddleware = gProxyMiddlewareCache[port];
if (!proxyMiddleware) {
console.log('Adding proxy middleware for port %d', port);
proxyMiddleware = proxy(url.parse('http://127.0.0.1:' + port));
gProxyMiddlewareCache[port] = proxyMiddleware;
}
proxyMiddleware(req, res, next);
});
gHttpServer.listen(PORT, callback);
}
async.series([
database.initialize,
startServer
], function (error) {
if (error) {
console.error('Failed to start proxy server.', error);
process.exit(1);
}
console.log('Proxy server listening...');
});
+65 -65
@@ -1,96 +1,96 @@
{
"name": "yellowtent",
"description": "Yellow tent",
"name": "Cloudron",
"description": "Main code for a cloudron",
"version": "0.0.1",
"private": "true",
"author": {
"name": "Yellow tent authors",
"email": "girish@forwardbias.in"
"name": "Cloudron authors"
},
"repository": {
"type": "git"
},
"engines": [
"node >= 0.10.0"
"node >= 0.12.0"
],
"bin": {
"yellowtent": "./server.js"
},
"dependencies": {
"async": "^0.6.2",
"body-parser": "~1.9.3",
"commander": "^2.2.0",
"async": "^1.2.1",
"aws-sdk": "^2.1.46",
"body-parser": "^1.13.1",
"cloudron-manifestformat": "^1.7.0",
"connect-ensure-login": "^0.1.1",
"connect-lastmile": "0.0.8",
"connect-timeout": "~1.4.0",
"cookie-parser": "1.1.0",
"connect-lastmile": "0.0.13",
"connect-timeout": "^1.5.0",
"cookie-parser": "^1.3.5",
"cookie-session": "^1.1.0",
"csurf": "^1.6.1",
"db-migrate": "~0.7.1",
"debug": "~0.8.1",
"dockerode": "~2.0.5",
"ejs": "^1.0.0",
"encfs": "^0.1.1",
"express": "~4.2.0",
"express-session": "~1.1.0",
"js-yaml": "~3.2.2",
"json": "~9.0.1",
"memorystream": "~0.2.0",
"mime": "^1.2.11",
"mkdirp": "~0.3.5",
"morgan": "~1.0.1",
"multiparty": "http://registry.npmjs.org/multiparty/-/multiparty-4.0.0.tgz",
"native-dns": "~0.6.1",
"node-uuid": "^1.4.1",
"nodejs-disks": "~0.2.1",
"nodemailer": "~1.3.0",
"nodemailer-smtp-transport": "~0.1.13",
"cron": "^1.0.9",
"csurf": "^1.6.6",
"db-migrate": "^0.9.2",
"debug": "^2.2.0",
"dockerode": "^2.2.2",
"ejs": "^2.2.4",
"ejs-cli": "^1.0.1",
"express": "^4.12.4",
"express-session": "^1.11.3",
"hat": "0.0.3",
"json": "^9.0.3",
"ldapjs": "^0.7.1",
"memorystream": "^0.3.0",
"mime": "^1.3.4",
"morgan": "^1.6.0",
"multiparty": "^4.1.2",
"mysql": "^2.7.0",
"native-dns": "^0.7.0",
"node-uuid": "^1.4.3",
"nodemailer": "^1.3.0",
"nodemailer-smtp-transport": "^1.0.3",
"oauth2orize": "^1.0.1",
"once": "^1.3.0",
"passport": "~0.2.1",
"once": "^1.3.2",
"passport": "^0.2.2",
"passport-http": "^0.2.2",
"passport-http-bearer": "^1.0.1",
"passport-local": "^1.0.0",
"passport-oauth2-client-password": "~0.1.2",
"password-generator": "~0.2.3",
"proxy-middleware": "~0.5.1",
"readdirp": "^1.0.1",
"rimraf": "^2.2.6",
"safetydance": "0.0.12",
"semver": "~4.2.0",
"serve-favicon": "~2.1.7",
"split": "^0.3.0",
"sqlite3": "^3.0.0",
"superagent": "~0.17.0",
"supererror": "~0.6.0",
"underscore": "~1.7.0",
"ursa": "^0.8.0",
"validator": "~3.22.1"
"passport-oauth2-client-password": "^0.1.2",
"password-generator": "^1.0.0",
"proxy-middleware": "^0.13.0",
"safetydance": "0.0.19",
"semver": "^4.3.6",
"serve-favicon": "^2.2.0",
"split": "^1.0.0",
"superagent": "~0.21.0",
"supererror": "^0.7.0",
"tail-stream": "https://registry.npmjs.org/tail-stream/-/tail-stream-0.2.1.tgz",
"underscore": "^1.7.0",
"valid-url": "^1.0.9",
"validator": "^3.30.0"
},
"devDependencies": {
"apidoc": "*",
"aws-sdk": "~2.0.23",
"bootstrap-sass": "^3.3.3",
"del": "^1.1.1",
"expect.js": "*",
"gulp": "^3.8.10",
"gulp": "^3.8.11",
"gulp-autoprefixer": "^2.3.0",
"gulp-concat": "^2.4.3",
"gulp-ejs": "^1.0.0",
"gulp-sourcemaps": "^1.3.0",
"hock": "~0.2.5",
"husky": "~0.6.2",
"gulp-minify-css": "^1.1.3",
"gulp-sass": "^2.0.1",
"gulp-serve": "^1.0.0",
"gulp-sourcemaps": "^1.5.2",
"gulp-uglify": "^1.1.0",
"hock": "~1.2.0",
"istanbul": "*",
"js2xmlparser": "^1.0.0",
"mocha": "*",
"nock": "~0.43.1",
"redis": "~0.12.1",
"s3-cli": "~0.11.1",
"semver": "~4.2.0",
"sinon": "~1.10.3"
"nock": "^2.6.0",
"node-sass": "^3.0.0-alpha.0",
"redis": "^0.12.1",
"sinon": "^1.12.2",
"yargs": "^3.15.0"
},
"scripts": {
"create_testdb": "rm -rf $HOME/.yellowtenttest/*; mkdir -p $HOME/.yellowtenttest/data; NODE_ENV=test DATABASE_URL=sqlite3:///$HOME/.yellowtenttest/data/cloudron.sqlite node_modules/.bin/db-migrate up",
"migrate": "mkdir -p $HOME/.yellowtent/data; DATABASE_URL=sqlite3:///$HOME/.yellowtent/data/cloudron.sqlite node_modules/.bin/db-migrate up",
"migrate_data": "DATABASE_URL=sqlite3:///home/yellowtent/data/cloudron.sqlite db-migrate up",
"test": "scripts/checkInstall && npm run-script create_testdb && NODE_ENV=test ./node_modules/istanbul/lib/cli.js test $1 ./node_modules/mocha/bin/_mocha -- -R spec ./src/test ./src/routes/test",
"migrate_local": "DATABASE_URL=mysql://root:@localhost/box node_modules/.bin/db-migrate up",
"migrate_test": "BOX_ENV=test DATABASE_URL=mysql://root:@localhost/boxtest node_modules/.bin/db-migrate up",
"test": "npm run migrate_test && src/test/setupTest && BOX_ENV=test ./node_modules/istanbul/lib/cli.js test $1 ./node_modules/mocha/bin/_mocha -- -R spec ./src/test ./src/routes/test",
"postmerge": "/bin/true",
"precommit": "/bin/true",
"prepush": "npm test",
-144
@@ -1,144 +0,0 @@
#!/bin/bash
set -eu
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
readonly JSON="${SOURCE_DIR}/node_modules/.bin/json"
[ "$(uname -s)" == "Darwin" ] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
readonly VERSIONS_URL_DEV="https://s3.amazonaws.com/cloudron-releases/versions-dev.json"
readonly VERSIONS_S3_URL_DEV="s3://cloudron-releases/versions-dev.json"
readonly VERSIONS_URL_STAGING="https://s3.amazonaws.com/cloudron-releases/versions-staging.json"
readonly VERSIONS_S3_URL_STAGING="s3://cloudron-releases/versions-staging.json"
if [[ ! -f "${SOURCE_DIR}/../installer/scripts/digitalOceanFunctions.sh" ]]; then
echo "Could not locate digitalOceanFunctions.sh"
exit 1
fi
source "${SOURCE_DIR}/../installer/scripts/digitalOceanFunctions.sh"
new_versions_file=""
source_tarball_url=""
image_id=""
cmd=""
new_version=""
changelog="If I told you, I'd have to kill you"
upgrade="autodetect"
versions_url="${VERSIONS_URL_DEV}"
versions_s3_url="${VERSIONS_S3_URL_DEV}"
args=$($GNU_GETOPT -o "" -l "dev,staging,code:,image:,rerelease,new,list,revert,changelog:,release:,upgrade" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--dev) shift;;
--staging) versions_url="${VERSIONS_URL_STAGING}"; versions_s3_url="${VERSIONS_S3_URL_STAGING}"; shift;;
--code) source_tarball_url="$2"; shift 2;;
--image) image_id="$2"; shift 2;;
--rerelease) cmd="rerelease"; shift;;
--new) cmd="new"; shift;;
--release) cmd="release"; new_versions_file="$2"; shift 2;;
--list) cmd="list"; shift;;
--revert) cmd="revert"; shift;;
--changelog) changelog="$2"; shift 2;;
--upgrade) upgrade="true"; shift;;
--) shift; break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
shift $(expr $OPTIND - 1)
download_current() {
versions_url="$1"
# download the existing version file if the user hasn't provided one
local current_versions_file=$(mktemp -t box-versions 2>/dev/null || mktemp)
if ! wget -q -O "${current_versions_file}" "${versions_url}"; then
echo "Error downloading versions file"
exit 1
fi
echo "${current_versions_file}"
}
if [[ "${cmd}" == "list" ]]; then
cat "$(download_current "${versions_url}")"
exit 0
elif [[ "${cmd}" == "release" ]]; then
if [[ ! -f "${new_versions_file}" ]]; then
echo "${new_versions_file} cannot be found"
exit 1
fi
elif [[ "${cmd}" == "new" ]]; then
if [[ -z "${source_tarball_url}" || -z "${image_id}" ]]; then
echo "--code and --image is required"
exit 1
fi
new_version="0.0.1"
image_name=$(get_image_name "${image_id}")
new_versions_file=$(mktemp -t box-versions 2>/dev/null || mktemp)
cat > "${new_versions_file}" <<EOF
{
"0.0.1": {
"sourceTarballUrl": "${source_tarball_url}",
"imageId": ${image_id},
"imageName": "${image_name}",
"changelog": [ "Let's start at the very beginning, a very good way to start" ],
"date": "$(date -u)",
"next": null
}
}
EOF
elif [[ "${cmd}" == "revert" ]]; then
new_versions_file=$(download_current "${versions_url}")
last_version=$(cat "${new_versions_file}" | $JSON -ka | tail -n 1)
second_last_version=$(cat "${new_versions_file}" | $JSON -ka | tail -n 2 | head -n 1)
echo "Removing $last_version and making $second_last_version the last release"
$JSON -q -I -f "${new_versions_file}" -e "delete this['${last_version}']"
$JSON -q -I -f "${new_versions_file}" -e "this['${second_last_version}'].next = null"
else
new_versions_file=$(download_current "${versions_url}")
# modify existing versions.json
if [[ -z "${source_tarball_url}" && -z "${image_id}" && "${cmd}" != "rerelease" ]]; then
echo "--code or --image is required"
exit 1
fi
readonly last_version=$(cat "${new_versions_file}" | $JSON -ka | tail -n 1)
if [[ -z "${source_tarball_url}" ]]; then
source_tarball_url=$($JSON -f "${new_versions_file}" -D, "${last_version},sourceTarballUrl")
echo "Using the previous code url : ${source_tarball_url}"
fi
if [[ -z "${image_id}" ]]; then
image_id=$($JSON -f "${new_versions_file}" -D, "${last_version},imageId")
echo "Using the previous image id : ${image_id}"
fi
if [[ "${upgrade}" == "autodetect" ]]; then
old_image_id=$($JSON -f "${new_versions_file}" -D, "${last_version},imageId")
upgrade=$([[ "${old_image_id}" != "${image_id}" ]] && echo "true" || echo "false")
fi
new_version=$($SOURCE_DIR/node_modules/.bin/semver -i "${last_version}")
echo "Releasing version ${new_version}"
image_name=$(get_image_name "${image_id}")
$JSON -q -I -f "${new_versions_file}" -e "this['${last_version}'].next = '${new_version}'"
$JSON -q -I -f "${new_versions_file}" -e "this['${new_version}'] = { 'sourceTarballUrl': '${source_tarball_url}', 'imageId': ${image_id}, 'imageName': '${image_name}', 'changelog': [ \"${changelog}\" ], 'upgrade': ${upgrade}, 'date': '$(date -u)', 'next': null }"
fi
echo "Verifying new versions file"
$SOURCE_DIR/release/verify.js "${new_versions_file}"
echo "Uploading new versions file"
$SOURCE_DIR/node_modules/.bin/s3-cli put --acl-public --default-mime-type "application/json" "${new_versions_file}" "${versions_s3_url}"
cat "${new_versions_file}"
@@ -1,102 +0,0 @@
#!/bin/bash
set -eu
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
readonly JSON="${SOURCE_DIR}/node_modules/.bin/json"
readonly SEMVER="${SOURCE_DIR}/node_modules/.bin/semver"
[ $(uname -s) == "Darwin" ] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
readonly VERSIONS_URL_DEV="https://s3.amazonaws.com/cloudron-releases/versions-dev.json"
readonly VERSIONS_URL_STAGING="https://s3.amazonaws.com/cloudron-releases/versions-staging.json"
readonly VERSIONS_S3_URL_STAGING="s3://cloudron-releases/versions-staging.json"
verify_tag() {
tag="$1"
git rev-parse --verify "tags/$1" 2>/dev/null
}
download() {
# download the existing version file if the user hasn't provided one
local tmp_file=$(mktemp -t stage 2>/dev/null || mktemp)
if wget -q -O "${tmp_file}" "${1}"; then
echo "${tmp_file}"
fi
}
read_changelog() {
version="$1"
changelog_file="${SOURCE_DIR}/release/changelogs/changelog-$1"
if [[ -f "${changelog_file}" ]]; then
cat "${changelog_file}" | grep -v "^#"
fi
}
if [[ $# -lt 1 ]]; then
echo "Usage: stage.sh <dev-version>"
exit 1
fi
dev_version="$1"
dev_versions_file=$(download "${VERSIONS_URL_DEV}")
if [[ -z "${dev_versions_file}" ]]; then
echo "Error downloading dev versions file"
exit 1
fi
dev_version_info=$($JSON -f "${dev_versions_file}" -D, "${dev_version}")
if [[ -z "${dev_version_info}" ]]; then
echo "Version ${dev_version} not found in dev versions file ${dev_versions_file}"
exit 1
fi
staging_versions_file=$(download "${VERSIONS_URL_STAGING}") ## TODO: this can fail
if [[ -z "${staging_versions_file}" ]]; then
echo "Creating new staging release file"
staging_versions_file=$(mktemp -t stage 2>/dev/null || mktemp)
echo "{}" > "${staging_versions_file}"
readonly staging_last_version="0.0.0"
staging_new_version="0.0.1"
upgrade="false"
else
readonly staging_last_version=$(cat "${staging_versions_file}" | $JSON -ka | tail -n 1)
staging_new_version=$($SEMVER -i "${staging_last_version}")
$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_last_version}'].next = '${staging_new_version}'"
last_image_id=$($JSON -f "${staging_versions_file}" -D, "${staging_last_version},imageId")
new_image_id=$($JSON -f "${dev_versions_file}" -D, "${dev_version},imageId")
upgrade=$([[ "${last_image_id}" != "${new_image_id}" ]] && echo "true" || echo "false")
fi
#TODO: check if the tag matches the sha1 in the sourceTarballUrl
if ! verify_tag "v${staging_new_version}"; then
echo "No git tag named v${staging_new_version} found"
exit 1
fi
changelog=$(read_changelog "${staging_new_version}")
if [[ -z "${changelog}" ]]; then
echo "Missing changelog file or empty change log"
exit 1
fi
echo "Releasing version ${staging_new_version}"
$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_new_version}'] = ${dev_version_info}"
#$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_new_version}'].changelog = '[ "${changelog}" ]'"
$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_new_version}'].upgrade = ${upgrade}"
$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_new_version}'].date = '$(date -u)'"
$JSON -q -I -f "${staging_versions_file}" -e "this['${staging_new_version}'].next = null"
echo "Verifying new versions file"
$SOURCE_DIR/release/verify.js "${staging_versions_file}"
echo "Uploading new versions file"
$SOURCE_DIR/node_modules/.bin/s3-cli put --acl-public --default-mime-type "application/json" "${staging_versions_file}" "${VERSIONS_S3_URL_STAGING}"
cat "${staging_versions_file}" | tee $SOURCE_DIR/release/versions-staging.json
@@ -1,55 +0,0 @@
#!/usr/bin/env node
var AWS = require('aws-sdk'),
fs = require('fs'),
path = require('path'),
safe = require('safetydance'),
semver = require('semver'),
url = require('url'),
util = require('util'); // needed for util.isArray below
function die(msg) {
console.error(msg);
process.exit(1);
}
function verify(versionsFileName) {
// check if the json is valid
var versionsJson = safe.JSON.parse(fs.readFileSync(versionsFileName));
if (!versionsJson) {
die(versionsFileName + ' is not valid json : ' + safe.error);
}
// check all the keys
var sortedVersions = Object.keys(versionsJson).sort();
sortedVersions.forEach(function (version, index) {
if (typeof versionsJson[version].imageId !== 'number') die('version ' + version + ' does not have proper imageId');
if (typeof versionsJson[version].imageName !== 'string' || !versionsJson[version].imageName.length) die('version ' + version + ' does not have proper imageName');
if ('changeLog' in versionsJson[version] && !util.isArray(versionsJson[version].changeLog)) die('version ' + version + ' does not have proper changeLog');
if (typeof versionsJson[version].date !== 'string' || ((new Date(versionsJson[version].date)).toString() === 'Invalid Date')) die('invalid date or missing date');
if (versionsJson[version].next !== null && typeof versionsJson[version].next !== 'string') die('version ' + version + ' does not have proper next');
if (typeof versionsJson[version].sourceTarballUrl !== 'string') die('version ' + version + ' does not have proper sourceTarballUrl');
var tarballUrl = url.parse(versionsJson[version].sourceTarballUrl);
if (tarballUrl.protocol !== 'https:') die('sourceTarballUrl must be https');
if (!/.tar.gz$/.test(tarballUrl.path)) die('sourceTarballUrl must be tar.gz');
var nextVersion = versionsJson[version].next;
// despite having the 'next' field, the appstore code currently relies on all versions being sorted based on semver.compare (see boxversions.js)
if (nextVersion && semver.gt(version, nextVersion)) die('next version cannot be less than current @' + version);
});
// check that package.json version is in versions.json
var currentVersion = require('../package.json').version;
if (sortedVersions.indexOf(currentVersion) === -1) {
die('package.json version is not present in versions.json');
}
}
if (process.argv.length === 3) {
verify(process.argv[2]);
process.exit(0);
} else {
console.log('verify.js <versions_file>');
}
@@ -1,7 +0,0 @@
{
"0.0.1": {
"revision": "9f09f8e7a8633e8b3341bb9c610f5f631ccd288c",
"imageId": 7531071,
"next": null
}
}
@@ -1,37 +0,0 @@
#!/bin/bash
echo
echo "Starting Cloudron at port 443"
echo
readonly BOX_SRC_DIR="$(cd $(dirname "$0"); pwd)"
readonly NGINX_ROOT=~/.yellowtent/nginx
readonly PROVISION_VERSION=0.1
readonly PROVISION_BOX_VERSIONS_URL=0.1
readonly DATA_DIR=~/.yellowtent/data
readonly FQDN=admin-localhost
mkdir -p "${NGINX_ROOT}/applications"
mkdir -p "${NGINX_ROOT}/cert"
mkdir -p "${DATA_DIR}"
# bring the database up to date
npm run-script migrate
cp setup/start/nginx/nginx.conf "${NGINX_ROOT}/nginx.conf"
cp setup/start/nginx/mime.types "${NGINX_ROOT}/mime.types"
cp setup/start/nginx/cert/* "${NGINX_ROOT}/cert/"
# adjust the generated nginx config for local use
touch "${NGINX_ROOT}/naked_domain.conf"
sed -e "s/##ADMIN_FQDN##/${FQDN}/" -e "s|##BOX_SRC_DIR##|${BOX_SRC_DIR}|" setup/start/nginx/admin.conf_template > "${NGINX_ROOT}/applications/admin.conf"
sed -e "s/user www-data/user ${USER}/" -i "${NGINX_ROOT}/nginx.conf"
# add webadmin oauth client
readonly WEBADMIN_ID=abcdefg
readonly WEBADMIN_SCOPES="root,profile,users,apps,settings,roleAdmin"
sqlite3 "${DATA_DIR}/cloudron.sqlite" "INSERT OR REPLACE INTO clients (id, appId, clientId, clientSecret, name, redirectURI, scope) VALUES (\"${WEBADMIN_ID}\", \"webadmin\", \"cid-webadmin\", \"secret-webadmin\", \"WebAdmin\", \"https://${FQDN}\", \"${WEBADMIN_SCOPES}\")"
# start nginx
sudo nginx -c nginx.conf -p "${NGINX_ROOT}"
@@ -1,40 +0,0 @@
#!/bin/bash
set -eu
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
# reset sudo timestamp to avoid wrong success
sudo -k || sudo --reset-timestamp
# check that all scripts have sudo access
scripts=("${SOURCE_DIR}/src/scripts/rmappdir.sh" \
"${SOURCE_DIR}/src/scripts/reloadnginx.sh" \
"${SOURCE_DIR}/src/scripts/backup.sh" \
"${SOURCE_DIR}/src/scripts/reboot.sh" \
"${SOURCE_DIR}/src/scripts/reloadcollectd.sh")
for script in "${scripts[@]}"; do
if [[ $(sudo -n "${script}" --check 2>/dev/null) != "OK" ]]; then
echo ""
echo "${script} does not have sudo access."
echo "You have to add the lines below to /etc/sudoers.d/yellowtent."
echo ""
echo "Defaults!${script} env_keep=HOME"
echo "${USER} ALL=(ALL) NOPASSWD: ${script}"
echo ""
exit 1
fi
done
if ! docker inspect girish/test:0.6 >/dev/null 2>/dev/null; then
echo "Run 'docker pull girish/test:0.6' for tests to run"
exit 1
fi
if ! docker inspect girish/redis:0.1 >/dev/null 2>/dev/null; then
echo "Run 'docker pull girish/redis:0.1' for tests to run"
exit 1
fi
exit 0
@@ -1,46 +0,0 @@
#!/bin/bash
set -eu
[[ ! -f "${HOME}/.s3cfg" ]] && echo "~/.s3cfg missing" && exit 1
# Only GNU getopt supports long options. OS X comes bundled with the BSD getopt
# brew install gnu-getopt to get the GNU getopt on OS X
[[ $(uname -s) == "Darwin" ]] && GNU_GETOPT="/usr/local/opt/gnu-getopt/bin/getopt" || GNU_GETOPT="getopt"
readonly GNU_GETOPT
args=$(${GNU_GETOPT} -o "" -l "revision:" -n "$0" -- "$@")
eval set -- "${args}"
commitish="HEAD"
while true; do
case "$1" in
--revision) commitish="$2"; shift 2;;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
done
readonly SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
readonly TMPDIR=${TMPDIR:-/tmp} # why is this not set on mint?
version=$(cd "${SOURCE_DIR}" && git rev-parse "${commitish}")
bundle_dir=$(mktemp -d -t box 2>/dev/null || mktemp -d box-XXXXXXXXXX --tmpdir=$TMPDIR)
bundle_file="${TMPDIR}/box-${version}.tar.gz"
chmod "o+rx,g+rx" "${bundle_dir}" # otherwise the extracted tarball directory won't be readable by others/group
echo "Checking out code [${version}] into ${bundle_dir}"
(cd "${SOURCE_DIR}" && git archive --format=tar HEAD | (cd "${bundle_dir}" && tar xf -))
echo "Installing modules"
cd "${bundle_dir}" && npm install --production
cd "${bundle_dir}" && tar czvf "${bundle_file}" .
echo "Uploading bundle to S3"
${SOURCE_DIR}/node_modules/.bin/s3-cli put --acl-public "${bundle_file}" "s3://cloudron-releases/box-${version}.tar.gz"
echo "Cleaning up ${bundle_dir}"
rm -rf "${bundle_dir}" "${bundle_file}"
@@ -0,0 +1,57 @@
This document gives the design of this setup code.
box code should be delivered in the form of a (docker) container.
This is not the case currently, but we want to structure the code
in that spirit.
### container.sh
This contains code that essentially goes into a Dockerfile.
It applies static configuration on top of a base image. Currently,
the yellowtent user is created in the installer base image, but it
could very well be created here.
The idea is that the installer would simply remove the old box container
and replace it with a new one for an update.
Because we do not package things as Docker yet, we should be careful
about the code here. We have to expect remnants of older setup code;
for example, older systemd or nginx configs might still be around.
The config directory is _part_ of the container and is not a VOLUME,
which is to say its files will be nuked from one update to the next.
The data directory is a VOLUME. Contents of this directory are expected
to survive an update. This is a good place for config files that
are "dynamic" and need to survive restarts, for example the infra
version (see below) or the mysql/postgresql data.
### start.sh
* It is called in 3 modes - new, update, restore.
* The first thing it does is the static container.sh setup.
* It then downloads any box restore data and restores the box db from the
backup.
* It then proceeds to call the db-migrate script.
* It then does dynamic configuration like setting up nginx, collectd.
* It then sets up the cloud infra (setup_infra.sh) and creates cloudron.conf.
* Box services are then started.
### setup_infra.sh
This sets up containers like graphite, mail and the addon containers.
Containers are relaunched based on the INFRA_VERSION. The script compares
the version here with the version in the file DATA_DIR/INFRA_VERSION.
If they match, the containers are not recreated and nothing needs to be done;
nginx and collectd configs are already part of the data and the containers are running.
If they do not match, it deletes all containers (including app containers) and starts
them all afresh. The important thing here is that DATA_DIR is never removed across
updates, so only the containers are recreated, not the data.
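The version-comparison behaviour can be sketched as follows. This is a minimal standalone sketch, not the actual setup_infra.sh: `DATA_DIR` is a temp directory here, and the "recreate" branch only writes the version file where the real script would delete and relaunch containers.

```shell
#!/bin/bash
set -eu

# Hypothetical stand-ins for illustration; the real script uses the
# box data directory and the INFRA_VERSION file sourced from setup code.
INFRA_VERSION=12
DATA_DIR="$(mktemp -d)"

check_infra() {
    existing="none"
    [[ -f "${DATA_DIR}/INFRA_VERSION" ]] && existing=$(cat "${DATA_DIR}/INFRA_VERSION")
    if [[ "${existing}" == "${INFRA_VERSION}" ]]; then
        echo "match"      # containers untouched, nothing to do
    else
        echo "recreate"   # real script deletes and relaunches all containers here
        echo "${INFRA_VERSION}" > "${DATA_DIR}/INFRA_VERSION"
    fi
}

first=$(check_infra)   # no version file yet -> recreate
second=$(check_infra)  # versions now match -> containers left alone
echo "first=${first} second=${second}"
```

Note that DATA_DIR itself is never removed, which is what makes the second run a no-op.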
@@ -0,0 +1,17 @@
#!/bin/bash
# If you change the infra version, be sure to put a warning
# in the change log
INFRA_VERSION=12
# WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
# These constants are used in the installer script as well
BASE_IMAGE=cloudron/base:0.5.1
MYSQL_IMAGE=cloudron/mysql:0.5.0
POSTGRESQL_IMAGE=cloudron/postgresql:0.5.0
MONGODB_IMAGE=cloudron/mongodb:0.5.0
REDIS_IMAGE=cloudron/redis:0.5.0 # if you change this, fix src/addons.js as well
MAIL_IMAGE=cloudron/mail:0.5.0
GRAPHITE_IMAGE=cloudron/graphite:0.4.0
@@ -3,34 +3,68 @@
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
json="${script_dir}/../node_modules/.bin/json"
# IMPORTANT: Fix cloudron.js:doUpdate if you add/remove any arg. keep these sorted for readability
arg_api_server_origin=""
arg_box_versions_url=""
arg_fqdn=""
arg_is_custom_domain="false"
arg_restore_key=""
arg_restore_url=""
arg_retire="false"
arg_tls_cert=""
arg_tls_key=""
arg_app_server_url=""
arg_fqdn=""
arg_token=""
arg_version=""
arg_is_custom_domain="false"
arg_web_server_origin=""
arg_backup_key=""
arg_aws=""
args=$(getopt -o "" -l "boxversionsurl:,data:,version:" -n "$0" -- "$@")
args=$(getopt -o "" -l "data:,retire" -n "$0" -- "$@")
eval set -- "${args}"
while true; do
case "$1" in
--boxversionsurl) arg_box_versions_url="$2";;
--retire)
arg_retire="true"
shift
;;
--data)
read -r arg_app_server_url arg_fqdn arg_token arg_is_custom_domain <<EOF
$(echo "$2" | $json appServerUrl fqdn token isCustomDomain | tr '\n' ' ')
# only read mandatory non-empty parameters here
read -r arg_api_server_origin arg_web_server_origin arg_fqdn arg_token arg_is_custom_domain arg_box_versions_url arg_version <<EOF
$(echo "$2" | $json apiServerOrigin webServerOrigin fqdn token isCustomDomain boxVersionsUrl version | tr '\n' ' ')
EOF
# read possibly empty parameters here
arg_tls_cert=$(echo "$2" | $json tlsCert)
arg_tls_key=$(echo "$2" | $json tlsKey)
arg_restore_url=$(echo "$2" | $json restoreUrl)
[[ "${arg_restore_url}" == "null" ]] && arg_restore_url=""
arg_restore_key=$(echo "$2" | $json restoreKey)
[[ "${arg_restore_key}" == "null" ]] && arg_restore_key=""
arg_backup_key=$(echo "$2" | $json backupKey)
[[ "${arg_backup_key}" == "null" ]] && arg_backup_key=""
arg_aws=$(echo "$2" | $json aws)
[[ "${arg_aws}" == "null" ]] && arg_aws=""
shift 2
;;
--version) arg_version="$2";;
--) break;;
*) echo "Unknown option $1"; exit 1;;
esac
shift 2
done
echo "Parsed arguments:"
echo "api server: ${arg_api_server_origin}"
echo "box versions url: ${arg_box_versions_url}"
echo "fqdn: ${arg_fqdn}"
echo "custom domain: ${arg_is_custom_domain}"
echo "restore key: ${arg_restore_key}"
echo "restore url: ${arg_restore_url}"
echo "tls cert: ${arg_tls_cert}"
echo "tls key: ${arg_tls_key}"
echo "token: ${arg_token}"
echo "version: ${arg_version}"
echo "web server: ${arg_web_server_origin}"
@@ -0,0 +1,43 @@
#!/bin/bash
set -eu -o pipefail
# This file can be used in Dockerfile
readonly container_files="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/container"
readonly CONFIG_DIR="/home/yellowtent/configs"
readonly DATA_DIR="/home/yellowtent/data"
########## create config directory
rm -rf "${CONFIG_DIR}"
sudo -u yellowtent mkdir "${CONFIG_DIR}"
########## systemd
cp -r "${container_files}/systemd/." /etc/systemd/system/
systemctl daemon-reload
systemctl enable cloudron.target
########## sudoers
rm /etc/sudoers.d/*
cp "${container_files}/sudoers" /etc/sudoers.d/yellowtent
########## collectd
rm -rf /etc/collectd
ln -sfF "${DATA_DIR}/collectd" /etc/collectd
########## apparmor docker profile
cp "${container_files}/docker-cloudron-app.apparmor" /etc/apparmor.d/docker-cloudron-app
systemctl restart apparmor
########## nginx
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${DATA_DIR}/nginx" /etc/nginx
########## mysql
cp "${container_files}/mysql.cnf" /etc/mysql/mysql.cnf
########## Enable services
update-rc.d -f collectd defaults
@@ -0,0 +1,32 @@
#include <tunables/global>
profile docker-cloudron-app flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
ptrace peer=@{profile_name},
network,
capability,
file,
umount,
deny @{PROC}/sys/fs/** wklx,
deny @{PROC}/sysrq-trigger rwklx,
deny @{PROC}/mem rwklx,
deny @{PROC}/kmem rwklx,
deny @{PROC}/sys/kernel/[^s][^h][^m]* wklx,
deny @{PROC}/sys/kernel/*/** wklx,
deny mount,
deny /sys/[^f]*/** wklx,
deny /sys/f[^s]*/** wklx,
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/efi/efivars/** rwklx,
deny /sys/kernel/security/** rwklx,
}
@@ -0,0 +1,7 @@
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
# http://bugs.mysql.com/bug.php?id=68514
[mysqld]
performance_schema=OFF
max_connections=50
@@ -0,0 +1,29 @@
Defaults!/home/yellowtent/box/src/scripts/createappdir.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/createappdir.sh
Defaults!/home/yellowtent/box/src/scripts/rmappdir.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/rmappdir.sh
Defaults!/home/yellowtent/box/src/scripts/reloadnginx.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reloadnginx.sh
Defaults!/home/yellowtent/box/src/scripts/backupbox.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/backupbox.sh
Defaults!/home/yellowtent/box/src/scripts/backupapp.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/backupapp.sh
Defaults!/home/yellowtent/box/src/scripts/restoreapp.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/restoreapp.sh
Defaults!/home/yellowtent/box/src/scripts/reboot.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reboot.sh
Defaults!/home/yellowtent/box/src/scripts/reloadcollectd.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/reloadcollectd.sh
Defaults!/home/yellowtent/box/src/scripts/backupswap.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/backupswap.sh
Defaults!/home/yellowtent/box/src/scripts/collectlogs.sh env_keep="HOME BOX_ENV"
yellowtent ALL=(root) NOPASSWD: /home/yellowtent/box/src/scripts/collectlogs.sh
@@ -0,0 +1,17 @@
[Unit]
Description=Cloudron Admin
OnFailure=crashnotifier@%n.service
StopWhenUnneeded=true
[Service]
Type=idle
WorkingDirectory=/home/yellowtent/box
Restart=always
ExecStart=/usr/bin/node --max_old_space_size=150 /home/yellowtent/box/box.js
Environment="HOME=/home/yellowtent" "USER=yellowtent" "DEBUG=box*,connect-lastmile" "BOX_ENV=cloudron" "NODE_ENV=production"
KillMode=process
User=yellowtent
Group=yellowtent
MemoryLimit=200M
TimeoutStopSec=5s
@@ -0,0 +1,10 @@
[Unit]
Description=Cloudron Smart Cloud
Documentation=https://cloudron.io/documentation.html
StopWhenUnneeded=true
Requires=box.service janitor.timer
After=box.service janitor.timer
# AllowIsolate=yes
[Install]
WantedBy=multi-user.target
@@ -0,0 +1,15 @@
# http://northernlightlabs.se/systemd.status.mail.on.unit.failure
[Unit]
Description=Cloudron Crash Notifier for %i
# otherwise, systemd will kill this unit immediately as nobody requires it
StopWhenUnneeded=false
[Service]
Type=idle
WorkingDirectory=/home/yellowtent/box
ExecStart="/home/yellowtent/box/crashnotifier.js" %I
Environment="HOME=/home/yellowtent" "USER=yellowtent" "DEBUG=box*,connect-lastmile" "BOX_ENV=cloudron" "NODE_ENV=production"
KillMode=process
User=yellowtent
Group=yellowtent
MemoryLimit=50M
@@ -0,0 +1,15 @@
[Unit]
Description=Cloudron Janitor
OnFailure=crashnotifier@%n.service
[Service]
Type=simple
WorkingDirectory=/home/yellowtent/box
Restart=no
ExecStart=/usr/bin/node /home/yellowtent/box/janitor.js
Environment="HOME=/home/yellowtent" "USER=yellowtent" "DEBUG=box*,connect-lastmile" "BOX_ENV=cloudron" "NODE_ENV=production"
KillMode=process
User=yellowtent
Group=yellowtent
MemoryLimit=50M
WatchdogSec=30
@@ -0,0 +1,10 @@
[Unit]
Description=Cloudron Janitor
StopWhenUnneeded=true
[Timer]
# this activates it immediately
OnBootSec=0
OnUnitActiveSec=30min
Unit=janitor.service
@@ -1,80 +0,0 @@
types {
text/html html htm shtml;
text/css css;
text/xml xml;
image/gif gif;
image/jpeg jpeg jpg;
application/x-javascript js;
application/atom+xml atom;
application/rss+xml rss;
text/mathml mml;
text/plain txt;
text/vnd.sun.j2me.app-descriptor jad;
text/vnd.wap.wml wml;
text/x-component htc;
image/png png;
image/tiff tif tiff;
image/vnd.wap.wbmp wbmp;
image/x-icon ico;
image/x-jng jng;
image/x-ms-bmp bmp;
image/svg+xml svg svgz;
image/webp webp;
application/java-archive jar war ear;
application/mac-binhex40 hqx;
application/msword doc;
application/pdf pdf;
application/postscript ps eps ai;
application/rtf rtf;
application/vnd.ms-excel xls;
application/vnd.ms-powerpoint ppt;
application/vnd.wap.wmlc wmlc;
application/vnd.google-earth.kml+xml kml;
application/vnd.google-earth.kmz kmz;
application/x-7z-compressed 7z;
application/x-cocoa cco;
application/x-java-archive-diff jardiff;
application/x-java-jnlp-file jnlp;
application/x-makeself run;
application/x-perl pl pm;
application/x-pilot prc pdb;
application/x-rar-compressed rar;
application/x-redhat-package-manager rpm;
application/x-sea sea;
application/x-shockwave-flash swf;
application/x-stuffit sit;
application/x-tcl tcl tk;
application/x-x509-ca-cert der pem crt;
application/x-xpinstall xpi;
application/xhtml+xml xhtml;
application/zip zip;
application/octet-stream bin exe dll;
application/octet-stream deb;
application/octet-stream dmg;
application/octet-stream eot;
application/octet-stream iso img;
application/octet-stream msi msp msm;
audio/midi mid midi kar;
audio/mpeg mp3;
audio/ogg ogg;
audio/x-m4a m4a;
audio/x-realaudio ra;
video/3gpp 3gpp 3gp;
video/mp4 mp4;
video/mpeg mpeg mpg;
video/quicktime mov;
video/webm webm;
video/x-flv flv;
video/x-m4v m4v;
video/x-mng mng;
video/x-ms-asf asx asf;
video/x-ms-wmv wmv;
video/x-msvideo avi;
}
@@ -1,63 +0,0 @@
user www-data;
worker_processes 1;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
root ##SETUP_WEBSITE_DIR##;
index index.html;
server {
listen 80;
location / {
# redirect everything to HTTPS
return 301 https://$host$request_uri;
}
}
# HTTPS server
server {
listen 443;
error_page 503 /index.html;
location /index.html {
# allow access to this page
add_header Cache-Control no-cache;
}
location /3rdparty/bootstrap.min.css {
# allow access to this page
add_header Cache-Control no-cache;
}
location /progress.json {
# allow access to this page
add_header Cache-Control no-cache;
}
location / {
return 503;
}
ssl on;
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
}
}
File diff suppressed because one or more lines are too long
@@ -1,36 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height" />
<title> Cloudron Webadmin </title>
<link href="3rdparty/bootstrap.min.css" rel="stylesheet">
</head>
<body>
<!-- Modal update progress -->
<div class="modal show" id="updateProgressModal" tabindex="-1" role="dialog" aria-labelledby="updateProgressModalLabel" aria-hidden="true" data-keyboard ="false" data-backdrop="static">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="updateProgressModalLabel">Update in progress...</h4>
</div>
<div class="modal-body">
<div class="progress progress-striped active">
<div class="progress-bar progress-bar-success" role="progressbar" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100" style="width: 100%"></div>
</div>
</div>
</div>
<!-- /.modal-content -->
</div>
<!-- /.modal-dialog -->
</div>
<script>
setTimeout(location.reload.bind(location, true /* forceGet from server */), 10000);
</script>
</body>
</html>
@@ -1,32 +1,40 @@
#!/bin/bash
set -eu
set -eu -o pipefail
readonly NGINX_CONFIG_DIR="/home/yellowtent/setup/configs/nginx" # do not reuse configs since it will be removed by installer
readonly SETUP_WEBSITE_DIR="/home/yellowtent/setup/website"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly BOX_SRC_DIR="/home/yellowtent/box"
readonly DATA_DIR="/home/yellowtent/data"
readonly ADMIN_LOCATION="my" # keep this in sync with constants.js
source "${script_dir}/INFRA_VERSION" # this injects INFRA_VERSION
echo "Setting up nginx update page"
source "${script_dir}/argparser.sh" "$@" # this injects the arg_* variables used below
# keep this in sync with config.js appFqdn()
admin_fqdn=$([[ "${arg_is_custom_domain}" == "true" ]] && echo "${ADMIN_LOCATION}.${arg_fqdn}" || echo "${ADMIN_LOCATION}-${arg_fqdn}")
admin_origin="https://${admin_fqdn}"
# copy the website
rm -rf "${SETUP_WEBSITE_DIR}" && mkdir -p "${SETUP_WEBSITE_DIR}"
cp -r "${script_dir}/splash/website/"* "${SETUP_WEBSITE_DIR}"
# create nginx config
rm -rf "${NGINX_CONFIG_DIR}" && mkdir -p "${NGINX_CONFIG_DIR}"
sed -e "s|##SETUP_WEBSITE_DIR##|${SETUP_WEBSITE_DIR}|" "${script_dir}/splash/nginx/nginx.conf" > "${NGINX_CONFIG_DIR}/nginx.conf"
cp "${script_dir}/splash/nginx/mime.types" "${NGINX_CONFIG_DIR}/mime.types"
mkdir -p "${NGINX_CONFIG_DIR}/cert"
echo "${arg_tls_cert}" > "${NGINX_CONFIG_DIR}/cert/host.cert"
echo "${arg_tls_key}" > "${NGINX_CONFIG_DIR}/cert/host.key"
infra_version="none"
[[ -f "${DATA_DIR}/INFRA_VERSION" ]] && infra_version=$(cat "${DATA_DIR}/INFRA_VERSION")
if [[ "${arg_retire}" == "true" || "${infra_version}" != "${INFRA_VERSION}" ]]; then
rm -f ${DATA_DIR}/nginx/applications/*
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"~^(.+)\$\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
else
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"splash\", \"sourceDir\": \"${SETUP_WEBSITE_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
fi
# link in the new nginx config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${NGINX_CONFIG_DIR}" /etc/nginx
touch "${SETUP_WEBSITE_DIR}/progress.json"
echo '{ "update": { "percent": "10", "message": "Updating cloudron software" }, "backup": null }' > "${SETUP_WEBSITE_DIR}/progress.json"
nginx -s reload
@@ -1,196 +1,177 @@
#!/bin/bash
# Count installer files so that we can correlate install and postinstall logs
install_count=$(find /var/log/cloudron -name "installer*" | wc -l)
exec > >(tee "/var/log/cloudron/start-$install_count.log")
exec 2>&1
set -eux
set -eu -o pipefail
echo "==== Cloudron Start ===="
readonly USER="yellowtent"
# NOTE: Do NOT use BOX_SRC_DIR for accessing code and config files. This script will be run from a temp directory
# and the whole code will be relocated to BOX_SRC_DIR by the installer. Use paths relative to script_dir or box_src_tmp_dir
readonly BOX_SRC_DIR="/home/${USER}/box"
readonly DATA_DIR="/home/${USER}/data"
readonly CONFIG_DIR="/home/${USER}/configs"
readonly MAIL_SERVER_IP="172.17.120.120" # hardcoded in haraka container
readonly SETUP_PROGRESS_JSON="/home/yellowtent/setup/website/progress.json"
readonly ADMIN_LOCATION="my" # keep this in sync with constants.js
readonly curl="curl --fail --connect-timeout 20 --retry 10 --retry-delay 2 --max-time 2400"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
box_src_tmp_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
source "${script_dir}/argparser.sh" "$@" # this injects the arg_* variables used below
admin_fqdn=$([[ "${arg_is_custom_domain}" == "true" ]] && echo "admin.${arg_fqdn}" || echo "admin-${arg_fqdn}")
# keep this in sync with config.js appFqdn()
admin_fqdn=$([[ "${arg_is_custom_domain}" == "true" ]] && echo "${ADMIN_LOCATION}.${arg_fqdn}" || echo "${ADMIN_LOCATION}-${arg_fqdn}")
admin_origin="https://${admin_fqdn}"
readonly is_update=$([[ -d "${DATA_DIR}/box" ]] && echo "true" || echo "false")
set_progress() {
local progress="$1"
local percent="$1"
local message="$2"
echo "==== ${message} ===="
(echo "{ \"progress\": \"${progress}\", \"message\": \"${message}\" }" > "${SETUP_PROGRESS_JSON}") 2> /dev/null || true # as this will fail in non-update mode
echo "==== ${percent} - ${message} ===="
(echo "{ \"update\": { \"percent\": \"${percent}\", \"message\": \"${message}\" }, \"backup\": {} }" > "${SETUP_PROGRESS_JSON}") 2> /dev/null || true # as this will fail in non-update mode
}
set_progress "5" "Configuring Sudoers file"
cat > /etc/sudoers.d/yellowtent <<EOF
Defaults!${BOX_SRC_DIR}/src/scripts/rmappdir.sh env_keep=HOME
${USER} ALL=(root) NOPASSWD: ${BOX_SRC_DIR}/src/scripts/rmappdir.sh
set_progress "1" "Create container"
$script_dir/container.sh
Defaults!${BOX_SRC_DIR}/src/scripts/reloadnginx.sh env_keep=HOME
${USER} ALL=(root) NOPASSWD: ${BOX_SRC_DIR}/src/scripts/reloadnginx.sh
set_progress "10" "Ensuring directories"
# keep these in sync with paths.js
[[ "${is_update}" == "false" ]] && btrfs subvolume create "${DATA_DIR}/box"
mkdir -p "${DATA_DIR}/box/appicons"
mkdir -p "${DATA_DIR}/box/mail"
mkdir -p "${DATA_DIR}/graphite"
Defaults!${BOX_SRC_DIR}/src/scripts/backup.sh env_keep=HOME
${USER} ALL=(root) NOPASSWD: ${BOX_SRC_DIR}/src/scripts/backup.sh
mkdir -p "${DATA_DIR}/mysql"
mkdir -p "${DATA_DIR}/postgresql"
mkdir -p "${DATA_DIR}/mongodb"
mkdir -p "${DATA_DIR}/snapshots"
mkdir -p "${DATA_DIR}/addons"
mkdir -p "${DATA_DIR}/collectd/collectd.conf.d"
Defaults!${BOX_SRC_DIR}/src/scripts/reboot.sh env_keep=HOME
${USER} ALL=(root) NOPASSWD: ${BOX_SRC_DIR}/src/scripts/reboot.sh
# bookkeep the version as part of data
echo "{ \"version\": \"${arg_version}\", \"boxVersionsUrl\": \"${arg_box_versions_url}\" }" > "${DATA_DIR}/box/version"
Defaults!${BOX_SRC_DIR}/src/scripts/reloadcollectd.sh env_keep=HOME
${USER} ALL=(root) NOPASSWD: ${BOX_SRC_DIR}/src/scripts/reloadcollectd.sh
# remove old snapshots. if we do want to keep this around, we will have to fix the chown -R below
# which currently fails because these are readonly fs
echo "Cleaning up snapshots"
find "${DATA_DIR}/snapshots" -mindepth 1 -maxdepth 1 | xargs --no-run-if-empty btrfs subvolume delete
EOF
# restart mysql to make sure it has latest config
service mysql restart
set_progress "10" "Migrating data"
readonly mysql_root_password="password"
mysqladmin -u root -ppassword password password # reset default root password
mysql -u root -p${mysql_root_password} -e 'CREATE DATABASE IF NOT EXISTS box'
if [[ -n "${arg_restore_url}" ]]; then
set_progress "15" "Downloading restore data"
echo "Downloading backup: ${arg_restore_url} and key: ${arg_restore_key}"
while true; do
if $curl -L "${arg_restore_url}" | openssl aes-256-cbc -d -pass "pass:${arg_restore_key}" | tar -zxf - -C "${DATA_DIR}/box"; then break; fi
echo "Failed to download data, trying again"
done
set_progress "21" "Setting up MySQL"
if [[ -f "${DATA_DIR}/box/box.mysqldump" ]]; then
echo "Importing existing database into MySQL"
mysql -u root -p${mysql_root_password} box < "${DATA_DIR}/box/box.mysqldump"
fi
fi
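The restore path above streams the backup through `curl | openssl aes-256-cbc -d | tar`. A local round-trip of that same format (paths and passphrase are illustrative) shows the encrypt/decrypt pair is symmetric:

```shell
# Sketch of the backup format used above: a gzipped tarball encrypted with
# aes-256-cbc and a passphrase, then decrypted and extracted in one pipe.
workdir=$(mktemp -d)
mkdir -p "${workdir}/src" "${workdir}/dst"
echo "hello" > "${workdir}/src/file.txt"
# create and encrypt (mirrors the server-side backup step)
tar -czf - -C "${workdir}/src" . | openssl aes-256-cbc -pass pass:testkey > "${workdir}/backup.enc"
# decrypt and extract (mirrors the restore step)
openssl aes-256-cbc -d -pass pass:testkey < "${workdir}/backup.enc" | tar -zxf - -C "${workdir}/dst"
cat "${workdir}/dst/file.txt"
```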
set_progress "25" "Migrating data"
sudo -u "${USER}" -H bash <<EOF
set -eux
cd "${box_src_tmp_dir}"
PATH="${PATH}:${box_src_tmp_dir}/node_modules/.bin" npm run-script migrate_data
set -eu
cd "${BOX_SRC_DIR}"
BOX_ENV=cloudron DATABASE_URL=mysql://root:${mysql_root_password}@localhost/box "${BOX_SRC_DIR}/node_modules/.bin/db-migrate" up
EOF
set_progress "15" "Setup nginx"
nginx_config_dir="${CONFIG_DIR}/nginx"
nginx_appconfig_dir="${CONFIG_DIR}/nginx/applications"
set_progress "28" "Setup collectd"
cp "${script_dir}/start/collectd.conf" "${DATA_DIR}/collectd/collectd.conf"
# collectd 5.4.1 has some bug where we simply cannot get it to create df-vda1
mkdir -p "${DATA_DIR}/graphite/whisper/collectd/localhost/"
vda1_id=$(blkid -s UUID -o value /dev/vda1)
ln -sfF "df-disk_by-uuid_${vda1_id}" "${DATA_DIR}/graphite/whisper/collectd/localhost/df-vda1"
service collectd restart
# copy nginx config
mkdir -p "${nginx_appconfig_dir}"
cp "${script_dir}/start/nginx/nginx.conf" "${nginx_config_dir}/nginx.conf"
cp "${script_dir}/start/nginx/mime.types" "${nginx_config_dir}/mime.types"
touch "${nginx_config_dir}/naked_domain.conf"
sed -e "s/##ADMIN_FQDN##/${admin_fqdn}/" -e "s|##BOX_SRC_DIR##|${BOX_SRC_DIR}|" "${script_dir}/start/nginx/admin.conf_template" > "${nginx_appconfig_dir}/admin.conf"
set_progress "30" "Setup nginx"
# setup naked domain to use admin by default. app restoration will overwrite this config
mkdir -p "${DATA_DIR}/nginx/applications"
cp "${script_dir}/start/nginx/mime.types" "${DATA_DIR}/nginx/mime.types"
certificate_dir="${nginx_config_dir}/cert"
mkdir -p "${certificate_dir}"
echo "${arg_tls_cert}" > ${certificate_dir}/host.cert
echo "${arg_tls_key}" > ${certificate_dir}/host.key
# generate the main nginx config file
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/nginx.ejs" \
-O "{ \"sourceDir\": \"${BOX_SRC_DIR}\" }" > "${DATA_DIR}/nginx/nginx.conf"
# link nginx config to system config
unlink /etc/nginx 2>/dev/null || rm -rf /etc/nginx
ln -s "${nginx_config_dir}" /etc/nginx
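The link step above has to cope with `/etc/nginx` being either a real directory (fresh install) or a stale symlink (update): `unlink` handles the symlink case and `rm -rf` the directory case. A sketch with illustrative paths:

```shell
# Sketch of the link-or-replace pattern: make the link location a plain
# directory first, then replace it with a symlink either way.
target=$(mktemp -d)          # stands in for ${nginx_config_dir}
linkparent=$(mktemp -d)
link="${linkparent}/nginx"   # stands in for /etc/nginx
mkdir -p "${link}"           # simulate a pre-existing real directory
unlink "${link}" 2>/dev/null || rm -rf "${link}"
ln -s "${target}" "${link}"
readlink "${link}"
```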
# generate these for update code paths as well to overwrite splash
${BOX_SRC_DIR}/node_modules/.bin/ejs-cli -f "${script_dir}/start/nginx/appconfig.ejs" \
-O "{ \"vhost\": \"${admin_fqdn}\", \"adminOrigin\": \"${admin_origin}\", \"endpoint\": \"admin\", \"sourceDir\": \"${BOX_SRC_DIR}\" }" > "${DATA_DIR}/nginx/applications/admin.conf"
chown "${USER}:${USER}" -R "/home/${USER}"
mkdir -p "${DATA_DIR}/nginx/cert"
echo "${arg_tls_cert}" > ${DATA_DIR}/nginx/cert/host.cert
echo "${arg_tls_key}" > ${DATA_DIR}/nginx/cert/host.key
set_progress "20" "Removing existing container"
# removing containers ensures they are launched with the latest config updates
# restore code in apptask does not delete old containers
existing_containers=$(docker ps -qa)
echo "Remove containers: ${existing_containers}"
if [[ -n "${existing_containers}" ]]; then
echo "${existing_containers}" | xargs docker rm -f
fi
set_progress "33" "Changing ownership"
chown "${USER}:${USER}" -R "${DATA_DIR}/box" "${DATA_DIR}/nginx" "${DATA_DIR}/collectd" "${DATA_DIR}/addons"
set_progress "30" "Setup collectd and graphite"
${script_dir}/start/setup_collectd.sh
set_progress "40" "Setting up infra"
${script_dir}/start/setup_infra.sh "${arg_fqdn}"
set_progress "40" "Setup haraka mail relay"
docker rm -f haraka || true
docker pull girish/haraka:0.1 || true # this line is for dev convenience since it's already part of base image
haraka_container_id=$(docker run --restart=always -d --name="haraka" --cap-add="NET_ADMIN"\
-p 127.0.0.1:25:25 \
-h "${arg_fqdn}" \
-e "DOMAIN_NAME=${arg_fqdn}" \
-v "${CONFIG_DIR}/haraka:/app/data" \
girish/haraka:0.1)
echo "Haraka container id: ${haraka_container_id}"
# Every docker restart results in a new IP. Give our mail server a
# static IP. Alternatively, we would have to link the mail container with
# all our apps.
# This IP is set by the haraka container on every start and the firewall
# allows connections to port 25. The ping gets the ARP lookup working.
echo "Checking connectivity to haraka(${MAIL_SERVER_IP})"
if ! ping -c 20 "${MAIL_SERVER_IP}"; then
echo "Could not connect to mail server"
fi
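A hedged alternative to the ping check above is to poll until the mail port actually accepts TCP connections, with a bounded number of attempts. `wait_for_port` and the host/port below are illustrative, not part of the script; `/dev/tcp` is a bash feature:

```shell
# Sketch: wait until host:port accepts a TCP connection, or give up after
# a fixed number of attempts.
wait_for_port() {
    local host="$1" port="$2" tries="$3" i
    for ((i = 0; i < tries; i++)); do
        if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
            exec 3>&- 2>/dev/null || true
            return 0
        fi
        sleep 1
    done
    return 1
}
# port 1 on localhost is almost certainly closed, so this fails fast
wait_for_port 127.0.0.1 1 1 || echo "port closed"
```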
set_progress "50" "Setup MySQL addon"
docker rm -f mysql || true
mysql_root_password=$(pwgen -1 -s)
docker0_ip=$(/sbin/ifconfig docker0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}')
docker pull girish/mysql:0.1 || true # this line is for dev convenience since it's already part of the base image
mysql_container_id=$(docker run --restart=always -d --name="mysql" \
-p 127.0.0.1:3306:3306 \
-h "${arg_fqdn}" \
-e "MYSQL_ROOT_PASSWORD=${mysql_root_password}" \
-e "MYSQL_ROOT_HOST=${docker0_ip}" \
-v "${DATA_DIR}/mysql:/var/lib/mysql" \
girish/mysql:0.1)
echo "MySQL container id: ${mysql_container_id}"
set_progress "60" "Setup Postgres addon"
docker rm -f postgresql || true
postgresql_root_password=$(pwgen -1 -s)
docker pull girish/postgresql:0.1 || true # this line is for dev convenience since it's already part of the base image
postgresql_container_id=$(docker run --restart=always -d --name="postgresql" \
-p 127.0.0.1:5432:5432 \
-h "${arg_fqdn}" \
-e "POSTGRESQL_ROOT_PASSWORD=${postgresql_root_password}" \
-v "${DATA_DIR}/postgresql:/var/lib/postgresql" \
girish/postgresql:0.1)
echo "PostgreSQL container id: ${postgresql_container_id}"
set_progress "70" "Pulling Redis addon"
docker pull girish/redis:0.1 || true # this line is for dev convenience since it's already part of the base image
set_progress "80" "Creating cloudron.conf"
cloudron_sqlite="${DATA_DIR}/cloudron.sqlite"
admin_origin="https://${admin_fqdn}"
set_progress "65" "Creating cloudron.conf"
sudo -u yellowtent -H bash <<EOF
set -eux
set -eu
echo "Creating cloudron.conf"
# note that arg_aws is a javascript object and intentionally unquoted below
cat > "${CONFIG_DIR}/cloudron.conf" <<CONF_END
{
"version": "${arg_version}",
"token": "${arg_token}",
"appServerUrl": "${arg_app_server_url}",
"apiServerOrigin": "${arg_api_server_origin}",
"webServerOrigin": "${arg_web_server_origin}",
"fqdn": "${arg_fqdn}",
"isCustomDomain": ${arg_is_custom_domain},
"boxVersionsUrl": "${arg_box_versions_url}",
"mailServer": "${MAIL_SERVER_IP}",
"mailUsername": "admin@${arg_fqdn}",
"addons": {
"mysql": {
"rootPassword": "${mysql_root_password}"
},
"postgresql": {
"rootPassword": "${postgresql_root_password}"
}
},
"adminEmail": "admin@${arg_fqdn}",
"database": {
"hostname": "localhost",
"username": "root",
"password": "${mysql_root_password}",
"port": 3306,
"name": "box"
},
"backupKey": "${arg_backup_key}",
"aws": ${arg_aws}
}
CONF_END
echo "Marking apps for restore"
# TODO: do not auto-start stopped containers (httpPort might need fixing to start them)
sqlite3 "${cloudron_sqlite}" 'UPDATE apps SET installationState = "pending_restore", healthy = NULL, runState = NULL, containerId = NULL, httpPort = NULL, installationProgress = NULL'
# Add webadmin oauth client
echo "Add webadmin oauth client"
ADMIN_SCOPES="root,profile,users,apps,settings,roleAdmin"
ADMIN_ID=$(cat /proc/sys/kernel/random/uuid)
sqlite3 "${cloudron_sqlite}" "INSERT OR REPLACE INTO clients (id, appId, clientId, clientSecret, name, redirectURI, scope) VALUES (\"\$ADMIN_ID\", \"webadmin\", \"cid-webadmin\", \"secret-webadmin\", \"WebAdmin\", \"${admin_origin}\", \"\$ADMIN_SCOPES\")"
echo "Creating config.json for webadmin"
cat > "${BOX_SRC_DIR}/webadmin/dist/config.json" <<CONF_END
{
"webServerOrigin": "${arg_web_server_origin}"
}
CONF_END
EOF
# bookkeep the version as part of data
echo "{ \"version\": \"${arg_version}\", \"boxVersionsUrl\": \"${arg_box_versions_url}\" }" > "${DATA_DIR}/version"
# Add webadmin oauth client
# The domain might have changed, therefore we have to update the record
# !!! This needs to be in sync with the webadmin, specifically login_callback.js
echo "Add webadmin oauth client"
ADMIN_SCOPES="root,developer,profile,users,apps,settings,roleUser"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (\"cid-webadmin\", \"webadmin\", \"secret-webadmin\", \"${admin_origin}\", \"${ADMIN_SCOPES}\")" box
set_progress "90" "Setup supervisord"
${script_dir}/start/setup_supervisord.sh
echo "Add localhost test oauth client"
ADMIN_SCOPES="root,developer,profile,users,apps,settings,roleUser"
mysql -u root -p${mysql_root_password} \
-e "REPLACE INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (\"cid-test\", \"test\", \"secret-test\", \"http://127.0.0.1:5000\", \"${ADMIN_SCOPES}\")" box
set_progress "95" "Reloading supervisor"
${script_dir}/start/reload_supervisord.sh
set_progress "80" "Starting Cloudron"
systemctl start cloudron.target
set_progress "99" "Reloading nginx"
sleep 2 # give systemd some time to start the processes
set_progress "85" "Reloading nginx"
nginx -s reload
set_progress "100" "Done"
@@ -52,20 +52,20 @@ Interval 20
# accessed. #
##############################################################################
#LoadPlugin logfile
LoadPlugin syslog
LoadPlugin logfile
#LoadPlugin syslog
#<Plugin logfile>
# LogLevel "info"
# File STDOUT
# Timestamp true
# PrintSeverity false
#</Plugin>
<Plugin syslog>
LogLevel info
<Plugin logfile>
LogLevel "info"
File "/var/log/collectd.log"
Timestamp true
PrintSeverity false
</Plugin>
#<Plugin syslog>
# LogLevel info
#</Plugin>
##############################################################################
# LoadPlugin section #
#----------------------------------------------------------------------------#
@@ -96,7 +96,7 @@ LoadPlugin df
#LoadPlugin entropy
#LoadPlugin ethstat
#LoadPlugin exec
LoadPlugin filecount
#LoadPlugin filecount
#LoadPlugin fscache
#LoadPlugin gmond
#LoadPlugin hddtemp
@@ -193,12 +193,11 @@ LoadPlugin write_graphite
</Plugin>
<Plugin df>
Device "/dev/vda1"
Device "/dev/loop0"
Device "/dev/loop1"
FSType "tmpfs"
MountPoint "/dev"
ReportByDevice true
IgnoreSelected false
IgnoreSelected true
ValuesAbsolute true
ValuesPercentage true
@@ -221,7 +220,7 @@ LoadPlugin write_graphite
</Plugin>
<Plugin processes>
ProcessMatch "app" "node app.js"
ProcessMatch "app" "node box.js"
</Plugin>
<Plugin swap>
@@ -1,33 +0,0 @@
server {
listen 443;
server_name ##ADMIN_FQDN##;
ssl on;
# paths are relative to prefix and not to this file
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
ssl_ciphers "HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
location /api/ {
proxy_pass http://127.0.0.1:3000;
client_max_body_size 1m;
}
# graphite paths
location ~ ^/(graphite|content|metrics|dashboard|render|browser|composer)/ {
proxy_pass http://127.0.0.1:8000;
client_max_body_size 1m;
}
location / {
root ##BOX_SRC_DIR##/webadmin/dist/;
index index.html index.htm;
}
}
@@ -0,0 +1,113 @@
# http://nginx.org/en/docs/http/websocket.html
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443;
server_name <%= vhost %>;
ssl on;
# paths are relative to prefix and not to this file
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
# https://bettercrypto.org/static/applied-crypto-hardening.pdf
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# https://cipherli.st/
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
ssl_ciphers 'AES128+EECDH:AES128+EDH';
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";
proxy_http_version 1.1;
proxy_intercept_errors on;
proxy_read_timeout 3500;
proxy_connect_timeout 3250;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
# upgrade is a hop-by-hop header (http://nginx.org/en/docs/http/websocket.html)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
error_page 500 502 503 504 @appstatus;
location @appstatus {
return 307 <%= adminOrigin %>/appstatus.html?referrer=https://$host$request_uri;
}
location / {
# increase the proxy buffer sizes to not run into buffer issues (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers)
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Disable check to allow unlimited body sizes
client_max_body_size 0;
<% if ( endpoint === 'admin' ) { %>
location /api/ {
proxy_pass http://127.0.0.1:3000;
client_max_body_size 1m;
}
# graphite paths
location ~ ^/(graphite|content|metrics|dashboard|render|browser|composer)/ {
proxy_pass http://127.0.0.1:8000;
client_max_body_size 1m;
}
location / {
root <%= sourceDir %>/webadmin/dist;
index index.html index.htm;
}
<% } else if ( endpoint === 'oauthproxy' ) { %>
proxy_pass http://127.0.0.1:3003;
proxy_set_header X-Cloudron-Proxy-Port <%= port %>;
<% } else if ( endpoint === 'app' ) { %>
proxy_pass http://127.0.0.1:<%= port %>;
<% } else if ( endpoint === 'splash' ) { %>
root <%= sourceDir %>;
error_page 503 /update.html;
location /update.html {
add_header Cache-Control no-cache;
}
location /theme.css {
add_header Cache-Control no-cache;
}
location /3rdparty/ {
add_header Cache-Control no-cache;
}
location /js/ {
add_header Cache-Control no-cache;
}
location /progress.json {
add_header Cache-Control no-cache;
}
location /api/v1/cloudron/progress {
add_header Cache-Control no-cache;
default_type application/json;
alias <%= sourceDir %>/progress.json;
}
location / {
return 503;
}
<% } %>
}
}
@@ -17,6 +17,9 @@ http {
'"$request" $status $body_bytes_sent $request_time '
'"$http_referer" "$http_user_agent"';
# required for long host names
server_names_hash_bucket_size 128;
access_log access.log combined2;
sendfile on;
@@ -49,11 +52,16 @@ http {
ssl_certificate cert/host.cert;
ssl_certificate_key cert/host.key;
error_page 404 = @fallback;
location @fallback {
internal;
root <%= sourceDir %>/webadmin/dist;
rewrite ^/$ /nakeddomain.html break;
}
return 404;
}
include naked_domain.conf;
include applications/*.conf;
}
@@ -1,20 +0,0 @@
#!/bin/bash
set -eu
# looks like restarting supervisor completely is the only way to reload it
service supervisor stop || true
echo -n "Waiting for supervisord to stop"
while test -e "/var/run/supervisord.pid" && kill -0 `cat /var/run/supervisord.pid`; do
echo -n "."
sleep 1
done
echo ""
echo "Starting supervisor"
service supervisor start
sleep 2 # give supervisor some time to start the processes
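The stop/wait loop above polls the pidfile with `kill -0` until the process is gone. A self-contained sketch of that pattern, with a background `sleep` standing in for supervisord:

```shell
# Sketch of the pidfile wait loop: poll until the recorded PID no longer
# responds to kill -0 (i.e. the process has exited).
pidfile=$(mktemp)
sleep 2 &
echo $! > "${pidfile}"
while test -e "${pidfile}" && kill -0 "$(cat "${pidfile}")" 2>/dev/null; do
    sleep 0.2
done
echo "process exited"
```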
@@ -1,29 +0,0 @@
#!/bin/bash
set -eu
readonly GRAPHITE_DIR="/home/yellowtent/data/graphite"
readonly COLLECTD_CONFIG_DIR="/home/yellowtent/configs/collectd"
readonly COLLECTD_APPCONFIG_DIR="${COLLECTD_CONFIG_DIR}/collectd.conf.d"
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
mkdir -p "${GRAPHITE_DIR}"
docker rm -f graphite || true
docker pull girish/graphite:0.2
docker run --restart=always -d --name="graphite" \
-p 127.0.0.1:2003:2003 \
-p 127.0.0.1:2004:2004 \
-p 127.0.0.1:8000:8000 \
-v "${GRAPHITE_DIR}:/app/data" girish/graphite:0.2
mkdir -p "${COLLECTD_APPCONFIG_DIR}"
cp -r "${script_dir}/collectd/collectd.conf" "${COLLECTD_CONFIG_DIR}/collectd.conf"
rm -rf /etc/collectd
ln -sfF "${COLLECTD_CONFIG_DIR}" /etc/collectd
chown -R yellowtent.yellowtent "${COLLECTD_CONFIG_DIR}"
update-rc.d -f collectd defaults
/etc/init.d/collectd restart
@@ -0,0 +1,110 @@
#!/bin/bash
set -eu -o pipefail
readonly DATA_DIR="/home/yellowtent/data"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${script_dir}/../INFRA_VERSION" # this injects INFRA_VERSION
arg_fqdn="$1"
# removing containers ensures they are launched with the latest config updates
# restore code in apptask does not delete old containers
infra_version="none"
[[ -f "${DATA_DIR}/INFRA_VERSION" ]] && infra_version=$(cat "${DATA_DIR}/INFRA_VERSION")
if [[ "${infra_version}" == "${INFRA_VERSION}" ]]; then
echo "Infrastructure is up to date"
exit 0
fi
echo "Upgrading infrastructure from ${infra_version} to ${INFRA_VERSION}"
existing_containers=$(docker ps -qa)
echo "Remove containers: ${existing_containers}"
if [[ -n "${existing_containers}" ]]; then
echo "${existing_containers}" | xargs docker rm -f
fi
# graphite
graphite_container_id=$(docker run --restart=always -d --name="graphite" \
-m 75m \
--memory-swap 150m \
-p 127.0.0.1:2003:2003 \
-p 127.0.0.1:2004:2004 \
-p 127.0.0.1:8000:8000 \
-v "${DATA_DIR}/graphite:/app/data" \
--read-only -v /tmp -v /run -v /var/log \
"${GRAPHITE_IMAGE}")
echo "Graphite container id: ${graphite_container_id}"
# mail
mail_container_id=$(docker run --restart=always -d --name="mail" \
-m 75m \
--memory-swap 150m \
-p 127.0.0.1:25:25 \
-h "${arg_fqdn}" \
-e "DOMAIN_NAME=${arg_fqdn}" \
-v "${DATA_DIR}/box/mail:/app/data" \
--read-only -v /tmp -v /run -v /var/log \
"${MAIL_IMAGE}")
echo "Mail container id: ${mail_container_id}"
# mysql
mysql_addon_root_password=$(pwgen -1 -s)
docker0_ip=$(/sbin/ifconfig docker0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}')
cat > "${DATA_DIR}/addons/mysql_vars.sh" <<EOF
readonly MYSQL_ROOT_PASSWORD='${mysql_addon_root_password}'
readonly MYSQL_ROOT_HOST='${docker0_ip}'
EOF
mysql_container_id=$(docker run --restart=always -d --name="mysql" \
-m 100m \
--memory-swap 200m \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/mysql:/var/lib/mysql" \
-v "${DATA_DIR}/addons/mysql_vars.sh:/etc/mysql/mysql_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
"${MYSQL_IMAGE}")
echo "MySQL container id: ${mysql_container_id}"
# postgresql
postgresql_addon_root_password=$(pwgen -1 -s)
cat > "${DATA_DIR}/addons/postgresql_vars.sh" <<EOF
readonly POSTGRESQL_ROOT_PASSWORD='${postgresql_addon_root_password}'
EOF
postgresql_container_id=$(docker run --restart=always -d --name="postgresql" \
-m 100m \
--memory-swap 200m \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/postgresql:/var/lib/postgresql" \
-v "${DATA_DIR}/addons/postgresql_vars.sh:/etc/postgresql/postgresql_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
"${POSTGRESQL_IMAGE}")
echo "PostgreSQL container id: ${postgresql_container_id}"
# mongodb
mongodb_addon_root_password=$(pwgen -1 -s)
cat > "${DATA_DIR}/addons/mongodb_vars.sh" <<EOF
readonly MONGODB_ROOT_PASSWORD='${mongodb_addon_root_password}'
EOF
mongodb_container_id=$(docker run --restart=always -d --name="mongodb" \
-m 100m \
--memory-swap 200m \
-h "${arg_fqdn}" \
-v "${DATA_DIR}/mongodb:/var/lib/mongodb" \
-v "${DATA_DIR}/addons/mongodb_vars.sh:/etc/mongodb_vars.sh:ro" \
--read-only -v /tmp -v /run -v /var/log \
"${MONGODB_IMAGE}")
echo "Mongodb container id: ${mongodb_container_id}"
if [[ "${infra_version}" == "none" ]]; then
# if no existing infra was found (for new and restoring cloudrons), download app backups
echo "Marking installed apps for restore"
mysql -u root -ppassword -e 'UPDATE apps SET installationState = "pending_restore" WHERE installationState = "installed"' box
else
# if existing infra was found, just mark apps for reconfiguration
mysql -u root -ppassword -e 'UPDATE apps SET installationState = "pending_configure" WHERE installationState = "installed"' box
fi
echo -n "${INFRA_VERSION}" > "${DATA_DIR}/INFRA_VERSION"
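The INFRA_VERSION gate that brackets this script (compare at the top, record at the bottom) is what makes it idempotent: re-running with an unchanged version is a no-op. A minimal sketch of the gate, with illustrative paths and version value:

```shell
# Sketch: do the upgrade work only when the recorded version differs from
# the shipped one, then record the new version.
data_dir=$(mktemp -d)
INFRA_VERSION="2"
check_infra() {
    local recorded="none"
    [[ -f "${data_dir}/INFRA_VERSION" ]] && recorded=$(cat "${data_dir}/INFRA_VERSION")
    if [[ "${recorded}" == "${INFRA_VERSION}" ]]; then
        echo "up to date"
    else
        echo "upgrading from ${recorded} to ${INFRA_VERSION}"
        echo -n "${INFRA_VERSION}" > "${data_dir}/INFRA_VERSION"
    fi
}
first_run=$(check_infra)    # performs the upgrade and records the version
second_run=$(check_infra)   # no-op on the second invocation
echo "${first_run}"
echo "${second_run}"
```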
@@ -1,54 +0,0 @@
#!/bin/bash
set -eu
readonly BOX_SRC_DIR="/home/yellowtent/box"
readonly DATA_DIR="/home/yellowtent/data"
readonly script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
rm -rf /etc/supervisor
mkdir -p /etc/supervisor/conf.d
cp "${script_dir}/supervisord/supervisord.conf" /etc/supervisor/
echo "Writing supervisor configs..."
cat > /etc/supervisor/conf.d/box.conf <<EOF
[program:box]
command=/usr/bin/node "${BOX_SRC_DIR}/app.js"
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/box.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=2
user=yellowtent
environment=HOME="/home/yellowtent",USER="yellowtent",DEBUG="box*,connect-lastmile",NODE_ENV="cloudron"
EOF
cat > /etc/supervisor/conf.d/oauthproxy.conf <<EOF
[program:oauthproxy]
command=/usr/bin/node "${BOX_SRC_DIR}/oauthproxy.js"
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/proxy.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=2
user=yellowtent
environment=HOME="/home/yellowtent",USER="yellowtent",DEBUG="box*",NODE_ENV="cloudron"
EOF
cat > /etc/supervisor/conf.d/apphealthtask.conf <<EOF
[program:apphealthtask]
command=/usr/bin/node "${BOX_SRC_DIR}/apphealthtask.js"
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/apphealthtask.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=2
user=yellowtent
environment=HOME="/home/yellowtent",USER="yellowtent",DEBUG="box*",NODE_ENV="cloudron"
EOF
@@ -1,33 +0,0 @@
; supervisor config file
; http://coffeeonthekeyboard.com/using-supervisorctl-with-linux-permissions-but-without-root-or-sudo-977/
[inet_http_server]
port = 127.0.0.1:9001
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
nodaemon = false
childlogdir = /var/log/supervisor/
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=http://127.0.0.1:9001
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[include]
files = conf.d/*.conf
@@ -1,15 +1,7 @@
#!/bin/bash
set -eu
set -eu -o pipefail
echo "Stopping box code"
service supervisor stop || true
echo -n "Waiting for supervisord to stop"
while test -e "/var/run/supervisord.pid" && kill -0 `cat /var/run/supervisord.pid`; do
echo -n "."
sleep 1
done
echo ""
echo "Stopping cloudron"
systemctl stop cloudron.target
@@ -1,177 +1,346 @@
'use strict';
exports = module.exports = {
setupAddons: setupAddons,
teardownAddons: teardownAddons,
backupAddons: backupAddons,
restoreAddons: restoreAddons,
getEnvironment: getEnvironment,
getLinksSync: getLinksSync,
getBindsSync: getBindsSync,
// exported for testing
_setupOauth: setupOauth,
_teardownOauth: teardownOauth
};
var appdb = require('./appdb.js'),
assert = require('assert'),
async = require('async'),
child_process = require('child_process'),
clientdb = require('./clientdb.js'),
config = require('../config.js'),
config = require('./config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:addons'),
docker = require('./docker.js'),
fs = require('fs'),
generatePassword = require('password-generator'),
hat = require('hat'),
MemoryStream = require('memorystream'),
once = require('once'),
os = require('os'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
shell = require('./shell.js'),
spawn = child_process.spawn,
util = require('util'),
uuid = require('node-uuid');
uuid = require('node-uuid'),
vbox = require('./vbox.js');
exports = module.exports = {
setupAddons: setupAddons,
teardownAddons: teardownAddons,
getEnvironment: getEnvironment,
// exported for testing
_allocateOAuthCredentials: allocateOAuthCredentials,
_removeOAuthCredentials: removeOAuthCredentials
};
var NOOP = function (app, options, callback) { return callback(); };
// setup can be called multiple times for the same app (configure crash restart) and existing data must not be lost
// teardown is destructive. app data stored with the addon is lost
var KNOWN_ADDONS = {
oauth: {
setup: allocateOAuthCredentials,
teardown: removeOAuthCredentials
setup: setupOauth,
teardown: teardownOauth,
backup: NOOP,
restore: setupOauth
},
ldap: {
setup: setupLdap,
teardown: teardownLdap,
backup: NOOP,
restore: setupLdap
},
sendmail: {
setup: setupSendMail,
teardown: teardownSendMail
teardown: teardownSendMail,
backup: NOOP,
restore: setupSendMail
},
mysql: {
setup: setupMySql,
teardown: teardownMySql
teardown: teardownMySql,
backup: backupMySql,
restore: restoreMySql,
},
postgresql: {
setup: setupPostgreSql,
teardown: teardownPostgreSql
teardown: teardownPostgreSql,
backup: backupPostgreSql,
restore: restorePostgreSql
},
mongodb: {
setup: setupMongoDb,
teardown: teardownMongoDb,
backup: backupMongoDb,
restore: restoreMongoDb
},
redis: {
setup: setupRedis,
teardown: teardownRedis
teardown: teardownRedis,
backup: NOOP, // no backup because we store redis as part of app's volume
restore: setupRedis // same thing
},
localstorage: {
setup: NOOP, // docker creates the directory for us
teardown: NOOP,
backup: NOOP, // no backup because it's already inside app data
restore: NOOP
},
_docker: {
setup: NOOP,
teardown: NOOP,
backup: NOOP,
restore: NOOP
}
};
function forwardFromHostToVirtualBox(rulename, port) {
if (os.platform() === 'darwin') {
debug('Setting up VirtualBox port forwarding for '+ rulename + ' at ' + port);
child_process.exec(
'VBoxManage controlvm boot2docker-vm natpf1 delete ' + rulename + ';' +
'VBoxManage controlvm boot2docker-vm natpf1 ' + rulename + ',tcp,127.0.0.1,' + port + ',,' + port);
}
}
var RMAPPDIR_CMD = path.join(__dirname, 'scripts/rmappdir.sh');
function debugApp(app, args) {
assert(!app || typeof app === 'object');
var prefix = app ? (app.location || 'naked_domain') : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function unforwardFromHostToVirtualBox(rulename) {
if (os.platform() === 'darwin') {
debug('Removing VirtualBox port forwarding for '+ rulename);
child_process.exec('VBoxManage controlvm boot2docker-vm natpf1 delete ' + rulename);
}
}
function setupAddons(app, addons, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
assert.strictEqual(typeof callback, 'function');
function setupAddons(app, callback) {
assert(typeof app === 'object');
assert(!app.manifest.addons || util.isArray(app.manifest.addons));
assert(typeof callback === 'function');
if (!addons) return callback(null);
if (!app.manifest.addons) return callback(null);
debugApp(app, 'setupAddons: Setting up %j', Object.keys(addons));
async.eachSeries(app.manifest.addons, function iterator(addon, iteratorCallback) {
async.eachSeries(Object.keys(addons), function iterator(addon, iteratorCallback) {
if (!(addon in KNOWN_ADDONS)) return iteratorCallback(new Error('No such addon:' + addon));
KNOWN_ADDONS[addon].setup(app, iteratorCallback);
debugApp(app, 'Setting up addon %s with options %j', addon, addons[addon]);
KNOWN_ADDONS[addon].setup(app, addons[addon], iteratorCallback);
}, callback);
}
function teardownAddons(app, callback) {
assert(typeof app === 'object');
assert(!app.manifest.addons || util.isArray(app.manifest.addons));
assert(typeof callback === 'function');
function teardownAddons(app, addons, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
assert.strictEqual(typeof callback, 'function');
if (!app.manifest.addons) return callback(null);
if (!addons) return callback(null);
async.eachSeries(app.manifest.addons, function iterator(addon, iteratorCallback) {
debugApp(app, 'teardownAddons: Tearing down %j', Object.keys(addons));
async.eachSeries(Object.keys(addons), function iterator(addon, iteratorCallback) {
if (!(addon in KNOWN_ADDONS)) return iteratorCallback(new Error('No such addon:' + addon));
KNOWN_ADDONS[addon].teardown(app, iteratorCallback);
debugApp(app, 'Tearing down addon %s with options %j', addon, addons[addon]);
KNOWN_ADDONS[addon].teardown(app, addons[addon], iteratorCallback);
}, callback);
}
function getEnvironment(appId, callback) {
assert(typeof appId === 'string');
assert(typeof callback === 'function');
function backupAddons(app, addons, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
assert.strictEqual(typeof callback, 'function');
appdb.getAddonConfigByAppId(appId, callback);
debugApp(app, 'backupAddons');
if (!addons) return callback(null);
debugApp(app, 'backupAddons: Backing up %j', Object.keys(addons));
async.eachSeries(Object.keys(addons), function iterator (addon, iteratorCallback) {
if (!(addon in KNOWN_ADDONS)) return iteratorCallback(new Error('No such addon:' + addon));
KNOWN_ADDONS[addon].backup(app, addons[addon], iteratorCallback);
}, callback);
}
function allocateOAuthCredentials(app, callback) {
assert(typeof app === 'object');
assert(typeof callback === 'function');
function restoreAddons(app, addons, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'restoreAddons');
if (!addons) return callback(null);
debugApp(app, 'restoreAddons: restoring %j', Object.keys(addons));
async.eachSeries(Object.keys(addons), function iterator (addon, iteratorCallback) {
if (!(addon in KNOWN_ADDONS)) return iteratorCallback(new Error('No such addon:' + addon));
KNOWN_ADDONS[addon].restore(app, addons[addon], iteratorCallback);
}, callback);
}
function getEnvironment(app, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof callback, 'function');
appdb.getAddonConfigByAppId(app.id, callback);
}
function getLinksSync(app, addons) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
var links = [ ];
if (!addons) return links;
for (var addon in addons) {
switch (addon) {
case 'mysql': links.push('mysql:mysql'); break;
case 'postgresql': links.push('postgresql:postgresql'); break;
case 'sendmail': links.push('mail:mail'); break;
case 'redis': links.push('redis-' + app.id + ':redis-' + app.id); break;
case 'mongodb': links.push('mongodb:mongodb'); break;
default: break;
}
}
return links;
}
function getBindsSync(app, addons) {
assert.strictEqual(typeof app, 'object');
assert(!addons || typeof addons === 'object');
var binds = [ ];
if (!addons) return binds;
for (var addon in addons) {
switch (addon) {
case '_docker': binds.push('/var/run/docker.sock:/var/run/docker.sock:rw'); break;
case 'localstorage': binds.push(path.join(paths.DATA_DIR, app.id, 'data') + ':/app/data:rw'); break;
default: break;
}
}
return binds;
}
function setupOauth(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var appId = app.id;
var id = 'cid-addon-' + uuid.v4();
var clientSecret = hat(256);
var redirectURI = 'https://' + config.appFqdn(app.location);
var scope = 'profile,roleUser';
debugApp(app, 'setupOauth: id:%s clientSecret:%s', id, clientSecret);
clientdb.delByAppId('addon-' + appId, function (error) { // remove existing creds
if (error && error.reason !== DatabaseError.NOT_FOUND) return callback(error);
clientdb.add(id, 'addon-' + appId, clientSecret, redirectURI, scope, function (error) {
if (error) return callback(error);
var env = [
'OAUTH_CLIENT_ID=' + id,
'OAUTH_CLIENT_SECRET=' + clientSecret,
'OAUTH_ORIGIN=' + config.adminOrigin()
];
debugApp(app, 'Setting oauth addon config to %j', env);
appdb.setAddonConfig(appId, 'oauth', env, callback);
});
});
}
function teardownOauth(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'teardownOauth');
clientdb.delByAppId('addon-' + app.id, function (error) {
if (error && error.reason !== DatabaseError.NOT_FOUND) console.error(error);
appdb.unsetAddonConfig(app.id, 'oauth', callback);
});
}
function setupLdap(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var env = [
'LDAP_SERVER=172.17.42.1',
'LDAP_PORT=3002',
'LDAP_URL=ldap://172.17.42.1:3002',
'LDAP_USERS_BASE_DN=ou=users,dc=cloudron',
'LDAP_GROUPS_BASE_DN=ou=groups,dc=cloudron',
'LDAP_BIND_DN=cn=' + app.id + ',ou=apps,dc=cloudron',
'LDAP_BIND_PASSWORD=' + hat(256) // this is ignored
];
debugApp(app, 'Setting up LDAP');
appdb.setAddonConfig(app.id, 'ldap', env, callback);
}
function teardownLdap(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Tearing down LDAP');
appdb.unsetAddonConfig(app.id, 'ldap', callback);
}
function setupSendMail(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var env = [
'MAIL_SMTP_SERVER=mail',
'MAIL_SMTP_PORT=25',
'MAIL_SMTP_USERNAME=' + (app.location || app.id), // use app.id for bare domains
'MAIL_DOMAIN=' + config.fqdn()
];
debugApp(app, 'Setting up sendmail');
appdb.setAddonConfig(app.id, 'sendmail', env, callback);
}
function teardownSendMail(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Tearing down sendmail');
appdb.unsetAddonConfig(app.id, 'sendmail', callback);
}
function setupMySql(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Setting up mysql');
var container = docker.getContainer('mysql');
var cmd = [ '/addons/mysql/service.sh', 'add', app.id ];
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var stdout = new MemoryStream();
var stderr = new MemoryStream();
execContainer.modem.demuxStream(stream, stdout, stderr);
stderr.on('data', function (data) { debugApp(app, data.toString('utf8')); }); // set -e output
var chunks = [ ];
stdout.on('data', function (chunk) { chunks.push(chunk); });
stream.on('error', callback);
stream.on('end', function () {
var env = Buffer.concat(chunks).toString('utf8').split('\n').slice(0, -1); // remove trailing newline
debugApp(app, 'Setting mysql addon config to %j', env);
appdb.setAddonConfig(app.id, 'mysql', env, callback);
});
});
});
}
function teardownMySql(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var container = docker.getContainer('mysql');
var cmd = [ '/addons/mysql/service.sh', 'remove', app.id ];
debugApp(app, 'Tearing down mysql');
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var data = '';
stream.on('error', callback);
stream.on('data', function (d) { data += d.toString('utf8'); });
stream.on('end', function () {
appdb.unsetAddonConfig(app.id, 'mysql', callback);
});
});
});
}
function backupMySql(app, options, callback) {
debugApp(app, 'Backing up mysql');
callback = once(callback); // ChildProcess exit may or may not be called after error
var output = fs.createWriteStream(path.join(paths.DATA_DIR, app.id, 'mysqldump'));
output.on('error', callback);
var cp = spawn('/usr/bin/docker', [ 'exec', 'mysql', '/addons/mysql/service.sh', 'backup', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'backupMySql: done. code:%s signal:%s', code, signal);
if (!callback.called) callback(code ? 'backupMySql failed with status ' + code : null);
});
cp.stdout.pipe(output);
cp.stderr.pipe(process.stderr);
}
function restoreMySql(app, options, callback) {
callback = once(callback); // ChildProcess exit may or may not be called after error
setupMySql(app, options, function (error) {
if (error) return callback(error);
debugApp(app, 'restoreMySql');
var input = fs.createReadStream(path.join(paths.DATA_DIR, app.id, 'mysqldump'));
input.on('error', callback);
// cannot get this to work through docker.exec
var cp = spawn('/usr/bin/docker', [ 'exec', '-i', 'mysql', '/addons/mysql/service.sh', 'restore', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'restoreMySql: done %s %s', code, signal);
if (!callback.called) callback(code ? 'restoreMySql failed with status ' + code : null);
});
cp.stdout.pipe(process.stdout);
cp.stderr.pipe(process.stderr);
input.pipe(cp.stdin).on('error', callback);
});
}
function setupPostgreSql(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Setting up postgresql');
var container = docker.getContainer('postgresql');
var cmd = [ '/addons/postgresql/service.sh', 'add', app.id ];
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var stdout = new MemoryStream();
var stderr = new MemoryStream();
execContainer.modem.demuxStream(stream, stdout, stderr);
stderr.on('data', function (data) { debugApp(app, data.toString('utf8')); }); // set -e output
var chunks = [ ];
stdout.on('data', function (chunk) { chunks.push(chunk); });
stream.on('error', callback);
stream.on('end', function () {
var env = Buffer.concat(chunks).toString('utf8').split('\n').slice(0, -1); // remove trailing newline
debugApp(app, 'Setting postgresql addon config to %j', env);
appdb.setAddonConfig(app.id, 'postgresql', env, callback);
});
});
});
}
function teardownPostgreSql(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var container = docker.getContainer('postgresql');
var cmd = [ '/addons/postgresql/service.sh', 'remove', app.id ];
debugApp(app, 'Tearing down postgresql');
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var data = '';
stream.on('error', callback);
stream.on('data', function (d) { data += d.toString('utf8'); });
stream.on('end', function () {
appdb.unsetAddonConfig(app.id, 'postgresql', callback);
});
});
});
}
function backupPostgreSql(app, options, callback) {
debugApp(app, 'Backing up postgresql');
callback = once(callback); // ChildProcess exit may or may not be called after error
var output = fs.createWriteStream(path.join(paths.DATA_DIR, app.id, 'postgresqldump'));
output.on('error', callback);
var cp = spawn('/usr/bin/docker', [ 'exec', 'postgresql', '/addons/postgresql/service.sh', 'backup', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'backupPostgreSql: done %s %s', code, signal);
if (!callback.called) callback(code ? 'backupPostgreSql failed with status ' + code : null);
});
cp.stdout.pipe(output);
cp.stderr.pipe(process.stderr);
}
function restorePostgreSql(app, options, callback) {
callback = once(callback); // ChildProcess exit may or may not be called after error
setupPostgreSql(app, options, function (error) {
if (error) return callback(error);
debugApp(app, 'restorePostgreSql');
var input = fs.createReadStream(path.join(paths.DATA_DIR, app.id, 'postgresqldump'));
input.on('error', callback);
// cannot get this to work through docker.exec
var cp = spawn('/usr/bin/docker', [ 'exec', '-i', 'postgresql', '/addons/postgresql/service.sh', 'restore', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'restorePostgreSql: done %s %s', code, signal);
if (!callback.called) callback(code ? 'restorePostgreSql failed with status ' + code : null);
});
cp.stdout.pipe(process.stdout);
cp.stderr.pipe(process.stderr);
input.pipe(cp.stdin).on('error', callback);
});
}
function setupMongoDb(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
debugApp(app, 'Setting up mongodb');
var container = docker.getContainer('mongodb');
var cmd = [ '/addons/mongodb/service.sh', 'add', app.id ];
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var stdout = new MemoryStream();
var stderr = new MemoryStream();
execContainer.modem.demuxStream(stream, stdout, stderr);
stderr.on('data', function (data) { debugApp(app, data.toString('utf8')); }); // set -e output
var chunks = [ ];
stdout.on('data', function (chunk) { chunks.push(chunk); });
stream.on('error', callback);
stream.on('end', function () {
var env = Buffer.concat(chunks).toString('utf8').split('\n').slice(0, -1); // remove trailing newline
debugApp(app, 'Setting mongodb addon config to %j', env);
appdb.setAddonConfig(app.id, 'mongodb', env, callback);
});
});
});
}
function teardownMongoDb(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var container = docker.getContainer('mongodb');
var cmd = [ '/addons/mongodb/service.sh', 'remove', app.id ];
debugApp(app, 'Tearing down mongodb');
container.exec({ Cmd: cmd, AttachStdout: true, AttachStderr: true }, function (error, execContainer) {
if (error) return callback(error);
execContainer.start(function (error, stream) {
if (error) return callback(error);
var data = '';
stream.on('error', callback);
stream.on('data', function (d) { data += d.toString('utf8'); });
stream.on('end', function () {
appdb.unsetAddonConfig(app.id, 'mongodb', callback);
});
});
});
}
function backupMongoDb(app, options, callback) {
debugApp(app, 'Backing up mongodb');
callback = once(callback); // ChildProcess exit may or may not be called after error
var output = fs.createWriteStream(path.join(paths.DATA_DIR, app.id, 'mongodbdump'));
output.on('error', callback);
var cp = spawn('/usr/bin/docker', [ 'exec', 'mongodb', '/addons/mongodb/service.sh', 'backup', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'backupMongoDb: done %s %s', code, signal);
if (!callback.called) callback(code ? 'backupMongoDb failed with status ' + code : null);
});
cp.stdout.pipe(output);
cp.stderr.pipe(process.stderr);
}
function restoreMongoDb(app, options, callback) {
callback = once(callback); // ChildProcess exit may or may not be called after error
setupMongoDb(app, options, function (error) {
if (error) return callback(error);
debugApp(app, 'restoreMongoDb');
var input = fs.createReadStream(path.join(paths.DATA_DIR, app.id, 'mongodbdump'));
input.on('error', callback);
// cannot get this to work through docker.exec
var cp = spawn('/usr/bin/docker', [ 'exec', '-i', 'mongodb', '/addons/mongodb/service.sh', 'restore', app.id ]);
cp.on('error', callback);
cp.on('exit', function (code, signal) {
debugApp(app, 'restoreMongoDb: done %s %s', code, signal);
if (!callback.called) callback(code ? 'restoreMongoDb failed with status ' + code : null);
});
cp.stdout.pipe(process.stdout);
cp.stderr.pipe(process.stderr);
input.pipe(cp.stdin).on('error', callback);
});
}
function forwardRedisPort(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
docker.getContainer('redis-' + appId).inspect(function (error, data) {
if (error) return callback(new Error('Unable to inspect container:' + error));
var redisPort = parseInt(safe.query(data, 'NetworkSettings.Ports.6379/tcp[0].HostPort'), 10);
if (!Number.isInteger(redisPort)) return callback(new Error('Unable to get container port mapping'));
vbox.forwardFromHostToVirtualBox('redis-' + appId, redisPort);
return callback(null);
});
}
// Ensures that app's addon redis container is running. Can be called when named container already exists/running
function setupRedis(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var redisPassword = generatePassword(64, false /* memorable */);
var redisVarsFile = path.join(paths.ADDON_CONFIG_DIR, 'redis-' + app.id + '_vars.sh');
var redisDataDir = path.join(paths.DATA_DIR, app.id + '/redis');
if (!safe.fs.writeFileSync(redisVarsFile, 'REDIS_PASSWORD=' + redisPassword)) {
return callback(new Error('Error writing redis config'));
}
if (!safe.fs.mkdirSync(redisDataDir) && safe.error.code !== 'EEXIST') return callback(new Error('Error creating redis data dir:' + safe.error));
var createOptions = {
name: 'redis-' + app.id,
Hostname: config.appFqdn(app.location),
Tty: true,
Image: 'cloudron/redis:0.5.0', // if you change this, fix setup/INFRA_VERSION as well
Cmd: null,
Volumes: {
'/tmp': {},
'/run': {},
'/var/log': {}
},
VolumesFrom: []
};
var isMac = os.platform() === 'darwin';
var startOptions = {
Binds: [
redisVarsFile + ':/etc/redis/redis_vars.sh:ro',
redisDataDir + ':/var/lib/redis:rw'
],
Memory: 1024 * 1024 * 75, // 75mb
MemorySwap: 1024 * 1024 * 75 * 2, // 150mb
// On Mac (boot2docker), we have to export the port to external world for port forwarding from Mac to work
// On linux, export to localhost only for testing purposes and not for the app itself
PortBindings: {
'6379/tcp': [{ HostPort: '0', HostIp: isMac ? '0.0.0.0' : '127.0.0.1' }]
},
ReadonlyRootfs: true,
RestartPolicy: {
'Name': 'always',
'MaximumRetryCount': 0
}
};
var env = [
'REDIS_URL=redis://redisuser:' + redisPassword + '@redis-' + app.id,
'REDIS_PASSWORD=' + redisPassword,
'REDIS_HOST=redis-' + app.id,
'REDIS_PORT=6379'
];
var redisContainer = docker.getContainer(createOptions.name);
redisContainer.remove({ force: true, v: false }, function (ignoredError) {
docker.createContainer(createOptions, function (error) {
if (error && error.statusCode !== 409) return callback(error); // if not already created
redisContainer.start(startOptions, function (error) {
if (error && error.statusCode !== 304) return callback(error); // if not already running
appdb.setAddonConfig(app.id, 'redis', env, function (error) {
if (error) return callback(error);
forwardRedisPort(app.id, callback);
});
});
});
});
}
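setupRedis is written to be re-runnable: a 409 from createContainer (name already exists) and a 304 from start (already running) are treated as success, not failure. That filtering can be sketched in isolation (the helper name and the fake error objects are made up for illustration; the statusCode values are the ones the code above checks):

```javascript
// Hypothetical helper mirroring setupRedis's idempotent error handling:
// listed status codes (e.g. 409 already-created, 304 already-running) are
// swallowed, anything else propagates.
function ignoreStatusCodes(codes, callback) {
    return function (error) {
        if (error && codes.indexOf(error.statusCode) === -1) return callback(error);
        callback(null);
    };
}

var results = [ ];
function record(error) { results.push(error ? 'fail' : 'ok'); }

ignoreStatusCodes([ 409 ], record)({ statusCode: 409 }); // container already created → ok
ignoreStatusCodes([ 304 ], record)(null);                // started cleanly → ok
ignoreStatusCodes([ 304 ], record)({ statusCode: 500 }); // real error → fail
```

Combined with the unconditional `remove({ force: true })` beforehand, this makes the whole setup safe to call when the named container already exists or is already running.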
function teardownRedis(app, options, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof options, 'object');
assert.strictEqual(typeof callback, 'function');
var container = docker.getContainer('redis-' + app.id);
var removeOptions = {
force: true, // kill the container even if running
v: true // also remove docker-created volumes (host binds are untouched)
};
container.remove(removeOptions, function (error) {
if (error && error.statusCode !== 404) return callback(new Error('Error removing container:' + error));
vbox.unforwardFromHostToVirtualBox('redis-' + app.id);
safe.fs.unlinkSync(path.join(paths.ADDON_CONFIG_DIR, 'redis-' + app.id + '_vars.sh'));
shell.sudo('teardownRedis', [ RMAPPDIR_CMD, app.id + '/redis' ], function (error, stdout, stderr) {
if (error) return callback(new Error('Error removing redis data:' + error));
appdb.unsetAddonConfig(app.id, 'redis', callback);
});
});
}
'use strict';
var assert = require('assert'),
async = require('async'),
database = require('./database.js'),
DatabaseError = require('./databaseerror'),
safe = require('safetydance'),
util = require('util');
exports = module.exports = {
get: get,
getBySubdomain: getBySubdomain,
getByHttpPort: getByHttpPort,
getByContainerId: getByContainerId,
add: add,
exists: exists,
del: del,
update: update,
getAll: getAll,
getPortBindings: getPortBindings,
setAddonConfig: setAddonConfig,
getAddonConfig: getAddonConfig,
@@ -31,179 +23,216 @@ exports = module.exports = {
setHealth: setHealth,
setInstallationCommand: setInstallationCommand,
setRunCommand: setRunCommand,
getAppVersions: getAppVersions,
getAppStoreIds: getAppStoreIds,
// installation codes (keep in sync in UI)
ISTATE_PENDING_INSTALL: 'pending_install', // installs and fresh reinstalls
ISTATE_PENDING_CONFIGURE: 'pending_configure', // config (location, port) changes and on infra update
ISTATE_PENDING_UNINSTALL: 'pending_uninstall', // uninstallation
ISTATE_PENDING_RESTORE: 'pending_restore', // restore to previous backup or on upgrade
ISTATE_PENDING_UPDATE: 'pending_update', // update from installed state preserving data
ISTATE_PENDING_FORCE_UPDATE: 'pending_force_update', // update from any state preserving data
ISTATE_PENDING_BACKUP: 'pending_backup', // backup the app
ISTATE_ERROR: 'error', // error executing last pending_* command
ISTATE_INSTALLED: 'installed', // app is installed
RSTATE_RUNNING: 'running',
RSTATE_PENDING_START: 'pending_start',
RSTATE_PENDING_STOP: 'pending_stop',
RSTATE_STOPPED: 'stopped', // app stopped by user
RSTATE_ERROR: 'error',
// run codes (keep in sync in UI)
HEALTH_HEALTHY: 'healthy',
HEALTH_UNHEALTHY: 'unhealthy',
HEALTH_ERROR: 'error',
HEALTH_DEAD: 'dead',
_clear: clear
};
var APPS_FIELDS = [ 'id', 'appStoreId', 'installationState', 'installationProgress', 'runState',
'health', 'containerId', 'manifestJson', 'httpPort', 'location', 'dnsRecordId',
'accessRestriction', 'lastBackupId', 'lastBackupConfigJson', 'oldConfigJson' ].join(',');
var APPS_FIELDS_PREFIXED = [ 'apps.id', 'apps.appStoreId', 'apps.installationState', 'apps.installationProgress', 'apps.runState',
'apps.health', 'apps.containerId', 'apps.manifestJson', 'apps.httpPort', 'apps.location', 'apps.dnsRecordId',
'apps.accessRestriction', 'apps.lastBackupId', 'apps.lastBackupConfigJson', 'apps.oldConfigJson' ].join(',');
var PORT_BINDINGS_FIELDS = [ 'hostPort', 'environmentVariable', 'appId' ].join(',');
function postProcess(result) {
assert.strictEqual(typeof result, 'object');
assert(result.manifestJson === null || typeof result.manifestJson === 'string');
result.manifest = safe.JSON.parse(result.manifestJson);
delete result.manifestJson;
assert(result.lastBackupConfigJson === null || typeof result.lastBackupConfigJson === 'string');
result.lastBackupConfig = safe.JSON.parse(result.lastBackupConfigJson);
delete result.lastBackupConfigJson;
assert(result.oldConfigJson === null || typeof result.oldConfigJson === 'string');
result.oldConfig = safe.JSON.parse(result.oldConfigJson);
delete result.oldConfigJson;
assert(result.hostPorts === null || typeof result.hostPorts === 'string');
assert(result.environmentVariables === null || typeof result.environmentVariables === 'string');
result.portBindings = { };
var hostPorts = result.hostPorts === null ? [ ] : result.hostPorts.split(',');
var environmentVariables = result.environmentVariables === null ? [ ] : result.environmentVariables.split(',');
delete result.hostPorts;
delete result.environmentVariables;
for (var i = 0; i < environmentVariables.length; i++) {
result.portBindings[environmentVariables[i]] = parseInt(hostPorts[i], 10);
}
}
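postProcess splits the GROUP_CONCAT'ed `hostPorts`/`environmentVariables` columns back into a `portBindings` map keyed by environment variable. A condensed, self-contained sketch of that reshaping (the row object and its values are made up; field names match the query aliases above):

```javascript
// Condensed sketch of the portBindings reshaping done in postProcess above.
// The row mimics one GROUP_CONCAT result from the apps/appPortBindings join.
function rowToPortBindings(row) {
    var hostPorts = row.hostPorts === null ? [ ] : row.hostPorts.split(',');
    var environmentVariables = row.environmentVariables === null ? [ ] : row.environmentVariables.split(',');

    var portBindings = { };
    for (var i = 0; i < environmentVariables.length; i++) {
        portBindings[environmentVariables[i]] = parseInt(hostPorts[i], 10); // ports come back as strings
    }
    return portBindings;
}

var bindings = rowToPortBindings({ hostPorts: '8000,8001', environmentVariables: 'HTTP_PORT,SSH_PORT' });
// → { HTTP_PORT: 8000, SSH_PORT: 8001 }
```

An app with no port bindings yields `null` from GROUP_CONCAT, which the null checks turn into an empty map.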
function get(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + APPS_FIELDS_PREFIXED + ','
+ 'GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables'
+ ' FROM apps LEFT OUTER JOIN appPortBindings ON apps.id = appPortBindings.appId WHERE apps.id = ? GROUP BY apps.id', [ id ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
postProcess(result[0]);
callback(null, result[0]);
});
}
function getBySubdomain(subdomain, callback) {
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + APPS_FIELDS_PREFIXED + ','
+ 'GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables'
+ ' FROM apps LEFT OUTER JOIN appPortBindings ON apps.id = appPortBindings.appId WHERE location = ? GROUP BY apps.id', [ subdomain ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
postProcess(result[0]);
callback(null, result[0]);
});
}
function getByHttpPort(httpPort, callback) {
assert.strictEqual(typeof httpPort, 'number');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + APPS_FIELDS_PREFIXED + ','
+ 'GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables'
+ ' FROM apps LEFT OUTER JOIN appPortBindings ON apps.id = appPortBindings.appId WHERE httpPort = ? GROUP BY apps.id', [ httpPort ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
postProcess(result[0]);
callback(null, result[0]);
});
}
function getByContainerId(containerId, callback) {
assert.strictEqual(typeof containerId, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + APPS_FIELDS_PREFIXED + ','
+ 'GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables'
+ ' FROM apps LEFT OUTER JOIN appPortBindings ON apps.id = appPortBindings.appId WHERE containerId = ? GROUP BY apps.id', [ containerId ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
postProcess(result[0]);
callback(null, result[0]);
});
}
function getAll(callback) {
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + APPS_FIELDS_PREFIXED + ','
+ 'GROUP_CONCAT(CAST(appPortBindings.hostPort AS CHAR(6))) AS hostPorts, GROUP_CONCAT(appPortBindings.environmentVariable) AS environmentVariables'
+ ' FROM apps LEFT OUTER JOIN appPortBindings ON apps.id = appPortBindings.appId'
+ ' GROUP BY apps.id ORDER BY apps.id', function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (typeof results === 'undefined') results = [ ];
results.forEach(postProcess);
callback(null, results);
});
}
function add(id, appStoreId, manifest, location, portBindings, accessRestriction, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof appStoreId, 'string');
assert(manifest && typeof manifest === 'object');
assert.strictEqual(typeof manifest.version, 'string');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert.strictEqual(typeof callback, 'function');
portBindings = portBindings || { };
var manifestJson = JSON.stringify(manifest);
var queries = [ ];
queries.push({
query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestriction) VALUES (?, ?, ?, ?, ?, ?)',
args: [ id, appStoreId, manifestJson, exports.ISTATE_PENDING_INSTALL, location, accessRestriction ]
});
Object.keys(portBindings).forEach(function (env) {
queries.push({
query: 'INSERT INTO appPortBindings (environmentVariable, hostPort, appId) VALUES (?, ?, ?)',
args: [ env, portBindings[env], id ]
});
});
database.transaction(queries, function (error) {
if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS, error.message));
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
}
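The rewritten add() above batches its inserts into a queries array for database.transaction(). That query-building step can be sketched as a pure function; the standalone helper below is hypothetical, but the query strings and column order follow the diff:

```javascript
function buildAddQueries(id, appStoreId, manifest, location, accessRestriction, portBindings) {
    var queries = [ {
        query: 'INSERT INTO apps (id, appStoreId, manifestJson, installationState, location, accessRestriction) VALUES (?, ?, ?, ?, ?, ?)',
        args: [ id, appStoreId, JSON.stringify(manifest), 'pending_install', location, accessRestriction ]
    } ];
    Object.keys(portBindings || { }).forEach(function (env) {
        queries.push({
            query: 'INSERT INTO appPortBindings (environmentVariable, hostPort, appId) VALUES (?, ?, ?)',
            args: [ env, portBindings[env], id ]
        });
    });
    return queries; // passed as-is to database.transaction(queries, callback)
}
```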
function exists(id, callback) {
assert(typeof id === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
database.get('SELECT 1 FROM apps WHERE id=?', [ id ], function (error, result) {
database.query('SELECT 1 FROM apps WHERE id=?', [ id ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
return callback(null, typeof result !== 'undefined');
return callback(null, result.length !== 0);
});
}
function getPortBindings(id, callback) {
assert(typeof id === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
database.all('SELECT ' + PORT_BINDINGS_FIELDS + ' FROM appPortBindings WHERE appId = ?', [ id ], function (error, results) {
database.query('SELECT ' + PORT_BINDINGS_FIELDS + ' FROM appPortBindings WHERE appId = ?', [ id ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results = results || [ ];
var portBindings = { };
for (var i = 0; i < results.length; i++) {
portBindings[results[i].containerPort] = results[i].hostPort;
portBindings[results[i].environmentVariable] = results[i].hostPort;
}
callback(null, portBindings);
@@ -211,29 +240,29 @@ function getPortBindings(id, callback) {
}
function del(id, callback) {
assert(typeof id === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
var conn = database.beginTransaction();
conn.run('DELETE FROM appPortBindings WHERE appId = ?', [ id ], function (error) {
conn.run('DELETE FROM apps WHERE id = ?', [ id ], function (error) {
if (error || this.changes !== 1) database.rollback(conn);
var queries = [
{ query: 'DELETE FROM appPortBindings WHERE appId = ?', args: [ id ] },
{ query: 'DELETE FROM apps WHERE id = ?', args: [ id ] }
];
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (this.changes !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
database.transaction(queries, function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (results[1].affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
database.commit(conn, callback);
});
callback(null);
});
}
function clear(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
async.series([
database.run.bind(null, 'DELETE FROM appPortBindings'),
database.run.bind(null, 'DELETE FROM apps'),
database.run.bind(null, 'DELETE FROM appAddonConfigs')
database.query.bind(null, 'DELETE FROM appPortBindings'),
database.query.bind(null, 'DELETE FROM appAddonConfigs'),
database.query.bind(null, 'DELETE FROM apps')
], function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
return callback(null);
@@ -241,150 +270,155 @@ function clear(callback) {
}
function update(id, app, callback) {
updateWithConstraints(id, app, callback);
updateWithConstraints(id, app, '', callback);
}
function updateWithConstraints(id, app, constraints, callback) {
assert(typeof id === 'string');
assert(typeof app === 'object');
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof constraints, 'string');
assert.strictEqual(typeof callback, 'function');
assert(!('portBindings' in app) || typeof app.portBindings === 'object');
if (typeof constraints === 'function') {
callback = constraints;
constraints = '';
} else {
assert(typeof constraints === 'string');
assert(typeof callback === 'function');
}
var queries = [ ];
var portBindings = app.portBindings || { };
var conn = database.beginTransaction();
async.eachSeries(Object.keys(portBindings), function iterator(containerPort, callback) {
var values = [ portBindings[containerPort], containerPort, id ];
conn.run('UPDATE appPortBindings SET hostPort = ? WHERE containerPort = ? AND appId = ?', values, callback);
}, function seriesDone(error) {
if (error) {
database.rollback(conn);
return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
}
var args = [ ], values = [ ];
for (var p in app) {
if (!app.hasOwnProperty(p)) continue;
if (p === 'manifest') {
args.push('manifestJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p !== 'portBindings') {
args.push(p + ' = ?');
values.push(app[p]);
}
}
if (values.length === 0) return database.commit(conn, callback);
values.push(id);
conn.run('UPDATE apps SET ' + args.join(', ') + ' WHERE id = ? ' + constraints, values, function (error) {
if (error || this.changes !== 1) database.rollback(conn);
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (this.changes !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
database.commit(conn, callback);
if ('portBindings' in app) {
var portBindings = app.portBindings || { };
// replace entries by app id
queries.push({ query: 'DELETE FROM appPortBindings WHERE appId = ?', args: [ id ] });
Object.keys(portBindings).forEach(function (env) {
var values = [ portBindings[env], env, id ];
queries.push({ query: 'INSERT INTO appPortBindings (hostPort, environmentVariable, appId) VALUES(?, ?, ?)', args: values });
});
});
}
// sets health on installed apps that have a runState which is not null or pending
function setHealth(appId, healthy, runState, callback) {
assert(typeof appId === 'string');
assert(typeof healthy === 'boolean');
assert(typeof runState === 'string');
assert(typeof callback === 'function');
var values = {
healthy: healthy,
runState: runState
};
var constraints = 'AND runState NOT GLOB "pending_*" AND installationState = "installed"';
if (runState === exports.RSTATE_DEAD) { // don't mark stopped apps as dead
constraints += ' AND runState != "stopped"';
}
updateWithConstraints(appId, values, constraints, callback);
}
function setInstallationCommand(appId, installationState, values, callback) {
assert(typeof appId === 'string');
assert(typeof installationState === 'string');
if (typeof values === 'function') {
callback = values;
values = { };
} else {
assert(typeof values === 'object');
assert(typeof callback === 'function');
var fields = [ ], values = [ ];
for (var p in app) {
if (p === 'manifest') {
fields.push('manifestJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'lastBackupConfig') {
fields.push('lastBackupConfigJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p === 'oldConfig') {
fields.push('oldConfigJson = ?');
values.push(JSON.stringify(app[p]));
} else if (p !== 'portBindings') {
fields.push(p + ' = ?');
values.push(app[p]);
}
}
values.installationState = installationState;
if (installationState === exports.ISTATE_PENDING_UNINSTALL) {
updateWithConstraints(appId, values, '', callback);
} else {
updateWithConstraints(appId, values, 'AND installationState NOT GLOB "pending_*"', callback);
if (values.length !== 0) {
values.push(id);
queries.push({ query: 'UPDATE apps SET ' + fields.join(', ') + ' WHERE id = ? ' + constraints, args: values });
}
}
function setRunCommand(appId, runState, callback) {
assert(typeof appId === 'string');
assert(typeof runState === 'string');
assert(typeof callback === 'function');
var values = { runState: runState };
updateWithConstraints(appId, values, 'AND runState NOT GLOB "pending_*" AND installationState = "installed"', callback);
}
function getAppVersions(callback) {
assert(typeof callback === 'function');
database.all('SELECT id, appStoreId, version FROM apps', function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
results = results || [ ];
callback(null, results);
});
}
function setAddonConfig(appId, addonId, env, callback) {
assert(typeof appId === 'string');
assert(typeof addonId === 'string');
assert(util.isArray(env));
assert(typeof callback === 'function');
if (env.length === 0) return callback(null);
var query = 'INSERT INTO appAddonConfigs(appId, addonId, value) VALUES ';
var args = [ ], queryArgs = [ ];
for (var i = 0; i < env.length; i++) {
args.push(appId, addonId, env[i]);
queryArgs.push('(?, ?, ?)');
}
database.run(query + queryArgs.join(','), args, function (error) {
database.transaction(queries, function (error, results) {
if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS, error.message));
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (results[results.length - 1].affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
return callback(null);
});
}
function unsetAddonConfig(appId, addonId, callback) {
assert(typeof appId === 'string');
assert(typeof addonId === 'string');
assert(typeof callback === 'function');
// not sure if health should influence runState
function setHealth(appId, health, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof health, 'string');
assert.strictEqual(typeof callback, 'function');
database.run('DELETE FROM appAddonConfigs WHERE appId = ? AND addonId = ?', [ appId, addonId ], function (error) {
var values = { health: health };
var constraints = 'AND runState NOT LIKE "pending_%" AND installationState = "installed"';
updateWithConstraints(appId, values, constraints, callback);
}
function setInstallationCommand(appId, installationState, values, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof installationState, 'string');
if (typeof values === 'function') {
callback = values;
values = { };
} else {
assert.strictEqual(typeof values, 'object');
assert.strictEqual(typeof callback, 'function');
}
values.installationState = installationState;
values.installationProgress = '';
// Rules are:
// uninstall is allowed in any state
// force update is allowed in any state including pending_uninstall! (for better or worse)
// restore is allowed from installed or error state
// update and configure are allowed only in installed state
if (installationState === exports.ISTATE_PENDING_UNINSTALL || installationState === exports.ISTATE_PENDING_FORCE_UPDATE) {
updateWithConstraints(appId, values, '', callback);
} else if (installationState === exports.ISTATE_PENDING_RESTORE) {
updateWithConstraints(appId, values, 'AND (installationState = "installed" OR installationState = "error")', callback);
} else if (installationState === exports.ISTATE_PENDING_UPDATE || installationState === exports.ISTATE_PENDING_CONFIGURE || installationState === exports.ISTATE_PENDING_BACKUP) {
updateWithConstraints(appId, values, 'AND installationState = "installed"', callback);
} else {
callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, 'invalid installationState'));
}
}
function setRunCommand(appId, runState, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof runState, 'string');
assert.strictEqual(typeof callback, 'function');
var values = { runState: runState };
updateWithConstraints(appId, values, 'AND runState NOT LIKE "pending_%" AND installationState = "installed"', callback);
}
function getAppStoreIds(callback) {
assert.strictEqual(typeof callback, 'function');
database.query('SELECT id, appStoreId FROM apps', function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null, results);
});
}
function setAddonConfig(appId, addonId, env, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof addonId, 'string');
assert(util.isArray(env));
assert.strictEqual(typeof callback, 'function');
unsetAddonConfig(appId, addonId, function (error) {
if (error) return callback(error);
if (env.length === 0) return callback(null);
var query = 'INSERT INTO appAddonConfigs(appId, addonId, value) VALUES ';
var args = [ ], queryArgs = [ ];
for (var i = 0; i < env.length; i++) {
args.push(appId, addonId, env[i]);
queryArgs.push('(?, ?, ?)');
}
database.query(query + queryArgs.join(','), args, function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
return callback(null);
});
});
}
function unsetAddonConfig(appId, addonId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof addonId, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('DELETE FROM appAddonConfigs WHERE appId = ? AND addonId = ?', [ appId, addonId ], function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
@@ -392,10 +426,10 @@ function unsetAddonConfig(appId, addonId, callback) {
}
function unsetAddonConfigByAppId(appId, callback) {
assert(typeof appId === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
database.run('DELETE FROM appAddonConfigs WHERE appId = ?', [ appId ], function (error) {
database.query('DELETE FROM appAddonConfigs WHERE appId = ?', [ appId ], function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
@@ -403,29 +437,29 @@ function unsetAddonConfigByAppId(appId, callback) {
}
function getAddonConfig(appId, addonId, callback) {
assert(typeof appId === 'string');
assert(typeof addonId === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof addonId, 'string');
assert.strictEqual(typeof callback, 'function');
database.all('SELECT value FROM appAddonConfigs WHERE appId = ? AND addonId = ?', [ appId, addonId ], function (error, result) {
database.query('SELECT value FROM appAddonConfigs WHERE appId = ? AND addonId = ?', [ appId, addonId ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
var config = [ ];
result.forEach(function (v) { config.push(v.value); });
results.forEach(function (v) { config.push(v.value); });
callback(null, config);
});
}
function getAddonConfigByAppId(appId, callback) {
assert(typeof appId === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
database.all('SELECT value FROM appAddonConfigs WHERE appId = ?', [ appId ], function (error, result) {
database.query('SELECT value FROM appAddonConfigs WHERE appId = ?', [ appId ], function (error, results) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
var config = [ ];
result.forEach(function (v) { config.push(v.value); });
results.forEach(function (v) { config.push(v.value); });
callback(null, config);
});
@@ -0,0 +1,188 @@
'use strict';
var appdb = require('./appdb.js'),
assert = require('assert'),
async = require('async'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:apphealthmonitor'),
docker = require('./docker.js'),
mailer = require('./mailer.js'),
superagent = require('superagent'),
util = require('util');
exports = module.exports = {
start: start,
stop: stop
};
var HEALTHCHECK_INTERVAL = 10 * 1000; // every 10 seconds. this needs to be small since the UI makes only healthy apps clickable
var UNHEALTHY_THRESHOLD = 3 * 60 * 1000; // 3 minutes
var gHealthInfo = { }; // { time, emailSent }
var gRunTimeout = null;
var gDockerEventStream = null;
function debugApp(app) {
assert(!app || typeof app === 'object');
var prefix = app ? app.location : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function setHealth(app, health, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof health, 'string');
assert.strictEqual(typeof callback, 'function');
var now = new Date();
if (!(app.id in gHealthInfo)) { // add new apps to list
gHealthInfo[app.id] = { time: now, emailSent: false };
}
if (health === appdb.HEALTH_HEALTHY) {
gHealthInfo[app.id].time = now;
} else if (Math.abs(now - gHealthInfo[app.id].time) > UNHEALTHY_THRESHOLD) {
if (gHealthInfo[app.id].emailSent) return callback(null);
debugApp(app, 'marking as unhealthy since not seen for more than %s minutes', UNHEALTHY_THRESHOLD/(60 * 1000));
if (app.appStoreId !== '') mailer.appDied(app); // do not send mails for dev apps
gHealthInfo[app.id].emailSent = true;
} else {
debugApp(app, 'waiting for some time to update the app health');
return callback(null);
}
appdb.setHealth(app.id, health, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null); // app uninstalled?
if (error) return callback(error);
app.health = health;
callback(null);
});
}
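The unhealthy decision in setHealth reduces to a time comparison against UNHEALTHY_THRESHOLD, using the per-app time recorded in gHealthInfo. A minimal sketch (the helper name is hypothetical):

```javascript
var UNHEALTHY_THRESHOLD = 3 * 60 * 1000; // 3 minutes, as above

// true when the app was last seen healthy more than the threshold ago
function shouldMarkUnhealthy(lastHealthyTime, now) {
    return Math.abs(now - lastHealthyTime) > UNHEALTHY_THRESHOLD;
}
```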
// callback is called with error for fatal errors and not if health check failed
function checkAppHealth(app, callback) {
if (app.installationState !== appdb.ISTATE_INSTALLED || app.runState !== appdb.RSTATE_RUNNING) {
debugApp(app, 'skipped. istate:%s rstate:%s', app.installationState, app.runState);
return callback(null);
}
var container = docker.getContainer(app.containerId),
manifest = app.manifest;
container.inspect(function (err, data) {
if (err || !data || !data.State) {
debugApp(app, 'Error inspecting container');
return setHealth(app, appdb.HEALTH_ERROR, callback);
}
if (data.State.Running !== true) {
debugApp(app, 'exited');
return setHealth(app, appdb.HEALTH_DEAD, callback);
}
// poll through docker network instead of nginx to bypass any potential oauth proxy
var healthCheckUrl = 'http://127.0.0.1:' + app.httpPort + manifest.healthCheckPath;
superagent
.get(healthCheckUrl)
.redirects(0)
.timeout(HEALTHCHECK_INTERVAL)
.end(function (error, res) {
if (error || res.status >= 400) { // 2xx and 3xx are ok
debugApp(app, 'not alive : %s', error || res.status);
setHealth(app, appdb.HEALTH_UNHEALTHY, callback);
} else {
debugApp(app, 'alive');
setHealth(app, appdb.HEALTH_HEALTHY, callback);
}
});
});
}
function processApps(callback) {
appdb.getAll(function (error, apps) {
if (error) return callback(error);
async.each(apps, checkAppHealth, function (error) {
if (error) console.error(error);
callback(null);
});
});
}
function run() {
processApps(function (error) {
if (error) console.error(error);
gRunTimeout = setTimeout(run, HEALTHCHECK_INTERVAL);
});
}
/*
OOM can be tested using stress tool like so:
docker run -ti -m 100M cloudron/base:0.3.3 /bin/bash
apt-get update && apt-get install stress
stress --vm 1 --vm-bytes 200M --vm-hang 0
*/
function processDockerEvents() {
// note that for some reason, the callback is called only on the first event
debug('Listening for docker events');
docker.getEvents({ filters: JSON.stringify({ event: [ 'oom' ] }) }, function (error, stream) {
if (error) return console.error(error);
gDockerEventStream = stream;
stream.setEncoding('utf8');
stream.on('data', function (data) {
var ev = JSON.parse(data);
debug('Container ' + ev.id + ' went OOM');
appdb.getByContainerId(ev.id, function (error, app) {
var program = error || !app.appStoreId ? ev.id : app.appStoreId;
var context = JSON.stringify(ev);
if (app) context = context + '\n\n' + JSON.stringify(app, null, 4) + '\n';
debug('OOM Context: %s', context);
// do not send mails for dev apps
if (app && app.appStoreId !== '') mailer.sendCrashNotification(program, context); // app can be null if it's an addon crash
});
});
stream.on('error', function (error) {
console.error('Error reading docker events', error);
gDockerEventStream = null; // TODO: reconnect?
});
stream.on('end', function () {
console.error('Docker event stream ended');
gDockerEventStream = null; // TODO: reconnect?
stream.end();
});
});
}
function start(callback) {
assert.strictEqual(typeof callback, 'function');
debug('Starting apphealthmonitor');
processDockerEvents();
run();
callback();
}
function stop(callback) {
assert.strictEqual(typeof callback, 'function');
clearTimeout(gRunTimeout);
if (gDockerEventStream) gDockerEventStream.end(); // may be null if the stream errored or ended
callback();
}
@@ -2,103 +2,92 @@
'use strict';
var appdb = require('./appdb.js'),
assert = require('assert'),
child_process = require('child_process'),
config = require('../config.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:apps'),
docker = require('./docker.js'),
fs = require('fs'),
os = require('os'),
paths = require('./paths.js'),
split = require('split'),
stream = require('stream'),
util = require('util');
exports = module.exports = {
AppsError: AppsError,
initialize: initialize,
uninitialize: uninitialize,
get: get,
getBySubdomain: getBySubdomain,
getAll: getAll,
purchase: purchase,
install: install,
configure: configure,
uninstall: uninstall,
restore: restore,
restoreApp: restoreApp,
update: update,
backup: backup,
backupApp: backupApp,
getLogStream: getLogStream,
getLogs: getLogs,
start: start,
stop: stop,
exec: exec,
checkManifestConstraints: checkManifestConstraints,
setRestorePoint: setRestorePoint,
autoupdateApps: autoupdateApps,
// exported for testing
_validateHostname: validateHostname,
_validatePortBindings: validatePortBindings
};
var gTasks = { };
var addons = require('./addons.js'),
appdb = require('./appdb.js'),
assert = require('assert'),
async = require('async'),
backups = require('./backups.js'),
BackupsError = require('./backups.js').BackupsError,
config = require('./config.js'),
constants = require('./constants.js'),
DatabaseError = require('./databaseerror.js'),
debug = require('debug')('box:apps'),
docker = require('./docker.js'),
fs = require('fs'),
manifestFormat = require('cloudron-manifestformat'),
path = require('path'),
paths = require('./paths.js'),
safe = require('safetydance'),
semver = require('semver'),
shell = require('./shell.js'),
split = require('split'),
superagent = require('superagent'),
taskmanager = require('./taskmanager.js'),
util = require('util'),
validator = require('validator');
function initialize(callback) {
assert(typeof callback === 'function');
var BACKUP_APP_CMD = path.join(__dirname, 'scripts/backupapp.sh'),
RESTORE_APP_CMD = path.join(__dirname, 'scripts/restoreapp.sh'),
BACKUP_SWAP_CMD = path.join(__dirname, 'scripts/backupswap.sh');
resume(callback); // TODO: potential race here since resume is async
function debugApp(app, args) {
assert(!app || typeof app === 'object');
var prefix = app ? app.location : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function startTask(appId) {
assert(typeof appId === 'string');
assert(!(appId in gTasks));
gTasks[appId] = child_process.fork(__dirname + '/apptask.js', [ appId ]);
gTasks[appId].once('exit', function (code, signal) {
debug('Task completed :' + appId);
delete gTasks[appId];
});
}
function stopTask(appId) {
assert(typeof appId === 'string');
if (gTasks[appId]) {
debug('Killing existing task : ' + gTasks[appId].pid);
gTasks[appId].kill();
delete gTasks[appId];
}
}
// resume install and uninstalls
function resume(callback) {
assert(typeof callback === 'function');
appdb.getAll(function (error, apps) {
if (error) return callback(error);
apps.forEach(function (app) {
debug('Creating process for ' + app.id + ' with state ' + app.installationState);
startTask(app.id);
function ignoreError(func) {
return function (callback) {
func(function (error) {
if (error) console.error('Ignored error:', error);
callback();
});
callback(null);
});
}
function uninitialize(callback) {
assert(typeof callback === 'function');
for (var appId in gTasks) {
stopTask(appId);
}
callback(null);
};
}
// http://dustinsenos.com/articles/customErrorsInNode
// http://code.google.com/p/v8/wiki/JavaScriptStackTraceApi
function AppsError(reason, errorOrMessage) {
assert(typeof reason === 'string');
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
@@ -117,21 +106,27 @@ function AppsError(reason, errorOrMessage) {
}
util.inherits(AppsError, Error);
AppsError.INTERNAL_ERROR = 'Internal Error';
AppsError.EXTERNAL_ERROR = 'External Error';
AppsError.ALREADY_EXISTS = 'Already Exists';
AppsError.NOT_FOUND = 'Not Found';
AppsError.BAD_FIELD = 'Bad Field';
AppsError.BAD_STATE = 'Bad State';
AppsError.PORT_RESERVED = 'Port Reserved';
AppsError.PORT_CONFLICT = 'Port Conflict';
AppsError.BILLING_REQUIRED = 'Billing Required';
// Hostname validation comes from RFC 1123 (section 2.1)
// Domain name validation comes from RFC 2181 (Name syntax)
// https://en.wikipedia.org/wiki/Hostname#Restrictions_on_valid_host_names
// We are validating the validity of the location-fqdn as host name
function validateHostname(location, fqdn) {
var RESERVED_LOCATIONS = [ 'admin' ];
var RESERVED_LOCATIONS = [ constants.ADMIN_LOCATION, constants.API_LOCATION ];
if (RESERVED_LOCATIONS.indexOf(location) !== -1) return new Error(location + ' is reserved');
if ((location.length + 1 + /* hyphen */ + fqdn.indexOf('.')) > 63) return new Error('Hostname length cannot be greater than 63');
if (location === '') return null; // bare location
if ((location.length + 1 /*+ hyphen */ + fqdn.indexOf('.')) > 63) return new Error('Hostname length cannot be greater than 63');
if (location.match(/^[A-Za-z0-9-]+$/) === null) return new Error('Hostname can only contain alphanumerics and hyphen');
if (location[0] === '-' || location[location.length-1] === '-') return new Error('Hostname cannot start or end with hyphen');
if (location.length + 1 /* hyphen */ + fqdn.length > 253) return new Error('FQDN length exceeds 253 characters');
@@ -140,52 +135,85 @@ function validateHostname(location, fqdn) {
}
// validate the port bindings
function validatePortBindings(portBindings) {
function validatePortBindings(portBindings, tcpPorts) {
// keep the public ports in sync with firewall rules in scripts/initializeBaseUbuntuImage.sh
// these ports are reserved even if we listen only on 127.0.0.1 because we setup HostIp to be 127.0.0.1
// for custom tcp ports
var RESERVED_PORTS = [
22, /* ssh */
25, /* smtp */
53, /* dns */
80, /* http */
443, /* https */
2003, /* graphite */
2004, /* graphite */
919, /* ssh */
2003, /* graphite (lo) */
2004, /* graphite (lo) */
2020, /* install server */
3000, /* app server */
8000 /* graphite */
config.get('port'), /* app server (lo) */
config.get('internalPort'), /* internal app server (lo) */
config.get('ldapPort'), /* ldap server (lo) */
config.get('oauthProxyPort'), /* oauth proxy server (lo) */
3306, /* mysql (lo) */
8000 /* graphite (lo) */
];
for (var containerPort in portBindings) {
var containerPortInt = parseInt(containerPort, 10);
if (isNaN(containerPortInt) || containerPortInt <= 0 || containerPortInt > 65535) {
return new Error(containerPort + ' is not a valid port');
}
if (!portBindings) return null;
var hostPortInt = parseInt(portBindings[containerPort], 10);
if (isNaN(hostPortInt) || hostPortInt <= 1024 || hostPortInt > 65535) {
return new Error(portBindings[containerPort] + ' is not a valid port');
}
var env;
for (env in portBindings) {
if (!/^[a-zA-Z0-9_]+$/.test(env)) return new AppsError(AppsError.BAD_FIELD, env + ' is not a valid environment variable');
if (RESERVED_PORTS.indexOf(hostPortInt) !== -1) return new Error(hostPortInt + ' is reserved');
if (!Number.isInteger(portBindings[env])) return new Error(portBindings[env] + ' is not an integer');
if (portBindings[env] <= 0 || portBindings[env] > 65535) return new Error(portBindings[env] + ' is out of range');
if (RESERVED_PORTS.indexOf(portBindings[env]) !== -1) return new AppsError(AppsError.PORT_RESERVED, String(portBindings[env]));
}
// it is OK if there is no 1-1 mapping between values in manifest.tcpPorts and portBindings. missing values implies
// that the user wants the service disabled
tcpPorts = tcpPorts || { };
for (env in portBindings) {
if (!(env in tcpPorts)) return new AppsError(AppsError.BAD_FIELD, 'Invalid portBindings ' + env);
}
return null;
}
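Stripped of config lookups and error types, the per-binding checks in validatePortBindings amount to the predicate below. This is an illustrative sketch: isValidBinding is a hypothetical helper and RESERVED is only a subset of the real reserved-port list.

```javascript
var RESERVED = [ 22, 25, 53, 80, 443 ];

function isValidBinding(env, hostPort) {
    if (!/^[a-zA-Z0-9_]+$/.test(env)) return false;      // env var name syntax
    if (!Number.isInteger(hostPort)) return false;       // must be an integer
    if (hostPort <= 0 || hostPort > 65535) return false; // valid port range
    if (RESERVED.indexOf(hostPort) !== -1) return false; // reserved host ports
    return true;
}
```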
function getIconURLSync(app) {
function getDuplicateErrorDetails(location, portBindings, error) {
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(error.reason, DatabaseError.ALREADY_EXISTS);
var match = error.message.match(/ER_DUP_ENTRY: Duplicate entry '(.*)' for key/);
if (!match) {
console.error('Unexpected SQL error message.', error);
return new AppsError(AppsError.INTERNAL_ERROR);
}
// check if the location conflicts
if (match[1] === location) return new AppsError(AppsError.ALREADY_EXISTS);
// check if any of the port bindings conflict
for (var env in portBindings) {
if (portBindings[env] === parseInt(match[1], 10)) return new AppsError(AppsError.PORT_CONFLICT, match[1]);
}
return new AppsError(AppsError.ALREADY_EXISTS);
}
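getDuplicateErrorDetails leans on the shape of MySQL's duplicate-key message. The sample message below is a typical ER_DUP_ENTRY string for illustration, not taken from the diff:

```javascript
var DUP_RE = /ER_DUP_ENTRY: Duplicate entry '(.*)' for key/;

var message = "ER_DUP_ENTRY: Duplicate entry 'blog' for key 'location'";
var match = message.match(DUP_RE);
// match[1] holds the conflicting value, here the app location 'blog'
```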
function getIconUrlSync(app) {
var iconPath = paths.APPICONS_DIR + '/' + app.id + '.png';
return fs.existsSync(iconPath) ? '/api/v1/apps/' + app.id + '/icon' : null;
}
function get(appId, callback) {
assert(typeof appId === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
app.icon = getIconURLSync(app);
app.iconUrl = getIconUrlSync(app);
app.fqdn = config.appFqdn(app.location);
callback(null, app);
@@ -193,14 +221,14 @@ function get(appId, callback) {
}
function getBySubdomain(subdomain, callback) {
assert(typeof subdomain === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof callback, 'function');
appdb.getBySubdomain(subdomain, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
app.icon = getIconURLSync(app);
app.iconUrl = getIconUrlSync(app);
app.fqdn = config.appFqdn(app.location);
callback(null, app);
@@ -208,13 +236,13 @@ function getBySubdomain(subdomain, callback) {
}
function getAll(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
appdb.getAll(function (error, apps) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
apps.forEach(function (app) {
app.iconUrl = getIconUrlSync(app);
app.fqdn = config.appFqdn(app.location);
});
@@ -234,111 +262,196 @@ function validateAccessRestriction(accessRestriction) {
}
}
function purchase(appStoreId, callback) {
assert.strictEqual(typeof appStoreId, 'string');
assert.strictEqual(typeof callback, 'function');
// Skip purchase if appStoreId is empty
if (appStoreId === '') return callback(null);
var url = config.apiServerOrigin() + '/api/v1/apps/' + appStoreId + '/purchase';
superagent.post(url).query({ token: config.token() }).end(function (error, res) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (res.status === 402) return callback(new AppsError(AppsError.BILLING_REQUIRED));
if (res.status !== 201 && res.status !== 200) return callback(new Error(util.format('App purchase failed. %s %j', res.status, res.body)));
callback(null);
});
}
function install(appId, appStoreId, manifest, location, portBindings, accessRestriction, icon, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof appStoreId, 'string');
assert(manifest && typeof manifest === 'object');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert(!icon || typeof icon === 'string');
assert.strictEqual(typeof callback, 'function');
var error = manifestFormat.parse(manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest error: ' + error.message));
error = checkManifestConstraints(manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest cannot be installed: ' + error.message));
error = validateHostname(location, config.fqdn());
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
error = validatePortBindings(portBindings, manifest.tcpPorts);
if (error) return callback(error);
error = validateAccessRestriction(accessRestriction);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
if (icon) {
if (!validator.isBase64(icon)) return callback(new AppsError(AppsError.BAD_FIELD, 'icon is not base64'));
if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, appId + '.png'), new Buffer(icon, 'base64'))) {
return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving icon:' + safe.error.message));
}
}
debug('Will install app with id:%s', appId);
purchase(appStoreId, function (error) {
if (error) return callback(error);
appdb.add(appId, appStoreId, manifest, location.toLowerCase(), portBindings, accessRestriction, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location.toLowerCase(), portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
});
}
function configure(appId, location, portBindings, accessRestriction, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof location, 'string');
assert.strictEqual(typeof portBindings, 'object');
assert.strictEqual(typeof accessRestriction, 'string');
assert.strictEqual(typeof callback, 'function');
var error = validateHostname(location, config.fqdn());
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
error = validateAccessRestriction(accessRestriction);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
error = validatePortBindings(portBindings, app.manifest.tcpPorts);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
var values = {
location: location.toLowerCase(),
accessRestriction: accessRestriction,
portBindings: portBindings,
oldConfig: {
location: app.location,
accessRestriction: app.accessRestriction,
portBindings: app.portBindings
}
};
debug('Will configure app with id:%s values:%j', appId, values);
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_CONFIGURE, values, function (error) {
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails(location.toLowerCase(), portBindings, error));
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
});
}
function update(appId, force, manifest, portBindings, icon, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof force, 'boolean');
assert(manifest && typeof manifest === 'object');
assert(!portBindings || typeof portBindings === 'object');
assert(!icon || typeof icon === 'string');
assert.strictEqual(typeof callback, 'function');
debug('Will update app with id:%s', appId);
var error = manifestFormat.parse(manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest error:' + error.message));
error = checkManifestConstraints(manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest cannot be installed:' + error.message));
error = validatePortBindings(portBindings, manifest.tcpPorts);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, error.message));
if (icon) {
if (!validator.isBase64(icon)) return callback(new AppsError(AppsError.BAD_FIELD, 'icon is not base64'));
if (!safe.fs.writeFileSync(path.join(paths.APPICONS_DIR, appId + '.png'), new Buffer(icon, 'base64'))) {
return callback(new AppsError(AppsError.INTERNAL_ERROR, 'Error saving icon:' + safe.error.message));
}
}
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var values = {
manifest: manifest,
portBindings: portBindings,
oldConfig: {
manifest: app.manifest,
portBindings: app.portBindings
}
};
appdb.setInstallationCommand(appId, force ? appdb.ISTATE_PENDING_FORCE_UPDATE : appdb.ISTATE_PENDING_UPDATE, values, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error && error.reason === DatabaseError.ALREADY_EXISTS) return callback(getDuplicateErrorDetails('' /* location cannot conflict */, portBindings, error));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
});
}
function getLogStream(appId, fromLine, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof fromLine, 'number'); // behaves like tail -n
assert.strictEqual(typeof callback, 'function');
debug('Getting logs for %s', appId);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (app.installationState !== appdb.ISTATE_INSTALLED) return callback(new AppsError(AppsError.BAD_STATE, util.format('App is in %s state.', app.installationState)));
var container = docker.getContainer(app.containerId);
var tail = fromLine < 0 ? -fromLine : 'all';
// note: cannot access docker file directly because it needs root access
container.logs({ stdout: true, stderr: true, follow: true, timestamps: true, tail: tail }, function (error, logStream) {
if (error && error.statusCode === 404) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var lineCount = 0;
var skipLinesStream = split(function mapper(line) {
if (++lineCount < fromLine) return undefined;
var timestamp = line.substr(0, line.indexOf(' ')); // sometimes this has square brackets around it
return JSON.stringify({ lineNumber: lineCount, timestamp: timestamp.replace(/[[\]]/g,''), log: line.substr(timestamp.length + 1) });
});
skipLinesStream.close = logStream.req.abort;
logStream.pipe(skipLinesStream);
@@ -348,15 +461,15 @@ function getLogStream(appId, options, callback) {
}
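The `fromLine` contract used by getLogStream() can be sketched as two pure helpers (an illustrative sketch, not code from this repository): a negative value behaves like `tail -n`, while a non-negative value skips lines counted from the start of the stream.

```javascript
// Negative fromLine fetches only the last n lines, as `tail -n` would;
// otherwise all lines are fetched and filtered by line number.
function tailOption(fromLine) {
    return fromLine < 0 ? -fromLine : 'all';
}

// Mirrors the split() mapper above: lines numbered below fromLine are dropped.
function keepLine(lineNumber, fromLine) {
    return lineNumber >= fromLine;
}
```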
function getLogs(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Getting logs for %s', appId);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (app.installationState !== appdb.ISTATE_INSTALLED) return callback(new AppsError(AppsError.BAD_STATE, util.format('App is in %s state.', app.installationState)));
var container = docker.getContainer(app.containerId);
// note: cannot access docker file directly because it needs root access
@@ -369,9 +482,55 @@ function getLogs(appId, callback) {
});
}
function restore(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Will restore app with id:%s', appId);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
// restore without a backup is the same as re-install
var restoreConfig = app.lastBackupConfig, values = { };
if (restoreConfig) {
// re-validate because this new box version may not accept old configs.
// if we restore location, it should be validated here as well
error = checkManifestConstraints(restoreConfig.manifest);
if (error) return callback(new AppsError(AppsError.BAD_FIELD, 'Manifest cannot be installed: ' + error.message));
error = validatePortBindings(restoreConfig.portBindings, restoreConfig.manifest.tcpPorts); // maybe new ports got reserved now
if (error) return callback(error);
// ## should probably query new location, access restriction from user
values = {
manifest: restoreConfig.manifest,
portBindings: restoreConfig.portBindings,
oldConfig: {
location: app.location,
accessRestriction: app.accessRestriction,
portBindings: app.portBindings,
manifest: app.manifest
}
};
}
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_RESTORE, values, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
});
}
function uninstall(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Will uninstall app with id:%s', appId);
@@ -379,16 +538,15 @@ function uninstall(appId, callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId); // since uninstall is allowed from any state, kill current task
callback(null);
});
}
function start(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Will start app with id:%s', appId);
@@ -396,16 +554,15 @@ function start(appId, callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
}
function stop(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Will stop app with id:%s', appId);
@@ -413,10 +570,251 @@ function stop(appId, callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
}
function checkManifestConstraints(manifest) {
if (!manifest.dockerImage) return new Error('Missing dockerImage'); // dockerImage is optional in manifest
if (semver.valid(manifest.maxBoxVersion) && semver.gt(config.version(), manifest.maxBoxVersion)) {
return new Error('Box version exceeds Apps maxBoxVersion');
}
if (semver.valid(manifest.minBoxVersion) && semver.gt(manifest.minBoxVersion, config.version())) {
return new Error('minBoxVersion exceeds Box version');
}
return null;
}
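The min/max box version gate in checkManifestConstraints() can be sketched standalone. This is an illustrative sketch, not code from this repository: a naive dotted-version comparison stands in for the semver module, and the helper names are hypothetical.

```javascript
// Naive major.minor.patch comparison standing in for semver.gt().
function cmpVersion(a, b) {
    var pa = a.split('.').map(Number), pb = b.split('.').map(Number);
    for (var i = 0; i < 3; i++) {
        if ((pa[i] || 0) > (pb[i] || 0)) return 1;
        if ((pa[i] || 0) < (pb[i] || 0)) return -1;
    }
    return 0;
}

// Mirrors the gate above: the box must sit inside [minBoxVersion, maxBoxVersion].
function boxVersionError(boxVersion, manifest) {
    if (manifest.maxBoxVersion && cmpVersion(boxVersion, manifest.maxBoxVersion) > 0) {
        return 'Box version exceeds Apps maxBoxVersion';
    }
    if (manifest.minBoxVersion && cmpVersion(manifest.minBoxVersion, boxVersion) > 0) {
        return 'minBoxVersion exceeds Box version';
    }
    return null;
}
```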
function exec(appId, options, callback) {
assert.strictEqual(typeof appId, 'string');
assert(options && typeof options === 'object');
assert.strictEqual(typeof callback, 'function');
var cmd = options.cmd || [ '/bin/bash' ];
assert(util.isArray(cmd) && cmd.length > 0);
appdb.get(appId, function (error, app) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var container = docker.getContainer(app.containerId);
var execOptions = {
AttachStdin: true,
AttachStdout: true,
AttachStderr: true,
Tty: true,
Cmd: cmd
};
container.exec(execOptions, function (error, exec) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
var startOptions = {
Detach: false,
Tty: true,
stdin: true // this is a dockerode option that enables openStdin in the modem
};
exec.start(startOptions, function(error, stream) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
if (options.rows && options.columns) {
exec.resize({ h: options.rows, w: options.columns }, function (error) { if (error) debug('Error resizing console', error); });
}
return callback(null, stream);
});
});
});
}
function setRestorePoint(appId, lastBackupId, lastBackupConfig, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof lastBackupId, 'string');
assert.strictEqual(typeof lastBackupConfig, 'object');
assert.strictEqual(typeof callback, 'function');
appdb.update(appId, { lastBackupId: lastBackupId, lastBackupConfig: lastBackupConfig }, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND, 'No such app'));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
return callback(null);
});
}
function autoupdateApps(updateInfo, callback) { // updateInfo is { appId -> { manifest } }
assert.strictEqual(typeof updateInfo, 'object');
assert.strictEqual(typeof callback, 'function');
function canAutoupdateApp(app, newManifest) {
var tcpPorts = newManifest.tcpPorts || { };
var portBindings = app.portBindings; // this is never null
if (Object.keys(tcpPorts).length === 0 && Object.keys(portBindings).length === 0) return null;
if (Object.keys(tcpPorts).length === 0) return new Error('tcpPorts is now empty but portBindings is not');
if (Object.keys(portBindings).length === 0) return new Error('portBindings is now empty but tcpPorts is not');
for (var env in tcpPorts) {
if (!(env in portBindings)) return new Error(env + ' is required from user');
}
// it's fine if one or more keys got removed
return null;
}
if (!updateInfo) return callback(null);
async.eachSeries(Object.keys(updateInfo), function iterator(appId, iteratorDone) {
get(appId, function (error, app) {
if (error) {
debug('Cannot autoupdate app %s : %s', appId, error.message);
return iteratorDone();
}
error = canAutoupdateApp(app, updateInfo[appId].manifest);
if (error) {
debug('app %s requires manual update. %s', appId, error.message);
return iteratorDone();
}
update(appId, false /* force */, updateInfo[appId].manifest, app.portBindings, null /* icon */, function (error) {
if (error) debug('Error initiating autoupdate of %s. %s', appId, error.message);
iteratorDone(null);
});
});
}, callback);
}
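The canAutoupdateApp() rule above boils down to a pure check: an app can be auto-updated only if the new manifest does not introduce port bindings the user has not already provided. A standalone sketch (not code from this repository; the function name is hypothetical):

```javascript
// Returns a reason string when a manual update is needed, null otherwise.
function autoupdateBlocker(tcpPorts, portBindings) {
    tcpPorts = tcpPorts || { };

    if (Object.keys(tcpPorts).length === 0 && Object.keys(portBindings).length === 0) return null;
    if (Object.keys(tcpPorts).length === 0) return 'tcpPorts is now empty but portBindings is not';
    if (Object.keys(portBindings).length === 0) return 'portBindings is now empty but tcpPorts is not';

    // every port the new manifest exposes must already have a user binding
    for (var env in tcpPorts) {
        if (!(env in portBindings)) return env + ' is required from user';
    }

    return null; // it's fine if one or more keys got removed
}
```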
function canBackupApp(app) {
// only backup apps that are installed or pending configure. Rest of them are in some
// state not good for consistent backup (i.e. addons may not have been set up completely)
return (app.installationState === appdb.ISTATE_INSTALLED && app.health === appdb.HEALTH_HEALTHY) ||
app.installationState === appdb.ISTATE_PENDING_CONFIGURE ||
app.installationState === appdb.ISTATE_PENDING_BACKUP ||
app.installationState === appdb.ISTATE_PENDING_UPDATE; // called from apptask
}
// set the 'creation' date of lastBackup so that the backup persists across time based archival rules
// s3 does not allow changing creation time, so copying the last backup is an easy way out for now
function reuseOldBackup(app, callback) {
assert.strictEqual(typeof app.lastBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
backups.copyLastBackup(app, function (error, newBackupId) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
debugApp(app, 'reuseOldBackup: reused old backup %s as %s', app.lastBackupId, newBackupId);
callback(null, newBackupId);
});
}
function createNewBackup(app, addonsToBackup, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addonsToBackup || typeof addonsToBackup === 'object');
assert.strictEqual(typeof callback, 'function');
backups.getBackupUrl(app, function (error, result) {
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error.message));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
debugApp(app, 'backupApp: backup url:%s backup id:%s', result.url, result.id);
async.series([
ignoreError(shell.sudo.bind(null, 'mountSwap', [ BACKUP_SWAP_CMD, '--on' ])),
addons.backupAddons.bind(null, app, addonsToBackup),
shell.sudo.bind(null, 'backupApp', [ BACKUP_APP_CMD, app.id, result.url, result.backupKey, result.sessionToken ]),
ignoreError(shell.sudo.bind(null, 'unmountSwap', [ BACKUP_SWAP_CMD, '--off' ])),
], function (error) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
callback(null, result.id);
});
});
}
function backupApp(app, addonsToBackup, callback) {
assert.strictEqual(typeof app, 'object');
assert(!addonsToBackup || typeof addonsToBackup === 'object');
assert.strictEqual(typeof callback, 'function');
var appConfig = null, backupFunction;
if (!canBackupApp(app)) {
if (!app.lastBackupId) {
debugApp(app, 'backupApp: cannot backup app');
return callback(new AppsError(AppsError.BAD_STATE, 'App not healthy and never backed up previously'));
}
appConfig = app.lastBackupConfig;
backupFunction = reuseOldBackup.bind(null, app);
} else {
appConfig = {
manifest: app.manifest,
location: app.location,
portBindings: app.portBindings,
accessRestriction: app.accessRestriction
};
backupFunction = createNewBackup.bind(null, app, addonsToBackup);
if (!safe.fs.writeFileSync(path.join(paths.DATA_DIR, app.id + '/config.json'), JSON.stringify(appConfig), 'utf8')) {
return callback(safe.error);
}
}
backupFunction(function (error, backupId) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
debugApp(app, 'backupApp: successful id:%s', backupId);
setRestorePoint(app.id, backupId, appConfig, function (error) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
return callback(null, backupId);
});
});
}
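The decision in backupApp() can be summarized as a pure function (an illustrative sketch, not code from this repository; the plain state/health strings stand in for the appdb constants): healthy or pending apps get a fresh backup, unhealthy apps reuse the previous backup, and an unhealthy app with no prior backup is an error.

```javascript
// Returns 'create', 'reuse', or 'error' for a given app record.
function chooseBackupStrategy(app) {
    var backupable = (app.installationState === 'installed' && app.health === 'healthy') ||
        app.installationState === 'pending_configure' ||
        app.installationState === 'pending_backup' ||
        app.installationState === 'pending_update';

    if (backupable) return 'create';                // createNewBackup path
    return app.lastBackupId ? 'reuse' : 'error';    // reuseOldBackup, or BAD_STATE
}
```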
function backup(appId, callback) {
assert.strictEqual(typeof appId, 'string');
assert.strictEqual(typeof callback, 'function');
get(appId, function (error, app) {
if (error && error.reason === AppsError.NOT_FOUND) return callback(new AppsError(AppsError.NOT_FOUND));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
appdb.setInstallationCommand(appId, appdb.ISTATE_PENDING_BACKUP, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(new AppsError(AppsError.BAD_STATE)); // might be a bad guess
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
taskmanager.restartAppTask(appId);
callback(null);
});
});
}
function restoreApp(app, addonsToRestore, callback) {
assert.strictEqual(typeof app, 'object');
assert.strictEqual(typeof addonsToRestore, 'object');
assert.strictEqual(typeof callback, 'function');
assert(app.lastBackupId);
backups.getRestoreUrl(app.lastBackupId, function (error, result) {
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new AppsError(AppsError.EXTERNAL_ERROR, error.message));
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
debugApp(app, 'restoreApp: restoreUrl:%s', result.url);
shell.sudo('restoreApp', [ RESTORE_APP_CMD, app.id, result.url, result.backupKey, result.sessionToken ], function (error) {
if (error) return callback(new AppsError(AppsError.INTERNAL_ERROR, error));
addons.restoreAddons(app, addonsToRestore, callback);
});
});
}
File diff suppressed because it is too large Load Diff
@@ -2,12 +2,16 @@
'use strict';
exports = module.exports = {
initialize: initialize,
uninitialize: uninitialize
};
var assert = require('assert'),
BasicStrategy = require('passport-http').BasicStrategy,
BearerStrategy = require('passport-http-bearer').Strategy,
clientdb = require('./clientdb'),
ClientPasswordStrategy = require('passport-oauth2-client-password').Strategy,
database = require('./database'),
DatabaseError = require('./databaseerror'),
debug = require('debug')('box:auth'),
LocalStrategy = require('passport-local').Strategy,
@@ -16,15 +20,11 @@ var assert = require('assert'),
tokendb = require('./tokendb'),
user = require('./user'),
userdb = require('./userdb'),
UserError = user.UserError,
_ = require('underscore');
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
passport.serializeUser(function (user, callback) {
callback(null, user.username);
@@ -42,21 +42,31 @@ function initialize(callback) {
});
passport.use(new LocalStrategy(function (username, password, callback) {
if (username.indexOf('@') === -1) {
user.verify(username, password, function (error, result) {
if (error && error.reason === UserError.NOT_FOUND) return callback(null, false);
if (error && error.reason === UserError.WRONG_PASSWORD) return callback(null, false);
if (error) return callback(error);
if (!result) return callback(null, false);
callback(null, _.pick(result, 'id', 'username', 'email', 'admin'));
});
} else {
user.verifyWithEmail(username, password, function (error, result) {
if (error && error.reason === UserError.NOT_FOUND) return callback(null, false);
if (error && error.reason === UserError.WRONG_PASSWORD) return callback(null, false);
if (error) return callback(error);
if (!result) return callback(null, false);
callback(null, _.pick(result, 'id', 'username', 'email', 'admin'));
});
}
}));
passport.use(new BasicStrategy(function (username, password, callback) {
if (username.indexOf('cid-') === 0) {
debug('BasicStrategy: detected client id %s instead of username:password', username);
// username is actually client id here
// password is client secret
clientdb.get(username, function (error, client) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, false);
if (error) return callback(error);
if (client.clientSecret != password) return callback(null, false);
@@ -74,7 +84,7 @@ function initialize(callback) {
}));
passport.use(new ClientPasswordStrategy(function (clientId, clientSecret, callback) {
clientdb.get(clientId, function(error, client) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, false);
if (error) { return callback(error); }
if (client.clientSecret != clientSecret) { return callback(null, false); }
@@ -87,13 +97,32 @@ function initialize(callback) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, false);
if (error) return callback(error);
// scopes here can define what capabilities that token carries
// passport puts the 'info' object into req.authInfo, where we can further validate the scopes
var info = { scope: token.scope };
var tokenType;
if (token.identifier.indexOf(tokendb.PREFIX_DEV) === 0) {
token.identifier = token.identifier.slice(tokendb.PREFIX_DEV.length);
tokenType = tokendb.TYPE_DEV;
} else if (token.identifier.indexOf(tokendb.PREFIX_APP) === 0) {
tokenType = tokendb.TYPE_APP;
return callback(null, { id: token.identifier.slice(tokendb.PREFIX_APP.length), tokenType: tokenType }, info);
} else if (token.identifier.indexOf(tokendb.PREFIX_USER) === 0) {
tokenType = tokendb.TYPE_USER;
token.identifier = token.identifier.slice(tokendb.PREFIX_USER.length);
} else {
// legacy tokens are assumed to be user access tokens
tokenType = tokendb.TYPE_USER;
}
userdb.get(token.identifier, function (error, user) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, false);
if (error) return callback(error);
// amend the tokenType of the token owner
user.tokenType = tokenType;
callback(null, user, info);
});
});
@@ -103,7 +132,7 @@ function initialize(callback) {
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
callback(null);
}
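The bearer-token identifier parsing introduced above can be sketched standalone. This is an illustrative sketch, not code from this repository: the prefix strings ('dev-', 'app-', 'user-') are assumptions standing in for the tokendb constants, and unprefixed legacy tokens fall back to the user type, as in the code above.

```javascript
var PREFIX_DEV = 'dev-', PREFIX_APP = 'app-', PREFIX_USER = 'user-';

// Splits a token identifier into its type and the bare id.
function parseIdentifier(identifier) {
    if (identifier.indexOf(PREFIX_DEV) === 0) return { type: 'dev', id: identifier.slice(PREFIX_DEV.length) };
    if (identifier.indexOf(PREFIX_APP) === 0) return { type: 'app', id: identifier.slice(PREFIX_APP.length) };
    if (identifier.indexOf(PREFIX_USER) === 0) return { type: 'user', id: identifier.slice(PREFIX_USER.length) };
    return { type: 'user', id: identifier }; // legacy tokens are user tokens
}
```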
@@ -2,64 +2,74 @@
'use strict';
exports = module.exports = {
get: get,
add: add,
del: del,
delExpired: delExpired,
_clear: clear
};
var assert = require('assert'),
database = require('./database.js'),
DatabaseError = require('./databaseerror');
var AUTHCODES_FIELDS = [ 'authCode', 'userId', 'clientId', 'expiresAt' ].join(',');
function get(authCode, callback) {
assert.strictEqual(typeof authCode, 'string');
assert.strictEqual(typeof callback, 'function');
database.query('SELECT ' + AUTHCODES_FIELDS + ' FROM authcodes WHERE authCode = ? AND expiresAt > ?', [ authCode, Date.now() ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
callback(null, result[0]);
});
}
function add(authCode, clientId, userId, expiresAt, callback) {
assert.strictEqual(typeof authCode, 'string');
assert.strictEqual(typeof clientId, 'string');
assert.strictEqual(typeof userId, 'string');
assert.strictEqual(typeof expiresAt, 'number');
assert.strictEqual(typeof callback, 'function');
database.query('INSERT INTO authcodes (authCode, clientId, userId, expiresAt) VALUES (?, ?, ?, ?)',
[ authCode, clientId, userId, expiresAt ], function (error, result) {
if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
if (error || result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
});
}
function del(authCode, callback) {
assert(typeof authCode === 'string');
assert(typeof callback === 'function');
assert.strictEqual(typeof authCode, 'string');
assert.strictEqual(typeof callback, 'function');
database.run('DELETE FROM authcodes WHERE authCode = ?', [ authCode ], function (error) {
database.query('DELETE FROM authcodes WHERE authCode = ?', [ authCode ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
if (this.changes !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
callback(null);
});
}
function delExpired(callback) {
assert.strictEqual(typeof callback, 'function');
database.query('DELETE FROM authcodes WHERE expiresAt <= ?', [ Date.now() ], function (error, result) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
return callback(null, result.affectedRows);
});
}
function clear(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
database.run('DELETE FROM authcodes', function (error) {
database.query('DELETE FROM authcodes', function (error) {
if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
callback(null);
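Beyond the sqlite-to-mysql port, this diff adds expiry handling: codes are stored with an `expiresAt` timestamp (milliseconds since epoch), reads filter on `expiresAt > Date.now()`, and `delExpired` purges the rest. A minimal in-memory sketch of that logic (hypothetical names, no database):

```javascript
// In-memory sketch of the authcode expiry behavior added above:
// add() records expiresAt (ms since epoch), get() only returns codes
// that have not yet expired, delExpired() purges the rest.
var codes = {};

function add(authCode, clientId, userId, expiresAt) {
    if (codes[authCode]) throw new Error('ALREADY_EXISTS');
    codes[authCode] = { authCode: authCode, clientId: clientId, userId: userId, expiresAt: expiresAt };
}

function get(authCode) {
    var entry = codes[authCode];
    if (!entry || entry.expiresAt <= Date.now()) return null; // stands in for NOT_FOUND
    return entry;
}

function delExpired() {
    var removed = 0;
    Object.keys(codes).forEach(function (key) {
        if (codes[key].expiresAt <= Date.now()) { delete codes[key]; ++removed; }
    });
    return removed; // mirrors result.affectedRows
}
```

Note that an expired code is indistinguishable from a missing one, which is exactly what the `AND expiresAt > ?` clause in `get` achieves.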
+282
View File
@@ -0,0 +1,282 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
getSignedUploadUrl: getSignedUploadUrl,
getSignedDownloadUrl: getSignedDownloadUrl,
addSubdomain: addSubdomain,
delSubdomain: delSubdomain,
getChangeStatus: getChangeStatus,
copyObject: copyObject
};
var assert = require('assert'),
AWS = require('aws-sdk'),
config = require('./config.js'),
debug = require('debug')('box:aws'),
SubdomainError = require('./subdomainerror.js'),
superagent = require('superagent');
function getAWSCredentials(callback) {
assert.strictEqual(typeof callback, 'function');
// CaaS
if (config.token()) {
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/awscredentials';
superagent.post(url).query({ token: config.token() }).end(function (error, result) {
if (error) return callback(error);
if (result.statusCode !== 201) return callback(new Error(result.text));
if (!result.body || !result.body.credentials) return callback(new Error('Unexpected response'));
var credentials = {
accessKeyId: result.body.credentials.AccessKeyId,
secretAccessKey: result.body.credentials.SecretAccessKey,
sessionToken: result.body.credentials.SessionToken,
region: 'us-east-1'
};
if (config.aws().endpoint) credentials.endpoint = new AWS.Endpoint(config.aws().endpoint);
callback(null, credentials);
});
} else {
if (!config.aws().accessKeyId || !config.aws().secretAccessKey) return callback(new SubdomainError(SubdomainError.MISSING_CREDENTIALS));
var credentials = {
accessKeyId: config.aws().accessKeyId,
secretAccessKey: config.aws().secretAccessKey,
region: 'us-east-1'
};
if (config.aws().endpoint) credentials.endpoint = new AWS.Endpoint(config.aws().endpoint);
callback(null, credentials);
}
}
function getSignedUploadUrl(filename, callback) {
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getSignedUploadUrl: %s', filename);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var s3 = new AWS.S3(credentials);
var params = {
Bucket: config.aws().backupBucket,
Key: config.aws().backupPrefix + '/' + filename,
Expires: 60 * 30 /* 30 minutes */
};
var url = s3.getSignedUrl('putObject', params);
callback(null, { url : url, sessionToken: credentials.sessionToken });
});
}
function getSignedDownloadUrl(filename, callback) {
assert.strictEqual(typeof filename, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getSignedDownloadUrl: %s', filename);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var s3 = new AWS.S3(credentials);
var params = {
Bucket: config.aws().backupBucket,
Key: config.aws().backupPrefix + '/' + filename,
Expires: 60 * 30 /* 30 minutes */
};
var url = s3.getSignedUrl('getObject', params);
callback(null, { url: url, sessionToken: credentials.sessionToken });
});
}
function getZoneByName(zoneName, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof callback, 'function');
debug('getZoneByName: %s', zoneName);
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.listHostedZones({}, function (error, result) {
if (error) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
var zone = result.HostedZones.filter(function (zone) {
return zone.Name.slice(0, -1) === zoneName; // aws zone name contains a '.' at the end
})[0];
if (!zone) return callback(new SubdomainError(SubdomainError.NOT_FOUND, 'no such zone'));
debug('getZoneByName: found zone', zone);
callback(null, zone);
});
});
}
function addSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('addSubdomain: ' + subdomain + ' for domain ' + zoneName + ' with value ' + value);
getZoneByName(zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var params = {
ChangeBatch: {
Changes: [{
Action: 'UPSERT',
ResourceRecordSet: {
Type: type,
Name: fqdn,
ResourceRecords: [{
Value: value
}],
Weight: 0,
SetIdentifier: fqdn,
TTL: 1
}
}]
},
HostedZoneId: zone.Id
};
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.code === 'PriorRequestNotComplete') {
return callback(new SubdomainError(SubdomainError.STILL_BUSY, error.message));
} else if (error) {
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, error.message));
}
debug('addSubdomain: success. changeInfoId:%j', result);
callback(null, result.ChangeInfo.Id);
});
});
});
}
function delSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('delSubdomain: %s for domain %s.', subdomain, zoneName);
getZoneByName(zoneName, function (error, zone) {
if (error) return callback(error);
var fqdn = config.appFqdn(subdomain);
var resourceRecordSet = {
Name: fqdn,
Type: type,
ResourceRecords: [{
Value: value
}],
Weight: 0,
SetIdentifier: fqdn,
TTL: 1
};
var params = {
ChangeBatch: {
Changes: [{
Action: 'DELETE',
ResourceRecordSet: resourceRecordSet
}]
},
HostedZoneId: zone.Id
};
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.changeResourceRecordSets(params, function(error, result) {
if (error && error.message && error.message.indexOf('it was not found') !== -1) {
debug('delSubdomain: resource record set not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'NoSuchHostedZone') {
debug('delSubdomain: hosted zone not found.', error);
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error && error.code === 'PriorRequestNotComplete') {
debug('delSubdomain: resource is still busy', error);
return callback(new SubdomainError(SubdomainError.STILL_BUSY, new Error(error)));
} else if (error && error.code === 'InvalidChangeBatch') {
debug('delSubdomain: invalid change batch. No such record to be deleted.');
return callback(new SubdomainError(SubdomainError.NOT_FOUND, new Error(error)));
} else if (error) {
debug('delSubdomain: error', error);
return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, new Error(error)));
}
debug('delSubdomain: success');
callback(null);
});
});
});
}
function getChangeStatus(changeId, callback) {
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var route53 = new AWS.Route53(credentials);
route53.getChange({ Id: changeId }, function (error, result) {
if (error) return callback(error);
callback(null, result.ChangeInfo.Status);
});
});
}
function copyObject(from, to, callback) {
assert.strictEqual(typeof from, 'string');
assert.strictEqual(typeof to, 'string');
assert.strictEqual(typeof callback, 'function');
getAWSCredentials(function (error, credentials) {
if (error) return callback(error);
var params = {
Bucket: config.aws().backupBucket, // target bucket
Key: config.aws().backupPrefix + '/' + to, // target file
CopySource: config.aws().backupBucket + '/' + config.aws().backupPrefix + '/' + from, // source
};
var s3 = new AWS.S3(credentials);
s3.copyObject(params, callback);
});
}
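Both `addSubdomain` and `delSubdomain` above build the same Route53 change-batch shape, differing only in the `Action`. A standalone sketch of that object (`buildChange` is a hypothetical helper; the production code inlines it):

```javascript
// Shape of the Route53 change batch used by addSubdomain/delSubdomain.
function buildChange(action, zoneId, fqdn, type, value) {
    return {
        ChangeBatch: {
            Changes: [{
                Action: action, // 'UPSERT' to add/replace, 'DELETE' to remove
                ResourceRecordSet: {
                    Type: type,
                    Name: fqdn,
                    ResourceRecords: [{ Value: value }],
                    Weight: 0,
                    SetIdentifier: fqdn, // required once Weight is set
                    TTL: 1
                }
            }]
        },
        HostedZoneId: zoneId
    };
}
```

The low TTL keeps app subdomains quick to repoint; `route53.changeResourceRecordSets(buildChange('UPSERT', zone.Id, fqdn, type, value), cb)` would then submit it.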
+119
View File
@@ -0,0 +1,119 @@
'use strict';
exports = module.exports = {
BackupsError: BackupsError,
getAllPaged: getAllPaged,
getBackupUrl: getBackupUrl,
getRestoreUrl: getRestoreUrl,
copyLastBackup: copyLastBackup
};
var assert = require('assert'),
aws = require('./aws.js'),
config = require('./config.js'),
debug = require('debug')('box:backups'),
superagent = require('superagent'),
util = require('util');
function BackupsError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
if (typeof errorOrMessage === 'undefined') {
this.message = reason;
} else if (typeof errorOrMessage === 'string') {
this.message = errorOrMessage;
} else {
this.message = 'Internal error';
this.nestedError = errorOrMessage;
}
}
util.inherits(BackupsError, Error);
BackupsError.EXTERNAL_ERROR = 'external error';
BackupsError.INTERNAL_ERROR = 'internal error';
function getAllPaged(page, perPage, callback) {
assert.strictEqual(typeof page, 'number');
assert.strictEqual(typeof perPage, 'number');
assert.strictEqual(typeof callback, 'function');
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/backups';
superagent.get(url).query({ token: config.token() }).end(function (error, result) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
if (result.statusCode !== 200) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, result.text));
if (!result.body || !util.isArray(result.body.backups)) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, 'Unexpected response'));
// [ { creationTime, boxVersion, restoreKey, dependsOn: [ ] } ] sorted by time (latest first)
return callback(null, result.body.backups);
});
}
function getBackupUrl(app, callback) {
assert(!app || typeof app === 'object');
assert.strictEqual(typeof callback, 'function');
var filename = '';
if (app) {
filename = util.format('appbackup_%s_%s-v%s.tar.gz', app.id, (new Date()).toISOString(), app.manifest.version);
} else {
filename = util.format('backup_%s-v%s.tar.gz', (new Date()).toISOString(), config.version());
}
aws.getSignedUploadUrl(filename, function (error, result) {
if (error) return callback(error);
var obj = {
id: filename,
url: result.url,
sessionToken: result.sessionToken,
backupKey: config.backupKey()
};
debug('getBackupUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
callback(null, obj);
});
}
// backupId is the s3 filename. appbackup_%s_%s-v%s.tar.gz
function getRestoreUrl(backupId, callback) {
assert.strictEqual(typeof backupId, 'string');
assert.strictEqual(typeof callback, 'function');
aws.getSignedDownloadUrl(backupId, function (error, result) {
if (error) return callback(error);
var obj = {
id: backupId,
url: result.url,
sessionToken: result.sessionToken,
backupKey: config.backupKey()
};
debug('getRestoreUrl: id:%s url:%s sessionToken:%s backupKey:%s', obj.id, obj.url, obj.sessionToken, obj.backupKey);
callback(null, obj);
});
}
function copyLastBackup(app, callback) {
assert(app && typeof app === 'object');
assert.strictEqual(typeof app.lastBackupId, 'string');
assert.strictEqual(typeof callback, 'function');
var toFilename = util.format('appbackup_%s_%s-v%s.tar.gz', app.id, (new Date()).toISOString(), app.manifest.version);
aws.copyObject(app.lastBackupId, toFilename, function (error) {
if (error) return callback(new BackupsError(BackupsError.EXTERNAL_ERROR, error));
return callback(null, toFilename);
});
}
+90
View File
@@ -0,0 +1,90 @@
/* jslint node:true */
'use strict';
exports = module.exports = {
addSubdomain: addSubdomain,
delSubdomain: delSubdomain,
getChangeStatus: getChangeStatus
};
var assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:caas'),
SubdomainError = require('./subdomainerror.js'),
superagent = require('superagent'),
util = require('util');
function addSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
var fqdn = subdomain !== '' && type === 'TXT' ? subdomain + '.' + config.fqdn() : config.appFqdn(subdomain);
debug('addSubdomain: zoneName: %s subdomain: %s type: %s value: %s fqdn: %s', zoneName, subdomain, type, value, fqdn);
var data = {
type: type,
value: value
};
superagent
.post(config.apiServerOrigin() + '/api/v1/domains/' + fqdn)
.query({ token: config.token() })
.send(data)
.end(function (error, result) {
if (error) return callback(error);
if (result.status === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.status !== 201) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null, result.body.changeId);
});
}
function delSubdomain(zoneName, subdomain, type, value, callback) {
assert.strictEqual(typeof zoneName, 'string');
assert.strictEqual(typeof subdomain, 'string');
assert.strictEqual(typeof type, 'string');
assert.strictEqual(typeof value, 'string');
assert.strictEqual(typeof callback, 'function');
debug('delSubdomain: %s for domain %s.', subdomain, zoneName);
var data = {
type: type,
value: value
};
superagent
.del(config.apiServerOrigin() + '/api/v1/domains/' + config.appFqdn(subdomain))
.query({ token: config.token() })
.send(data)
.end(function (error, result) {
if (error) return callback(error);
if (result.status === 420) return callback(new SubdomainError(SubdomainError.STILL_BUSY));
if (result.status !== 204) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null);
});
}
function getChangeStatus(changeId, callback) {
assert.strictEqual(typeof changeId, 'string');
assert.strictEqual(typeof callback, 'function');
if (changeId === '') return callback(null, 'INSYNC');
superagent
.get(config.apiServerOrigin() + '/api/v1/domains/' + config.fqdn() + '/status/' + changeId)
.query({ token: config.token() })
.end(function (error, result) {
if (error) return callback(error);
if (result.status !== 200) return callback(new SubdomainError(SubdomainError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return callback(null, result.body.status);
});
}
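The fqdn selection in the caas `addSubdomain` above is worth isolating: TXT records for a named subdomain attach directly under the box domain, everything else goes through the app fqdn. In this sketch `appFqdn` is passed in as a parameter because its exact form lives in `config.js`:

```javascript
// Mirrors: subdomain !== '' && type === 'TXT'
//     ? subdomain + '.' + config.fqdn() : config.appFqdn(subdomain)
function recordFqdn(subdomain, type, domain, appFqdn) {
    return subdomain !== '' && type === 'TXT' ? subdomain + '.' + domain : appFqdn(subdomain);
}
```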
+65 -100
View File
@@ -2,170 +2,135 @@
 'use strict';
-var assert = require('assert'),
-    database = require('./database.js'),
-    DatabaseError = require('./databaseerror.js'),
-    debug = require('debug')('box:clientdb');
 exports = module.exports = {
     get: get,
     getAll: getAll,
-    getAllWithDetails: getAllWithDetails,
-    getByClientId: getByClientId,
+    getAllWithTokenCountByIdentifier: getAllWithTokenCountByIdentifier,
     add: add,
     del: del,
-    replaceByAppId: replaceByAppId,
+    update: update,
     getByAppId: getByAppId,
     delByAppId: delByAppId,
-    clear: clear
+    _clear: clear
 };
-var CLIENTS_FIELDS = [ 'id', 'appId', 'clientId', 'clientSecret', 'name', 'redirectURI', 'scope' ].join(',');
-var CLIENTS_FIELDS_PREFIXED = [ 'clients.id', 'clients.appId', 'clients.clientId', 'clients.clientSecret', 'clients.name', 'clients.redirectURI', 'clients.scope' ].join(',');
+var assert = require('assert'),
+    database = require('./database.js'),
+    DatabaseError = require('./databaseerror.js');
+var CLIENTS_FIELDS = [ 'id', 'appId', 'clientSecret', 'redirectURI', 'scope' ].join(',');
+var CLIENTS_FIELDS_PREFIXED = [ 'clients.id', 'clients.appId', 'clients.clientSecret', 'clients.redirectURI', 'clients.scope' ].join(',');
 function get(id, callback) {
-    assert(typeof id === 'string');
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof id, 'string');
+    assert.strictEqual(typeof callback, 'function');
-    database.get('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE id = ?', [ id ], function (error, result) {
+    database.query('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE id = ?', [ id ], function (error, result) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (typeof result === 'undefined') return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
-        callback(null, result);
+        callback(null, result[0]);
     });
 }
 function getAll(callback) {
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof callback, 'function');
-    database.all('SELECT ' + CLIENTS_FIELDS + ' FROM clients', [ ], function (error, results) {
+    database.query('SELECT ' + CLIENTS_FIELDS + ' FROM clients ORDER BY appId', function (error, results) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
         if (typeof results === 'undefined') results = [];
         callback(null, results);
     });
 }
-function getAllWithDetails(callback) {
-    assert(typeof callback === 'function');
+function getAllWithTokenCountByIdentifier(identifier, callback) {
+    assert.strictEqual(typeof identifier, 'string');
+    assert.strictEqual(typeof callback, 'function');
     // TODO should this be per user?
-    database.all('SELECT ' + CLIENTS_FIELDS_PREFIXED + ',COUNT(tokens.clientId) AS tokenCount FROM clients LEFT OUTER JOIN tokens ON clients.id=tokens.clientId GROUP BY clients.id', [], function (error, results) {
+    database.query('SELECT ' + CLIENTS_FIELDS_PREFIXED + ',COUNT(tokens.clientId) AS tokenCount FROM clients LEFT OUTER JOIN tokens ON clients.id=tokens.clientId WHERE tokens.identifier=? GROUP BY clients.id', [ identifier ], function (error, results) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
         if (typeof results === 'undefined') results = [];
         callback(null, results);
     });
 }
-function getByClientId(clientId, callback) {
-    assert(typeof clientId === 'string');
-    assert(typeof callback === 'function');
-    database.get('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE clientId = ? LIMIT 1', [ clientId ], function (error, result) {
-        if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (typeof result === 'undefined') return callback(new DatabaseError(DatabaseError.NOT_FOUND));
-        return callback(null, result);
-    });
-}
 function getByAppId(appId, callback) {
-    assert(typeof appId === 'string');
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof appId, 'string');
+    assert.strictEqual(typeof callback, 'function');
-    database.get('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE appId = ? LIMIT 1', [ appId ], function (error, result) {
+    database.query('SELECT ' + CLIENTS_FIELDS + ' FROM clients WHERE appId = ? LIMIT 1', [ appId ], function (error, result) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (typeof result === 'undefined') return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        if (result.length === 0) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
-        return callback(null, result);
+        return callback(null, result[0]);
     });
 }
-function add(id, appId, clientId, clientSecret, name, redirectURI, scope, callback) {
-    assert(typeof id === 'string');
-    assert(typeof appId === 'string');
-    assert(typeof clientId === 'string');
-    assert(typeof clientSecret === 'string');
-    assert(typeof name === 'string');
-    assert(typeof redirectURI === 'string');
-    assert(typeof scope === 'string');
-    assert(typeof callback === 'function');
+function add(id, appId, clientSecret, redirectURI, scope, callback) {
+    assert.strictEqual(typeof id, 'string');
+    assert.strictEqual(typeof appId, 'string');
+    assert.strictEqual(typeof clientSecret, 'string');
+    assert.strictEqual(typeof redirectURI, 'string');
+    assert.strictEqual(typeof scope, 'string');
+    assert.strictEqual(typeof callback, 'function');
-    var data = {
-        $id: id,
-        $appId: appId,
-        $clientId: clientId,
-        $clientSecret: clientSecret,
-        $name: name,
-        $redirectURI: redirectURI,
-        $scope: scope
-    };
+    var data = [ id, appId, clientSecret, redirectURI, scope ];
-    database.run('INSERT INTO clients (id, appId, clientId, clientSecret, name, redirectURI, scope) VALUES ($id, $appId, $clientId, $clientSecret, $name, $redirectURI, $scope)', data, function (error) {
-        if (error && error.code === 'SQLITE_CONSTRAINT') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
-        if (error || !this.lastID) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
+    database.query('INSERT INTO clients (id, appId, clientSecret, redirectURI, scope) VALUES (?, ?, ?, ?, ?)', data, function (error, result) {
+        if (error && error.code === 'ER_DUP_ENTRY') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
+        if (error || result.affectedRows === 0) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
         callback(null);
     });
 }
+function update(id, appId, clientSecret, redirectURI, scope, callback) {
+    assert.strictEqual(typeof id, 'string');
+    assert.strictEqual(typeof appId, 'string');
+    assert.strictEqual(typeof clientSecret, 'string');
+    assert.strictEqual(typeof redirectURI, 'string');
+    assert.strictEqual(typeof scope, 'string');
+    assert.strictEqual(typeof callback, 'function');
+    var data = [ appId, clientSecret, redirectURI, scope, id ];
+    database.query('UPDATE clients SET appId = ?, clientSecret = ?, redirectURI = ?, scope = ? WHERE id = ?', data, function (error, result) {
+        if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
+        if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        callback(null);
+    });
+}
 function del(id, callback) {
-    assert(typeof id === 'string');
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof id, 'string');
+    assert.strictEqual(typeof callback, 'function');
-    database.run('DELETE FROM clients WHERE id = ?', [ id ], function (error) {
+    database.query('DELETE FROM clients WHERE id = ?', [ id ], function (error, result) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (this.changes !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
         callback(null);
     });
 }
 function delByAppId(appId, callback) {
-    assert(typeof appId === 'string');
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof appId, 'string');
+    assert.strictEqual(typeof callback, 'function');
-    database.run('DELETE FROM clients WHERE appId=?', [ appId ], function (error) {
+    database.query('DELETE FROM clients WHERE appId=?', [ appId ], function (error, result) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        if (this.changes !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
+        if (result.affectedRows !== 1) return callback(new DatabaseError(DatabaseError.NOT_FOUND));
         return callback(null);
     });
 }
-function replaceByAppId(id, appId, clientId, clientSecret, name, redirectURI, scope, callback) {
-    assert(typeof id === 'string');
-    assert(typeof appId === 'string');
-    assert(typeof clientId === 'string');
-    assert(typeof clientSecret === 'string');
-    assert(typeof name === 'string');
-    assert(typeof redirectURI === 'string');
-    assert(typeof scope === 'string');
-    assert(typeof callback === 'function');
-    var data = {
-        $id: id,
-        $appId: appId,
-        $clientId: clientId,
-        $clientSecret: clientSecret,
-        $name: name,
-        $redirectURI: redirectURI,
-        $scope: scope
-    };
-    database.run('INSERT OR REPLACE INTO clients (id, appId, clientId, clientSecret, name, redirectURI, scope) VALUES ($id, $appId, $clientId, $clientSecret, $name, $redirectURI, $scope)', data, function (error) {
-        if (error && error.code === 'SQLITE_CONSTRAINT') return callback(new DatabaseError(DatabaseError.ALREADY_EXISTS));
-        if (error || !this.lastID) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        callback(null);
-    });
-}
 function clear(callback) {
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof callback, 'function');
-    database.run('DELETE FROM clients WHERE appId!="webadmin"', function (error) {
+    database.query('DELETE FROM clients WHERE appId!="webadmin"', function (error) {
         if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
         return callback(null);
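The recurring pattern in this migration: the sqlite driver signalled "not found" with an `undefined` result and reported writes via `this.changes`/`this.lastID` and `SQLITE_CONSTRAINT`, while the mysql driver returns row arrays, `result.affectedRows`, and `ER_DUP_ENTRY`. A self-contained sketch of the new convention (hypothetical helper names):

```javascript
// Read results are row arrays: empty array means NOT_FOUND.
function mapReadResult(error, rows) {
    if (error) return { error: 'INTERNAL_ERROR' };
    if (rows.length === 0) return { error: 'NOT_FOUND' };
    return { value: rows[0] };
}

// Writes report success via affectedRows; duplicate keys via ER_DUP_ENTRY.
function mapWriteResult(error, result) {
    if (error && error.code === 'ER_DUP_ENTRY') return { error: 'ALREADY_EXISTS' };
    if (error || result.affectedRows !== 1) return { error: 'INTERNAL_ERROR' };
    return { value: null };
}
```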
+226
View File
@@ -0,0 +1,226 @@
'use strict';
exports = module.exports = {
ClientsError: ClientsError,
add: add,
get: get,
update: update,
del: del,
getAllWithDetailsByUserId: getAllWithDetailsByUserId,
getClientTokensByUserId: getClientTokensByUserId,
delClientTokensByUserId: delClientTokensByUserId
};
var assert = require('assert'),
util = require('util'),
hat = require('hat'),
appdb = require('./appdb.js'),
tokendb = require('./tokendb.js'),
constants = require('./constants.js'),
async = require('async'),
clientdb = require('./clientdb.js'),
DatabaseError = require('./databaseerror.js'),
uuid = require('node-uuid');
function ClientsError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
if (typeof errorOrMessage === 'undefined') {
this.message = reason;
} else if (typeof errorOrMessage === 'string') {
this.message = errorOrMessage;
} else {
this.message = 'Internal error';
this.nestedError = errorOrMessage;
}
}
util.inherits(ClientsError, Error);
ClientsError.INVALID_SCOPE = 'Invalid scope';
function validateScope(scope) {
assert.strictEqual(typeof scope, 'string');
if (scope === '') return new ClientsError(ClientsError.INVALID_SCOPE);
if (scope === '*') return null;
// TODO maybe validate all individual scopes if they exist
return null;
}
function add(appIdentifier, redirectURI, scope, callback) {
assert.strictEqual(typeof appIdentifier, 'string');
assert.strictEqual(typeof redirectURI, 'string');
assert.strictEqual(typeof scope, 'string');
assert.strictEqual(typeof callback, 'function');
var error = validateScope(scope);
if (error) return callback(error);
var id = 'cid-' + uuid.v4();
var clientSecret = hat(256);
clientdb.add(id, appIdentifier, clientSecret, redirectURI, scope, function (error) {
if (error) return callback(error);
var client = {
id: id,
appId: appIdentifier,
clientSecret: clientSecret,
redirectURI: redirectURI,
scope: scope
};
callback(null, client);
});
}
function get(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
clientdb.get(id, function (error, result) {
if (error) return callback(error);
callback(null, result);
});
}
// we only allow appIdentifier and redirectURI to be updated
function update(id, appIdentifier, redirectURI, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof appIdentifier, 'string');
assert.strictEqual(typeof redirectURI, 'string');
assert.strictEqual(typeof callback, 'function');
clientdb.get(id, function (error, result) {
if (error) return callback(error);
clientdb.update(id, appIdentifier, result.clientSecret, redirectURI, result.scope, function (error, result) {
if (error) return callback(error);
callback(null, result);
});
});
}
function del(id, callback) {
assert.strictEqual(typeof id, 'string');
assert.strictEqual(typeof callback, 'function');
clientdb.del(id, function (error, result) {
if (error) return callback(error);
callback(null, result);
});
}
function getAllWithDetailsByUserId(userId, callback) {
assert.strictEqual(typeof userId, 'string');
assert.strictEqual(typeof callback, 'function');
clientdb.getAllWithTokenCountByIdentifier(tokendb.PREFIX_USER + userId, function (error, results) {
if (error && error.reason === DatabaseError.NOT_FOUND) return callback(null, []);
if (error) return callback(error);
// We have several types of records here
// 1) webadmin has an app id of 'webadmin'
// 2) oauth proxy records are always the app id prefixed with 'proxy-'
// 3) addon oauth records for apps prefixed with 'addon-'
// 4) external app records prefixed with 'external-'
// 5) normal apps on the cloudron without a prefix
var tmp = [];
async.each(results, function (record, callback) {
if (record.appId === constants.ADMIN_CLIENT_ID) {
record.name = constants.ADMIN_NAME;
record.location = constants.ADMIN_LOCATION;
record.type = 'webadmin';
tmp.push(record);
return callback(null);
} else if (record.appId === constants.TEST_CLIENT_ID) {
record.name = constants.TEST_NAME;
record.location = constants.TEST_LOCATION;
record.type = 'test';
tmp.push(record);
return callback(null);
}
var appId = record.appId;
var type = 'app';
// Handle our different types of oauth clients
if (record.appId.indexOf('addon-') === 0) {
appId = record.appId.slice('addon-'.length);
type = 'addon';
} else if (record.appId.indexOf('proxy-') === 0) {
appId = record.appId.slice('proxy-'.length);
type = 'proxy';
}
appdb.get(appId, function (error, result) {
if (error) {
console.error('Failed to get app details for oauth client', result, error);
return callback(null); // ignore error so we continue listing clients
}
record.name = result.manifest.title + (record.appId.indexOf('proxy-') === 0 ? ' OAuth Proxy' : '');
record.location = result.location;
record.type = type;
tmp.push(record);
callback(null);
});
}, function (error) {
if (error) return callback(error);
callback(null, tmp);
});
});
}
function getClientTokensByUserId(clientId, userId, callback) {
assert.strictEqual(typeof clientId, 'string');
assert.strictEqual(typeof userId, 'string');
assert.strictEqual(typeof callback, 'function');
tokendb.getByIdentifierAndClientId(tokendb.PREFIX_USER + userId, clientId, function (error, result) {
if (error && error.reason === DatabaseError.NOT_FOUND) {
// this can mean either that there are no tokens or the clientId is actually unknown
clientdb.get(clientId, function (error/*, result*/) {
if (error) return callback(error);
callback(null, []);
});
return;
}
if (error) return callback(error);
callback(null, result || []);
});
}
function delClientTokensByUserId(clientId, userId, callback) {
assert.strictEqual(typeof clientId, 'string');
assert.strictEqual(typeof userId, 'string');
assert.strictEqual(typeof callback, 'function');
tokendb.delByIdentifierAndClientId(tokendb.PREFIX_USER + userId, clientId, function (error) {
if (error && error.reason === DatabaseError.NOT_FOUND) {
// this can mean either that there are no tokens or the clientId is actually unknown
clientdb.get(clientId, function (error/*, result*/) {
if (error) return callback(error);
callback(null);
});
return;
}
if (error) return callback(error);
callback(null);
});
}
+512 -177
@@ -2,7 +2,6 @@
'use strict';
// intentionally placed here because of circular dep with updater
exports = module.exports = {
CloudronError: CloudronError,
@@ -11,46 +10,75 @@ exports = module.exports = {
activate: activate,
getConfig: getConfig,
getStatus: getStatus,
backup: backup,
getBackupUrl: getBackupUrl,
setCertificate: setCertificate,
getIp: getIp
sendHeartbeat: sendHeartbeat,
update: update,
reboot: reboot,
migrate: migrate,
backup: backup,
ensureBackup: ensureBackup
};
var assert = require('assert'),
config = require('../config.js'),
debug = require('debug')('box:cloudron'),
var apps = require('./apps.js'),
AppsError = require('./apps.js').AppsError,
assert = require('assert'),
async = require('async'),
backups = require('./backups.js'),
BackupsError = require('./backups.js').BackupsError,
clientdb = require('./clientdb.js'),
execFile = require('child_process').execFile,
config = require('./config.js'),
debug = require('debug')('box:cloudron'),
fs = require('fs'),
os = require('os'),
locker = require('./locker.js'),
path = require('path'),
paths = require('./paths.js'),
progress = require('./progress.js'),
safe = require('safetydance'),
settings = require('./settings.js'),
SettingsError = settings.SettingsError,
shell = require('./shell.js'),
subdomains = require('./subdomains.js'),
superagent = require('superagent'),
sysinfo = require('./sysinfo.js'),
tokendb = require('./tokendb.js'),
updater = require('./updater.js'),
updateChecker = require('./updatechecker.js'),
user = require('./user.js'),
UserError = user.UserError,
userdb = require('./userdb.js'),
util = require('util'),
uuid = require('node-uuid'),
_ = require('underscore');
webhooks = require('./webhooks.js');
var SUDO = '/usr/bin/sudo',
TAR = os.platform() === 'darwin' ? '/usr/bin/tar' : '/bin/tar',
BACKUP_CMD = path.join(__dirname, 'scripts/backup.sh'),
RELOAD_NGINX_CMD = path.join(__dirname, 'scripts/reloadnginx.sh');
var RELOAD_NGINX_CMD = path.join(__dirname, 'scripts/reloadnginx.sh'),
REBOOT_CMD = path.join(__dirname, 'scripts/reboot.sh'),
BACKUP_BOX_CMD = path.join(__dirname, 'scripts/backupbox.sh'),
BACKUP_SWAP_CMD = path.join(__dirname, 'scripts/backupswap.sh'),
INSTALLER_UPDATE_URL = 'http://127.0.0.1:2020/api/v1/installer/update';
var gAddDnsRecordsTimerId = null,
gCloudronDetails = null; // cached cloudron details like region,size...
function debugApp(app, args) {
assert(!app || typeof app === 'object');
var prefix = app ? app.location : '(no app)';
debug(prefix + ' ' + util.format.apply(util, Array.prototype.slice.call(arguments, 1)));
}
function ignoreError(func) {
return function (callback) {
func(function (error) {
if (error) console.error('Ignored error:', error);
callback();
});
};
}
var gBackupTimerId = null,
gAddMailDnsRecordsTimerId = null,
gGetCertificateTimerId = null,
gCachedIp = null;
function CloudronError(reason, errorOrMessage) {
assert(typeof reason === 'string');
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
@@ -70,234 +98,260 @@ function CloudronError(reason, errorOrMessage) {
util.inherits(CloudronError, Error);
CloudronError.BAD_FIELD = 'Field error';
CloudronError.INTERNAL_ERROR = 'Internal Error';
CloudronError.EXTERNAL_ERROR = 'External Error';
CloudronError.ALREADY_PROVISIONED = 'Already Provisioned';
CloudronError.APPSTORE_DOWN = 'Appstore Down';
CloudronError.BAD_USERNAME = 'Bad username';
CloudronError.BAD_EMAIL = 'Bad email';
CloudronError.BAD_PASSWORD = 'Bad password';
CloudronError.BAD_NAME = 'Bad name';
CloudronError.BAD_STATE = 'Bad state';
CloudronError.NOT_FOUND = 'Not found';
function initialize(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
// every backup restarts the box. the setInterval is only needed should that fail for some reason
gBackupTimerId = setInterval(backup, 4 * 60 * 60 * 1000);
sendHeartBeat();
if (process.env.NODE_ENV !== 'test') {
addMailDnsRecords();
if (process.env.BOX_ENV !== 'test') {
addDnsRecords();
}
callback(null);
}
function uninitialize(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
clearInterval(gBackupTimerId);
gBackupTimerId = null;
clearTimeout(gAddMailDnsRecordsTimerId);
gAddMailDnsRecordsTimerId = null;
clearTimeout(gGetCertificateTimerId);
gGetCertificateTimerId = null;
gCachedIp = null;
clearTimeout(gAddDnsRecordsTimerId);
gAddDnsRecordsTimerId = null;
callback(null);
}
function activate(username, password, email, callback) {
assert(typeof username === 'string');
assert(typeof password === 'string');
assert(typeof email === 'string');
assert(typeof callback === 'function');
function setTimeZone(ip, callback) {
assert.strictEqual(typeof ip, 'string');
assert.strictEqual(typeof callback, 'function');
debug('setTimeZone ip:%s', ip);
superagent.get('http://www.telize.com/geoip/' + ip).end(function (error, result) {
if (error || result.statusCode !== 200) {
debug('Failed to get geo location', error);
return callback(null);
}
if (!result.body.timezone) {
debug('No timezone in geoip response : %j', result.body);
return callback(null);
}
debug('Setting timezone to ', result.body.timezone);
settings.setTimeZone(result.body.timezone, callback);
});
}
function activate(username, password, email, name, ip, callback) {
assert.strictEqual(typeof username, 'string');
assert.strictEqual(typeof password, 'string');
assert.strictEqual(typeof email, 'string');
assert.strictEqual(typeof ip, 'string');
assert(!name || typeof name === 'string');
assert.strictEqual(typeof callback, 'function');
debug('activating user:%s email:%s', username, email);
user.create(username, password, email, true /* admin */, function (error) {
if (error && error.reason === UserError.ALREADY_EXISTS) return callback(new CloudronError(CloudronError.ALREADY_PROVISIONED));
if (error && error instanceof UserError) return callback(error);
setTimeZone(ip, function () { }); // TODO: get this from user. note that timezone is detected based on the browser location and not the cloudron region
if (!name) name = settings.getDefaultSync(settings.CLOUDRON_NAME_KEY);
settings.setCloudronName(name, function (error) {
if (error && error.reason === SettingsError.BAD_FIELD) return callback(new CloudronError(CloudronError.BAD_NAME));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
clientdb.getByAppId('webadmin', function (error, result) {
user.createOwner(username, password, email, function (error, userObject) {
if (error && error.reason === UserError.ALREADY_EXISTS) return callback(new CloudronError(CloudronError.ALREADY_PROVISIONED));
if (error && error.reason === UserError.BAD_USERNAME) return callback(new CloudronError(CloudronError.BAD_USERNAME));
if (error && error.reason === UserError.BAD_PASSWORD) return callback(new CloudronError(CloudronError.BAD_PASSWORD));
if (error && error.reason === UserError.BAD_EMAIL) return callback(new CloudronError(CloudronError.BAD_EMAIL));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
// Also generate a token so the admin creation can also act as a login
var token = tokendb.generateToken();
var expires = new Date(Date.now() + 60 * 60000).toUTCString(); // 1 hour
tokendb.add(token, username, result.id, expires, '*', function (error) {
clientdb.getByAppId('webadmin', function (error, result) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, { token: token, expires: expires });
// Also generate a token so the admin creation can also act as a login
var token = tokendb.generateToken();
var expires = Date.now() + 24 * 60 * 60 * 1000; // 1 day
tokendb.add(token, tokendb.PREFIX_USER + userObject.id, result.id, expires, '*', function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, { token: token, expires: expires });
});
});
});
});
}
function getBackupUrl(callback) {
assert(typeof callback === 'function');
if (!config.appServerUrl()) return callback(new Error('No appstore server url set'));
if (!config.token()) return callback(new Error('No appstore server token set'));
var url = config.appServerUrl() + '/api/v1/boxes/' + config.fqdn() + '/backupurl';
superagent.put(url).query({ token: config.token(), boxVersion: config.version() }).end(function (error, result) {
if (error) return callback(new Error('Error getting presigned backup url: ' + error.message));
if (result.statusCode !== 201 || !result.body || !result.body.url) return callback(new Error('Error getting presigned backup url : ' + result.statusCode));
return callback(null, result.body.url);
});
}
function backup(callback) {
assert(typeof callback === 'function');
getBackupUrl(function (error, url) {
if (error) return callback(new CloudronError(CloudronError.APPSTORE_DOWN, error.message));
debug('backup: url %s', url);
execFile(SUDO, [ BACKUP_CMD, url ], { }, function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
return callback(null);
});
});
}
function getIp() {
if (gCachedIp) return gCachedIp;
var ifaces = os.networkInterfaces();
for (var dev in ifaces) {
if (dev.match(/^(en|eth|wlp).*/) === null) continue;
for (var i = 0; i < ifaces[dev].length; i++) {
if (ifaces[dev][i].family === 'IPv4') {
gCachedIp = ifaces[dev][i].address;
return gCachedIp;
}
}
}
return null;
};
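The interface scan in getIp() can be sketched standalone; the sample interface table below is made up and only mimics the shape returned by os.networkInterfaces().

```javascript
// First IPv4 address on a physical-looking interface (en*, eth*, wlp*),
// mirroring the loop in getIp() above. Returns null when none matches.
function firstIPv4(ifaces) {
    for (var dev in ifaces) {
        if (dev.match(/^(en|eth|wlp).*/) === null) continue; // skip lo, docker0, ...
        for (var i = 0; i < ifaces[dev].length; i++) {
            if (ifaces[dev][i].family === 'IPv4') return ifaces[dev][i].address;
        }
    }
    return null;
}

var sample = {
    lo: [ { family: 'IPv4', address: '127.0.0.1' } ],
    eth0: [ { family: 'IPv6', address: 'fe80::1' }, { family: 'IPv4', address: '10.0.0.5' } ]
};
console.log(firstIPv4(sample)); // 10.0.0.5
```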
function getStatus(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
userdb.count(function (error, count) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, { activated: count !== 0, version: config.version() });
settings.getCloudronName(function (error, cloudronName) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, {
activated: count !== 0,
version: config.version(),
cloudronName: cloudronName
});
});
});
}
function getCloudronDetails(callback) {
assert.strictEqual(typeof callback, 'function');
if (gCloudronDetails) return callback(null, gCloudronDetails);
superagent
.get(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn())
.query({ token: config.token() })
.end(function (error, result) {
if (error) return callback(error);
if (result.status !== 200) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
gCloudronDetails = result.body.box;
return callback(null, gCloudronDetails);
});
}
function getConfig(callback) {
assert(typeof callback === 'function');
assert.strictEqual(typeof callback, 'function');
callback(null, {
appServerUrl: config.appServerUrl(),
isDev: /dev/i.test(config.get('boxVersionsUrl')),
fqdn: config.fqdn(),
ip: getIp(),
version: config.version(),
update: updater.getUpdateInfo()
})
// TODO avoid pyramid of awesomeness with async
getCloudronDetails(function (error, result) {
if (error) {
console.error('Failed to fetch cloudron details.', error);
// set fallback values to avoid dependency on appstore
result = {
region: result ? result.region : null,
size: result ? result.size : null
};
}
settings.getCloudronName(function (error, cloudronName) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
settings.getDeveloperMode(function (error, developerMode) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
callback(null, {
apiServerOrigin: config.apiServerOrigin(),
webServerOrigin: config.webServerOrigin(),
isDev: config.isDev(),
fqdn: config.fqdn(),
ip: sysinfo.getIp(),
version: config.version(),
update: updateChecker.getUpdateInfo(),
progress: progress.get(),
isCustomDomain: config.isCustomDomain(),
developerMode: developerMode,
region: result.region,
size: result.size,
cloudronName: cloudronName
});
});
});
});
}
function sendHeartBeat() {
var HEARTBEAT_INTERVAL = 1000 * 60;
function sendHeartbeat() {
// Only send heartbeats after the admin dns record is synced to give appstore a chance to know that fact
if (!config.get('dnsInSync')) return;
if (!config.appServerUrl()) {
debug('No appstore server url set. Not sending heartbeat.');
return;
}
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/heartbeat';
if (!config.token()) {
debug('No appstore server token set. Not sending heartbeat.');
return;
}
var url = config.appServerUrl() + '/api/v1/boxes/' + config.fqdn() + '/heartbeat';
debug('Sending heartbeat ' + url);
superagent.get(url).query({ token: config.token(), version: config.version() }).end(function (error, result) {
superagent.post(url).query({ token: config.token(), version: config.version() }).timeout(10000).end(function (error, result) {
if (error) debug('Error sending heartbeat.', error);
else if (result.statusCode !== 200) debug('Server responded to heartbeat with ' + result.statusCode);
else debug('Heartbeat successful');
setTimeout(sendHeartBeat, HEARTBEAT_INTERVAL);
else if (result.statusCode !== 200) debug('Server responded to heartbeat with %s %s', result.statusCode, result.text);
else debug('Heartbeat sent to %s', url);
});
};
}
function sendMailDnsRecordsRequest(callback) {
assert(typeof callback === 'function');
function addDnsRecords() {
if (config.get('dnsInSync')) return sendHeartbeat(); // already registered send heartbeat
var DKIM_SELECTOR = 'mail';
var DMARC_REPORT_EMAIL = 'girish@forwardbias.in';
var DMARC_REPORT_EMAIL = 'dmarc-report@cloudron.io';
var dkimPublicKeyFile = path.join(paths.HARAKA_CONFIG_DIR, 'dkim/' + config.fqdn() + '/public');
var dkimPublicKeyFile = path.join(paths.MAIL_DATA_DIR, 'dkim/' + config.fqdn() + '/public');
var publicKey = safe.fs.readFileSync(dkimPublicKeyFile, 'utf8');
if (publicKey === null) return callback(new Error('Error reading dkim public key'));
if (publicKey === null) {
console.error('Error reading dkim public key. Stop DNS setup.');
return;
}
// remove header, footer and new lines
publicKey = publicKey.split('\n').slice(1, -2).join('');
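A quick sketch of the stripping above on a made-up PEM-style key: split('\n').slice(1, -2) drops the BEGIN line plus the END line and the empty string left by the file's trailing newline.

```javascript
// Hypothetical key material, for illustration only.
var pem = '-----BEGIN PUBLIC KEY-----\nAAAA\nBBBB\n-----END PUBLIC KEY-----\n';
var stripped = pem.split('\n').slice(1, -2).join('');
console.log(stripped); // AAAABBBB
```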
// note that dmarc requires special DNS records for external RUF and RUA
var records = [
// naked domain
{ subdomain: '', type: 'A', value: sysinfo.getIp() },
// webadmin domain
{ subdomain: 'my', type: 'A', value: sysinfo.getIp() },
// softfail all mails not from our IP. Note that this uses IP instead of 'a' should we use a load balancer in the future
{ subdomain: '', type: 'TXT', value: '"v=spf1 ip4:' + getIp() + ' ~all"' },
{ subdomain: '', type: 'TXT', value: '"v=spf1 ip4:' + sysinfo.getIp() + ' ~all"' },
// t=s limits the domainkey to this domain and not its subdomains
{ subdomain: DKIM_SELECTOR + '._domainkey', type: 'TXT', value: '"v=DKIM1; t=s; p=' + publicKey + '"' },
// DMARC requires special setup if report email id is in different domain
{ subdomain: '_dmarc', type: 'TXT', value: '"v=DMARC1; p=none; pct=100; rua=mailto:' + DMARC_REPORT_EMAIL + '; ruf=' + DMARC_REPORT_EMAIL + '"' }
];
debug('sendMailDnsRecords request:%s', JSON.stringify(records));
debug('addDnsRecords:', records);
superagent
.post(config.appServerUrl() + '/api/v1/subdomains')
.set('Accept', 'application/json')
.query({ token: config.token() })
.send({ records: records })
.end(function (error, res) {
if (error) return callback(error);
debug('sendMailDnsRecords status: %s', res.status);
if (res.status === 409) return callback(null); // already registered
if (res.status !== 201) return callback(new Error(util.format('Failed to add Mail DNS records: %s %j', res.status, res.body)));
return callback(null, res.body.ids);
});
}
function addMailDnsRecords() {
// TODO: assert replaced with a non-fatal return, for local development
if (!config.token()) return;
if (config.get('mailDnsRecordIds').length !== 0) return; // already registered
sendMailDnsRecordsRequest(function (error, ids) {
subdomains.addMany(records, function (error, changeIds) {
if (error) {
console.error('Mail DNS record addition failed', error);
gAddMailDnsRecordsTimerId = setTimeout(addMailDnsRecords, 30000);
console.error('Admin DNS record addition failed', error);
gAddDnsRecordsTimerId = setTimeout(addDnsRecords, 10000);
return;
}
debug('Added Mail DNS records successfully');
config.set('mailDnsRecordIds', ids);
function checkIfInSync() {
debug('addDnsRecords: Check if admin DNS record is in sync.');
async.eachSeries(changeIds, function (changeId, callback) {
subdomains.status(changeId, function (error, result) {
if (error) return callback(new Error('Failed to check if admin DNS record is in sync.', error));
if (result !== 'done') return callback(new Error(changeId + ' is not in sync. result:' + result));
callback(null);
});
}, function (error) {
if (error) {
console.error(error);
gAddDnsRecordsTimerId = setTimeout(checkIfInSync, 5000);
return;
}
debug('addDnsRecords: done');
config.set('dnsInSync', true);
sendHeartbeat(); // send heartbeat after the dns records are done
});
}
checkIfInSync();
});
}
function setCertificate(certificate, key, callback) {
assert(typeof certificate === 'string');
assert(typeof key === 'string');
assert.strictEqual(typeof certificate, 'string');
assert.strictEqual(typeof key, 'string');
assert.strictEqual(typeof callback, 'function');
debug('Updating certificates');
@@ -309,10 +363,291 @@ function setCertificate(certificate, key, callback) {
return callback(new CloudronError(CloudronError.INTERNAL_ERROR, safe.error.message));
}
execFile(SUDO, [ RELOAD_NGINX_CMD ], { timeout: 10000 }, function (error) {
shell.sudo('setCertificate', [ RELOAD_NGINX_CMD ], function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
return callback(null);
});
}
function reboot(callback) {
shell.sudo('reboot', [ REBOOT_CMD ], callback);
}
function migrate(size, region, callback) {
assert.strictEqual(typeof size, 'string');
assert.strictEqual(typeof region, 'string');
assert.strictEqual(typeof callback, 'function');
var error = locker.lock(locker.OP_MIGRATE);
if (error) return callback(new CloudronError(CloudronError.BAD_STATE, error.message));
function unlock(error) {
if (error) {
debug('Failed to migrate', error);
locker.unlock(locker.OP_MIGRATE);
} else {
debug('Migration initiated successfully');
// do not unlock; cloudron is migrating
}
return;
}
// initiate the migration in the background
backupBoxAndApps(function (error, restoreKey) {
if (error) return unlock(error);
debug('migrate: size %s region %s restoreKey %s', size, region, restoreKey);
superagent
.post(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/migrate')
.query({ token: config.token() })
.send({ size: size, region: region, restoreKey: restoreKey })
.end(function (error, result) {
if (error) return unlock(error);
if (result.status === 409) return unlock(new CloudronError(CloudronError.BAD_STATE));
if (result.status === 404) return unlock(new CloudronError(CloudronError.NOT_FOUND));
if (result.status !== 202) return unlock(new CloudronError(CloudronError.EXTERNAL_ERROR, util.format('%s %j', result.status, result.body)));
return unlock(null);
});
});
callback(null);
}
function update(boxUpdateInfo, callback) {
assert.strictEqual(typeof boxUpdateInfo, 'object');
assert.strictEqual(typeof callback, 'function');
if (!boxUpdateInfo) return callback(null);
var error = locker.lock(locker.OP_BOX_UPDATE);
if (error) return callback(new CloudronError(CloudronError.BAD_STATE, error.message));
// initiate the update/upgrade but do not wait for it
if (boxUpdateInfo.upgrade) {
debug('Starting upgrade');
doUpgrade(boxUpdateInfo, function (error) {
if (error) {
debug('Upgrade failed with error: %s', error);
locker.unlock(locker.OP_BOX_UPDATE);
}
});
} else {
debug('Starting update');
doUpdate(boxUpdateInfo, function (error) {
if (error) {
debug('Update failed with error: %s', error);
locker.unlock(locker.OP_BOX_UPDATE);
}
});
}
callback(null);
}
function doUpgrade(boxUpdateInfo, callback) {
assert(boxUpdateInfo !== null && typeof boxUpdateInfo === 'object');
function upgradeError(e) {
progress.set(progress.UPDATE, -1, e.message);
callback(e);
}
progress.set(progress.UPDATE, 5, 'Backing up for upgrade');
backupBoxAndApps(function (error) {
if (error) return upgradeError(error);
superagent.post(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/upgrade')
.query({ token: config.token() })
.send({ version: boxUpdateInfo.version })
.end(function (error, result) {
if (error) return upgradeError(new Error('Error making upgrade request: ' + error));
if (result.status !== 202) return upgradeError(new Error(util.format('Server not ready to upgrade. statusCode: %s body: %j', result.status, result.body)));
progress.set(progress.UPDATE, 10, 'Updating base system');
// no need to unlock since this is the last thing we ever do on this box
callback(null);
});
});
}
function doUpdate(boxUpdateInfo, callback) {
assert(boxUpdateInfo && typeof boxUpdateInfo === 'object');
function updateError(e) {
progress.set(progress.UPDATE, -1, e.message);
callback(e);
}
progress.set(progress.UPDATE, 5, 'Backing up for update');
backupBoxAndApps(function (error) {
if (error) return updateError(error);
// fetch a signed sourceTarballUrl
superagent.get(config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/sourcetarballurl')
.query({ token: config.token(), boxVersion: boxUpdateInfo.version })
.end(function (error, result) {
if (error) return updateError(new Error('Error fetching sourceTarballUrl: ' + error));
if (result.status !== 200) return updateError(new Error('Error fetching sourceTarballUrl status: ' + result.status));
if (!safe.query(result, 'body.url')) return updateError(new Error('Error fetching sourceTarballUrl response: ' + JSON.stringify(result.body)));
// NOTE: the args here are tied to the installer revision, box code and appstore provisioning logic
var args = {
sourceTarballUrl: result.body.url,
// this data is opaque to the installer
data: {
apiServerOrigin: config.apiServerOrigin(),
aws: config.aws(),
backupKey: config.backupKey(),
boxVersionsUrl: config.get('boxVersionsUrl'),
fqdn: config.fqdn(),
isCustomDomain: config.isCustomDomain(),
restoreUrl: null,
restoreKey: null,
token: config.token(),
tlsCert: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.cert'), 'utf8'),
tlsKey: fs.readFileSync(path.join(paths.NGINX_CERT_DIR, 'host.key'), 'utf8'),
version: boxUpdateInfo.version,
webServerOrigin: config.webServerOrigin()
}
};
debug('updating box %j', args);
superagent.post(INSTALLER_UPDATE_URL).send(args).end(function (error, result) {
if (error) return updateError(error);
if (result.status !== 202) return updateError(new Error('Error initiating update: ' + JSON.stringify(result.body)));
progress.set(progress.UPDATE, 10, 'Updating cloudron software');
callback(null);
});
});
// Do not add any code here. The installer script will stop the box code any instant
});
}
function backup(callback) {
assert.strictEqual(typeof callback, 'function');
var error = locker.lock(locker.OP_FULL_BACKUP);
if (error) return callback(new CloudronError(CloudronError.BAD_STATE, error.message));
// clearing backup ensures tools can 'wait' on progress
progress.clear(progress.BACKUP);
// start the backup operation in the background
backupBoxAndApps(function (error) {
if (error) console.error('backup failed.', error);
locker.unlock(locker.OP_FULL_BACKUP);
});
callback(null);
}
function ensureBackup(callback) {
callback = callback || function () { };
backups.getAllPaged(1, 1, function (error, backups) {
if (error) {
debug('Unable to list backups', error);
return callback(error); // no point trying to backup if appstore is down
}
if (backups.length !== 0 && (new Date() - new Date(backups[0].creationTime) < 23 * 60 * 60 * 1000)) { // ~1 day ago
debug('Previous backup was %j, no need to backup now', backups[0]);
return callback(null);
}
backup(callback);
});
}
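The freshness test in ensureBackup() can be factored out as below; the timestamps are illustrative.

```javascript
// A backup counts as recent if it is under 23 hours old (~1 day with some
// slack), matching the check in ensureBackup() above.
function isRecentBackup(creationTime, now) {
    return (now - new Date(creationTime).getTime()) < 23 * 60 * 60 * 1000;
}

var now = Date.parse('2015-10-08T12:00:00Z');
console.log(isRecentBackup('2015-10-08T00:00:00Z', now)); // true  (12h old)
console.log(isRecentBackup('2015-10-07T00:00:00Z', now)); // false (36h old)
```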
function backupBoxWithAppBackupIds(appBackupIds, callback) {
assert(util.isArray(appBackupIds));
backups.getBackupUrl(null /* app */, function (error, result) {
if (error && error.reason === BackupsError.EXTERNAL_ERROR) return callback(new CloudronError(CloudronError.EXTERNAL_ERROR, error.message));
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
debug('backup: url %s', result.url);
async.series([
ignoreError(shell.sudo.bind(null, 'mountSwap', [ BACKUP_SWAP_CMD, '--on' ])),
shell.sudo.bind(null, 'backupBox', [ BACKUP_BOX_CMD, result.url, result.backupKey, result.sessionToken ]),
ignoreError(shell.sudo.bind(null, 'unmountSwap', [ BACKUP_SWAP_CMD, '--off' ])),
], function (error) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
debug('backup: successful');
webhooks.backupDone(result.id, null /* app */, appBackupIds, function (error) {
if (error) return callback(error);
callback(null, result.id);
});
});
});
}
// this function expects you to have a lock
function backupBox(callback) {
apps.getAll(function (error, allApps) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
var appBackupIds = allApps.map(function (app) { return app.lastBackupId; });
appBackupIds = appBackupIds.filter(function (id) { return id !== null; }); // remove apps that were never backed up
backupBoxWithAppBackupIds(appBackupIds, callback);
});
}
// this function expects you to have a lock
function backupBoxAndApps(callback) {
callback = callback || function () { }; // callback can be empty for timer triggered backup
apps.getAll(function (error, allApps) {
if (error) return callback(new CloudronError(CloudronError.INTERNAL_ERROR, error));
var processed = 0;
var step = 100 / (allApps.length + 1);
progress.set(progress.BACKUP, processed, '');
async.mapSeries(allApps, function iterator(app, iteratorCallback) {
++processed;
apps.backupApp(app, app.manifest.addons, function (error, backupId) {
progress.set(progress.BACKUP, step * processed, 'Backed up app at ' + app.location);
if (error && error.reason !== AppsError.BAD_STATE) {
debugApp(app, 'Unable to backup', error);
return iteratorCallback(error);
}
iteratorCallback(null, backupId || null); // clear backupId if is in BAD_STATE and never backed up
});
}, function appsBackedUp(error, backupIds) {
if (error) {
progress.set(progress.BACKUP, 100, error.message);
return callback(error);
}
backupIds = backupIds.filter(function (id) { return id !== null; }); // remove apps in bad state that were never backed up
backupBoxWithAppBackupIds(backupIds, function (error, restoreKey) {
progress.set(progress.BACKUP, 100, error ? error.message : '');
callback(error, restoreKey);
});
});
});
}
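The progress arithmetic in backupBoxAndApps() splits the bar into equal steps, one per app plus one for the box backup itself. A minimal standalone sketch:

```javascript
// Percentage reported after `processed` of `appCount` apps are backed up.
function progressAfter(processed, appCount) {
    var step = 100 / (appCount + 1);
    return step * processed;
}

console.log(progressAfter(1, 3)); // 25
console.log(progressAfter(3, 3)); // 75
```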
+3 -13
@@ -1,6 +1,6 @@
LoadPlugin "table"
<Plugin table>
<Table "/sys/fs/cgroup/memory/docker/<%= containerId %>/memory.stat">
<Table "/sys/fs/cgroup/memory/system.slice/docker-<%= containerId %>.scope/memory.stat">
Instance "<%= appId %>-memory"
Separator " \\n"
<Result>
@@ -10,7 +10,7 @@ LoadPlugin "table"
</Result>
</Table>
<Table "/sys/fs/cgroup/memory/docker/<%= containerId %>/memory.max_usage_in_bytes">
<Table "/sys/fs/cgroup/memory/system.slice/docker-<%= containerId %>.scope/memory.max_usage_in_bytes">
Instance "<%= appId %>-memory"
Separator "\\n"
<Result>
@@ -20,7 +20,7 @@ LoadPlugin "table"
</Result>
</Table>
<Table "/sys/fs/cgroup/cpuacct/docker/<%= containerId %>/cpuacct.stat">
<Table "/sys/fs/cgroup/cpuacct/system.slice/docker-<%= containerId %>.scope/cpuacct.stat">
Instance "<%= appId %>-cpu"
Separator " \\n"
<Result>
@@ -30,13 +30,3 @@ LoadPlugin "table"
</Result>
</Table>
</Plugin>
LoadPlugin "filecount"
<Plugin "filecount">
<Directory "/home/yellowtent/data/appdata/<%= appId %>">
Instance "<%= appId %>-appdata"
IncludeHidden true
Recursive true
</Directory>
</Plugin>
+204
@@ -0,0 +1,204 @@
/* jslint node: true */
'use strict';
exports = module.exports = {
baseDir: baseDir,
// values set here will be lost after an upgrade/update. Use the sqlite database
// for persistent values that need to be backed up
get: get,
set: set,
// ifdefs to check environment
CLOUDRON: process.env.BOX_ENV === 'cloudron',
TEST: process.env.BOX_ENV === 'test',
// convenience getters
apiServerOrigin: apiServerOrigin,
webServerOrigin: webServerOrigin,
fqdn: fqdn,
token: token,
version: version,
isCustomDomain: isCustomDomain,
database: database,
// these values are derived
adminOrigin: adminOrigin,
internalAdminOrigin: internalAdminOrigin,
appFqdn: appFqdn,
zoneName: zoneName,
isDev: isDev,
backupKey: backupKey,
aws: aws,
// for testing resets to defaults
_reset: initConfig
};
var assert = require('assert'),
constants = require('./constants.js'),
fs = require('fs'),
path = require('path'),
safe = require('safetydance'),
_ = require('underscore');
var homeDir = process.env.HOME || process.env.HOMEPATH || process.env.USERPROFILE;
var data = { };
function baseDir() {
if (exports.CLOUDRON) return homeDir;
if (exports.TEST) return path.join(homeDir, '.cloudron_test');
}
var cloudronConfigFileName = path.join(baseDir(), 'configs/cloudron.conf');
function saveSync() {
fs.writeFileSync(cloudronConfigFileName, JSON.stringify(data, null, 4)); // functions are ignored by JSON.stringify
}
function initConfig() {
// setup defaults
data.fqdn = 'localhost';
data.token = null;
data.mailServer = null;
data.adminEmail = null;
data.mailDnsRecordIds = [ ];
data.boxVersionsUrl = null;
data.version = null;
data.isCustomDomain = false;
data.webServerOrigin = null;
data.internalPort = 3001;
data.ldapPort = 3002;
data.oauthProxyPort = 3003;
data.backupKey = 'backupKey';
data.aws = {
backupBucket: null,
backupPrefix: null,
accessKeyId: null, // selfhosting only
secretAccessKey: null // selfhosting only
};
data.dnsInSync = false;
if (exports.CLOUDRON) {
data.port = 3000;
data.apiServerOrigin = null;
data.database = null;
} else if (exports.TEST) {
data.port = 5454;
data.apiServerOrigin = 'http://localhost:6060'; // hock doesn't support https
data.database = {
hostname: 'localhost',
username: 'root',
password: '',
port: 3306,
name: 'boxtest'
};
data.token = 'APPSTORE_TOKEN';
data.aws.backupBucket = 'testbucket';
data.aws.backupPrefix = 'testprefix';
data.aws.endpoint = 'http://localhost:5353';
} else {
assert(false, 'Unknown environment. This should not happen!');
}
if (safe.fs.existsSync(cloudronConfigFileName)) {
var existingData = safe.JSON.parse(safe.fs.readFileSync(cloudronConfigFileName, 'utf8'));
_.extend(data, existingData); // overwrite defaults with saved config
return;
}
saveSync();
}
// cleanup any old config file we have for tests
if (exports.TEST) safe.fs.unlinkSync(cloudronConfigFileName);
initConfig();
// set(obj) or set(key, value)
function set(key, value) {
if (typeof key === 'object') {
var obj = key;
for (var k in obj) {
assert(k in data, 'config.js is missing key "' + k + '"');
data[k] = obj[k];
}
} else {
data = safe.set(data, key, value);
}
saveSync();
}
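The set(obj) / set(key, value) overload above can be sketched without the on-disk persistence or safetydance's nested-key support; the data values here are illustrative.

```javascript
var data = { fqdn: 'localhost', version: null };

// Accepts either a bag of key/value pairs or a single key and value.
function set(key, value) {
    if (typeof key === 'object') {
        for (var k in key) data[k] = key[k];
    } else {
        data[key] = value;
    }
}

set('version', '0.1.0');
set({ fqdn: 'example.com' });
console.log(data); // { fqdn: 'example.com', version: '0.1.0' }
```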
function get(key) {
assert.strictEqual(typeof key, 'string');
return safe.query(data, key);
}
function apiServerOrigin() {
return get('apiServerOrigin');
}
function webServerOrigin() {
return get('webServerOrigin');
}
function fqdn() {
return get('fqdn');
}
// keep this in sync with start.sh admin.conf generation code
function appFqdn(location) {
assert.strictEqual(typeof location, 'string');
if (location === '') return fqdn();
return isCustomDomain() ? location + '.' + fqdn() : location + '-' + fqdn();
}
function adminOrigin() {
return 'https://' + appFqdn(constants.ADMIN_LOCATION);
}
function internalAdminOrigin() {
return 'http://127.0.0.1:' + get('port');
}
function token() {
return get('token');
}
function version() {
return get('version');
}
function isCustomDomain() {
return get('isCustomDomain');
}
function zoneName() {
if (isCustomDomain()) return fqdn(); // the appstore sets up the custom domain as a zone
// for shared domain name, strip out the hostname
return fqdn().substr(fqdn().indexOf('.') + 1);
}
function database() {
return get('database');
}
function isDev() {
return /dev/i.test(get('boxVersionsUrl'));
}
function backupKey() {
return get('backupKey');
}
function aws() {
return get('aws');
}
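The `appFqdn()`/`zoneName()` naming rules above can be exercised standalone. A minimal sketch (not the module itself; plain functions with the domain passed in) of how custom domains get real subdomains while shared domains get hyphenated names:

```javascript
// Sketch of the naming rules from appFqdn()/zoneName() above:
// custom domains use a subdomain, shared domains use a hyphen.
function appFqdn(location, fqdn, isCustomDomain) {
    if (location === '') return fqdn;
    return isCustomDomain ? location + '.' + fqdn : location + '-' + fqdn;
}

function zoneName(fqdn, isCustomDomain) {
    if (isCustomDomain) return fqdn; // a custom domain is its own zone
    return fqdn.substr(fqdn.indexOf('.') + 1); // shared domain: strip the hostname
}

console.log(appFqdn('blog', 'example.com', true));       // blog.example.com
console.log(appFqdn('blog', 'jane.selfhost.io', false)); // blog-jane.selfhost.io
console.log(zoneName('jane.selfhost.io', false));        // selfhost.io
```

The hyphen form keeps every app of a shared domain inside a single wildcard certificate and DNS zone.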
@@ -0,0 +1,16 @@
'use strict';
// default admin installation location. keep in sync with ADMIN_LOCATION in setup/start.sh and BOX_ADMIN_LOCATION in appstore constants.js
exports = module.exports = {
ADMIN_LOCATION: 'my',
API_LOCATION: 'api', // this is unused but reserved for future use (#403)
ADMIN_NAME: 'Settings',
ADMIN_CLIENT_ID: 'webadmin', // oauth client id
ADMIN_APPID: 'admin', // admin appid (settingsdb)
TEST_NAME: 'Test',
TEST_LOCATION: '',
TEST_CLIENT_ID: 'test'
};
@@ -0,0 +1,143 @@
'use strict';
exports = module.exports = {
initialize: initialize,
uninitialize: uninitialize
};
var apps = require('./apps.js'),
assert = require('assert'),
cloudron = require('./cloudron.js'),
CronJob = require('cron').CronJob,
debug = require('debug')('box:cron'),
settings = require('./settings.js'),
updateChecker = require('./updatechecker.js');
var gAutoupdaterJob = null,
gBoxUpdateCheckerJob = null,
gAppUpdateCheckerJob = null,
gHeartbeatJob = null,
gBackupJob = null;
var gInitialized = false;
var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
// cron format
// Seconds: 0-59
// Minutes: 0-59
// Hours: 0-23
// Day of Month: 1-31
// Months: 0-11
// Day of Week: 0-6
function initialize(callback) {
assert.strictEqual(typeof callback, 'function');
if (gInitialized) return callback();
settings.events.on(settings.TIME_ZONE_KEY, recreateJobs);
settings.events.on(settings.AUTOUPDATE_PATTERN_KEY, autoupdatePatternChanged);
gInitialized = true;
recreateJobs(callback);
}
function recreateJobs(unusedTimeZone, callback) {
if (typeof unusedTimeZone === 'function') callback = unusedTimeZone;
settings.getAll(function (error, allSettings) {
debug('Creating jobs with timezone %s', allSettings[settings.TIME_ZONE_KEY]);
if (gHeartbeatJob) gHeartbeatJob.stop();
gHeartbeatJob = new CronJob({
cronTime: '00 */1 * * * *', // every minute
onTick: cloudron.sendHeartbeat,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gBackupJob) gBackupJob.stop();
gBackupJob = new CronJob({
cronTime: '00 00 */4 * * *', // every 4 hours
onTick: cloudron.ensureBackup,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gBoxUpdateCheckerJob) gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = new CronJob({
cronTime: '00 */10 * * * *', // every 10 minutes
onTick: updateChecker.checkBoxUpdates,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
if (gAppUpdateCheckerJob) gAppUpdateCheckerJob.stop();
gAppUpdateCheckerJob = new CronJob({
cronTime: '00 */10 * * * *', // every 10 minutes
onTick: updateChecker.checkAppUpdates,
start: true,
timeZone: allSettings[settings.TIME_ZONE_KEY]
});
autoupdatePatternChanged(allSettings[settings.AUTOUPDATE_PATTERN_KEY]);
if (callback) callback();
});
}
function autoupdatePatternChanged(pattern) {
assert.strictEqual(typeof pattern, 'string');
debug('Auto update pattern changed to %s', pattern);
if (gAutoupdaterJob) gAutoupdaterJob.stop();
if (pattern === 'never') return;
gAutoupdaterJob = new CronJob({
cronTime: pattern,
onTick: function() {
var updateInfo = updateChecker.getUpdateInfo();
if (updateInfo.box) {
debug('Starting autoupdate to %j', updateInfo.box);
cloudron.update(updateInfo.box, NOOP_CALLBACK);
} else if (updateInfo.apps) {
debug('Starting app update to %j', updateInfo.apps);
apps.autoupdateApps(updateInfo.apps, NOOP_CALLBACK);
} else {
debug('No auto updates available');
}
},
start: true,
timeZone: gBoxUpdateCheckerJob.cronTime.zone // hack
});
}
function uninitialize(callback) {
assert.strictEqual(typeof callback, 'function');
if (!gInitialized) return callback();
if (gAutoupdaterJob) gAutoupdaterJob.stop();
gAutoupdaterJob = null;
gBoxUpdateCheckerJob.stop();
gBoxUpdateCheckerJob = null;
gAppUpdateCheckerJob.stop();
gAppUpdateCheckerJob = null;
gHeartbeatJob.stop();
gHeartbeatJob = null;
gBackupJob.stop();
gBackupJob = null;
gInitialized = false;
callback();
}
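Every job above uses the 6-field cron format documented at the top of the file. A quick standalone check of a pattern against those ranges (a hypothetical helper, not part of the module, using no `cron` dependency; it accepts `*`, `*/n` steps and comma lists, which covers the patterns used here):

```javascript
// Field ranges from the comment block above: sec, min, hour, day-of-month,
// month (0-11!), day-of-week.
var RANGES = [ [0, 59], [0, 59], [0, 23], [1, 31], [0, 11], [0, 6] ];

function isValidCronPattern(pattern) {
    var fields = pattern.trim().split(/\s+/);
    if (fields.length !== 6) return false;

    return fields.every(function (field, i) {
        return field.split(',').every(function (part) {
            if (part === '*') return true;
            var step = part.match(/^\*\/(\d+)$/); // '*/n' step syntax
            if (step) return +step[1] > 0;
            var n = +part;
            return Number.isInteger(n) && n >= RANGES[i][0] && n <= RANGES[i][1];
        });
    });
}

console.log(isValidCronPattern('00 */1 * * * *')); // true (every minute)
console.log(isValidCronPattern('* * *'));          // false (too few fields)
```

Note that the `cron` package counts months from 0, unlike classic 5-field crontab.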
@@ -1,123 +1,202 @@
-/* jslint node:true */
+/* jslint node: true */
'use strict';
-var assert = require('assert'),
-    async = require('async'),
-    config = require('../config.js'),
-    DatabaseError = require('./databaseerror.js'),
-    debug = require('debug')('box:database'),
-    paths = require('./paths.js'),
-    sqlite3 = require('sqlite3');
exports = module.exports = {
    initialize: initialize,
    uninitialize: uninitialize,
-    removePrivates: removePrivates,
+    query: query,
+    transaction: transaction,
    beginTransaction: beginTransaction,
    rollback: rollback,
    commit: commit,
-    clear: clear,
-    get: get,
-    all: all,
-    run: run
+    _clear: clear
};
-var gConnectionPool = [ ], // used to track active transactions
-    gDatabase = null;
+var assert = require('assert'),
+    async = require('async'),
+    once = require('once'),
+    config = require('./config.js'),
+    mysql = require('mysql'),
+    util = require('util');
var NOOP_CALLBACK = function (error) { if (error) console.error(error); };
+var gConnectionPool = null,
+    gDefaultConnection = null;
-function initialize(callback) {
-    gDatabase = new sqlite3.Database(paths.DATABASE_FILENAME);
-    gDatabase.on('error', function (error) {
-        console.error('Database error in ' + paths.DATABASE_FILENAME + ':', error);
+function initialize(options, callback) {
+    if (typeof options === 'function') {
+        callback = options;
+        options = {
+            connectionLimit: 5
+        };
+    }
+    assert.strictEqual(typeof options.connectionLimit, 'number');
+    assert.strictEqual(typeof callback, 'function');
+    if (gConnectionPool !== null) return callback(null);
+    gConnectionPool = mysql.createPool({
+        connectionLimit: options.connectionLimit,
+        host: config.database().hostname,
+        user: config.database().username,
+        password: config.database().password,
+        port: config.database().port,
+        database: config.database().name,
+        multipleStatements: false,
+        ssl: false
    });
-    gDatabase.run('PRAGMA busy_timeout=5000', callback);
+    reconnect(callback);
}
function uninitialize(callback) {
    assert(typeof callback === 'function');
-    debug('Closing database');
-    gDatabase.close();
-    gDatabase = null;
-    debug('Closing %d active transactions', gConnectionPool.length);
-    gConnectionPool.forEach(function (conn) { conn.close(); });
-    gConnectionPool = [ ];
-    callback(null);
-}
+    if (gConnectionPool) {
+        gConnectionPool.end(callback);
+        gConnectionPool = null;
+    } else {
+        callback(null);
+    }
+}
+function setupConnection(connection, callback) {
+    assert.strictEqual(typeof connection, 'object');
+    assert.strictEqual(typeof callback, 'function');
+    connection.on('error', console.error);
+    async.series([
+        connection.query.bind(connection, 'USE ' + config.database().name),
+        connection.query.bind(connection, 'SET SESSION sql_mode = \'strict_all_tables\'')
+    ], function (error) {
+        connection.removeListener('error', console.error);
+        if (error) connection.release();
+        callback(error);
+    });
+}
+function reconnect(callback) {
+    callback = callback ? once(callback) : function () {};
+    gConnectionPool.getConnection(function (error, connection) {
+        if (error) {
+            console.error('Unable to reestablish connection to database. Try again in a bit.', error.message);
+            return setTimeout(reconnect.bind(null, callback), 1000);
+        }
+        connection.on('error', function (error) {
+            // by design, we catch all normal errors by providing callbacks.
+            // this function should be invoked only when we have no callbacks pending and we have a fatal error
+            assert(error.fatal, 'Non-fatal error on connection object');
+            console.error('Unhandled mysql connection error.', error);
+            // This is most likely an issue an can cause double callbacks from reconnect()
+            setTimeout(reconnect.bind(null, callback), 1000);
+        });
+        setupConnection(connection, function (error) {
+            if (error) return setTimeout(reconnect.bind(null, callback), 1000);
+            gDefaultConnection = connection;
+            callback(null);
+        });
+    });
+}
function clear(callback) {
    assert.strictEqual(typeof callback, 'function');
    // the clear funcs don't completely clear the db, they leave the migration code defaults
    async.series([
-        require('./appdb.js').clear,
-        require('./authcodedb.js').clear,
-        require('./clientdb.js').clear,
-        require('./tokendb.js').clear,
-        require('./userdb.js').clear
+        require('./appdb.js')._clear,
+        require('./authcodedb.js')._clear,
+        require('./clientdb.js')._clear,
+        require('./tokendb.js')._clear,
+        require('./userdb.js')._clear,
+        require('./settingsdb.js')._clear
    ], callback);
}
-function beginTransaction() {
-    var conn = new sqlite3.Database(paths.DATABASE_FILENAME);
-    conn._started = Date.now();
-    conn._slowWarningIntervalId = setInterval((function () {
-        debug('Transaction running for %d msecs', Date.now() - this._started);
-    }).bind(conn), 2000);
-    gConnectionPool.push(conn);
-    conn.serialize();
-    conn.run('PRAGMA busy_timeout=5000', NOOP_CALLBACK);
-    conn.run('BEGIN TRANSACTION', NOOP_CALLBACK);
-    return conn;
-}
+function beginTransaction(callback) {
+    assert.strictEqual(typeof callback, 'function');
+    gConnectionPool.getConnection(function (error, connection) {
+        if (error) return callback(error);
+        setupConnection(connection, function (error) {
+            if (error) return callback(error);
+            connection.beginTransaction(function (error) {
+                if (error) return callback(error);
+                return callback(null, connection);
+            });
+        });
+    });
+}
-function rollback(conn, callback) {
-    gConnectionPool.splice(gConnectionPool.indexOf(conn), 1);
-    conn.run('ROLLBACK', NOOP_CALLBACK);
-    clearInterval(conn._slowWarningIntervalId);
-    debug('Transaction took %d msecs', Date.now() - conn._started);
-    conn.close(); // close waits for pending statements
-    if (callback) callback();
-}
+function rollback(connection, callback) {
+    assert.strictEqual(typeof callback, 'function');
+    connection.rollback(function (error) {
+        if (error) console.error(error); // can this happen?
+        connection.release();
+        callback(null);
+    });
+}
-function commit(conn, callback) {
-    gConnectionPool.splice(gConnectionPool.indexOf(conn), 1);
-    conn.run('COMMIT', function (error) {
-        clearInterval(conn._slowWarningIntervalId);
-        debug('Transaction took %d msecs', Date.now() - conn._started);
-        if (error) return callback(new DatabaseError(DatabaseError.INTERNAL_ERROR, error));
-        callback(null);
-    });
-    conn.close(); // close waits for pending statements
-}
-function removePrivates(obj) {
-    var res = { };
-    for (var p in obj) {
-        if (!obj.hasOwnProperty(p)) continue;
-        if (p.substring(0, 1) === '_') continue;
-        res[p] = obj[p]; // ## make deep copy?
-    }
-    return res;
-}
+// FIXME: if commit fails, is it supposed to return an error ?
+function commit(connection, callback) {
+    assert.strictEqual(typeof callback, 'function');
+    connection.commit(function (error) {
+        if (error) return rollback(connection, callback);
+        connection.release();
+        return callback(null);
+    });
+}
-function get() {
-    return gDatabase.get.apply(gDatabase, arguments);
-}
-function all() {
-    return gDatabase.all.apply(gDatabase, arguments);
-}
-function run() {
-    return gDatabase.run.apply(gDatabase, arguments);
-}
+function query() {
+    var args = Array.prototype.slice.call(arguments);
+    var callback = args[args.length - 1];
+    assert.strictEqual(typeof callback, 'function');
+    if (gDefaultConnection === null) return callback(new Error('No connection to database'));
+    args[args.length - 1] = function (error, result) {
+        if (error && error.fatal) {
+            gDefaultConnection = null;
+            setTimeout(reconnect, 1000);
+        }
+        callback(error, result);
+    };
+    gDefaultConnection.query.apply(gDefaultConnection, args);
+}
+function transaction(queries, callback) {
+    assert(util.isArray(queries));
+    assert.strictEqual(typeof callback, 'function');
+    beginTransaction(function (error, conn) {
+        if (error) return callback(error);
+        async.mapSeries(queries, function iterator(query, done) {
+            conn.query(query.query, query.args, done);
+        }, function seriesDone(error, results) {
+            if (error) return rollback(conn, callback.bind(null, error));
+            commit(conn, callback.bind(null, null, results));
+        });
+    });
+}
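The new `transaction()` helper runs its queries in order and rolls back on the first failure. The control flow can be sketched against a stubbed connection (no mysql or async dependency; `async.mapSeries` replaced by a plain recursive loop, and all names here are illustrative):

```javascript
// Sketch of the begin / query... / commit-or-rollback flow of transaction()
// above, using a fake connection instead of a pooled mysql connection.
function runTransaction(conn, queries, callback) {
    conn.begin();
    var results = [];
    (function next(i) {
        if (i === queries.length) { conn.commit(); return callback(null, results); }
        conn.query(queries[i], function (error, result) {
            if (error) { conn.rollback(); return callback(error); }
            results.push(result);
            next(i + 1); // serialize: next query only after the previous succeeded
        });
    })(0);
}

// Fake connection that records the statement order.
var log = [];
var fakeConn = {
    begin: function () { log.push('BEGIN'); },
    commit: function () { log.push('COMMIT'); },
    rollback: function () { log.push('ROLLBACK'); },
    query: function (q, done) { log.push(q); done(null, q.toUpperCase()); }
};

var finalResults = null;
runTransaction(fakeConn, ['insert a', 'insert b'], function (error, results) {
    finalResults = results;
    console.log(log.join(' | ')); // BEGIN | insert a | insert b | COMMIT
});
```

Serializing through one connection matters because a mysql transaction is per-connection state; interleaving queries from other callers onto the same connection would silently join the transaction.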
@@ -2,20 +2,20 @@
'use strict';
-exports = module.exports = DatabaseError;
var assert = require('assert'),
    util = require('util');
+module.exports = exports = DatabaseError;
function DatabaseError(reason, errorOrMessage) {
-    assert(typeof reason === 'string');
-    assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
+    assert.strictEqual(typeof reason, 'string');
+    assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined' || errorOrMessage === null);
    Error.call(this);
    Error.captureStackTrace(this, this.constructor);
    this.reason = reason;
-    if (typeof errorOrMessage === 'undefined') {
+    if (typeof errorOrMessage === 'undefined' || errorOrMessage === null) {
        this.message = reason;
    } else if (typeof errorOrMessage === 'string') {
        this.message = errorOrMessage;
@@ -29,5 +29,4 @@ util.inherits(DatabaseError, Error);
DatabaseError.INTERNAL_ERROR = 'Internal error';
DatabaseError.ALREADY_EXISTS = 'Entry already exist';
DatabaseError.NOT_FOUND = 'Record not found';
-DatabaseError.RECORD_SCHEMA = 'Record does not match the schema';
-DatabaseError.FIELD_ERROR = 'Invalid field';
+DatabaseError.BAD_FIELD = 'Invalid field';
@@ -0,0 +1,85 @@
/* jslint node: true */
'use strict';
exports = module.exports = {
DeveloperError: DeveloperError,
enabled: enabled,
setEnabled: setEnabled,
issueDeveloperToken: issueDeveloperToken,
getNonApprovedApps: getNonApprovedApps
};
var assert = require('assert'),
config = require('./config.js'),
tokendb = require('./tokendb.js'),
settings = require('./settings.js'),
superagent = require('superagent'),
util = require('util');
function DeveloperError(reason, errorOrMessage) {
assert.strictEqual(typeof reason, 'string');
assert(errorOrMessage instanceof Error || typeof errorOrMessage === 'string' || typeof errorOrMessage === 'undefined');
Error.call(this);
Error.captureStackTrace(this, this.constructor);
this.name = this.constructor.name;
this.reason = reason;
if (typeof errorOrMessage === 'undefined') {
this.message = reason;
} else if (typeof errorOrMessage === 'string') {
this.message = errorOrMessage;
} else {
this.message = 'Internal error';
this.nestedError = errorOrMessage;
}
}
util.inherits(DeveloperError, Error);
DeveloperError.INTERNAL_ERROR = 'Internal Error';
function enabled(callback) {
assert.strictEqual(typeof callback, 'function');
settings.getDeveloperMode(function (error, enabled) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
callback(null, enabled);
});
}
function setEnabled(enabled, callback) {
assert.strictEqual(typeof enabled, 'boolean');
assert.strictEqual(typeof callback, 'function');
settings.setDeveloperMode(enabled, function (error) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
callback(null);
});
}
function issueDeveloperToken(user, callback) {
assert.strictEqual(typeof user, 'object');
assert.strictEqual(typeof callback, 'function');
var token = tokendb.generateToken();
var expiresAt = Date.now() + 24 * 60 * 60 * 1000; // 1 day
tokendb.add(token, tokendb.PREFIX_DEV + user.id, '', expiresAt, 'apps,settings,roleDeveloper', function (error) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
callback(null, { token: token, expiresAt: expiresAt });
});
}
function getNonApprovedApps(callback) {
assert.strictEqual(typeof callback, 'function');
var url = config.apiServerOrigin() + '/api/v1/boxes/' + config.fqdn() + '/apps';
superagent.get(url).query({ token: config.token(), boxVersion: config.version() }).end(function (error, result) {
if (error) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, error));
if (result.status !== 200) return callback(new DeveloperError(DeveloperError.INTERNAL_ERROR, util.format('App listing failed. %s %j', result.status, result.body)));
callback(null, result.body.apps || []);
});
}
@@ -2,29 +2,23 @@
'use strict';
-var assert = require('assert'),
-    debug = require('debug')('box:digitalocean'),
-    config = require('../config.js'),
-    dns = require('native-dns');
exports = module.exports = {
    checkPtrRecord: checkPtrRecord
};
+var assert = require('assert'),
+    debug = require('debug')('box:digitalocean'),
+    dns = require('native-dns');
function checkPtrRecord(ip, fqdn, callback) {
    assert(ip === null || typeof ip === 'string');
-    assert(typeof fqdn === 'string');
-    assert(typeof callback === 'function');
+    assert.strictEqual(typeof fqdn, 'string');
+    assert.strictEqual(typeof callback, 'function');
    debug('checkPtrRecord: ' + ip);
    if (!ip) return callback(new Error('Network down'));
-    if (config.LOCAL) {
-        debug('checkPtrRecord disabled in local mode.');
-        return callback(null, true);
-    }
    dns.resolve4('ns1.digitalocean.com', function (error, rdnsIps) {
        if (error || rdnsIps.length === 0) return callback(new Error('Failed to query DO DNS'));
@@ -1,7 +1,6 @@
'use strict';
-var assert = require('assert'),
-    Docker = require('dockerode'),
+var Docker = require('dockerode'),
    fs = require('fs'),
    os = require('os'),
    path = require('path'),
@@ -11,7 +10,7 @@ exports = module.exports = (function () {
    var docker;
    var options = connectOptions(); // the real docker
-    if (process.env.NODE_ENV === 'test') {
+    if (process.env.BOX_ENV === 'test') {
        // test code runs a docker proxy on this port
        docker = new Docker({ host: 'http://localhost', port: 5687 });
    } else {
@@ -0,0 +1,138 @@
'use strict';
exports = module.exports = {
start: start,
stop: stop
};
var assert = require('assert'),
config = require('./config.js'),
debug = require('debug')('box:ldap'),
user = require('./user.js'),
UserError = user.UserError,
ldap = require('ldapjs');
var gServer = null;
var NOOP = function () {};
var gLogger = {
trace: NOOP,
debug: NOOP,
info: debug,
warn: debug,
error: console.error,
fatal: console.error
};
var GROUP_USERS_DN = 'cn=users,ou=groups,dc=cloudron';
var GROUP_ADMINS_DN = 'cn=admins,ou=groups,dc=cloudron';
function start(callback) {
assert.strictEqual(typeof callback, 'function');
gServer = ldap.createServer({ log: gLogger });
gServer.search('ou=users,dc=cloudron', function (req, res, next) {
debug('ldap user search: dn %s, scope %s, filter %s', req.dn.toString(), req.scope, req.filter.toString());
user.list(function (error, result){
if (error) return next(new ldap.OperationsError(error.toString()));
// send user objects
result.forEach(function (entry) {
var dn = ldap.parseDN('cn=' + entry.id + ',ou=users,dc=cloudron');
var groups = [ GROUP_USERS_DN ];
if (entry.admin) groups.push(GROUP_ADMINS_DN);
var tmp = {
dn: dn.toString(),
attributes: {
objectclass: ['user'],
objectcategory: 'person',
cn: entry.id,
uid: entry.id,
mail: entry.email,
displayname: entry.username,
username: entry.username,
samaccountname: entry.username, // to support ActiveDirectory clients
memberof: groups
}
};
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && req.filter.matches(tmp.attributes)) {
res.send(tmp);
}
});
res.end();
});
});
gServer.search('ou=groups,dc=cloudron', function (req, res, next) {
debug('ldap group search: dn %s, scope %s, filter %s', req.dn.toString(), req.scope, req.filter.toString());
user.list(function (error, result){
if (error) return next(new ldap.OperationsError(error.toString()));
var groups = [{
name: 'users',
admin: false
}, {
name: 'admins',
admin: true
}];
groups.forEach(function (group) {
var dn = ldap.parseDN('cn=' + group.name + ',ou=groups,dc=cloudron');
var members = group.admin ? result.filter(function (entry) { return entry.admin; }) : result;
var tmp = {
dn: dn.toString(),
attributes: {
objectclass: ['group'],
cn: group.name,
memberuid: members.map(function(entry) { return entry.id; })
}
};
if ((req.dn.equals(dn) || req.dn.parentOf(dn)) && req.filter.matches(tmp.attributes)) {
res.send(tmp);
}
});
res.end();
});
});
gServer.bind('ou=apps,dc=cloudron', function(req, res, next) {
// TODO: validate password
debug('ldap application bind: %s', req.dn.toString());
res.end();
});
gServer.bind('ou=users,dc=cloudron', function(req, res, next) {
debug('ldap user bind: %s', req.dn.toString());
if (!req.dn.rdns[0].cn) return next(new ldap.NoSuchObjectError(req.dn.toString()));
user.verify(req.dn.rdns[0].cn, req.credentials || '', function (error, result) {
if (error && error.reason === UserError.NOT_FOUND) return next(new ldap.NoSuchObjectError(req.dn.toString()));
if (error && error.reason === UserError.WRONG_PASSWORD) return next(new ldap.InvalidCredentialsError(req.dn.toString()));
if (error) return next(new ldap.OperationsError(error));
res.end();
});
});
gServer.listen(config.get('ldapPort'), callback);
}
function stop(callback) {
assert.strictEqual(typeof callback, 'function');
gServer.close();
callback();
}
@@ -0,0 +1,71 @@
'use strict';
var assert = require('assert'),
debug = require('debug')('box:locker'),
EventEmitter = require('events').EventEmitter,
util = require('util');
function Locker() {
this._operation = null;
this._timestamp = null;
this._watcherId = -1;
this._lockDepth = 0; // recursive locks
}
util.inherits(Locker, EventEmitter);
// these are mutually exclusive operations
Locker.prototype.OP_BOX_UPDATE = 'box_update';
Locker.prototype.OP_FULL_BACKUP = 'full_backup';
Locker.prototype.OP_APPTASK = 'apptask';
Locker.prototype.OP_MIGRATE = 'migrate';
Locker.prototype.lock = function (operation) {
assert.strictEqual(typeof operation, 'string');
if (this._operation !== null) return new Error('Already locked for ' + this._operation);
this._operation = operation;
++this._lockDepth;
this._timestamp = new Date();
var that = this;
this._watcherId = setInterval(function () { debug('Lock unreleased %s', that._operation); }, 1000 * 60 * 5);
debug('Acquired : %s', this._operation);
this.emit('locked', this._operation);
return null;
};
Locker.prototype.recursiveLock = function (operation) {
if (this._operation === operation) {
++this._lockDepth;
debug('Re-acquired : %s Depth : %s', this._operation, this._lockDepth);
return null;
}
return this.lock(operation);
};
Locker.prototype.unlock = function (operation) {
assert.strictEqual(typeof operation, 'string');
if (this._operation !== operation) throw new Error('Mismatched unlock. Current lock is for ' + this._operation); // throw because this is a programming error
if (--this._lockDepth === 0) {
debug('Released : %s', this._operation);
this._operation = null;
this._timestamp = null;
clearInterval(this._watcherId);
this._watcherId = -1;
} else {
debug('Recursive lock released : %s. Depth : %s', this._operation, this._lockDepth);
}
this.emit('unlocked', operation);
return null;
};
exports = module.exports = new Locker();
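The singleton above is meant to be used like this: `lock()` returns an `Error` (rather than throwing) when another operation holds the lock, `recursiveLock()` nests only for the same operation, and `unlock()` pairs off by depth. A condensed re-implementation of just those semantics (events and the watchdog timer omitted) to show the call pattern:

```javascript
// Condensed version of the Locker lock/recursiveLock/unlock semantics above.
function MiniLocker() { this._operation = null; this._lockDepth = 0; }

MiniLocker.prototype.lock = function (op) {
    if (this._operation !== null) return new Error('Already locked for ' + this._operation);
    this._operation = op;
    ++this._lockDepth;
    return null;
};
MiniLocker.prototype.recursiveLock = function (op) {
    if (this._operation === op) { ++this._lockDepth; return null; } // same op nests
    return this.lock(op); // different op fails like a plain lock
};
MiniLocker.prototype.unlock = function (op) {
    if (this._operation !== op) throw new Error('Mismatched unlock'); // programming error
    if (--this._lockDepth === 0) this._operation = null;
};

var locker = new MiniLocker();
console.log(locker.lock('box_update'));               // null (acquired)
console.log(locker.lock('apptask') instanceof Error); // true (busy)
console.log(locker.recursiveLock('box_update'));      // null (depth 2)
locker.unlock('box_update');
locker.unlock('box_update');                          // depth back to 0, released
console.log(locker.lock('apptask'));                  // null (free again)
```

Returning an `Error` from `lock()` lets callers retry or report "busy" gracefully, while a mismatched `unlock()` throws because it can only be a programming error.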
@@ -0,0 +1,19 @@
<%if (format === 'text') { %>
Dear Admin,
The application titled '<%= title %>' that you installed at <%= appFqdn %>
is not responding.
This is most likely a problem in the application. Please report this issue to
support@cloudron.io (by forwarding this email).
You are receiving this email because you are an Admin of the Cloudron at <%= fqdn %>.
Thank you,
Application WatchDog
<% } else { %>
<% } %>