When set, the box will issue an upgrade request instead of an update
request. The intention here is that if the updater is broken then
we can just create a new source tarball without having to create a
new image.
This is just a coding style thing: bash complains if we reuse these
variable names as locals in functions.
Example:
func() {
    local VAR="value" # fails even though it is local, since VAR is readonly
    echo "${VAR}"
}
readonly VAR="deal"
func
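A minimal sketch of the workaround - give locals distinct (here lowercase) names so they never collide with readonly globals:

```shell
func() {
    local var="value"   # lowercase local cannot clash with the readonly global
    echo "${var}"
}
readonly VAR="deal"
func    # prints "value" without tripping over the readonly VAR
```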
Currently only webadmin/error.ejs is templated. gulp html
will generate an error.html based on the values in
deploymentConfig.json.
Currently we also commit the templating output to the relevant
branch (e.g. master for the development branch). This means that
if a templated file was changed, we have to run gulp to regenerate
the output before committing.
Fixes #132
They can be switched using NODE_ENV.
NODE_ENV="cloudron": config.CLOUDRON is true, we are running on a deployed cloudron
NODE_ENV="test": config.TEST is true, we are running unit tests
NODE_ENV="": config.LOCAL is true, we are running locally
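The mode selection can be sketched as a simple case on NODE_ENV (the function name here is hypothetical; the real code sets config flags in JavaScript):

```shell
mode() {
    case "${NODE_ENV:-}" in
        cloudron) echo "CLOUDRON" ;;  # running on a deployed cloudron
        test)     echo "TEST" ;;      # running unit tests
        *)        echo "LOCAL" ;;     # running locally
    esac
}
NODE_ENV=test mode   # prints TEST
```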
postinstall.sh should never use the network in the first place. The
pull exists merely for dev convenience, so that we can test those images
using a forced push without having to build a new base image.
tar --list --verbose --file=box.tar now shows:
drwxr-xr-x girishra/staff 0 2015-01-17 16:57 ./
Without this change, mktemp was creating directories with no read or
execute permission for group and others. This meant that nginx, which
runs as www-user, was unable to access the website inside the box code.
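The fix can be sketched like this (the 755 mode is an assumption; the point is that mkdtemp() always creates the directory mode 700):

```shell
dir=$(mktemp -d)      # mkdtemp() creates the directory with mode 700
stat -c '%a' "$dir"   # 700: group/others cannot read or traverse
chmod 755 "$dir"      # let nginx (running as www-user) read and traverse
stat -c '%a' "$dir"   # 755
```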
This is primarily for documentation/readability. We are able to
tell the version of the installer by looking at the version file (instead
of an image id, which tells us nothing).
Fixes #120
The installed version is different from package.json because the version
is bumped for plain image changes as well
The updater is now simplified to only allow updates when there is a change
in versions file.
It takes arguments --image <image_id> or --code <source_code_url>
image_id is generated using scripts/createDigitalOceanImage.sh
source_code_url is generated using scripts/createSourceTarball.sh
The install server is now always started by the init script.
When started up, it determines its mode based on the existence
of the box srcdir. If it does not exist, it starts an externally
listening provision/restore server. Once the appstore provisions
the box, it switches to update mode.
If the box srcdir does exist, the installer starts out in update mode.
In update mode, the server listens on localhost:2020. In this
mode, the web interface can ask it to update the box.
Fixes #115
* The base image contains only installer code. Installer code
can only be changed with a base image change
* The box code is downloaded from S3 instead of git. The S3 tarball
already contains the node dependencies
Part of #115
bundle creates a tarball out of the box source code, with dependencies,
and uploads it to S3
It uses s3-cli which requires a file ~/.s3cfg like so:
[default]
access_key = AKIAJ3GNZ2C7W5XKAH7Q
secret_key = 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
This reuses the 'regions' argument to transfer the
new image to the list specified. The first entry is the
region, where the temporary droplet is being created.
e.g. --regions="ams3 nyc2 sfo1"
The image droplet is created in Amsterdam.
The script does not yet wait for all images to be
fully transferred.
getopts - a bash built-in that supports only the short form
getopt - a separate program that comes in two variants, GNU and BSD. The BSD one
does not support long options.
Also, move the deletion of containers into the installer script.
This is the place where supervisor is stopped, so it's a good idea
to stop the apps and containers there as well.
The actual container removal happens in postinstall because the
postinstall can decide to reuse apps if it wants
Doing a db migration for a column rename is too complex in sqlite
(you have to create a new table and copy things over), so the initial
schema itself is modified.
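A sketch of the rename dance sqlite forces on you (table and column names here are hypothetical; sqlite3 CLI assumed available):

```shell
db=/tmp/rename-demo.db
rm -f "$db"
sqlite3 "$db" <<'EOF'
CREATE TABLE apps (id TEXT, statusCode TEXT);
INSERT INTO apps VALUES ('app1', 'running');
-- sqlite has no ALTER TABLE ... RENAME COLUMN (added only in 3.25),
-- so create a new table with the new column name and copy things over
CREATE TABLE apps_new (id TEXT, installationState TEXT);
INSERT INTO apps_new SELECT id, statusCode FROM apps;
DROP TABLE apps;
ALTER TABLE apps_new RENAME TO apps;
EOF
sqlite3 "$db" 'SELECT installationState FROM apps;'   # running
```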
From the docs:
AUTO log files and their backups will be deleted when supervisord restarts.
This has the result that log files disappear after every update
hostname - name of an actual addressable thing on the internet
domain name - the name of a network
root domain - this is actually "." or empty; the topmost thing
tld, gtld - the thing under the root domain which has an entry in the root servers - com, co.uk etc
fqdn - this is really the fully qualified hostname, i.e. the full name of a host
sub domain - refers to a network rather than a hostname. example.foo.com is a sub domain iff it's a network; otherwise it would be a hostname. foo.com is a subdomain of com.
zone - a concept of DNS where entries under a sub domain are delegated to a nameserver.
The installer is run in provision mode by the init script
The installer is run in update mode by supervisor, via postinstall
Previously, we used to run the *same* installer code, which was
part of the base image, in both modes. However, after a reboot,
the old installer code is 'gone' and thus we start running the
newer installer code. This distinction is very subtle and this change
makes it more obvious.
If we want the same installer code to run in both modes, then we
really need to split out installer into a separate repository. This
can be done if required later.
This also makes it clear that announce is a feature of the provision
mode.
The only reason not to use the metadata completely for provisioning
is because many VPS providers do not provide it.
If we rely on a metadata API, we can pretty much remove the installer
server
express ensures that req.body is { } for invalid json.
If not, we would have to check assert(body && typeof body === 'object'),
because typeof null is 'object'.
Express uses res.status. HttpSuccess and HttpError now use the same property
instead of statusCode
HttpError now requires an error object or string as second argument
HttpError now marks all calls with error object as internal errors
Everything else should have a proper message anyway (giving proper
contextual information). This way we don't leak random app information
through our REST API.
HttpSuccess has optional body
204 does not require a body
1) getopt needs a short-form -o.
2) We need to break out when $1 becomes null.
3) Remove unnecessary semicolons.
4) Some magic incantation to pass all args to the postinstall script.
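A hedged sketch of GNU getopt usage covering points 1, 2 and 4 (the option names are borrowed from the installer description above; the function name is hypothetical):

```shell
parse_args() {
    local args image="" code=""
    # 1) getopt insists on a short-option spec, even an empty one
    args=$(getopt -o '' --long image:,code: -- "$@") || return 1
    eval set -- "$args"
    # 2) break out when $1 becomes null
    while [ -n "${1:-}" ]; do
        case "$1" in
            --image) image="$2"; shift 2 ;;
            --code)  code="$2";  shift 2 ;;
            --)      shift; break ;;
        esac
    done
    # 4) "$@" now holds everything left over, ready to hand to postinstall.sh
    echo "image=$image code=$code rest=$*"
}
parse_args --image 12345 --code http://example.com/box.tar.gz extra
```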
There is no telling how long the install script takes because
DO networking is so flaky.
This also allows appstore to have a short timeout for the provision
call.
The previous provisioning scheme had issues with updates. Because
configuration was already part of the base bootstrap, providing an
update meant creating a new image.
The key insight in this new provisioning scheme is to treat config
files used by our code as something that can always be regenerated
on demand. Every update kills the config and recreates it all over.
Current flow is thus:
1. bootstrap init code starts up install/server.js. This server merely
listens for provision and restore calls.
2. The installer calls install.sh. This script simply checks out the
requested revision. Note that the installer is from what is in the
base image. Changing the installer requires a new base image. If a
restore url is provided, this downloads the restore data.
3. install.sh calls postinstall.sh of the requested revision.
It sets up the code by calling npm install, migrates any data and creates
configs - collectd, graphite, nginx etc. This also creates cloudron.conf.
Because postinstall.sh is from the requested revision, all the data and
configs are based on the requested revision.
Note that installation of new packages should be done at base image creation
time.
The changes also provide separation of announce and heartbeat calls:
- announce is for cloudron coming up and installer running
- heartbeat is for box server running
TODO:
appstore url is only part of the image because the installer needs to announce.
This can be fixed by moving to user metadata
Fixes #110
When using curl -T, the content-type header is not set and the upload
works fine. When using --data-binary, curl sets it to application/x-www-form-urlencoded,
which s3 doesn't like.
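The difference can be sketched like this (the bucket URL is hypothetical):

```shell
# -T streams the file and sets no Content-Type header; S3 accepts this
curl -T box.tar.gz https://bucket.s3.amazonaws.com/box.tar.gz

# --data-binary defaults to Content-Type: application/x-www-form-urlencoded,
# which S3 rejects; it would need an explicit override:
curl --data-binary @box.tar.gz \
    -H 'Content-Type: application/octet-stream' \
    https://bucket.s3.amazonaws.com/box.tar.gz
```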
The postinstall script is run automatically after npm install ends.
This creates unnecessary confusion and we want to have more
control over when exactly the migration happens.
The docker data as well as the user (yellowtent) home is now btrfs.
This will greatly help us with backups. We simply take a btrfs
snapshot and back that up. This way we don't need to stop all the
containers, and it simulates the same thing as a power outage.
Part of #108
config.js is now meant for instance level data, i.e. data for a specific
cloudron instance. The sqlite database is meant for data that is
needed across restores.
This is attempting to fix
Thu, 09 Oct 2014 06:30:22 GMT box:apptask Apptask completed for 326e4abd-aa72-4069-abf3-238757b3167d { [Error: HTTP code is 500 which indicates error: server error - Cannot destroy container 4524791d0efe279ce556a6c10c484bd09e5163f4279a5de84fa30836be49ebf4: Unable to remove filesystem for 4524791d0efe279ce556a6c10c484bd09e5163f4279a5de84fa30836be49ebf4: remove /var/lib/docker/containers/4524791d0efe279ce556a6c10c484bd09e5163f4279a5de84fa30836be49ebf4: directory not empty
See https://github.com/docker/docker/issues/8203
This change firewalls everything except the internal bridge. An upcoming
change should disable icc as well but that requires us to link all the
apps with the mail container.
Part of #59
Stolen from an SO article (https://stackoverflow.com/questions/321299/what-is-the-reason-not-to-use-select)
If you specify columns in a SQL statement, the SQL execution engine will error if a column is removed from the table and the query is executed.
You can more easily scan code to see where a column is being used.
You should always write queries to bring back the least amount of information.
As others mention, if you use ordinal column access you should never use select *.
If your SQL statement joins tables, select * gives you all columns from all tables in the join.
The default restart policy for containers is "no". As a result,
the service containers (graphite, haraka) are not started on
system reboot.
The apps themselves are resumed by apps.js. That code also handles
resumption of app tasks should the box have crashed/rebooted midway.
This code could have been placed in a supervisor script or an init
script. init script means tighter integration into the system OS
which we want to avoid. supervisor script could be done at some point
should we need a more sophisticated "pre-start" script.
Fixes #98
The hard lesson is that SSE sucks.
*) The cross-origin policy is not exactly well defined but this doesn't affect us
*) There is no way to set Authorization header and we have to rely on cookies
*) Have to use a polyfill to make it work on IE
Also
*) timeout module does not close if headers are sent. So, it never closes SSE
*) nginx buffers responses until completion, so we have to disable that with X-Accel-Buffering header
The auth header basically means we have to move back to long polling or support
special auth just for SSE :-(
All tokens are still given out with the
wildcard scope. The syncer and file related
routes don't have scopes, and neither do the
firsttime and provisioning routes.
We now have the following scopes so far:
- root /api/v1/..
- profile /api/v1/profile
- users /api/v1/users/..
- apps /api/v1/apps/..
- settings /api/v1/settings/..
Each token can carry a comma separated list
of scopes the token is good for.
This happens when the cloudron itself rebooted and the
appstore had no clue of that. But in case the appstore
triggered the reboot, the cloudron will announce itself
and thus conclude a successful reboot.
You get errors in the shell like below:
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
job-working-directory: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
This is probably the reason why the json binary didn't work
The apphealthtask basically keeps the database and the docker app
state in sync. But it should not overwrite any of the *_PENDING_*
commands, which are commands from the REST API to the apptask.
Follow the naming convention as used in provisioning by appstore:
host.cert = public cert
host.info = textual output
host.key = private key
host.pem = key file and cert file combined into one
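So host.pem is just the two files concatenated (the key-then-cert order and the dummy contents here are assumptions for illustration):

```shell
# dummy stand-ins for the real key/cert material
printf 'PRIVATE KEY\n' > host.key
printf 'CERT\n' > host.cert
# combined into one file, per the naming convention above
cat host.key host.cert > host.pem
```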
The non provisioned boxes will always be configured
to work on localhost. This includes the webadmin OAuth
records. After the cloudron gets provisioned, the FQDN
will be set to the correct value, which requires regeneration
of the OAuth client records.
APP_ORIGIN
The origin of the app. Like https://foo-box.domain.com
The box cannot use os.hostname(). This is because docker sets up
the container's hostname() to be just the host name (and not FQDN)
and node uses gethostbyname(). This is simply a convenience instead
of executing hostname -f.
ADMIN_ORIGIN
The origin of the box admin. Like https://admin-box.domain.com
OAUTH_CLIENT_ID
OAUTH_CLIENT_SECRET
Self explanatory oauth variables
Part of #47
X-Forwarded-For gives the address of the client which connected to the proxy
X-Forwarded-Port gives the port the client connected to on the proxy (e.g. 80 or 443)
X-Forwarded-Proto gives the protocol the client used to connect to the proxy (http or https)
X-Forwarded-Host gives the content of the Host header the client sent to the proxy.
I am not sure if rewriting Host header to $host is a good idea.
We used to have a yellowtent.conf previously. This change moves back to
the old pattern except the file is called cloudron.conf.
- naked_domain hack is not needed in config.js
- We can load the config.js synchronously and without db being initialized
- Works better with multiple processes. Initially, I thought I could just pass the config as
command line or child_process.send() and use the config in apptask. But apptask
uses database code, which uses config.js. This means that we either pass
config around everywhere or we have a config.load() which is asynchronous.
Eventually: we remove settingsdb as well
Probably fixes #50
Previously we were linking the supervisor config
directory into /etc/supervisor and using template manifests,
which got changed during bootstrapping. To avoid
changing files in the git repository and harder-to-follow
'sed's, bootstrap.sh now simply creates those files
directly with appropriate values.
This also eliminates yellowtent.json, as those values
are now passed via process.env, set in the supervisor
manifest for the box
Treat config.js as a global object
BASE_DIR env is set in the config.js so that child processes
(like apphealthtask, apptask) pick up the correct baseDir
Instead of executing the bootstrap script via ssh,
we now run it once the box boots the first time.
It will update the repo and run the box code, which
needs to contact the appstore in order to get provisioned.
Instead of
appname.username.cloudron.us
we now use
appname-username.cloudron.us
This allows us to get a wildcard certificate for cloudron.us and the
ssl stuff will work.
The motivation for appname-username instead of username-appname is primarily
UI at this point. We can show a suffix easily on the right side of a line edit.
Fixes #49
As this script is part of the box repo, it is actually
not intended to be run from here, but we need to store it
somewhere for now. The idea is to copy it to a pristine
ubuntu droplet, run it and then create a new base image for
cloudrons off of that droplet.
This change makes it clear that we are really after the fqdn and not the hostname.
The code has been working only because Digital Ocean sets the hostname
to be the FQDN.
What I learnt about hostnames
-----------------------------
The kernel has get/sethostname() and get/setdomainname() system calls.
There are restrictions on what can be set as the hostname. init scripts
usually set the contents of /etc/hostname. How the hostname is set up depends
on the distribution - it could be the simple name OR the fqdn. CentOS,
for example, puts the FQDN and ubuntu puts the simple name.
DigitalOcean puts the name of the box in /etc/hostname. So far this has
worked in our favor because os.hostname() which uses gethostname() gave
us the FQDN.
Docker sets only the simple name in /etc/hostname but sets up the
OS host/domain name correctly. This means os.hostname() does not provide
the FQDN in docker. Altering it is not possible because it requires the
SYS_ADMIN cap, which containers don't have (unless --privileged).
Also, hostname -f first does gethostname(), then does a DNS lookup
using getaddrinfo() or the deprecated gethostbyname(). Using that IP,
it does a reverse lookup.
The DNS system itself uses nsswitch.conf to determine the lookup order.
The first entry in the /etc/hosts file is taken as the domain entry for
reverse lookups.
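The distinction can be checked directly (output is machine-dependent; on a glibc system the resolver order lives in /etc/nsswitch.conf):

```shell
hostname        # simple name, from gethostname()
hostname -f     # FQDN: gethostname(), then a lookup via getaddrinfo()
grep '^hosts:' /etc/nsswitch.conf   # resolver lookup order
```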
By default, we have the naked domain point to admin. This is a redirect
because the auth code always redirects to admin.*. For apps, we won't
do a redirect.
The state machine was written to handle restarts. This behavior,
while awesome, is unnecessary and complicates the code. Simplify by
just using an async.series().
If an app had exited, we only attempt to start the container, since
the volume should already exist from a previous run.
Not sure if the healthtask should trigger a restart of the app or
similar automatically. Currently, this is only done for server restart.
I don't think it matters if we do subdomain registration first or
nginx first. Nginx first has the mild advantage that we are sure
that nginx is ready to show status page before the dns is done
propagating.
Initially I tried moving the code to a proper task system like que.
However, I could not figure out how to make task cancellation work.
For example, if an app is installing and user cancels it, we want
to kill the installation task. Que does not provide any way to manage
processes. Secondly, there is no way to attach ids to tasks. The id
is autogenerated (an incremented number), which means I cannot get at the
task by appid.
Current design: spin out apptask processes for every app's install/uninstall.
An install/uninstall kills any existing apptask process associated with
that task.
apphealth check is a separate process. It is always running and it just
pings all running/dead apps.
Since for now we do not have special scoping for
access tokens, presenting a decision dialog to the
user and letting him approve does not make much sense.
This commit will skip this step, without interrupting
the normal OAuth flow.
Supervisor does not spawn a shell to run the application,
thus the process environment of supervisord will be inherited.
This can be overridden with 'environment=...' in the config file
Host-mounted volumes contain files that are owned by user ids of the container.
However, these values leak through to the host. As a result, one needs to
be the root user to remove the appdata directory.
Initially, a setuid shell script was planned. However, most unixes don't
allow setuid on shell scripts.
We use sudo instead. A special sudoers file needs to exist. For example,
in /etc/sudoers.d/rmappdir.sh, add
USERNAME host = (root) NOPASSWD: BOXDIR/src/rmappdir.sh
For some reason making ng-href an expression doesn't work.
ng-href="https://{{ app.location + '.' + window.location.host }}"
window.location.host is always empty :(
The main nginx conf automatically redirects all http to https. This
means that apps need to be doing TLS on their vhost configs as well.
Currently, they all share the same cert and key. This works only because
the cert and key are wildcard certificates.
Leave it to the app to copy its data into /app/data as needed.
There are many advantages to this approach:
1. The app won't rely needlessly on some initial data in /app/data.
This means we can clear the data dir should the user ask to and the
app will work as expected.
2. box code will now just work on the Mac :)
It turns out that mounting a host path in /app/data will shadow
the contents of the container /app/data. I expected docker to
copy all files from /app/data of container into the volume.
Since docker doesn't do this for us, we have to copy over the
contents ourselves before we start the container.
https://github.com/dotcloud/docker/issues/1992
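The copy-over can be sketched like this (the image name and host directory are hypothetical; a running docker daemon is assumed):

```shell
# seed the host directory from the image's /app/data before the real run,
# since mounting would otherwise shadow the image's contents
docker run --rm my-app tar -cC /app/data . | tar -xC /var/lib/appdata/my-app
# now the host mount no longer shadows anything useful
docker run -d -v /var/lib/appdata/my-app:/app/data my-app
```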
Without this change, every app that has any 'initial' config
will have to basically write a setup script which has to run
on startup.
This change is a little untested because volume mounting cannot
be tested on the Mac through boot2docker.
Notes:
1. The VOLUME cmd in a Dockerfile merely exposes the directory to
be available for other containers to mount as volumes (i.e. they
will be available for --volumes-from)
Initially there was a plan to integrate this with the volume code but
that is a little complicated. The apptask runs in spirit as a separate
process. This means that we need to figure out a way to send it the username
and password to create volumes. A workaround might be to create the volumes
in the routes code instead. However, the uninstall still needs to happen
in apptask code which makes it asymmetric.
Lastly, we need to ensure that app id does not clash with user volumes.
Create a directory called views that contains the controller and
the partials together. These two are tightly tied together anyway.
Can't see any point in grouping things based on programming
patterns (it might make sense to group things based on function
as our code becomes bigger)
One of the motivations for this is to allow the node code to restart
nginx once the nginx config files have been written out. Without
supervisor, the app code needs to be root. With supervisor, we can
just ask supervisor to restart it for us (it helpfully listens
for commands on tcp port 9001).
The supervisor configs are crude and need to somehow use environment
variables for log file paths and such. %(ENV)s format is supposed to
work but doesn't.
supervisor is not a daemon because it's easy to start/stop it through
run.sh (and Ctrl+C works nicely during development). We can possibly
run it as a daemon once the supervisor configs are more stable.
nginx is not a daemon anymore because it is run through supervisor.
supervisorctl seems to magically connect to supervisor even without
passing it -c <conf> file. Not sure how that works.
One option is to store the salt and the public and private keys as binary BLOBs.
However, these can be cumbersome to view in database viewers and are
a pain should we ever consider switching databases. Strings are the
most portable.
So, save all buffers as hex-encoded strings in the database.
tokendb is migrated for the moment
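The box code does this on node Buffers; the same idea, sketched with plain coreutils on a dummy value:

```shell
# encode arbitrary bytes as a portable hex string
hex=$(printf 'salt' | od -An -tx1 | tr -d ' \n')
echo "$hex"   # 73616c74
```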
DatabaseError class needs to be fixed to have a nicer API. It's
very error prone to have the reason as the second argument!
AFAIK, the convention seems to be to follow the language
convention. So, camel case won out over underscores
since we use camel case in our code.
From the docs:
"app.router has been removed and middleware and routes are executed in the
order they are added. Your code should move any calls to app.use that came
after app.use(app.router) after any routes (HTTP verbs)."
syncer code is now considered core functionality. There were some
problems with making the syncer code an app.
syncer needs exclusive access to volumes. The volumes have a specific
directory format (repo/) and the syncer cannot know if random
apps write to the volume outside its REST API. This means that
if syncer were just an app, syncer's volumes cannot be shared
with other apps. This would make simple cases like gallery app
modifying pictures impossible. Additionally, docker was supposed
to be used for mounting all the user's volumes into the app.
However, docker does not have a way to add additional mounts on
running containers (minor issue). It would also require to run
an app instance per user.
The new strategy is to make syncer a filesystem API. All read/write
in the system goes through syncer's REST API. This means that we
need only one app instance for multiple users. Volumes don't need
to be mounted, since the app just uses REST calls.
The sync server does not call into the volume server at all. It is
assumed that all the volumes that the sync app has access to will
be mounted in the container in the mountRoot.
I tried a couple of hacks before the magic file approach.
1. First determine the empty tree
git hash-object -t tree /dev/null gives 4b825dc642
Then,
git update-index --index-info
040000 tree 4b825dc642 path/to/empty-folder
Sadly, git converts the above to a submodule when committing :-(
2. The next idea was to repurpose symlinks to denote empty dir.
While this idea could work, it requires the additional work of removing
the symlink before creating the directory.
The current approach is to use a magic file inside empty dir (like .gitignore).
The magic file is filtered when we read from index and when we read tree from
the object store.
Being optimistic about the quality of our server here :) If a request
timed out, we assume the client is taking too long to complete its request.
In the future, we can make this more sophisticated by checking whether it's
the server or the client that is taking the time.
This adds basic configuration for 80 and 443
nginx can be run with:
./run.sh
and requires root access to bind to the ports.
Basic routes are specified in nginx/server.routes
Additional application routes can be put into the
nginx/applications/ subfolder.
Until npm 1.4.4, please remove your node_modules
subfolder and run npm install --production and then npm shrinkwrap.
Or if just adding a new non dev dependency:
npm i foo --save
npm shrinkwrap
Returning arrays is considered bad practice. This is because the Array
constructor can be overridden, which makes the REST api prone to
cross-site scripting attacks. For example, someone can put a REST call in
a <script src="someothersite/api/v1/list.js"> tag and the browser
will promptly send cookies. Even though the result is JSON, the
result can be captured if you overload the array constructor!
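For example, a list endpoint would respond with an object wrapper rather than a bare array (the field names are illustrative):

```
// vulnerable: a bare top-level array is a valid <script> payload
[ { "id": "app1" } ]

// safer: wrapped in an object, which is a syntax error as a script
{ "apps": [ { "id": "app1" } ] }
```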
This adds a load of unit tests around volume user management
and introduces the password changing capabilities. The API is
still a bit ugly but should work now at least.
The coding style is:
- Use and test for null for object properties
- Use 'delete' only for hashes
- Use 'undefined' only when testing for a key in a hash
My intention was to set it to 512mb but the file I have is 514mb.
So, I chose a prime number above 514.
For some reason, instead of a 413 HTTP Error, the connection gets
reset when uploading large files. Needs investigation.
There is now a /v1/firsttime GET call to check in which mode
the device currently is. The convenience redirection to firsttime.html
for the webadmin, which was done for every request is now gone.
Also add a basic test to verify the server can start and stop.
We still have an issue with api/server/server-test.js using the
regular ~/.yellowtent/ folders, thus tests might break if this directory
is used.
This makes sure we can handle files with spaces and funny characters
like brackets without shell-escaping.
Using exec has security implications since a filename could be 'foo && rm -rf /'
The style is thus:
- Required arguments are listed as proper arguments
- Optional arguments are in options
- The options object is optional and can be skipped
This commit also changes the initial file in volume from README to README.md.
Side note: By default, superagent only buffers responses for text/* and form data.
So when sending across README, the file is just an octet-stream and the response
is not received in res.text. This can be fixed by calling buffer(true) in the superagent
request. Renaming the file to README.md side steps this problem because .md files
have the mime type text/x-markdown.
It pollutes the source directory with stuff and git clean -dxf nuked
all my old setup :( Don't want that to happen again.
Also create the paths synchronously, otherwise this ends up with a
race with the db.initialize() code.
- The server now has a configRoot, dataRoot and mountRoot with
defaults, so no startup argument is needed
* configRoot: for server config like the user db
* dataRoot: actual encrypted container storage
* mountRoot: mount point folder for mounted encfs volumes
- First time use is now always detected and the browser gets
redirected by the server
By default, this addon provides a single database on MySQL 5.6.19. The database is already created and the application
only needs to create the tables.
Exported environment variables:
```
MYSQL_URL= # the mysql url (only set when using a single database, see below)
MYSQL_USERNAME= # username
MYSQL_PASSWORD= # password
MYSQL_HOST= # server IP/hostname
MYSQL_PORT= # server port
MYSQL_DATABASE= # database name (only set when using a single database, see below)
```
For debugging, [cloudron exec](https://www.npmjs.com/package/cloudron) can be used to run the `mysql` client within the context of the app:
```
cloudron exec
> mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} --host=${MYSQL_HOST} ${MYSQL_DATABASE}
```
The `multipleDatabases` option can be set to `true` if the app requires more than one database. When enabled,
the following environment variables are injected:
```
MYSQL_DATABASE_PREFIX= # prefix to use to create databases
```
## oauth
The Cloudron OAuth 2.0 provider can be used in an app to implement Single Sign-On.
Exported environment variables:
```
OAUTH_CLIENT_ID= # client id
OAUTH_CLIENT_SECRET= # client secret
```
The callback url required for the OAuth transaction can be constructed from the environment variables below:
```
APP_DOMAIN= # hostname of the app
APP_ORIGIN= # origin of the app of the form https://domain
API_ORIGIN= # origin of the OAuth provider of the form https://my-cloudrondomain
```
OAuth2 URLs can be constructed as follows:
```
AuthorizationURL = ${API_ORIGIN}/api/v1/oauth/dialog/authorize # see above for API_ORIGIN
TokenURL = ${API_ORIGIN}/api/v1/oauth/token
```
The token obtained via OAuth has a restricted scope: it can only access the [profile API](/references/api.html#profile). This restriction
is so that apps cannot make undesired changes to the user's Cloudron.
We currently provide OAuth2 integration for Ruby [omniauth](https://git.cloudron.io/cloudron/omniauth-cloudron) and Node.js [passport](https://git.cloudron.io/cloudron/passport-cloudron).
## postgresql
By default, this addon provides PostgreSQL 9.4.4.
Exported environment variables:
```
POSTGRESQL_URL= # the postgresql url
POSTGRESQL_USERNAME= # username
POSTGRESQL_PASSWORD= # password
POSTGRESQL_HOST= # server name
POSTGRESQL_PORT= # server port
POSTGRESQL_DATABASE= # database name
```
The postgresql addon whitelists the hstore and pg_trgm extensions to be installable by the database owner.
For debugging, [cloudron exec](https://www.npmjs.com/package/cloudron) can be used to run the `psql` client within the context of the app:
The `memoryLimit` field is the maximum amount of memory (including swap) in bytes an app is allowed to consume before it
gets killed and restarted.
By default, all apps have a memoryLimit of 256MB. For example, to have a limit of 500MB,
```
"memoryLimit": 524288000
```
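That value is just 500 * 1024 * 1024 bytes, which is easy to check:

```shell
echo $((500 * 1024 * 1024))   # 524288000
```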
## maxBoxVersion
Type: semver string
Required: no
The `maxBoxVersion` field is the maximum box version that the app can possibly run on. Attempting to install the app on
a box version greater than `maxBoxVersion` will fail.
This is useful when a new box release introduces features which are incompatible with the app. This situation is quite
unlikely and it is recommended to leave this unset.
## minBoxVersion
Type: semver string
Required: no
The `minBoxVersion` field is the minimum box version that the app can possibly run on. Attempting to install the app on
a box version lower than `minBoxVersion` will fail.
This is useful when the app relies on features that are only available from a certain version of the box. If unset, the
default value is `0.0.1`.
## postInstallMessage
Type: markdown string
Required: no
The `postInstallMessage` field is a message that is displayed to the user after an app is installed.
The intended use of this field is to display post installation steps that the user has to carry out to
complete the installation. For example, displaying the default admin credentials and informing the user
to change them.
The message can have the following special tags:
* `<sso> ... </sso>` - Content in `sso` blocks is shown when SSO is enabled.
* `<nosso> ... </nosso>` - Content in `nosso` blocks is shown when SSO is disabled.
## optionalSso
Type: boolean
Required: no
The `optionalSso` field can be set to true for apps that can be installed optionally without using the Cloudron user management.
This only applies if any Cloudron auth related addons are used. When set, the Cloudron will not inject the auth related addon environment variables.
Any app startup scripts have to be able to deal with missing env variables in this case.
## tagline
Type: one-line string
Required: no (required for submitting to the Cloudron Store)
The `tagline` is used by the Cloudron Store to display a single line short description of the application.
```
"tagline": "The very best note keeper"
```
## tags
Type: Array of strings
Required: no (required for submitting to the Cloudron Store)
The `tags` are used by the Cloudron Store for filtering searches by keyword.
```
"tags": [ "git", "version control", "scm" ]
```
## targetBoxVersion
Type: semver string
Required: no
The `targetBoxVersion` field is the box version that the app was tested on. By definition, this version has to be greater
than the `minBoxVersion`.
The box uses this value to enable compatibility behavior of APIs. For example, an app sets the targetBoxVersion to 0.0.5
and is published on the store. Later, box version 0.0.10 introduces a new feature that conflicts with how apps used
to run in 0.0.5 (say SELinux was enabled for apps). When the box runs such an app, it ensures compatible behavior
and will disable the SELinux feature for the app.
If unspecified, this value defaults to `minBoxVersion`.
## tcpPorts
Type: object
Required: no
Syntax: Each key is the environment variable. Each value is an object containing `title`, `description` and `defaultValue`.
An optional `containerPort` may be specified.
The `tcpPorts` field provides information on the non-HTTP TCP ports/services that your application is listening on. During
installation, the user can decide how these ports are exposed from their Cloudron.
For example, if the application runs an SSH server at port 29418, this information is listed here. At installation time,
the user can decide any of the following:
* Expose the port with the suggested `defaultValue` to the outside world. This will only work if no other app is being exposed at the same port.
* Provide an alternate value on which the port is to be exposed to the outside world.
* Disable the port/service.
To illustrate, the application lists the ports as below:
```
"tcpPorts": {
"SSH_PORT": {
"title": "SSH Port",
"description": "SSH Port over which repos can be pushed & pulled",
"defaultValue": 29418,
"containerPort": 22
}
},
```
In the above example:
* `SSH_PORT` is an app specific environment variable. Only letters, numbers and _ (underscore) are allowed. The author has to ensure that it does not clash with platform provided variable names.
* `title` is a short one line description of this port/service.
* `description` is a multi line description of this port/service.
* `defaultValue` is the recommended port value to be shown in the app installation UI.
* `containerPort` is the port that the app is listening on (recall that each app has its own networking namespace).
In more detail:
* If the user decides to disable the SSH service, the environment variable `SSH_PORT` is absent. Applications _must_ detect this on
start up and disable these services.
* `SSH_PORT` is set to the value of the exposed port. Should the user choose to expose the SSH server on port 6000, then the
value of `SSH_PORT` is 6000.
* `defaultValue` is **only** used for display purposes in the app installation UI. This value is independent of the value
that the app is listening on. For example, the app can run an SSH server at port 22 but still recommend a value of 29418 to the user.
* `containerPort` is the port that the app is listening on. The Cloudron runtime will _bridge_ the user chosen external port
with the app specific `containerPort`. Cloudron apps are containerized and each app has its own networking namespace.
As a result, different apps can have the same `containerPort` value because these values are namespaced.
* The environment variable `SSH_PORT` may be used by the app to display external URLs. For example, the app might want to display
the SSH URL. In such a case, it would be incorrect to use the `containerPort` 22 or the `defaultValue` 29418 since these are not
the values chosen by the user.
* `containerPort` is optional and can be omitted, in which case the bridged port numbers are the same internally and externally.
Some apps use the same variable (in their code) for listen port and user visible display strings. When packaging these apps,
it might be simpler to listen on `SSH_PORT` internally. In such cases, the app can omit the `containerPort` value and should
instead reconfigure itself to listen internally on `SSH_PORT` on each start up.
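Putting this together, a start.sh fragment handling the optional `SSH_PORT` might look like this sketch (the sshd invocation is illustrative and commented out):

```shell
# Detect whether the user enabled the SSH service (SSH_PORT from the
# tcpPorts example above); the sshd command below is illustrative
if [ -z "${SSH_PORT:-}" ]; then
    SSH_STATUS="disabled"               # user disabled the port; skip the service
else
    SSH_STATUS="enabled on ${SSH_PORT}"
    # /usr/sbin/sshd -D &               # listens on containerPort 22 internally
fi
echo "SSH service ${SSH_STATUS}"
```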
## title
Type: string
Required: yes
The `title` is the primary application title displayed on the Cloudron Store.
Example:
```
"title": "Gitlab"
```
## version
Type: semver string
Required: yes
The `version` field specifies a [semver](http://semver.org/) string. The version is used by the Cloudron to compare versions and to
determine if an update is available.
Example:
```
"version": "1.1.0"
```
## website
Type: url
Required: yes
The `website` field is a URL where the user can read more about the application.
`nginx` is often used as a reverse proxy in front of the application, to dispatch to different backend programs based on the request route or other characteristics. In such a case it is recommended to run nginx and the application through a process manager like `supervisor`.
The nginx configuration, provided with the base image, can be used by adding an application specific config file under `/etc/nginx/sites-enabled/` when building the docker image.
Since the base image nginx configuration is unpatched from the Ubuntu package, the application configuration has to ensure nginx uses `/run/` instead of `/var/lib/nginx/` to support the read-only filesystem nature of a Cloudron application.
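As an illustration, an application specific site configuration might look like the following sketch (routes, ports and paths are made up for the example):

```nginx
# /etc/nginx/sites-enabled/app.conf (illustrative)
server {
    listen 8000;

    # dispatch API requests to the backend program
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }

    # serve static files directly
    location / {
        root /app/code/public;
    }
}
```

Directives that relocate nginx's runtime files, such as `pid /run/nginx.pid;`, belong in the main nginx configuration rather than the site configuration.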
First create a new bucket for the backups, using the minio command line tools or the web interface. The bucket has to have **read and write** permissions.
The information to be copied to the Cloudron's backup settings form may look similar to:
The `Encryption key` is an arbitrary passphrase used to encrypt the backups. Keep the passphrase safe; it is
required to decrypt the backups when restoring the Cloudron.
# Email
Cloudron has a built-in email server. By default, it only sends out email on behalf of apps
(for example, password reset or notification). You can enable the email server for sending
and receiving mail on the `settings` page. This feature is only available if you have setup
a DNS provider like Digital Ocean or Route53.
Your server's IP plays a big role in how emails from your Cloudron get handled. Spammers
frequently abuse public IP addresses and as a result your Cloudron might start
out with a bad reputation. The good news is that most IP based blacklisting services cool
down over time. The Cloudron sets up DNS entries for SPF, DKIM, DMARC automatically and
reputation should be easy to get back.
## Checklist
* If you are unable to receive mail, the first thing to check is whether your VPS provider lets you
receive mail on port 25.
* Digital Ocean - New accounts frequently have port 25 blocked. Write to their support to
unblock your server.
* EC2, Lightsail & Scaleway - Edit your security group to allow email.
* Set up a reverse DNS PTR record for the `my` subdomain.
**Note:** PTR records are a feature of your VPS provider and not your domain provider.
* You can verify the PTR record [here](https://mxtoolbox.com/ReverseLookup.aspx).
* AWS EC2 & Lightsail - Fill the [PTR request form](https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request).
* Digital Ocean - Digital Ocean sets up a PTR record based on the droplet's name. So, simply rename
your droplet to `my.<domain>`. Note that some new Digital Ocean accounts have [port 25 blocked](https://www.digitalocean.com/community/questions/port-25-smtp-external-access).
* Linode - Follow this [guide](https://www.linode.com/docs/networking/dns/setting-reverse-dns).
* Scaleway - Edit your security group to allow email and [reboot the server](https://community.online.net/t/security-group-not-working/2096) for the change to take effect. You can also set a PTR record on the interface with your `my.<domain>`.
* Check if your IP is listed in any DNSBL list [here](http://multirbl.valli.org/) and [here](http://www.blk.mx).
In most cases, you can apply for removal of your IP by filling out a form at the DNSBL manager site.
* When using the wildcard or manual DNS backends, you have to set up the DMARC and MX records manually.
* Finally, check your spam score at [mail-tester.com](https://www.mail-tester.com/). The Cloudron
should get 100%; if not, please let us know.
# CLI Tool
The [Cloudron tool](https://git.cloudron.io/cloudron/cloudron-cli) is useful for managing
a Cloudron. <b class="text-danger">The Cloudron CLI tool has to be installed and run on a laptop or PC.</b>
Once installed, you can install, configure, list, backup and restore apps from the command line.
## Linux & OS X
Installing the CLI tool requires node.js and npm. The CLI tool can be installed using the following command:
```
npm install -g cloudron
```
Depending on your setup, you may need to run this as root.
On OS X, it is known to work with the `openssl` package from homebrew.
See [#14](https://git.cloudron.io/cloudron/cloudron-cli/issues/14) for more information.
## Windows
The CLI tool does not work on Windows. Please contact us on our [chat](https://chat.cloudron.io) if you want to help with Windows support.
# Updates
Apps installed from the Cloudron Store are automatically updated every night.
The Cloudron platform itself updates in two ways: update or upgrade.
### Update
An **update** is applied onto the running server instance. Such updates are performed
every night. You can also use the Cloudron UI to initiate an update immediately.
The Cloudron will always make a complete backup before attempting an update. In the unlikely
case an update fails, it can be [restored](/references/selfhosting.html#restore).
### Upgrade
An **upgrade** requires a new OS image. This process involves creating a new server from scratch
with the latest code and restoring it from the last backup.
To upgrade, follow these steps closely:
* Create a new backup - `cloudron machine backup create`
* List the latest backup - `cloudron machine backup list`
* Make the backup available for the new cloudron instance:
* `S3` - When storing backups in S3, make the latest box backup public - files starting with `box_` (from v0.94.0) or `backup_`. This can be done from the AWS S3 console.
* `File system` - When storing backups in `/var/backups`, you have to make the box and the app backups available to the new Cloudron instance's `/var/backups`. This can be achieved in a variety of ways depending on the situation: scp'ing the backup files to the machine before installation, mounting the external backup hard drive into the new Cloudron's `/var/backups`, or downloading a copy of the backup using `cloudron machine backup download` and uploading it to the new machine. After doing so, pass `file:///var/backups/<path to box backup>` as the `--restore-url` below.
* Create a new Cloudron by following the [installing](/references/selfhosting.html#installing) section.
When running the setup script, pass in the `--encryption-key` and `--restore-url` flags.
The `--encryption-key` is the backup encryption key. It can be displayed with `cloudron machine info`.
Similar to the initial installation, a Cloudron upgrade looks like:
Note: When upgrading an old version of Cloudron (<= 0.94.0), pass the `--version 0.94.1` flag and then continue updating
from that.
* Finally, once you see the newest version displayed in your Cloudron web interface, you can safely delete the old server instance.
# Restore
To restore a Cloudron from a specific backup:
* Select the backup - `cloudron machine backup list`
* Make the backup public
* `S3` - Make the box backup publicly readable - files starting with `box_` (from v0.94.0) or `backup_`. This can be done from the AWS S3 console. Once the box has restored, you can make it private again.
* `File system` - When storing backups in `/var/backups`, you have to make the box and the app backups available to the new Cloudron instance's `/var/backups`. This can be achieved in a variety of ways depending on the situation: scp'ing the backup files to the new machine before Cloudron installation, mounting an external backup hard drive into the new Cloudron's `/var/backups`, or downloading a copy of the backup using `cloudron machine backup download` and uploading it to the new machine. After doing so, pass `file:///var/backups/<path to box backup>` as the `--restore-url` below.
* Create a new Cloudron by following the [installing](/references/selfhosting.html#installing) section.
When running the setup script, pass in the `--version`, `--encryption-key`, `--domain` and `--restore-url` flags.
The `version` field is the version of the Cloudron that the backup corresponds to (it is embedded
in the backup file name).
* Make the box backup private, once the upgrade is complete.
# Security
Security is a core feature of the Cloudron and we continue to push out updates to tighten the Cloudron's security policy. Our goal is that Cloudron users should be able to rely on Cloudron being secure out of the box without having to do manual configuration.
This section lists various security measures in place to protect the Cloudron.
## HTTP Security
* Cloudron admin has a CSP policy that prevents XSS attacks.
* Cloudron sets various security related HTTP headers like `X-XSS-Protection`, `X-Download-Options`,
`X-Content-Type-Options`, `X-Permitted-Cross-Domain-Policies`, `X-Frame-Options` across all apps.
## SSL
* Cloudron enforces HTTPS across all apps. HTTP requests are automatically redirected to
HTTPS.
* The Cloudron automatically installs and renews certificates for your apps as needed. Should
installation of a certificate fail for reasons beyond its control, Cloudron admins will get a notification about it.
* Cloudron sets the `Strict-Transport-Security` header (HSTS) to protect apps against downgrade attacks
and cookie hijacking.
* Cloudron has A+ rating for SSL from [SSL Labs](https://cloudron.io/blog/2017-02-22-release-0.102.0.html).
## App isolation
* Apps are isolated completely from one another. One app cannot tamper with another app's database or
local files. We achieve this using Linux Containers.
* Apps run with a read-only rootfs preventing attacks where the application code can be tampered with.
* Apps can only connect to addons like databases, LDAP, email relay using authentication.
* Apps are run with an AppArmor profile that disables many system calls and restricts access to `proc`
and `sys` filesystems.
* Most apps are run as a non-root user. In the future, we intend to implement user namespaces.
* Each app is run in its own subdomain as opposed to a sub-path. This ensures that XSS vulnerabilities
in one app don't [compromise](https://security.stackexchange.com/questions/24155/preventing-insecure-webapp-on-subdomain-compromise-security-of-main-webapp) other apps.
## Email
* Cloudron checks against the [Zen Spamhaus DNSBL](https://www.spamhaus.org/zen/) before accepting mail.
* Email can only be accessed with IMAP over TLS (IMAPS).
* Email can only be relayed (including same-domain emails) by authenticated users using SMTP/STARTTLS.
* Cloudron ensures that `MAIL FROM` is the same as the authenticated user. Users cannot spoof each other.
* All outbound mail from the Cloudron is `DKIM` signed.
* Cloudron automatically sets up SPF, DMARC policies in the DNS for best email delivery.
* `journalctl -a -u box` to get debug output of box related code.
* `docker ps` will give you the list of containers. The addon containers are named `mail`, `postgresql`,
`mysql` etc. If you want to get a specific container's log output, use `journalctl -a CONTAINER_ID=<container_id>`.
# Alerts
The Cloudron will notify the Cloudron administrator via email if apps go down, run out of memory, have updates
available etc.
You will have to set up a third-party service like [CloudWatch](https://aws.amazon.com/cloudwatch/) or [UptimeRobot](http://uptimerobot.com/) to monitor the Cloudron itself. You can use `https://my.<domain>/api/v1/cloudron/status`
as the health check URL.
# Help
If you run into any problems, join us at our [chat](https://chat.cloudron.io) or [email us](mailto:support@cloudron.io).
* Server Name - Use the `my` subdomain of your Cloudron
* Port - 993
* Connection Security - TLS
* Username/password - Same as your Cloudron credentials
## Sending email using SMTP
Use the following settings to send email.
* Server Name - Use the `my` subdomain of your Cloudron
* Port - 587
* Connection Security - STARTTLS
* Username/password - Same as your Cloudron credentials
## Email filters using Sieve
Use the following settings to set up email filters via ManageSieve.
* Server Name - Use the `my` subdomain of your Cloudron
* Port - 4190
* Connection Security - TLS
* Username/password - Same as your Cloudron credentials
The [Rainloop](https://cloudron.io/appstore.html?app=net.rainloop.cloudronapp) and [Roundcube](https://cloudron.io/appstore.html?app=net.roundcube.cloudronapp)
apps are already pre-configured to use the above settings.
## Aliases
You can configure one or more aliases alongside the primary email address of each user. You can set aliases by editing the
user's settings, available behind the edit button in the user listing. Note that aliases cannot conflict with existing user names.
Pushing tag for rev [53b51eabcb89] on {https://cdn-registry-1.docker.io/v1/repositories/cloudron/img-2074d69134a7e0da3d6cdf3c53e241c4/tags/76cebfdd-7822-4f3d-af17-b3eb393ae604}
Build succeeded
```
## Installing
Now that we have built the image, we can install our latest build on the Cloudron
using the following command:
```
$ cloudron install
Using cloudron craft.selfhost.io
Using build 76cebfdd-7822-4f3d-af17-b3eb393ae604 from 1 hour ago
Location: tutorial # This is the location into which the application installs
App is being installed with id: 4dedd3bb-4bae-41ef-9f32-7f938995f85e
=> Waiting to start installation
=> Registering subdomain .
=> Verifying manifest .
=> Downloading image ..............
=> Creating volume .
=> Creating container
=> Setting up collectd profile ................
=> Waiting for DNS propagation ...
App is installed.
```
This makes the app available at https://tutorial-craft.selfhost.io.
Open the app in your default browser:
```
cloudron open
```
You should see `Hello World`.
# Testing
The application testing cycle involves `cloudron build` and `cloudron install`.
Note that `cloudron install` updates an existing app in place.
You can view the logs using `cloudron logs`. When the app is running you can follow the logs
using `cloudron logs -f`.
For example, you can see the console.log output in our server.js with the command below:
```
$ cloudron logs
Using cloudron craft.selfhost.io
2015-05-08T03:28:40.233940616Z Server running at port 8000
```
It is also possible to run a *shell* and *execute* arbitrary commands in the context of the application
process by using `cloudron exec`. By default, exec simply drops you into an interactive bash shell with
which you can inspect the file system and the environment.
```
$ cloudron exec
```
You can also execute arbitrary commands:
```
$ cloudron exec env # display the env variables that your app is running with
```
# Storing data
For file system storage, an app can use the `localstorage` addon to store data under `/app/data`.
When the `localstorage` addon is active, any data under `/app/data` is automatically backed up. When an
app is updated, `/app/data` already contains the data generated by the previous version.
*Note*: For convenience, the initial CloudronManifest.json generated by `cloudron init` already contains this
addon.
Let us put this theory into action by saving a *visit counter* as a file.
*server.js* has been modified to count the number of visitors to the site by storing a counter
in a file named `counter.dat`.
File `tutorial/server.js`:
```javascript
var http = require('http'),
    fs = require('fs'),
    util = require('util');

var COUNTER_FILE = '/app/data/counter.dat';

var server = http.createServer(function (request, response) {
    // read, increment and persist the visit counter
    var count = fs.existsSync(COUNTER_FILE) ? parseInt(fs.readFileSync(COUNTER_FILE, 'utf8'), 10) : 0;
    fs.writeFileSync(COUNTER_FILE, String(++count));
    response.end(util.format('You are visitor number %d\n', count));
});

server.listen(8000);
```
Now that we have built the image, we can install our latest build on the Cloudron
using the following command:
```
$ cloudron install
Using cloudron craft.selfhost.io
Using build 76cebfdd-7822-4f3d-af17-b3eb393ae604 from 1 hour ago
Location: tutorial # This is the location into which the application installs
App is being installed with id: 4dedd3bb-4bae-41ef-9f32-7f938995f85e
=> Waiting to start installation
=> Registering subdomain .
=> Verifying manifest .
=> Downloading image ..............
=> Creating volume .
=> Creating container
=> Setting up collectd profile ................
=> Waiting for DNS propagation ...
App is installed.
```
Open the app in your default browser:
```
cloudron open
```
You should see `Hello World`.
# Testing
The application testing cycle involves `cloudron build` and `cloudron install`.
Note that `cloudron install` updates an existing app in place.
You can view the logs using `cloudron logs`. When the app is running you can follow the logs
using `cloudron logs -f`.
For example, you can see the console.log output in our server.js with the command below:
```
$ cloudron logs
Using cloudron craft.selfhost.io
16:44:11 [main] Server running at port 8000
```
It is also possible to run a *shell* and *execute* arbitrary commands in the context of the application
process by using `cloudron exec`. By default, exec simply drops you into an interactive bash shell with
which you can inspect the file system and the environment.
```
$ cloudron exec
```
You can also execute arbitrary commands:
```
$ cloudron exec env # display the env variables that your app is running with
```
### Debugging
An app can be placed in `debug` mode by passing `--debug` to `cloudron install` or `cloudron configure`.
Doing so runs the app with a writable rootfs and no memory limit. By default, this also ignores
the `CMD` specified in the Dockerfile. The developer can then interactively test the app and
startup scripts using `cloudron exec`.
This mode can be used to identify the files being modified by your application - often required to
debug situations where your app does not run on a read-only rootfs. Run your app using `cloudron exec`
and use `find / -mmin -30` to find files that have been changed or created in the last 30 minutes.
You can turn off debugging mode using `cloudron configure --no-debug`.
# Addons
## Filesystem
The application container created on the Cloudron has a `readonly` file system. Writing to any location
other than the below will result in an error:
* `/tmp` - Use this location for temporary files. The Cloudron will cleanup any files in this directory
periodically.
* `/run` - Use this location for runtime configuration and dynamic data. These files should not be expected
to persist across application restarts (for example, after an update or a crash).
* `/app/data` - Use this location to store application data that is to be backed up. To use this location,
you must use the [localstorage](/references/addons.html#localstorage) addon. For convenience, the initial CloudronManifest.json generated by
`cloudron init` already contains this addon.
## Database
Most web applications require a database of some form. In theory, it is possible to run any
database you want as part of the application image. This is, however, a waste of server resources
if every app runs its own database server.
Cloudron currently provides [mysql](/references/addons.html#mysql), [postgresql](/references/addons.html#postgresql),
[mongodb](/references/addons.html#mongodb) and [redis](/references/addons.html#redis) database addons. When using
these addons, the Cloudron will inject environment variables that contain information on how to connect
to the addon.
See https://git.cloudron.io/cloudron/tutorial-redis for a simple example of how redis can be used by
an application. The server simply uses the environment variables to connect to redis.
## Email
Cloudron applications can send email using the `sendmail` addon. Using the `sendmail` addon provides
the SMTP server and authentication credentials in environment variables.
Cloudron applications can also receive mail via IMAP using the `recvmail` addon.
## Authentication
The Cloudron has a centralized panel for managing users and groups. Apps can integrate Single Sign-On
authentication using LDAP or OAuth.
Apps can integrate with the Cloudron authentication system using LDAP, OAuth or Simple Auth. See the
[authentication](/references/authentication.html) reference page for more details.
See https://git.cloudron.io/cloudron/tutorial-ldap for a simple example of how to authenticate via LDAP.
Apps that are single user can skip Single Sign-On support by setting `"singleUser": true`
in the manifest. By doing so, the Cloudron installer will show a dialog to choose a user.
For apps that have no user management at all, the Cloudron implements an `OAuth proxy` that
optionally lets the Cloudron admin make the app visible only to logged in users.
# Best practices
## No Setup
A Cloudron app is meant to be instantly usable after installation. For this reason, Cloudron apps must not
show any setup screen after installation and should simply choose reasonable defaults.
Database and email configuration should be automatically picked up from the environment variables provided by the
addons.
## Docker
Cloudron uses Docker in the backend, so the package build script is a regular `Dockerfile`.
The app is run as a read-only docker container. Only `/run` (dynamic data), `/app/data` (backup data) and `/tmp` (temporary files) are writable at runtime. Because of this:
* Install any required packages in the Dockerfile.
* Create static configuration files in the Dockerfile.
* Create symlinks to dynamic configuration files under `/run` in the Dockerfile.
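The symlink approach in the last point can be sketched in a Dockerfile (the config path here is hypothetical):

```docker
# The app reads its config from /app/code/config.ini (hypothetical path);
# point it at /run, which is writable at runtime
RUN ln -sf /run/config.ini /app/code/config.ini
```

The start script then generates `/run/config.ini` on each start, so the app finds a fresh config behind the unchanged path.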
### Source directory
By convention, Cloudron apps install the source code in `/app/code`. Do not forget to create the directory for the code of the app:
```sh
RUN mkdir -p /app/code
WORKDIR /app/code
```
### Download archives
When packaging an app you often want to download and extract archives (e.g. from github).
This can be done in one line by combining `wget` and `tar` like this:
```docker
ENV VERSION 1.6.2
RUN wget "https://github.com/FreshRSS/FreshRSS/archive/${VERSION}.tar.gz" -O - \
| tar -xz -C /app/code --strip-components=1
```
The `--strip-components=1` causes the topmost directory in the archive to be skipped.
Always pin the download to a specific tag or commit instead of using `HEAD` or `master`
so that the builds are reasonably reproducible.
### Applying patches
To get the app working in Cloudron, it is sometimes necessary to patch the original sources. Patch is a safe way to modify sources, as it fails when the original sources have changed too much from what the patch expects.
First create a backup copy of the full sources (to be able to calculate the differences):
```sh
cp -a extensions extensions-orig
```
Then modify the sources in the original path and, when finished, create a patch using `diff` (for example, `diff -ruN extensions-orig extensions > change.patch`). Apply the patch in the Dockerfile like this:
```docker
RUN patch -p1 -d /app/code/extensions < /app/code/change-ttrss-file-path.patch
```
The `-p1` makes patch strip the topmost directory from the paths in the patch.
## Process manager
Docker supports restarting processes natively. Should your application crash, it will be restarted
automatically. If your application is a single process, you do not require any process manager.
Use supervisor, pm2 or any of the other process managers if your application has more than one component.
This **excludes** web servers like apache and nginx, which can already manage their child processes by themselves.
Be sure to pick a process manager that [forwards signals](#sigterm-handling) to child processes.
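As a sketch, a minimal supervisor setup for a two-process app might look like this (program names and paths are illustrative):

```ini
; /etc/supervisor/conf.d/app.conf (illustrative)
[supervisord]
nodaemon=true
logfile=/run/supervisord.log    ; rootfs is read-only, so log under /run

[program:app]
command=/app/code/start-app.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=nginx -g "daemon off;"
```

On shutdown, supervisord forwards the stop signal to its programs, which satisfies the signal forwarding requirement above.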
## Automatic updates
Some apps support automatic updates by overwriting themselves. A Cloudron app cannot overwrite itself
because of the read-only file system. For this reason, disable auto updates for the app and let updates be
triggered through the Cloudron Store. This ties in better with the Cloudron's update and restore approach
should something go wrong with an update.
## Logging
Ideally, Cloudron applications stream their logs to stdout and stderr. In practice, this is hard to achieve.
Some programs like apache simply don't log to stdout. In those cases, log to `/tmp` or `/run`.
Logging to stdout has many advantages:
* The app does not need to rotate logs; the Cloudron takes care of managing them.
* The app does not need a special mechanism to release log file handles (on log rotation).
* It integrates better with tooling like the Cloudron CLI.
## Memory
By default, applications get 256MB RAM (including swap). This can be changed using the `memoryLimit`
field in the manifest.
Design your application runtime for concurrent use by 50 users. The Cloudron is not designed for
concurrent access by 100s or 1000s of users.
An app can determine its memory limit by reading `/sys/fs/cgroup/memory/memory.limit_in_bytes`.
## Authentication
Apps should integrate with one of the [authentication strategies](/references/authentication.html).
This saves the user from having to manage a separate set of credentials for each app.
## Start script
Many apps do not launch the server directly, as we did in our basic example. Instead, they execute
a `start.sh` script (named so by convention) which is used as the app entry point.
At the end of the Dockerfile you should add your start script (`start.sh`) and set it as the default command.
Ensure that the `start.sh` is executable in the app package repo. This can be done with `chmod +x start.sh`.
```docker
ADD start.sh /app/code/start.sh
CMD [ "/app/code/start.sh" ]
```
### One-time init
One common pattern is to initialize the data directory exactly once, guarded by the existence of a special `.initialized` file:
```sh
if ! [ -f /app/data/.initialized ]; then
echo "Fresh installation, setting up data directory..."
# Setup commands here
touch /app/data/.initialized
echo "Done."
fi
```
To copy over some files from the code directory you can use the following command:
```sh
rsync -a /app/code/config/ /app/data/config/
```
### chown data files
Since the app containers use different user ids from the host, it is sometimes necessary to change the permissions on the data directory:
```sh
chown -R cloudron:cloudron /app/data
```
For Apache+PHP apps you might need to change the ownership to `www-data:www-data` instead.
### Persisting random values
Some apps need a random value that is initialized once and does not change afterwards (e.g. a salt for security purposes). This can be accomplished by creating a random value on first run and storing it in a file under `/app/data`.
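A minimal `start.sh` fragment for this pattern, sketched as a helper function (the `salt` filename is illustrative):

```shell
# Sketch: persist a random value across restarts. init_salt prints the salt,
# generating it only if the file does not exist yet.
init_salt() {
    local data_dir="$1"
    if [ ! -f "${data_dir}/salt" ]; then
        dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 > "${data_dir}/salt"
    fi
    cat "${data_dir}/salt"
}
```

In `start.sh` this would be invoked as `SALT=$(init_salt /app/data)`; subsequent restarts read the same value back from the file.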
### Generating configuration files
Addon information (mail, database) is exposed as environment variables whose values may change across restarts. An application must use these values directly (i.e., not cache them across restarts). For this reason, the start script usually regenerates any config files with the current database settings on each invocation.
First create a config file template like this:
```php
... snipped ...
'mysql' => array(
'driver' => 'mysql',
'host' => '##MYSQL_HOST',
'port' => '##MYSQL_PORT',
'database' => '##MYSQL_DATABASE',
'username' => '##MYSQL_USERNAME',
'password' => '##MYSQL_PASSWORD',
'charset' => 'utf8',
'collation' => 'utf8_general_ci',
'prefix' => '',
),
... snipped ...
```
Add the template file in the Dockerfile and symlink the configuration path the app expects to the generated file.
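The placeholders can then be substituted in `start.sh` on each start. A minimal sketch as a helper function (the template path in the usage note and the `MYSQL_*` variable names are assumptions for illustration; values containing `,` would need escaping for `sed`):

```shell
# Sketch: render a config file from its template by replacing the ##VAR
# placeholders with the current addon environment values.
render_config() {
    sed -e "s,##MYSQL_HOST,${MYSQL_HOST}," \
        -e "s,##MYSQL_PORT,${MYSQL_PORT}," \
        -e "s,##MYSQL_DATABASE,${MYSQL_DATABASE}," \
        -e "s,##MYSQL_USERNAME,${MYSQL_USERNAME}," \
        -e "s,##MYSQL_PASSWORD,${MYSQL_PASSWORD}," \
        "$1"
}
```

In `start.sh` this would run before launching the app, for example `render_config /app/code/config.template.php > /run/config.php` (paths illustrative).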
bash, by default, does not automatically forward signals to child processes. This means a SIGTERM sent to the parent process does not reach the children. For this reason, be sure to `exec` the server as the last line of the `start.sh` script. Programs like gosu, nginx and apache do proper SIGTERM handling.
For example, start apache using `exec` as below:
```sh
echo "Starting apache"
APACHE_CONFDIR="" source /etc/apache2/envvars
rm -f "${APACHE_PID_FILE}"
exec /usr/sbin/apache2 -DFOREGROUND
```
## Popular stacks
### Apache
Apache requires some configuration changes to work properly with Cloudron. The commands below:
* Disable all default sites
* Send errors to the app's log and disable other logs
* Limit server processes to `5` (a good default)
* Change the port to Cloudron's default `8000`
```docker
RUN rm /etc/apache2/sites-enabled/* \
    && sed -e 's,^ErrorLog.*,ErrorLog "/dev/stderr",' -i /etc/apache2/apache2.conf \
    && sed -e "s,MaxSpareServers[^:].*,MaxSpareServers 5," -i /etc/apache2/mods-available/mpm_prefork.conf \
    && sed -e "s,^Listen 80$,Listen 8000," -i /etc/apache2/ports.conf
```
In `start.sh` Apache can be started using these commands:
```sh
echo "Starting apache..."
APACHE_CONFDIR="" source /etc/apache2/envvars
rm -f "${APACHE_PID_FILE}"
exec /usr/sbin/apache2 -DFOREGROUND
```
### PHP
PHP wants to store session data at `/var/lib/php/sessions`, which is read-only on Cloudron. To fix this, move the session directory to `/run/php/sessions` with this command:
```docker
RUN rm -rf /var/lib/php/sessions && ln -s /run/php/sessions /var/lib/php/sessions
```
Don't forget to create this directory and set its ownership in `start.sh`:
```sh
mkdir -p /run/php/sessions
chown www-data:www-data /run/php/sessions
```
### Java
Java scales its memory usage dynamically according to the available system memory. Due to how Docker works, Java sees the host's total memory instead of the memory limit of the app. To restrict Java to the app's memory limit, it is necessary to pass an explicit memory parameter to Java invocations.
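One way to do this in `start.sh` is to derive the heap size flag from the cgroup limit (a sketch; halving the limit for the heap is a heuristic, and `app.jar` is a hypothetical entry point, not a Cloudron requirement):

```shell
# Sketch: compute a JVM -Xmx flag from the cgroup memory limit,
# falling back to the 256MB default if the limit cannot be read.
java_heap_flag() {
    local limit
    limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || true)
    case "$limit" in
        ''|*[!0-9]*) limit=$((256 * 1024 * 1024)) ;;
    esac
    # use half the limit for the heap, leaving the rest for the JVM itself
    echo "-Xmx$((limit / 2 / 1024 / 1024))m"
}

# In start.sh:
# exec java $(java_heap_flag) -jar /app/code/app.jar
```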