This repository integrates all HCF components.
Note: You can run the Windows Cell Add-On on a variety of systems within a Vagrant VM. For more information, see To Deploy HCF on Windows Using VirtualBox.
- Login to Jenkins
- Look up the `hcf-vagrant-in-cloud-develop` job
- Use the `Build with Parameters` link to start a build
- Specify the branch you want built and start the build
NOTE: These are the common instructions shared between all providers. Some providers have different requirements, so make sure that you read the appropriate section for your provider.
- Install Vagrant (version 1.7.4 or higher).

- Clone the repository and run the following commands to allow Vagrant to interact with the mounted submodules:

  ```bash
  git clone git@github.com:hpcloud/hcf
  cd hcf
  git submodule update --init --recursive
  ```

  Important: Ensure you do not have uncommitted changes in any submodules.
- Bring the VM online and `ssh` into it:

  ```bash
  # Replace X with one of: vmware_fusion, vmware_workstation, virtualbox
  vagrant up --provider X
  vagrant ssh
  ```

  Note: The virtualbox provider is unstable and we've had many problems with HCF on it; try to use vmware when possible.
- On the VM, navigate to the `~/hcf` directory and run the `make vagrant-prep` command:

  ```bash
  cd hcf
  make vagrant-prep
  ```

  Note: You need to run this command only after initially creating the VM.
- On the VM, start HCF using the `make run` command:

  ```bash
  make run
  ```

- Install VMware Fusion 7 and Vagrant (version 1.7.4 or higher).

  Note: To get a license for VMware Fusion 7, use your HPE email address to send a message to [email protected] with the subject Fusion license request.
- Install the Vagrant Fusion provider plugin:

  ```bash
  vagrant plugin install vagrant-vmware-fusion
  ```

  Note: vagrant-vmware-fusion version 4.0.9 or greater is required.

- Download the Vagrant Fusion Provider license and install it:

  ```bash
  vagrant plugin license vagrant-vmware-fusion /path/to/license.lic
  ```

- Follow the common instructions in the section above.
- Install Vagrant (version 1.7.4 or higher) and the `libvirt` dependencies:

  ```bash
  sudo apt-get install libvirt-bin libvirt-dev qemu-utils qemu-kvm nfs-kernel-server
  ```

- Allow non-`root` access to `libvirt`:

  ```bash
  sudo usermod -G libvirtd -a <username>
  ```

- Log out, log in, and then install the `libvirt` plugin:

  ```bash
  vagrant plugin install vagrant-libvirt
  ```

- Follow the common instructions above.

Important: The VM may not come online during your first attempt.
- Install Vagrant (version 1.7.4 or higher) and enable NFS over UDP:

  ```bash
  sudo firewall-cmd --zone FedoraWorkstation --change-interface vboxnet0
  sudo firewall-cmd --permanent --zone FedoraWorkstation --add-service nfs
  sudo firewall-cmd --permanent --zone FedoraWorkstation --add-service rpc-bind
  sudo firewall-cmd --permanent --zone FedoraWorkstation --add-service mountd
  sudo firewall-cmd --permanent --zone FedoraWorkstation --add-port 2049/udp
  sudo firewall-cmd --reload
  sudo systemctl enable nfs-server.service
  sudo systemctl start nfs-server.service
  ```

- Install the `libvirt` dependencies, allow non-`root` access to `libvirt`, and switch your active group to `libvirt`:

  ```bash
  sudo dnf install libvirt-daemon-kvm libvirt-devel
  sudo usermod -G libvirt -a <username>
  newgrp libvirt
  ```

- Install `fog-libvirt` 0.0.3 and the `libvirt` plugins:

  ```bash
  # Workaround for https://github.com/fog/fog-libvirt/issues/16
  vagrant plugin install --plugin-version 0.0.3 fog-libvirt
  vagrant plugin install vagrant-libvirt
  ```

- To set the `libvirt` daemon user to your username/group, edit `/etc/libvirt/qemu.conf` as follows:

  ```
  user = "<username>"
  group = "<username>"
  ```

- Follow the common instructions above.
Important: The VM may not come online during your first attempt.
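With the `vagrant-libvirt` plugin installed, the provider name to pass to the common instructions above is `libvirt`; a minimal sketch:

```bash
vagrant up --provider libvirt
vagrant ssh
```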
Important: Working on a Windows host is significantly more complicated because of heavy usage of symlinks. On Windows, only the VirtualBox provider is supported.
- Ensure that line endings are handled correctly:

  ```bash
  git config --global core.autocrlf input
  ```

- Clone the repository, bring the VM online, and `ssh` into it:

  Important: Do not recursively update submodules. To ensure that symlinks are configured properly, you need to do this on the Vagrant VM. To be able to clone everything within the VM, you will need an ssh key within the VM that is allowed on GitHub.

  ```bash
  vagrant up --provider virtualbox
  vagrant ssh
  ```

- Configure symlinks and initialize submodules:

  ```bash
  cd ~/hcf
  git config --global core.symlinks true
  git config core.symlinks true
  git submodule update --init --recursive
  ```

- On the VM, navigate to the `~/hcf` directory and run the `make vagrant-prep` command:

  ```bash
  cd hcf
  make vagrant-prep
  ```

  Note: You need to run this command only after initially creating the VM.

- On the VM, start HCF:

  ```bash
  make run
  ```

- For the Windows Cell Add-On, see the Windows Cell Readme.
Important: You can run the Windows Cell Add-On on a variety of systems within a Vagrant VM.
- Pick a target, e.g. `aws-spot-dist`, and run `make aws-spot-dist` to generate the archive populated with development defaults and secrets.

- Extract the newly created .zip file to a temporary working directory:

  ```bash
  mkdir /tmp/hcf-aws
  cd /tmp/hcf-aws
  unzip $OLDPWD/aws-???.zip
  cd aws
  ```

- Follow the instructions in README-aws.md.
| Name | Effect |
|---|---|
| `run` | Set up HCF on the current node (`bin/run.sh`) |
| `stop` | Stop HCF on the current node |
| `vagrant-box` | Build the Vagrant box image using `packer` |
| `vagrant-prep` | Shortcut for building everything needed for `make run` |
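For example, a typical first boot inside the Vagrant box combines the last two targets, as already shown in the setup instructions above:

```bash
make vagrant-prep   # only needed after the VM is first created
make run            # set up HCF on the current node (bin/run.sh)
```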
| Name | Effect |
|---|---|
| `cf-release` | `bosh create release` for `cf-release` |
| `usb-release` | `bosh create release` for `cf-usb-release` |
| `diego-release` | `bosh create release` for `diego-release` |
| `etcd-release` | `bosh create release` for `etcd-release` |
| `garden-release` | `bosh create release` for `garden-runc-release` |
| `cf-mysql-release` | `bosh create release` for `cf-mysql-release` |
| `hcf-sso-release` | `bosh create release` for `hcf-sso-release` |
| `hcf-versions-release` | `bosh create release` for `hcf-versions-release` |
| `cflinuxfs2-rootfs-release` | `bosh create release` for `cflinuxfs2-rootfs-release` |
| `releases` | Make all of the BOSH releases above |
| Name | Effect |
|---|---|
| `build` | `make compile` + `make images` |
| `compile-base` | `fissile build layer compilation` |
| `compile` | `fissile build packages` |
| `images` | `make bosh-images` + `make docker-images` |
| `image-base` | `fissile build layer stemcell` |
| `bosh-images` | `fissile build images` |
| `docker-images` | `docker build` in each dir in `./docker-images` |
| `tag` | Tag HCF images and bosh role images |
| `publish` | Publish HCF images and bosh role images to Docker Hub |
| `hcp` | Generate HCP service definitions |
| `mpc` | Generate Terraform MPC definitions for a single-node microcloud |
| `aws` | Generate Terraform AWS definitions for a single-node microcloud |
| Name | Effect | Notes |
|---|---|---|
| `dist` | Generate and package various setups | |
| `mpc-dist` | Generate and package Terraform MPC definitions for a single-node microcloud | |
| `aws-dist` | Generate and package Terraform AWS definitions for a single-node microcloud | |
| `aws-proxy-dist` | Generate and package Terraform AWS definitions for a proxied microcloud | |
| `aws-spot-dist` | Generate and package Terraform AWS definitions for a single-node microcloud using a spot instance | |
| `aws-spot-proxy-dist` | Generate and package Terraform AWS definitions for a proxied microcloud using spot instances | |
- To look at entrypoint logs, run the `docker logs <role-name>` command. To follow the logs, run the `docker logs -f <role-name>` command.

  __Note:__ For `bosh` roles, `monit` logs are displayed. For `docker` roles, the `stdout` and `stderr` from the entry point are displayed.

- All logs for all components can be found on the Vagrant box in `~/.run/log`.
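For example, a quick sketch of inspecting a single role's logs; the role name `router` below is only a placeholder for whatever `docker ps` shows on your box:

```bash
# List the running role containers, then follow one of them ("router" is a placeholder)
docker ps --format '{{.Names}}'
docker logs -f router

# All component logs are also collected under ~/.run/log on the Vagrant box
ls ~/.run/log
```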
On the Vagrant box, run the following commands:

```bash
cd ~/hcf

# (There is no need for a graceful stop.)
docker rm -f $(docker ps -a -q)

# Delete all data.
sudo rm -rf ~/.run/store

# Start everything.
make run
```

On the Vagrant box, run the following commands:
```bash
cd ~/hcf

# Stop gracefully.
make stop

# Delete all logs.
sudo rm -rf ~/.run/log

# Start everything.
make run
```

On the Vagrant box, when `hcf-status` reports that all roles are running, enable `diego_docker` support with `cf enable-feature-flag diego_docker` and execute the following commands:
```bash
run-role.sh /home/vagrant/hcf/bin/settings/ smoke-tests
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-flight-recorder
```

To run the tests against a remote machine (e.g. to test an HCP deployment),
first make sure that your settings match the deployed configuration; the
easiest way to do this is to deploy via the fully-specified instance
definition files rather than the minimal ones meant for HSM. Also remember to
enable diego_docker as above. Afterwards, run the tests as normal but
with a DOMAIN override:
```bash
run-role.sh /home/vagrant/hcf/bin/settings/ smoke-tests --env DOMAIN=hcf.hcp.example.com
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain --env DOMAIN=hcf.hcp.example.com
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests --env DOMAIN=hcf.hcp.example.com
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-autoscaler --env DOMAIN=hcf.hcp.example.com
```

It is not currently possible to run acceptance-tests-flight-recorder on HCP,
as it expects direct access to the other roles in the cluster.
Use the following command to specify additional include/exclude patterns for test filenames:

```bash
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain --env INCLUDE=pattern --env EXCLUDE=pattern
```

For example, to run just `005_sso_test.sh` and `014_sso_authenticated_passthrough_test.sh`:

```bash
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain --env INCLUDE=sso
```

It is also possible to run custom tests by mounting them at the `/tests` mountpoint inside the container.
The mounted tests will be combined with the bundled tests. To exclude the bundled tests, match against
names starting with three digits followed by an underscore:

```bash
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain --env 'EXCLUDE=\b\d{3}_' -v /tmp/tests:/tests
```

Or explicitly select only the mounted tests with:

```bash
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests-brain --env 'INCLUDE=^/tests/' -v /tmp/tests:/tests
```

Use the following command to specify changes to the test suites to run:

```bash
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests --env CATS_SUITES=-suite,+suite
```

Suites are separated by commas. The modifiers apply until the next modifier is seen, and have the following meanings:
| Modifier | Meaning |
|---|---|
| `+` | Enable the following suites |
| `-` | Disable the following suites |
| `=` | Disable all suites, and enable the following suites |
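As an illustration (the suite names below are only examples, not a definitive list), disabling everything except two suites would look like this:

```bash
# Disable all suites, then enable only the "routing" and "apps" suites (example names)
run-role.sh /home/vagrant/hcf/bin/settings/ acceptance-tests --env 'CATS_SUITES==routing,+apps'
```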
On the Vagrant box, run the following commands:

```bash
cd ~/hcf

# Stop gracefully.
make stop

# Delete all fissile images.
docker rmi $(fissile show image)

# Re-create the images and then run them.
make images run
```

Try each of the following solutions sequentially:
- Run the `~. && vagrant reload` command.
- Run the `vagrant halt && vagrant reload` command.
- Manually stop the virtual machine and then run the `vagrant reload` command.
- Run the `vagrant destroy -f && vagrant up` command and then run `make vagrant-prep run` on the Vagrant box.
You can target the cluster on the hardcoded cf-dev.io address assigned to a host-only network adapter.
You can access any URL or endpoint that references this address from your host.
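For example, assuming the conventional `api.` prefix on that domain (an assumption, not something this README specifies), you could point the `cf` CLI at the cluster from your host:

```bash
# Hypothetical endpoint derived from the cf-dev.io domain; adjust if your API URL differs
cf api https://api.cf-dev.io --skip-ssl-validation
```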
- Use the role manifest to expose the port for the mysql proxy role.
- The MySQL instance is exposed at `192.168.77.77:3306`.
- The default username is `root`.
- You can find the default password in the `MYSQL_ADMIN_PASSWORD` environment variable in the `~/hcf/bin/settings/settings.env` file on the Vagrant box (see the connection sketch after this list).
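As a sketch (assuming a MySQL command-line client is installed on the host), connecting looks like this; paste the password from `settings.env` when prompted:

```bash
# Connect from the host to the exposed MySQL instance (password prompt follows)
mysql -h 192.168.77.77 -P 3306 -u root -p
```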
- Add a Git submodule to the BOSH release in `./src` (see the sketch after this list).

- Mention the new release in `./bin/.fissilerc`.

- Edit the release parameters:

  a. Add new roles or change existing ones in `./container-host-files/etc/hcf/config/role-manifest.yml`.
  b. Add exposed environment variables (`yaml path: /configuration/variables`).
  c. Add configuration templates (`yaml path: /configuration/templates` and `yaml path: /roles/*/configuration/templates`).
  d. Add defaults for your configuration settings to `~/hcf/bin/settings/settings.env`.
  e. If you need any extra default certificates, add them to `~/hcf/bin/settings/certs.env`.
  f. Add generation code for the certs to `~/hcf/bin/generate-dev-certs.sh`.
- Add any opinions (static defaults) and dark opinions (configuration that must be set by the user) to `./container-host-files/etc/hcf/config/opinions.yml` and `./container-host-files/etc/hcf/config/dark-opinions.yml`, respectively.

- Change the `./Makefile` so it builds the new release:

  a. Add a new target `<release-name>-release`.
  b. Add the new target as a dependency for `make releases`.

- Test the changes.

- Run the `make <release-name>-release compile images run` command.
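For the first item, a minimal sketch of wiring in the submodule; the release name and URL below are placeholders, not an actual HCF release:

```bash
# Placeholder example: add a new BOSH release as a submodule under ./src
git submodule add https://github.com/example-org/example-release.git src/example-release
git submodule update --init --recursive src/example-release
```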
- Make a change to component `X`, in its respective release (`X-release`).

- Run `make X-release compile images run` to build your changes and run them.
- Edit `./container-host-files/etc/hcf/config/role-manifest.yml`:

  a. Add the new exposed environment variables (`yaml path: /configuration/variables`).
  b. Add or change configuration templates:
     i. `yaml path: /configuration/templates`
     ii. `yaml path: /roles/*/configuration/templates`

- Add defaults for your new settings in `~/hcf/bin/settings/settings.env`.

- If you need any extra default certificates, add them to `~/hcf/bin/dev-certs.env`.

- Add generation code for the certificates here: `~/hcf/bin/generate-dev-certs.sh`.

- Rebuild the role images that need this new setting:
```bash
docker stop <role>-int
docker rmi -f fissile-<role>:<tab-for-completion>
make images run
```
__Tip:__ If you do not know which roles require your new settings, you can use the following catch-all:
```bash
make stop
docker rmi -f $(fissile show image)
make images run
```
Note: Because this process involves cloning and building a release, it may take a long time.
Cloud Foundry maintains a compatibility spreadsheet for cf-release, diego-release, etcd-release, and garden-runc-release. If you are bumping all of those modules simultaneously, you can run `bin/update-cf-release.sh <RELEASE>` and skip steps 1 and 2 in the example below.
The following example is for cf-release. You can follow the same steps for other releases.
- On the host machine, clone the repository that you want to bump:
```bash
git clone src/cf-release/ ./src/cf-release-clone --recursive
```
- On the host, bump the clone to the desired version:
```bash
git checkout v217
git submodule update --init --recursive --force
```
- Create a release for the cloned repository:
__Important:__ From this point on, perform all actions on the Vagrant box.
```bash
cd ~/hcf
./bin/create-release.sh src/cf-release-clone cf
```
- Run the `config-diff` command:
```bash
FISSILE_RELEASE='' fissile diff --release ${HOME}/hcf/src/cf-release,${HOME}/hcf/src/cf-release-clone
```
- Act on configuration changes:
__Important:__ If you are not sure how to treat a configuration setting, discuss it with the HCF team.
For any configuration changes discovered in the previous step, you can do one of the following:
* Keep the defaults in the new specification.
* Add an opinion (static defaults) to `./container-host-files/etc/hcf/config/opinions.yml`.
* Add a template and an exposed environment variable to `./container-host-files/etc/hcf/config/role-manifest.yml`.
Define any secrets in the dark opinions file `./container-host-files/etc/hcf/config/dark-opinions.yml` and expose them as environment variables.
* If you need any extra default certificates, add them here: `~/hcf/bin/dev-certs.env`.
* Add generation code for the certificates here: `~/hcf/bin/generate-dev-certs.sh`.
- Evaluate role changes:
a. Consult the release notes of the new version of the release.
b. If there are any role changes, discuss them with the HCF team, then [follow steps 3 and 4 from this guide](#how-do-i-add-a-new-bosh-release-to-hcf).
- Bump the real submodule:
a. Bump the real submodule and begin testing.
b. Remove the clone you used for the release.
- Test the release by running the `make <release-name>-release compile images run` command.
- Run the `vagrant reload` command.
- Run the `make run` command.
- If our submodules are close to the `HEAD` of upstream and no merge conflicts occur, follow the steps described here.
- If merge conflicts occur, or if the component is referenced as a submodule and it is not compatible with the parent release, work with the HCF team to resolve the issue on a case-by-case basis.
- `fissile` generates `bosh` and `bosh-task` roles using BOSH releases, while regular `Dockerfile`s create `docker` roles.
- You can include both types of role in the role manifest, using the same run information.
- Name your new role.
- Create a directory named after your role in `./docker-images`.
- Create a `Dockerfile` in the new directory (see the sketch after this list).
- Add your role to `role-manifest.yml`.
- Test using the `make docker-images run` command.
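A minimal sketch of the directory layout and a placeholder `Dockerfile`, assuming a role name of `myrole` (the name, base image, and entry point are arbitrary examples):

```bash
# Placeholder example only; adapt the base image and entry point to your role
mkdir -p docker-images/myrole
cat > docker-images/myrole/Dockerfile <<'EOF'
FROM ubuntu:14.04
# Install and configure whatever the role needs, then define its entry point
CMD ["/bin/bash", "-c", "sleep infinity"]
EOF
make docker-images run
```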
- Ensure that the Vagrant box is running.
- `ssh` into the Vagrant box.
- To tag the images into the selected registry and to push them, run the `make tag publish` command.
- This target uses the `make` variables listed below to construct the image names and tags:
| Variable | Meaning | Default |
| --- | --- | --- |
| `IMAGE_REGISTRY` | The name of the trusted registry to publish to (include a trailing slash) | _empty_ |
| `IMAGE_PREFIX` | The prefix to use for image names (must not be empty) | `hcf` |
| `IMAGE_ORG` | The organization in the image registry | `helioncf` |
| `BRANCH` | The tag to use for the images | _Current git branch_ |
- To publish to the standard trusted registry, run the `make tag publish` command, for example:
```bash
make tag publish IMAGE_REGISTRY=docker.helion.lol/
```
- Ensure that the Vagrant box is running.
- `ssh` into the Vagrant box.
- To generate the SDL file that contains the HCP service definition for the current set of roles, run the `make hcp` command.
__Note:__ This target takes the same `make` variables as the `tag` and `publish` targets.
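For instance, a sketch of overriding those variables when generating the definitions; the values below simply reuse the example registry from the publishing section and are not required settings:

```bash
# Same variable overrides as for "make tag publish"; values here are examples
make hcp IMAGE_REGISTRY=docker.helion.lol/ IMAGE_ORG=helioncf BRANCH=develop
```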
You can also read a step by step tutorial of running HCF on HCP using Vagrant.
- Ensure that the Vagrant box is running.
- `ssh` into the Vagrant box.
- To generate the `hcf.tf` file that contains the Terraform definitions for an MPC-based, single-node microcloud, run the `make mpc` command.
__Note:__ This target takes the same `make` variables as the `tag` and `publish` targets.
- Ensure that the Vagrant box is running.

- `ssh` into the Vagrant box.

- Build a new configgin binary and install it into all role images.

  `configgin` is installed as a binary in `~/tools/configgin.tgz`. In order to test a new version you have to install a new build in that location, then recreate first the base image and then all role images.

  In the `docker rmi` command below, use tab-completion to also delete the image tagged with a version string:

  ```bash
  git clone git@github.com:hpcloud/hcf-configgin.git
  cd hcf-configgin/
  make dist
  cp output/configgin*.tgz ~/tools/configgin.tgz
  docker rmi -f $(fissile show image) fissile-role-base fissile-role-base:<TAB>
  fissile build layer stemcell
  make images
  ```
- Add the version to the last line of `docker-images/hcf-pipeline-ruby-bosh/versions.txt` (see the sketch after this list).

- Edit the `HCF-PIPELINE-RUBY-BOSH DOCKER IMAGE TARGET` section of the `Makefile`, updating the version from 2.3.1 to the desired version.
- Run `make hcf-pipeline-ruby-bosh`.
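Taken together, a sketch of the whole bump; the version number `2.4.0` is only a placeholder:

```bash
# Placeholder version; remember to also update the Makefile section mentioned above
echo "2.4.0" >> docker-images/hcf-pipeline-ruby-bosh/versions.txt
make hcf-pipeline-ruby-bosh
```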