feat: add nomad template #9786

Merged
merged 5 commits into from Sep 21, 2023
2 changes: 1 addition & 1 deletion cli/testdata/coder_templates_init_--help.golden
@@ -6,7 +6,7 @@ USAGE:
Get started with a templated template.

OPTIONS:
- --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes
+ --id aws-ecs-container|aws-linux|aws-windows|azure-linux|do-linux|docker|docker-with-dotfiles|gcp-linux|gcp-vm-container|gcp-windows|kubernetes|nomad-docker
Specify a given example template by ID.

———
4 changes: 2 additions & 2 deletions docs/cli/templates_init.md


12 changes: 12 additions & 0 deletions examples/examples.gen.json
@@ -133,5 +133,17 @@
"kubernetes"
],
"markdown": "\n# Getting started\n\nThis template creates a deployment running the `codercom/enterprise-base:ubuntu` image.\n\n## Prerequisites\n\nThis template uses the [`kubernetes_deployment`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment) Terraform resource, which requires the `coder` service account to have permission to create deployments. For example, if you are using [helm](https://coder.com/docs/v2/latest/install/kubernetes#install-coder-with-helm) to install Coder, you should set `coder.serviceAccount.enableDeployments=true` in your `values.yaml`\n\n```diff\ncoder:\nserviceAccount:\n workspacePerms: true\n- enableDeployments: false\n+ enableDeployments: true\n annotations: {}\n name: coder\n```\n\n\u003e Note: This is only required for Coder versions \u003c 0.28.0, as this will be the default value for Coder versions \u003e= 0.28.0\n\n## Authentication\n\nThis template can authenticate using in-cluster authentication, or using a kubeconfig local to the\nCoder host. For additional authentication options, consult the [Kubernetes provider\ndocumentation](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs).\n\n### kubeconfig on Coder host\n\nIf the Coder host has a local `~/.kube/config`, you can use this to authenticate\nwith Coder. Make sure this is done with the same user that's running the `coder` service.\n\nTo use this authentication, set the parameter `use_kubeconfig` to true.\n\n### In-cluster authentication\n\nIf the Coder host runs in a Pod on the same Kubernetes cluster as you are creating workspaces in,\nyou can use in-cluster authentication.\n\nTo use this authentication, set the parameter `use_kubeconfig` to false.\n\nThe Terraform provisioner will automatically use the service account associated with the pod to\nauthenticate to Kubernetes. Be sure to bind a [role with appropriate permission](#rbac) to the\nservice account. 
For example, assuming the Coder host runs in the same namespace as you intend\nto create workspaces:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: coder\n\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: coder\nsubjects:\n - kind: ServiceAccount\n name: coder\nroleRef:\n kind: Role\n name: coder\n apiGroup: rbac.authorization.k8s.io\n```\n\nThen start the Coder host with `serviceAccountName: coder` in the pod spec.\n\n### Authenticate against external clusters\n\nYou may want to deploy workspaces on a cluster outside of the Coder control plane. Refer to the [Coder docs](https://coder.com/docs/v2/latest/platforms/kubernetes/additional-clusters) to learn how to modify your template to authenticate against external clusters.\n\n## Namespace\n\nThe target namespace in which the deployment will be deployed is defined via the `coder_workspace`\nvariable. The namespace must exist prior to creating workspaces.\n\n## Persistence\n\nThe `/home/coder` directory in this example is persisted via the attached PersistentVolumeClaim.\nAny data saved outside of this directory will be wiped when the workspace stops.\n\nSince most binary installations and environment configurations live outside of\nthe `/home` directory, we suggest including these in the `startup_script` argument\nof the `coder_agent` resource block, which will run each time the workspace starts up.\n\nFor example, when installing the `aws` CLI, the install script will place the\n`aws` binary in `/usr/local/bin/aws`. 
To ensure the `aws` CLI is persisted across\nworkspace starts/stops, include the following code in the `coder_agent` resource\nblock of your workspace template:\n\n```terraform\nresource \"coder_agent\" \"main\" {\n startup_script = \u003c\u003c-EOT\n set -e\n # install AWS CLI\n curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\n unzip awscliv2.zip\n sudo ./aws/install\n EOT\n}\n```\n\n## code-server\n\n`code-server` is installed via the `startup_script` argument in the `coder_agent`\nresource block. The `coder_app` resource is defined to access `code-server` through\nthe dashboard UI over `localhost:13337`.\n\n## Deployment logs\n\nTo stream kubernetes pods events from the deployment, you can use Coder's [`coder-logstream-kube`](https://github.com/coder/coder-logstream-kube) tool. This can stream logs from the deployment to Coder's workspace startup logs. You just need to install the `coder-logstream-kube` helm chart on the cluster where the deployment is running.\n\n```shell\nhelm repo add coder-logstream-kube https://helm.coder.com/logstream-kube\nhelm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \\\n --namespace coder \\\n --set url=\u003cyour-coder-url-including-http-or-https\u003e\n```\n\nFor detailed instructions, see [Deployment logs](https://coder.com/docs/v2/latest/platforms/kubernetes/deployment-logs)\n"
},
{
"id": "nomad-docker",
"url": "",
"name": "Develop in a Nomad Docker Container",
"description": "Get started with Nomad Workspaces.",
"icon": "/icon/nomad.svg",
"tags": [
"cloud",
"nomad"
],
"markdown": "\n# Develop in a Nomad Docker Container\n\nThis example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.\n\n## Prerequisites\n\n- [Nomad](https://www.nomadproject.io/downloads)\n- [Docker](https://docs.docker.com/get-docker/)\n\n## Setup\n\n### 1. Start the CSI Host Volume Plugin\n\nThe CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.\n\n1. Log in to the Nomad server using SSH.\n\n2. Append the following stanza to your Nomad server configuration file and restart the Nomad service.\n\n ```hcl\n plugin \"docker\" {\n config {\n allow_privileged = true\n }\n }\n ```\n\n ```shell\n sudo systemctl restart nomad\n ```\n\n3. Create a file `hostpath.nomad` with the following content:\n\n ```hcl\n job \"hostpath-csi-plugin\" {\n datacenters = [\"dc1\"]\n type = \"system\"\n\n group \"csi\" {\n task \"plugin\" {\n driver = \"docker\"\n\n config {\n image = \"registry.k8s.io/sig-storage/hostpathplugin:v1.10.0\"\n\n args = [\n \"--drivername=csi-hostpath\",\n \"--v=5\",\n \"--endpoint=${CSI_ENDPOINT}\",\n \"--nodeid=node-${NOMAD_ALLOC_INDEX}\",\n ]\n\n privileged = true\n }\n\n csi_plugin {\n id = \"hostpath\"\n type = \"monolith\"\n mount_dir = \"/csi\"\n }\n\n resources {\n cpu = 256\n memory = 128\n }\n }\n }\n }\n ```\n\n4. Run the job:\n\n ```shell\n nomad job run hostpath.nomad\n ```\n\n### 2. Set up the Nomad Template\n\n1. Create the template by running the following commands:\n\n ```shell\n coder template init nomad-docker\n cd nomad-docker\n coder template create\n ```\n\n2. Set the Nomad server address and optional authentication via the `nomad_provider_address` and `nomad_provider_http_auth` template variables.\n\n3. Create a new workspace and start developing.\n"
}
]
1 change: 1 addition & 0 deletions examples/examples.go
@@ -34,6 +34,7 @@ var (
//go:embed templates/gcp-vm-container
//go:embed templates/gcp-windows
//go:embed templates/kubernetes
//go:embed templates/nomad-docker
files embed.FS

exampleBasePath = "https://github.com/coder/coder/tree/main/examples/templates/"
96 changes: 96 additions & 0 deletions examples/templates/nomad-docker/README.md
@@ -0,0 +1,96 @@
---
name: Develop in a Nomad Docker Container
description: Get started with Nomad Workspaces.
tags: [cloud, nomad]
icon: /icon/nomad.svg
---

# Develop in a Nomad Docker Container

This example shows how to use Nomad service tasks as development environments, using Docker and host CSI volumes.

## Prerequisites

- [Nomad](https://www.nomadproject.io/downloads)
- [Docker](https://docs.docker.com/get-docker/)

## Setup

### 1. Start the CSI Host Volume Plugin

The CSI Host Volume plugin is used to mount host volumes into Nomad tasks. This is useful for development environments where you want to mount persistent volumes into your container workspace.

1. Log in to the Nomad server using SSH.

2. Append the following stanza to your Nomad server configuration file and restart the Nomad service.

```hcl
plugin "docker" {
config {
allow_privileged = true
}
}
```

```shell
sudo systemctl restart nomad
```

3. Create a file `hostpath.nomad` with the following content:

```hcl
job "hostpath-csi-plugin" {
datacenters = ["dc1"]
type = "system"

group "csi" {
task "plugin" {
driver = "docker"

config {
image = "registry.k8s.io/sig-storage/hostpathplugin:v1.10.0"

args = [
"--drivername=csi-hostpath",
"--v=5",
"--endpoint=${CSI_ENDPOINT}",
"--nodeid=node-${NOMAD_ALLOC_INDEX}",
]

privileged = true
}

csi_plugin {
id = "hostpath"
type = "monolith"
mount_dir = "/csi"
}

resources {
cpu = 256
memory = 128
}
}
}
}
```

4. Run the job:

```shell
nomad job run hostpath.nomad
```

### 2. Set up the Nomad Template

1. Create the template by running the following commands:

```shell
coder template init nomad-docker
cd nomad-docker
coder template create
```

2. Set the Nomad server address and optional authentication via the `nomad_provider_address` and `nomad_provider_http_auth` template variables.

3. Create a new workspace and start developing.
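
The `nomad_provider_address` and `nomad_provider_http_auth` variables from `main.tf` can also be supplied non-interactively; a sketch, assuming your Coder CLI version supports the `--variable` flag (the address and credentials below are placeholders):

```shell
# Placeholder address and credentials -- substitute your own.
coder template create nomad-docker \
  --variable nomad_provider_address=http://nomad.example.com:4646 \
  --variable nomad_provider_http_auth=admin:secret
```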
192 changes: 192 additions & 0 deletions examples/templates/nomad-docker/main.tf
@@ -0,0 +1,192 @@
terraform {
required_providers {
coder = {
source = "coder/coder"
}
nomad = {
source = "hashicorp/nomad"
}
}
}

variable "nomad_provider_address" {
type = string
description = "Nomad provider address. e.g., http://IP:PORT"
default = "http://localhost:4646"
}

variable "nomad_provider_http_auth" {
type = string
description = "Nomad provider http_auth in the form of `user:password`"
sensitive = true
default = ""
}

provider "coder" {}

provider "nomad" {
address = var.nomad_provider_address
http_auth = var.nomad_provider_http_auth == "" ? null : var.nomad_provider_http_auth
}

data "coder_parameter" "cpu" {
name = "cpu"
display_name = "CPU"
description = "The number of CPU cores"
default = "1"
icon = "/icon/memory.svg"
mutable = true
option {
name = "1 Core"
value = "1"
}
option {
name = "2 Cores"
value = "2"
}
option {
name = "3 Cores"
value = "3"
}
option {
name = "4 Cores"
value = "4"
}
}

data "coder_parameter" "memory" {
name = "memory"
display_name = "Memory"
description = "The amount of memory in GB"
default = "2"
icon = "/icon/memory.svg"
mutable = true
option {
name = "2 GB"
value = "2"
}
option {
name = "4 GB"
value = "4"
}
option {
name = "6 GB"
value = "6"
}
option {
name = "8 GB"
value = "8"
}
}

data "coder_workspace" "me" {}

resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
startup_script_timeout = 180
startup_script = <<-EOT
set -e
# install and start code-server
curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server
/tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
EOT

metadata {
display_name = "Load Average (Host)"
key = "load_host"
# get load avg scaled by number of cores
script = <<EOT
echo "`cat /proc/loadavg | awk '{ print $1 }'` `nproc`" | awk '{ printf "%0.2f", $1/$2 }'
EOT
interval = 60
timeout = 1
}
}

# code-server
resource "coder_app" "code-server" {
agent_id = coder_agent.main.id
slug = "code-server"
display_name = "code-server"
icon = "/icon/code.svg"
url = "http://localhost:13337?folder=/home/coder"
subdomain = false
share = "owner"

healthcheck {
url = "http://localhost:13337/healthz"
interval = 3
threshold = 10
}
}

locals {
workspace_tag = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
home_volume_name = "coder_${data.coder_workspace.me.id}_home"
}

resource "nomad_namespace" "coder_workspace" {
name = local.workspace_tag
description = "Coder workspace"
meta = {
owner = data.coder_workspace.me.owner
}
}

data "nomad_plugin" "hostpath" {
plugin_id = "hostpath"
wait_for_healthy = true
}

resource "nomad_csi_volume" "home_volume" {
depends_on = [data.nomad_plugin.hostpath]

lifecycle {
ignore_changes = all
}
plugin_id = "hostpath"
volume_id = local.home_volume_name
name = local.home_volume_name
namespace = nomad_namespace.coder_workspace.name

capability {
access_mode = "single-node-writer"
attachment_mode = "file-system"
}

mount_options {
fs_type = "ext4"
}
}

resource "nomad_job" "workspace" {
count = data.coder_workspace.me.start_count
depends_on = [nomad_csi_volume.home_volume]
jobspec = templatefile("${path.module}/workspace.nomad.tpl", {
coder_workspace_owner = data.coder_workspace.me.owner
coder_workspace_name = data.coder_workspace.me.name
workspace_tag = local.workspace_tag
cores = tonumber(data.coder_parameter.cpu.value)
memory_mb = tonumber(data.coder_parameter.memory.value * 1024)
coder_init_script = coder_agent.main.init_script
coder_agent_token = coder_agent.main.token
workspace_name = data.coder_workspace.me.name
home_volume_name = local.home_volume_name
})
deregister_on_destroy = true
purge_on_destroy = true
}

resource "coder_metadata" "workspace_info" {
count = data.coder_workspace.me.start_count
resource_id = nomad_job.workspace[0].id
item {
key = "CPU (Cores)"
value = data.coder_parameter.cpu.value
}
item {
key = "Memory (GiB)"
value = data.coder_parameter.memory.value
}
}
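
The `load_host` metadata script in `coder_agent.main` above divides the 1-minute load average from `/proc/loadavg` by the core count from `nproc`. The same arithmetic with fixed, illustrative inputs in place of the live values:

```shell
# Simulate the load_host metadata script with fixed inputs:
# a 1-minute load average of 0.52 on a 4-core host.
load_1m=0.52
cores=4
echo "$load_1m $cores" | awk '{ printf "%0.2f", $1/$2 }'   # -> 0.13
```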
53 changes: 53 additions & 0 deletions examples/templates/nomad-docker/workspace.nomad.tpl
@@ -0,0 +1,53 @@
job "workspace" {
datacenters = ["dc1"]
namespace = "${workspace_tag}"
type = "service"
group "workspace" {
volume "home_volume" {
type = "csi"
source = "${home_volume_name}"
read_only = false
attachment_mode = "file-system"
access_mode = "single-node-writer"
}
network {
port "http" {}
}
task "workspace" {
driver = "docker"
config {
image = "codercom/enterprise-base:ubuntu"
ports = ["http"]
labels {
name = "${workspace_tag}"
managed_by = "coder"
}
hostname = "${workspace_name}"
entrypoint = ["sh", "-c", "sudo chown coder:coder -R /home/coder && echo '${base64encode(coder_init_script)}' | base64 --decode | sh"]
}
volume_mount {
volume = "home_volume"
destination = "/home/coder"
}
resources {
cores = ${cores}
memory = ${memory_mb}
}
env {
CODER_AGENT_TOKEN = "${coder_agent_token}"
}
meta {
tag = "${workspace_tag}"
managed_by = "coder"
}
}
meta {
tag = "${workspace_tag}"
managed_by = "coder"
}
}
meta {
tag = "${workspace_tag}"
managed_by = "coder"
}
}
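
The `entrypoint` in the jobspec above ships the agent init script base64-encoded (via Terraform's `base64encode`) so the script survives shell quoting inside the `sh -c` string, then decodes and runs it in the container. The round-trip, sketched with a stand-in script in place of the real `coder_init_script`:

```shell
# Stand-in for coder_init_script; the real script is generated by Coder.
init_script='echo "agent started"'

# What base64encode() produces in the rendered jobspec...
encoded=$(printf '%s' "$init_script" | base64)

# ...and what the container entrypoint does with it.
printf '%s' "$encoded" | base64 --decode | sh   # prints "agent started"
```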
2 changes: 2 additions & 0 deletions site/static/icon/nomad.svg