
Commit d9a7501
Merge remote-tracking branch 'upstream/master'
Conflicts:
	group_vars/all.sample
	roles/cdh_hadoop_config/tasks/main.yml
	roles/cdh_hbase_config/templates/regionservers
	roles/common/tasks/main.yml
	roles/common/templates/hosts
	roles/oracle_jdk/files/install_debian_webupd8team_repo.sh
	roles/oracle_jdk/tasks/main.yml
	roles/presto_common/tasks/main.yml
	roles/td_agent/files/install_debian_libssl0.9.8.sh
	roles/td_agent/tasks/main.yml
	roles/td_agent/templates/td-agent.conf
	site.sh
2 parents ad2bf7a + 1301c13 commit d9a7501

68 files changed: +457 -214 lines changed

.travis.yml

Lines changed: 5 additions & 2 deletions
@@ -2,10 +2,13 @@ language: python
 python: '2.6'
 addons:
   firefox: "25.0.1"
+cache:
+  directories:
+  - $HOME/.pip-cache/
 install:
 - pip install python-keyczar==0.71c
-- pip install ansible
-- sudo pip install dopy
+- pip install ansible --download-cache $HOME/.pip-cache
+- pip install dopy --download-cache $HOME/.pip-cache
 before_script:
 - ansible-playbook -i localhost --extra-vars "api_key_password=$DO_API_KEY client_id=$DO_CLIENT_ID" do_cluster.yml
 - "export DISPLAY=:99.0"

README.md

Lines changed: 17 additions & 8 deletions
@@ -1,18 +1,18 @@
-Hadoop Ansible Playbook [![Build Status](https://travis-ci.org/analytically/hadoop-ansible.png)](https://travis-ci.org/analytically/hadoop-ansible) [![Bitdeli Badge](https://d2weczhvl823v0.cloudfront.net/analytically/hadoop-ansible/trend.png)](https://bitdeli.com/free "Bitdeli Badge")
+Hadoop Ansible Playbook [![Build Status](https://travis-ci.org/analytically/hadoop-ansible.svg?branch=master)](https://travis-ci.org/analytically/hadoop-ansible)
 =======================
 
-[Ansible](http://www.ansibleworks.com/) playbook that installs a CDH 4.5 [Hadoop](http://hadoop.apache.org/)
+[Ansible](http://www.ansibleworks.com/) playbook that installs a CDH 4.6.0 [Hadoop](http://hadoop.apache.org/)
 cluster (running on Java 7, supported from [CDH 4.4](http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Release-Notes/Whats_New_in_4-4.html)),
 with [HBase](http://hbase.apache.org/), Hive, [Presto](http://prestodb.io/) for analytics, and [Ganglia](http://ganglia.sourceforge.net/),
 [Smokeping](http://oss.oetiker.ch/smokeping/), [Fluentd](http://fluentd.org/), [Elasticsearch](http://www.elasticsearch.org/)
 and [Kibana](http://www.elasticsearch.org/overview/kibana/) for monitoring and centralized log indexing.
 
-Hire/Follow [@analytically](http://twitter.com/analytically). Browse the CI [build screenshots](http://hadoop-ansible.s3-website-us-east-1.amazonaws.com/#artifacts/).
+Follow [@analytically](http://twitter.com/analytically). Browse the CI [build screenshots](http://hadoop-ansible.s3-website-us-east-1.amazonaws.com/#artifacts/).
 
 ### Requirements
 
-- [Ansible](http://www.ansibleworks.com/) 1.4 or later (`pip install ansible`)
-- 6 + 1 Ubuntu 12.04 LTS, 13.04 or 13.10 hosts - see [ubuntu-netboot-tftp](https://github.com/analytically/ubuntu-netboot-tftp) if you need automated server installation
+- [Ansible](http://www.ansibleworks.com/) 1.5 or later (`pip install ansible`)
+- 6 + 1 Ubuntu 12.04 LTS/13.04/13.10 or Debian "wheezy" hosts - see [ubuntu-netboot-tftp](https://github.com/analytically/ubuntu-netboot-tftp) if you need automated server installation
 - [Mandrill](http://mandrill.com/) username and API key for sending email notifications
 - `ansibler` user in sudo group without sudo password prompt (see Bootstrapping section below)
 
@@ -59,6 +59,9 @@ Required:
 
 Optional:
 
+- Network interface: if you'd like to use a different IP address per host (eg. internal interface), change `site.yml` and
+  change `set_fact: ipv4_address=...` to determine the correct IP address to use per host. If this fact is not set,
+  `ansible_default_ipv4.address` will be used.
 - Email notification: `notify_email`, `postfix_domain`, `mandrill_username`, `mandrill_api_key`
 - [`roles/common`](roles/common/defaults/main.yml): `kernel_swappiness`(0), `nofile` limits, ntp servers and `rsyslog_polling_interval_secs`(10)
 - [`roles/2_aggregated_links`](roles/2_aggregated_links/defaults/main.yml): `bond_mode` (balance-alb) and `mtu` (9216)
@@ -123,8 +126,14 @@ After the installation, go here:
 - Ganglia at [monitor01/ganglia](http://monitor01/ganglia/)
 - Kibana at [monitor01/kibana/index.html#/dashboard/file/logstash.json](http://monitor01/kibana/index.html#/dashboard/file/logstash.json)
 - Smokeping at [monitor01/smokeping/smokeping.cgi](http://monitor01/smokeping/smokeping.cgi)
-- hmaster01 at [hmaster01:50070](http://hmaster01:50070) - active namenode
-- hmaster02 at [hmaster02:50070](http://hmaster02:50070) - standby namenode
+- Hadoop Active Namenode at [hmaster01:50070](http://hmaster01:50070)
+- Hadoop Standby Namenode at [hmaster02:50070](http://hmaster02:50070)
+- Presto Coordinator at [hmaster02:8081](http://hmaster02:8081)
+- Hadoop Job History at [hslave01:19888](http://hslave01:19888)
+- Hadoop Node Manager at [hslave01:8042](http://hslave01:8042)
+- ElasticSearch at [monitor01:9200](http://monitor01:9200/)
+- Hive CLI at hmaster02 - `ssh hmaster02; hive`
+- Also see [cloudera.com](http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_ports_cdh4.html) - CDH4 Ports
 
 ### Performance testing
 
@@ -142,7 +151,7 @@ Instructions on how to test the performance of your CDH4 cluster.
 
 ##### DFSIO
 
-- `hadoop jar hadoop-mapreduce-client-jobclient-2.0.0-cdh4.5.0-tests.jar TestDFSIO -write`
+- `hadoop jar hadoop-mapreduce-client-jobclient-2.0.0-cdh4.6.0-tests.jar TestDFSIO -write`
 
 ### Bootstrapping

ansible.cfg

Lines changed: 6 additions & 1 deletion
@@ -1,2 +1,7 @@
 [defaults]
-timeout = 20
+timeout=20
+forks=20
+
+[ssh_connection]
+ssh_args=-o ControlMaster=auto -o ControlPersist=1800s -o ForwardAgent=yes
+pipelining=True

bootstrap/ansible.cfg

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+[defaults]
+timeout=20
+forks=20
+
+[ssh_connection]
+ssh_args=-o ControlMaster=auto -o ControlPersist=1800s
+pipelining=True

bootstrap/bootstrap.sh

Lines changed: 2 additions & 1 deletion
@@ -7,7 +7,8 @@ fi
 
 if [ ! -f "hosts" ]; then
   echo "Please create a hosts inventory file (see hosts.sample)."
-  exit
+  exit
+
 fi
 
 export ANSIBLE_HOST_KEY_CHECKING=False

bootstrap/bootstrap.yml

Lines changed: 2 additions & 5 deletions
@@ -19,13 +19,10 @@
   hostname: name={{ inventory_hostname }}
 
 - name: create user 'ansibler'
-  user: name=ansibler groups=sudo generate_ssh_key=yes shell=/bin/bash
+  user: name=ansibler groups=sudo shell=/bin/bash
 
 - name: add 'ansibler' RSA SSH key
   authorized_key: user=ansibler key="{{ authorized_rsa_key }}"
 
 - name: change sudoers to contains NOPASSWD for sudo group
-  shell: "creates=/etc/sudoers.bak chdir=/etc cp sudoers sudoers.bak && sed -ri -e 's/(%sudo\\s+ALL=\\(ALL:ALL\\))\\s+ALL/\\1 NOPASSWD: ALL/' /etc/sudoers"
-
-- name: install python-keyczar via apt (for Ansible's Accelerated Mode)
-  apt: pkg=python-keyczar
+  shell: "creates=/etc/sudoers.bak chdir=/etc cp sudoers sudoers.bak && sed -ri -e 's/(%sudo\\s+ALL=\\(ALL:ALL\\))\\s+ALL/\\1 NOPASSWD: ALL/' /etc/sudoers"
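The sed expression in the "change sudoers" task rewrites the distribution's default `%sudo` rule so members of the sudo group get passwordless sudo. A minimal Python sketch of the same substitution (the sample sudoers line is hypothetical, mirroring the Ubuntu default):

```python
import re

# The regex from the playbook's sed: capture the "%sudo ALL=(ALL:ALL)" prefix
# and replace the trailing "ALL" with "NOPASSWD: ALL".
pattern = r"(%sudo\s+ALL=\(ALL:ALL\))\s+ALL"
line = "%sudo   ALL=(ALL:ALL) ALL"  # hypothetical default sudoers entry

print(re.sub(pattern, r"\1 NOPASSWD: ALL", line))
# -> %sudo   ALL=(ALL:ALL) NOPASSWD: ALL
```

The `creates=/etc/sudoers.bak` guard makes the task run only once: the backup file doubles as a marker that the substitution has already been applied.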

do_cluster.yml

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
     api_key={{ api_key_password }}
     size_id=62
     region_id=4
-    image_id=1505699
+    image_id=3101918
   register: hosts
   with_items:
   - monitor01

group_vars/all.sample

Lines changed: 10 additions & 0 deletions
@@ -10,3 +10,13 @@ mtu: 9216
 # mandrill_username: your_username
 # mandrill_api_key: your_api_key
 
+# Upgrade kernel to 3.13, much improved epoll performance
+upgrade_kernel: no
+
+# replace the /etc/hosts file with the hosts and ip addresses of the cluster
+muck_up_hosts: yes
+
+# use the IP address at this index:
+# 0 is the first one, 1 is the second one,
+# -2 is the second to last one, -1 is the last one
+ansible_all_ipv4_addresses_index: 0
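The new `ansible_all_ipv4_addresses_index` variable indexes into a host's `ansible_all_ipv4_addresses` fact with Python-style list indexing, so negative values count from the end. A sketch with hypothetical addresses:

```python
# Hypothetical value of a host's ansible_all_ipv4_addresses fact.
addresses = ["10.0.0.5", "192.168.1.5", "172.16.0.5"]

assert addresses[0] == "10.0.0.5"      # 0 is the first one
assert addresses[1] == "192.168.1.5"   # 1 is the second one
assert addresses[-2] == "192.168.1.5"  # -2 is the second to last one
assert addresses[-1] == "172.16.0.5"   # -1 is the last one
```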

hosts.sample

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ hslave[01:04]
 datanodes
 
 [historyserver]
-hslave01
+hmaster01
 
 # HBase Nodes
 # ===========

localhost

Lines changed: 0 additions & 2 deletions
This file was deleted.

roles/apache2/tasks/main.yml

Lines changed: 5 additions & 0 deletions
@@ -7,6 +7,11 @@
   - apache2
   tags: apache
 
+- name: delete default site
+  file: dest=/etc/apache2/sites-enabled/000-default.conf state=absent
+
 - name: configure apache2 so it doesn't complain 'can't determine fqdn'
   lineinfile: dest=/etc/apache2/apache2.conf regexp="{{ ansible_fqdn }}" line="ServerName {{ ansible_fqdn }}"
+  notify:
+  - reload apache config
   tags: apache

roles/cdh_hadoop_config/defaults/main.yml

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-# file: roles/cdh_hadoop_config/vars/main.yml
+# file: roles/cdh_hadoop_config/defaults/main.yml
 
 # The default block size for new files, in bytes - here 256 MB
 dfs_blocksize: 268435456

roles/cdh_hadoop_config/files/dfs.hosts.exclude

Whitespace-only changes.

roles/cdh_hadoop_config/handlers/main.yml

Lines changed: 17 additions & 0 deletions
@@ -12,3 +12,20 @@
 - name: restart hadoop-hdfs-journalnode
   service: name=hadoop-hdfs-journalnode state=restarted
   ignore_errors: yes
+
+- name: restart hadoop-mapreduce-historyserver
+  service: name=hadoop-mapreduce-historyserver state=restarted
+  ignore_errors: yes
+
+- name: restart hadoop-yarn-nodemanager
+  service: name=hadoop-yarn-nodemanager state=restarted
+  ignore_errors: yes
+
+- name: restart hadoop-yarn-resourcemanager
+  service: name=hadoop-yarn-resourcemanager state=restarted
+  ignore_errors: yes
+
+- name: refresh datanodes
+  sudo_user: hdfs
+  command: hdfs dfsadmin -refreshNodes
+  ignore_errors: yes

roles/cdh_hadoop_config/tasks/main.yml

Lines changed: 14 additions & 3 deletions
@@ -2,7 +2,7 @@
 # file: roles/cdh_hadoop_config/tasks/main.yml
 
 - name: copy /etc/hadoop/conf.empty to /etc/hadoop/conf.{{ site_name|lower }}
-  shell: creates=/etc/hadoop/conf.{{ site_name|lower }} cp -R -p /etc/hadoop/conf.empty /etc/hadoop/conf.{{ site_name|lower }}
+  command: creates=/etc/hadoop/conf.{{ site_name|lower }} cp -R -p /etc/hadoop/conf.empty /etc/hadoop/conf.{{ site_name|lower }}
   tags:
   - hadoop
   - configuration
@@ -23,14 +23,25 @@
   - restart hadoop-hdfs-namenode
   - restart hadoop-hdfs-journalnode
   - restart hadoop-hdfs-datanode
+  - restart hadoop-mapreduce-historyserver
+  - restart hadoop-yarn-nodemanager
+  - restart hadoop-yarn-resourcemanager
+  tags:
+  - hadoop
+  - configuration
+
+- name: update excluded datanodes
+  copy: src=dfs.hosts.exclude dest=/etc/hadoop/conf/dfs.hosts.exclude owner=root group=root mode=644
+  notify:
+  - refresh datanodes
   tags:
   - hadoop
   - configuration
 
 - name: run 'update-alternatives' to install hadoop configuration
-  shell: update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.{{ site_name|lower }} 50
+  command: update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.{{ site_name|lower }} 50
   tags: hadoop
 
 - name: run 'update-alternatives' to set hadoop configuration
-  shell: update-alternatives --set hadoop-conf /etc/hadoop/conf.{{ site_name|lower }}
+  command: update-alternatives --set hadoop-conf /etc/hadoop/conf.{{ site_name|lower }}
   tags: hadoop

roles/cdh_hadoop_config/templates/core-site.xml

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 
   <property>
     <name>fs.defaultFS</name>
-    <value>hdfs://{{ hostvars[groups['namenodes'][0]]['ansible_fqdn'] }}</value>
+    <value>hdfs://{{ site_name|lower }}</value>
     <final>true</final>
   </property>
 
roles/cdh_hadoop_config/templates/hadoop-metrics2.properties

Lines changed: 7 additions & 7 deletions
@@ -35,22 +35,22 @@
 *.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
 
 # journalnodes
-journalnode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+journalnode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # namenodes
-namenode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+namenode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # datanodes
-datanode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+datanode.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # jobtrackers
-jobtracker.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+jobtracker.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # tasktrackers
-tasktracker.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+tasktracker.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # maptasks
-maptask.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+maptask.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
 
 # reducetasks
-reducetask.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}:8649{% if not loop.last %},{% endif %}{% endfor %}
+reducetask.sink.ganglia.servers={% for host in groups['namenodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649{% if not loop.last %},{% endif %}{% endfor %}
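The template change swaps the hard-coded `ansible_default_ipv4.address` lookup for Jinja2's `default` filter, so a host's `ipv4_address` fact (set via `set_fact` in `site.yml`) wins when present and the default interface address is the fallback. A Python sketch of that fallback plus the comma-joined server list (the hostvars here are hypothetical stand-ins for real inventory facts):

```python
# Hypothetical hostvars: hmaster01 has an explicit ipv4_address fact,
# hmaster02 only has the default-interface fact.
hostvars = {
    "hmaster01": {"ipv4_address": "10.10.0.1",
                  "ansible_default_ipv4": {"address": "192.168.0.1"}},
    "hmaster02": {"ansible_default_ipv4": {"address": "192.168.0.2"}},
}

def ganglia_servers(hosts, port=8649):
    # Mirrors {{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}:8649
    # joined with commas, as the template's for-loop emits.
    return ",".join(
        f'{hostvars[h].get("ipv4_address", hostvars[h]["ansible_default_ipv4"]["address"])}:{port}'
        for h in hosts)

print(ganglia_servers(["hmaster01", "hmaster02"]))
# -> 10.10.0.1:8649,192.168.0.2:8649
```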

roles/cdh_hadoop_config/templates/hdfs-site.xml

Lines changed: 6 additions & 1 deletion
@@ -4,6 +4,11 @@
 <!-- {{ ansible_managed }} -->
 
 <configuration>
+  <property>
+    <name>dfs.hosts.exclude</name>
+    <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
+  </property>
+
   <!-- common server name -->
   <property>
     <name>dfs.nameservices</name>
@@ -26,7 +31,7 @@
 {% for host in groups['namenodes'] %}
   <property>
     <name>dfs.namenode.http-address.{{ site_name|lower }}.nn{{ loop.index }}</name>
-    <value>0.0.0.0:50070</value>
+    <value>{{ host }}:50070</value>
   </property>
 {% endfor %}
 

roles/cdh_hadoop_config/templates/mapred-site.xml

Lines changed: 11 additions & 0 deletions
@@ -9,6 +9,11 @@
     <value>yarn</value>
   </property>
 
+  <property>
+    <name>yarn.app.mapreduce.am.staging-dir</name>
+    <value>/user</value>
+  </property>
+
   <property>
     <name>mapreduce.jobhistory.address</name>
     <value>{{ hostvars[groups['historyserver'][0]]['ansible_fqdn'] }}:10020</value>
@@ -73,4 +78,10 @@
     <name>mapreduce.output.fileoutputformat.compress.type</name>
     <value>BLOCK</value>
   </property>
+
+  <property>
+    <name>mapreduce.map.java.opts</name>
+    <value>-Xmx1024m</value>
+    <description>Higher Java heap for mapper to work.</description>
+  </property>
 </configuration>
Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
-{% for host in groups['datanodes'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}
+{% for host in groups['datanodes'] %}{{ hostvars[host].ipv4_address|default(hostvars[host].ansible_default_ipv4.address) }}
 {% endfor %}

roles/cdh_hadoop_config/templates/yarn-site.xml

Lines changed: 16 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -4,6 +4,22 @@
44
<!-- {{ ansible_managed }} -->
55

66
<configuration>
7+
<!-- CPU Cores -->
8+
<property>
9+
<name>yarn.nodemanager.resource.cpu-vcores</name>
10+
<value>{{ ansible_processor_count * ansible_processor_cores * ansible_processor_threads_per_core }}</value>
11+
</property>
12+
13+
<!-- Memory limits -->
14+
<property>
15+
<name>yarn.scheduler.maximum-allocation-mb</name>
16+
<value>{{ ansible_memtotal_mb - 1024 }}</value>
17+
</property>
18+
<property>
19+
<name>yarn.nodemanager.resource.memory-mb</name>
20+
<value>{{ ansible_memtotal_mb - 1024 }}</value>
21+
</property>
22+
723
<property>
824
<name>yarn.resourcemanager.resource-tracker.address</name>
925
<value>{{ hostvars[groups['resourcemanager'][0]]['ansible_fqdn'] }}:8031</value>
@@ -56,10 +72,6 @@
5672
<name>yarn.nodemanager.remote-app-log-dir</name>
5773
<value>/var/log/hadoop-yarn/apps</value>
5874
</property>
59-
<property>
60-
<name>yarn.app.mapreduce.am.staging-dir</name>
61-
<value>/user</value>
62-
</property>
6375

6476
<!-- Fair scheduling is a method of assigning resources to jobs such that all jobs get, on average, an equal
6577
share of resources over time. When there is a single job running, that job uses the entire cluster. -->
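The new yarn-site.xml properties size each NodeManager from Ansible's hardware facts: vcores is the full logical-CPU count, and container memory is total RAM minus 1 GB headroom. Worked through with hypothetical fact values:

```python
# Hypothetical Ansible hardware facts for one slave node.
ansible_processor_count = 2             # physical sockets
ansible_processor_cores = 8             # cores per socket
ansible_processor_threads_per_core = 2  # hyperthreading
ansible_memtotal_mb = 65536             # 64 GB RAM

# yarn.nodemanager.resource.cpu-vcores
vcores = (ansible_processor_count * ansible_processor_cores
          * ansible_processor_threads_per_core)

# yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb:
# leave 1 GB for the OS and Hadoop daemons.
container_mb = ansible_memtotal_mb - 1024

print(vcores, container_mb)  # 32 64512
```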
Lines changed: 3 additions & 4 deletions
@@ -1,6 +1,5 @@
 ---
-# file: roles/cdh_hbase_config/vars/main.yml
+# file: roles/cdh_hbase_config/defaults/main.yml
 
-hbase:
-  # The HBase heap size
-  heapsize: 8192
+# The HBase heap size
+hbase_heapsize: 4096

roles/cdh_hbase_config/tasks/main.yml

Lines changed: 3 additions & 3 deletions
@@ -2,7 +2,7 @@
 # file: roles/cdh_hbase_config/tasks/main.yml
 
 - name: copy /etc/hbase/conf.empty to /etc/hbase/conf.{{ site_name|lower }}
-  shell: creates=/etc/hbase/conf.{{ site_name|lower }} cp -R -p /etc/hbase/conf.dist /etc/hbase/conf.{{ site_name|lower }}
+  command: creates=/etc/hbase/conf.{{ site_name|lower }} cp -R -p /etc/hbase/conf.dist /etc/hbase/conf.{{ site_name|lower }}
   tags: hbase
 
 - name: configure HBase in /etc/hbase/conf.{{ site_name|lower }}
@@ -19,9 +19,9 @@
   - configuration
 
 - name: run 'update-alternatives' to install HBase configuration
-  shell: update-alternatives --install /etc/hbase/conf hbase-conf /etc/hbase/conf.{{ site_name|lower }} 50
+  command: update-alternatives --install /etc/hbase/conf hbase-conf /etc/hbase/conf.{{ site_name|lower }} 50
   tags: hbase
 
 - name: run 'update-alternatives' to set HBase configuration
-  shell: update-alternatives --set hbase-conf /etc/hbase/conf.{{ site_name|lower }}
+  command: update-alternatives --set hbase-conf /etc/hbase/conf.{{ site_name|lower }}
   tags: hbase

0 commit comments