juju-test.conductor DEBUG : Starting a bootstrap for osci-sv07, kill after 300
juju-test.conductor DEBUG : Running the following: juju bootstrap -e osci-sv07
Bootstrapping environment "osci-sv07"
Starting new instance for initial state server
Launching instance
- f38abc16-f540-4ce2-9805-e71d287da7ec
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 172.17.107.71:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: rsyslog-gnutls
Installing package: cloud-utils
Installing package: cloud-image-utils
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/releases/juju-1.23.3-trusty-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap complete
juju-test.conductor DEBUG : Waiting for bootstrap
juju-test.conductor DEBUG : Still not bootstrapped
juju-test.conductor DEBUG : Running the following: juju status -e osci-sv07
juju-test.conductor DEBUG : State for 1.23.3: started
juju-test.conductor.014-basic-precise-icehouse DEBUG : Running 014-basic-precise-icehouse (tests/014-basic-precise-icehouse)
2015-06-16 13:35:46 Starting deployment of osci-sv07
2015-06-16 13:36:23 Deploying services...
2015-06-16 13:36:23 Deploying service ceph using local:precise/ceph
2015-06-16 13:36:31 Deploying service cinder using local:precise/cinder
2015-06-16 13:36:36 Deploying service glance using local:precise/glance
2015-06-16 13:36:41 Deploying service keystone using local:precise/keystone
2015-06-16 13:36:46 Deploying service mysql using local:precise/mysql
2015-06-16 13:36:51 Deploying service nova-compute using local:precise/nova-compute
2015-06-16 13:36:56 Deploying service rabbitmq-server using local:precise/rabbitmq-server
2015-06-16 13:42:16 Adding relations...
2015-06-16 13:42:17 Adding relation glance:shared-db <-> mysql:shared-db
2015-06-16 13:42:17 Adding relation cinder:ceph <-> ceph:client
2015-06-16 13:42:17 Adding relation nova-compute:amqp <-> rabbitmq-server:amqp
2015-06-16 13:42:18 Adding relation cinder:amqp <-> rabbitmq-server:amqp
2015-06-16 13:42:18 Adding relation cinder:identity-service <-> keystone:identity-service
2015-06-16 13:42:18 Adding relation glance:identity-service <-> keystone:identity-service
2015-06-16 13:42:19 Adding relation nova-compute:image-service <-> glance:image-service
2015-06-16 13:42:19 Adding relation nova-compute:shared-db <-> mysql:shared-db
2015-06-16 13:42:19 Adding relation glance:ceph <-> ceph:client
2015-06-16 13:42:20 Adding relation nova-compute:ceph <-> ceph:client
2015-06-16 13:42:20 Adding relation glance:amqp <-> rabbitmq-server:amqp
2015-06-16 13:42:20 Adding relation keystone:shared-db <-> mysql:shared-db
2015-06-16 13:42:21 Adding relation cinder:image-service <-> glance:image-service
2015-06-16 13:42:21 Adding relation cinder:shared-db <-> mysql:shared-db
2015-06-16 13:43:24 Deployment complete in 458.43 seconds
juju-test.conductor.014-basic-precise-icehouse DEBUG : 2015-06-16 13:35:18,697 get_ubuntu_releases DEBUG: Ubuntu release list: ['warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid']
2015-06-16 13:46:16,535 _initialize_tests DEBUG: openstack release val: 4
2015-06-16 13:46:16,537 _initialize_tests DEBUG: openstack release str: icehouse
2015-06-16 13:46:46,554 authenticate_keystone_admin DEBUG: Authenticating keystone admin...
2015-06-16 13:46:57,963 authenticate_glance_admin DEBUG: Authenticating glance admin...
2015-06-16 13:46:57,964 authenticate_nova_user DEBUG: Authenticating nova user (admin)...
2015-06-16 13:46:57,965 tenant_exists DEBUG: Checking if tenant exists (demoTenant)...
2015-06-16 13:46:58,129 authenticate_keystone_user DEBUG: Authenticating keystone user (demoUser)...
2015-06-16 13:46:58,232 authenticate_nova_user DEBUG: Authenticating nova user (demoUser)...
2015-06-16 13:46:58,233 validate_services_by_name DEBUG: Checking status of system services...
2015-06-16 13:46:59,236 get_ubuntu_release_from_sentry DEBUG: cinder/0 lsb_release: precise
2015-06-16 13:47:00,174 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-api` returned 0
2015-06-16 13:47:01,108 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-scheduler` returned 0
2015-06-16 13:47:02,066 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-volume` returned 0
2015-06-16 13:47:03,139 get_ubuntu_release_from_sentry DEBUG: ceph/2 lsb_release: precise
2015-06-16 13:47:04,212 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-mon-all` returned 0
2015-06-16 13:47:05,135 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 13:47:06,109 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd-all` returned 0
2015-06-16 13:47:07,037 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:08,000 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:08,959 get_ubuntu_release_from_sentry DEBUG: rabbitmq-server/0 lsb_release: precise
2015-06-16 13:47:10,130 validate_services_by_name DEBUG: rabbitmq-server/0 `sudo service rabbitmq-server status` returned 0
2015-06-16 13:47:11,129 get_ubuntu_release_from_sentry DEBUG: ceph/0 lsb_release: precise
2015-06-16 13:47:12,255 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-mon-all` returned 0
2015-06-16 13:47:13,211 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 13:47:14,129 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd-all` returned 0
2015-06-16 13:47:15,080 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:16,077 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:17,008 get_ubuntu_release_from_sentry DEBUG: keystone/0 lsb_release: precise
2015-06-16 13:47:17,928 validate_services_by_name DEBUG: keystone/0 `sudo status keystone` returned 0
2015-06-16 13:47:18,922 get_ubuntu_release_from_sentry DEBUG: nova-compute/0 lsb_release: precise
2015-06-16 13:47:20,027 validate_services_by_name DEBUG: nova-compute/0 `sudo status nova-compute` returned 0
2015-06-16 13:47:20,977 get_ubuntu_release_from_sentry DEBUG: ceph/1 lsb_release: precise
2015-06-16 13:47:22,029 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-mon-all` returned 0
2015-06-16 13:47:22,960 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 13:47:23,898 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd-all` returned 0
2015-06-16 13:47:24,910 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:25,863 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 13:47:26,959 get_ubuntu_release_from_sentry DEBUG: mysql/0 lsb_release: precise
2015-06-16 13:47:28,056 validate_services_by_name DEBUG: mysql/0 `sudo status mysql` returned 0
2015-06-16 13:47:29,033 get_ubuntu_release_from_sentry DEBUG: glance/0 lsb_release: precise
2015-06-16 13:47:30,116 validate_services_by_name DEBUG: glance/0 `sudo status glance-registry` returned 0
2015-06-16 13:47:31,084 validate_services_by_name DEBUG: glance/0 `sudo status glance-api` returned 0
2015-06-16 13:47:31,084 test_200_ceph_nova_client_relation DEBUG: Checking ceph:nova-compute ceph relation data...
2015-06-16 13:47:38,127 _validate_dict_data DEBUG: actual: {u'key': u'AQDHJ4BV8HgwORAAbVfGbB9hVGgTgftaV4fphQ==', u'private-address': u'172.17.107.72', u'ceph-public-address': u'172.17.107.72', u'auth': u'none'}
2015-06-16 13:47:38,127 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'auth': 'none'}
2015-06-16 13:47:38,128 test_201_nova_ceph_client_relation DEBUG: Checking nova-compute:ceph ceph-client relation data...
2015-06-16 13:47:43,925 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.79'}
2015-06-16 13:47:43,926 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>}
2015-06-16 13:47:43,926 test_202_ceph_glance_client_relation DEBUG: Checking ceph:glance client relation data...
2015-06-16 13:47:50,873 _validate_dict_data DEBUG: actual: {u'key': u'AQDJJ4BVIAGAExAAmyQdS4ADE2Y15WnNyfQleQ==', u'private-address': u'172.17.107.73', u'ceph-public-address': u'172.17.107.73', u'auth': u'none'}
2015-06-16 13:47:50,873 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'auth': 'none'}
2015-06-16 13:47:50,873 test_203_glance_ceph_client_relation DEBUG: Checking glance:ceph client relation data...
2015-06-16 13:47:55,940 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.76', u'broker_req': u'{"api-version": 1, "ops": [{"replicas": 3, "name": "glance", "op": "create-pool"}]}'}
2015-06-16 13:47:55,940 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>}
2015-06-16 13:47:55,941 test_204_ceph_cinder_client_relation DEBUG: Checking ceph:cinder ceph relation data...
2015-06-16 13:48:02,629 _validate_dict_data DEBUG: actual: {u'key': u'AQDEJ4BVSF90AhAAqOndOKsoy40x9KH/l6AtYA==', u'private-address': u'172.17.107.74', u'ceph-public-address': u'172.17.107.74', u'auth': u'none'}
2015-06-16 13:48:02,629 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>, 'auth': 'none'}
2015-06-16 13:48:02,630 test_205_cinder_ceph_client_relation DEBUG: Checking cinder:ceph ceph relation data...
2015-06-16 13:48:09,992 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.75', u'broker_req': u'{"api-version": 1, "ops": [{"replicas": 3, "name": "cinder", "op": "create-pool"}]}'}
2015-06-16 13:48:09,992 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ca0012790>>}
2015-06-16 13:48:09,993 test_300_ceph_config DEBUG: Checking ceph config file data...
2015-06-16 13:48:09,993 validate_config_data DEBUG: Validating config file data (mds in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 13:48:10,948 validate_config_data DEBUG: Validating config file data (global in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 13:48:11,869 validate_config_data DEBUG: Validating config file data (mon in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 13:48:12,818 validate_config_data DEBUG: Validating config file data (osd in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 13:48:13,805 test_302_cinder_rbd_config DEBUG: Checking cinder (rbd) config file data...
2015-06-16 13:48:13,805 validate_config_data DEBUG: Validating config file data (DEFAULT in /etc/cinder/cinder.conf on cinder/0)...
2015-06-16 13:48:14,741 test_304_glance_rbd_config DEBUG: Checking glance (rbd) config file data...
2015-06-16 13:48:14,741 validate_config_data DEBUG: Validating config file data (DEFAULT in /etc/glance/glance-api.conf on glance/0)...
2015-06-16 13:48:15,673 test_306_nova_rbd_config DEBUG: Checking nova (rbd) config file data...
2015-06-16 13:48:15,673 validate_config_data DEBUG: Validating config file data (libvirt in /etc/nova/nova.conf on nova-compute/0)...
2015-06-16 13:48:16,619 test_400_ceph_check_osd_pools DEBUG: Checking pools on ceph units...
2015-06-16 13:48:17,714 test_400_ceph_check_osd_pools DEBUG: ceph/0 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 13:48:17,714 test_400_ceph_check_osd_pools DEBUG: ceph/0 has the expected pools.
2015-06-16 13:48:18,820 test_400_ceph_check_osd_pools DEBUG: ceph/1 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 13:48:18,820 test_400_ceph_check_osd_pools DEBUG: ceph/1 has the expected pools.
2015-06-16 13:48:19,925 test_400_ceph_check_osd_pools DEBUG: ceph/2 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 13:48:19,925 test_400_ceph_check_osd_pools DEBUG: ceph/2 has the expected pools.
2015-06-16 13:48:19,925 test_400_ceph_check_osd_pools DEBUG: Pool list on all ceph units produced the same results (OK).
2015-06-16 13:48:19,925 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool original samples...
2015-06-16 13:48:21,065 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 0 objects, 0 kb used
2015-06-16 13:48:21,065 create_cinder_volume DEBUG: Creating volume (demo-vol|1GB)
2015-06-16 13:48:22,533 resource_reaches_status DEBUG: Create volume status wait: expected, actual status = available, available
2015-06-16 13:48:32,544 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool samples after volume create...
2015-06-16 13:48:33,745 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 3 objects, 1 kb used
2015-06-16 13:48:33,745 delete_resource DEBUG: Deleting OpenStack resource <Volume: f88148ff-a0b1-42fa-846e-e5b7d0ac015a> (cinder volume)
2015-06-16 13:48:34,023 delete_resource DEBUG: cinder volume delete check: 0 [1:1] <Volume: f88148ff-a0b1-42fa-846e-e5b7d0ac015a>
2015-06-16 13:48:38,055 delete_resource DEBUG: cinder volume: expected, actual count = 0, 0
2015-06-16 13:48:48,065 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool after volume delete...
2015-06-16 13:48:49,209 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 1 objects, 0 kb used
2015-06-16 13:48:49,209 _validate_pool_samples DEBUG: Ceph pool object count samples (OK): [0, 3, 1]
2015-06-16 13:48:49,209 _validate_pool_samples DEBUG: Ceph pool disk usage size samples (OK): [0, 1, 0]
2015-06-16 13:48:49,209 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool original samples...
2015-06-16 13:48:50,321 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 0 objects, 0 kb used
2015-06-16 13:48:50,321 create_cirros_image DEBUG: Creating glance image (cirros-image-1)...
2015-06-16 13:48:50,322 create_cirros_image DEBUG: AMULET_HTTP_PROXY: http://squid.internal:3128/
2015-06-16 13:49:09,529 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool samples after image create...
2015-06-16 13:49:10,751 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 5 objects, 12977 kb used
2015-06-16 13:49:10,751 delete_resource DEBUG: Deleting OpenStack resource <Image {u'status': u'active', u'created_at': u'2015-06-16T13:48:57', u'virtual_size': None, u'name': u'cirros-image-1', u'deleted': False, u'container_format': u'bare', u'min_ram': 0, u'disk_format': u'qcow2', u'updated_at': u'2015-06-16T13:48:59', u'properties': {}, u'min_disk': 0, u'protected': False, u'checksum': u'ee1eca47dc88f4879d8a229cc70a07c6', u'owner': u'654dc8e05ebc405285956faeb04b7381', u'is_public': True, u'deleted_at': None, u'id': u'f2a94e16-ea99-4038-b3e5-13a2c24a3d9e', u'size': 13287936}> (glance image)
2015-06-16 13:49:11,818 delete_resource DEBUG: glance image: expected, actual count = 0, 0
2015-06-16 13:49:21,825 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool samples after image delete...
2015-06-16 13:49:22,955 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 1 objects, 0 kb used
2015-06-16 13:49:22,956 _validate_pool_samples DEBUG: Ceph pool object count samples (OK): [0, 5, 1]
2015-06-16 13:49:22,956 _validate_pool_samples DEBUG: Ceph pool disk usage size samples (OK): [0, 12977, 0]
2015-06-16 13:49:22,956 check_commands_on_units DEBUG: Checking exit codes for 8 commands on 3 sentry units...
2015-06-16 13:49:24,056 check_commands_on_units DEBUG: ceph/0 `sudo ceph -s` returned 0 (OK)
2015-06-16 13:49:25,126 check_commands_on_units DEBUG: ceph/0 `sudo ceph health` returned 0 (OK)
2015-06-16 13:49:26,265 check_commands_on_units DEBUG: ceph/0 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 13:49:27,544 check_commands_on_units DEBUG: ceph/0 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 13:49:28,756 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 13:49:29,852 check_commands_on_units DEBUG: ceph/0 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 13:49:31,081 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 13:49:32,173 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd pool get data pg_num` returned 0 (OK)
2015-06-16 13:49:33,305 check_commands_on_units DEBUG: ceph/1 `sudo ceph -s` returned 0 (OK)
2015-06-16 13:49:34,371 check_commands_on_units DEBUG: ceph/1 `sudo ceph health` returned 0 (OK)
2015-06-16 13:49:35,475 check_commands_on_units DEBUG: ceph/1 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 13:49:36,568 check_commands_on_units DEBUG: ceph/1 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 13:49:37,632 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 13:49:38,681 check_commands_on_units DEBUG: ceph/1 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 13:49:39,761 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 13:49:40,826 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd pool get data pg_num` returned 0 (OK)
2015-06-16 13:49:41,966 check_commands_on_units DEBUG: ceph/2 `sudo ceph -s` returned 0 (OK)
2015-06-16 13:49:43,075 check_commands_on_units DEBUG: ceph/2 `sudo ceph health` returned 0 (OK)
2015-06-16 13:49:44,136 check_commands_on_units DEBUG: ceph/2 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 13:49:45,189 check_commands_on_units DEBUG: ceph/2 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 13:49:46,242 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 13:49:47,335 check_commands_on_units DEBUG: ceph/2 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 13:49:48,440 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 13:49:49,554 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd pool get data pg_num` returned 0 (OK)
juju-test.conductor.014-basic-precise-icehouse RESULT : PASS
juju-test.conductor DEBUG : Tearing down osci-sv07 juju environment
juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test.conductor DEBUG : Starting a bootstrap for osci-sv07, kill after 300
juju-test.conductor DEBUG : Running the following: juju bootstrap -e osci-sv07
Bootstrapping environment "osci-sv07"
Starting new instance for initial state server
Launching instance
- 68223fac-fb90-45ef-bc69-295b0a30d1b9
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 172.17.107.81:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: rsyslog-gnutls
Installing package: cloud-utils
Installing package: cloud-image-utils
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/releases/juju-1.23.3-trusty-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap complete
juju-test.conductor DEBUG : Waiting for bootstrap
juju-test.conductor DEBUG : Still not bootstrapped
juju-test.conductor DEBUG : Running the following: juju status -e osci-sv07
juju-test.conductor DEBUG : State for 1.23.3: started
juju-test.conductor.015-basic-trusty-icehouse DEBUG : Running 015-basic-trusty-icehouse (tests/015-basic-trusty-icehouse)
2015-06-16 13:52:50 Starting deployment of osci-sv07
2015-06-16 13:53:28 Deploying services...
2015-06-16 13:53:28 Deploying service ceph using local:trusty/ceph
2015-06-16 13:53:37 Deploying service cinder using local:trusty/cinder
2015-06-16 13:53:42 Deploying service glance using local:trusty/glance
2015-06-16 13:53:46 Deploying service keystone using local:trusty/keystone
2015-06-16 13:53:51 Deploying service mysql using local:trusty/mysql
2015-06-16 13:53:56 Deploying service nova-compute using local:trusty/nova-compute
2015-06-16 13:54:01 Deploying service rabbitmq-server using local:trusty/rabbitmq-server
2015-06-16 13:59:33 Adding relations...
2015-06-16 13:59:34 Adding relation glance:shared-db <-> mysql:shared-db
2015-06-16 13:59:34 Adding relation cinder:ceph <-> ceph:client
2015-06-16 13:59:34 Adding relation nova-compute:amqp <-> rabbitmq-server:amqp
2015-06-16 13:59:35 Adding relation cinder:amqp <-> rabbitmq-server:amqp
2015-06-16 13:59:35 Adding relation cinder:identity-service <-> keystone:identity-service
2015-06-16 13:59:35 Adding relation glance:identity-service <-> keystone:identity-service
2015-06-16 13:59:36 Adding relation nova-compute:image-service <-> glance:image-service
2015-06-16 13:59:36 Adding relation nova-compute:shared-db <-> mysql:shared-db
2015-06-16 13:59:36 Adding relation glance:ceph <-> ceph:client
2015-06-16 13:59:37 Adding relation nova-compute:ceph <-> ceph:client
2015-06-16 13:59:37 Adding relation glance:amqp <-> rabbitmq-server:amqp
2015-06-16 13:59:37 Adding relation keystone:shared-db <-> mysql:shared-db
2015-06-16 13:59:38 Adding relation cinder:image-service <-> glance:image-service
2015-06-16 13:59:38 Adding relation cinder:shared-db <-> mysql:shared-db
2015-06-16 14:00:41 Deployment complete in 471.01 seconds
juju-test.conductor.015-basic-trusty-icehouse DEBUG : 2015-06-16 13:52:25,797 get_ubuntu_releases DEBUG: Ubuntu release list: ['warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid']
2015-06-16 14:01:49,757 _initialize_tests DEBUG: openstack release val: 5
2015-06-16 14:01:49,758 _initialize_tests DEBUG: openstack release str: icehouse
2015-06-16 14:02:19,788 authenticate_keystone_admin DEBUG: Authenticating keystone admin...
2015-06-16 14:02:31,377 authenticate_glance_admin DEBUG: Authenticating glance admin...
2015-06-16 14:02:31,379 authenticate_nova_user DEBUG: Authenticating nova user (admin)...
2015-06-16 14:02:31,379 tenant_exists DEBUG: Checking if tenant exists (demoTenant)...
2015-06-16 14:02:31,548 authenticate_keystone_user DEBUG: Authenticating keystone user (demoUser)...
2015-06-16 14:02:31,654 authenticate_nova_user DEBUG: Authenticating nova user (demoUser)...
2015-06-16 14:02:31,655 validate_services_by_name DEBUG: Checking status of system services...
2015-06-16 14:02:32,669 get_ubuntu_release_from_sentry DEBUG: rabbitmq-server/0 lsb_release: trusty
2015-06-16 14:02:33,908 validate_services_by_name DEBUG: rabbitmq-server/0 `sudo service rabbitmq-server status` returned 0
2015-06-16 14:02:34,857 get_ubuntu_release_from_sentry DEBUG: ceph/0 lsb_release: trusty
2015-06-16 14:02:35,897 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-mon-all` returned 0
2015-06-16 14:02:36,806 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 14:02:37,715 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd-all` returned 0
2015-06-16 14:02:38,691 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:02:39,675 validate_services_by_name DEBUG: ceph/0 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:02:40,663 get_ubuntu_release_from_sentry DEBUG: keystone/0 lsb_release: trusty
2015-06-16 14:02:41,693 validate_services_by_name DEBUG: keystone/0 `sudo status keystone` returned 0
2015-06-16 14:02:42,803 get_ubuntu_release_from_sentry DEBUG: ceph/2 lsb_release: trusty
2015-06-16 14:02:43,945 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-mon-all` returned 0
2015-06-16 14:02:44,997 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 14:02:45,935 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd-all` returned 0
2015-06-16 14:02:46,894 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:02:47,836 validate_services_by_name DEBUG: ceph/2 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:02:49,825 get_ubuntu_release_from_sentry DEBUG: nova-compute/0 lsb_release: trusty
2015-06-16 14:02:50,886 validate_services_by_name DEBUG: nova-compute/0 `sudo status nova-compute` returned 0
2015-06-16 14:02:51,879 get_ubuntu_release_from_sentry DEBUG: mysql/0 lsb_release: trusty
2015-06-16 14:02:52,940 validate_services_by_name DEBUG: mysql/0 `sudo status mysql` returned 0
2015-06-16 14:02:53,926 get_ubuntu_release_from_sentry DEBUG: glance/0 lsb_release: trusty
2015-06-16 14:02:54,898 validate_services_by_name DEBUG: glance/0 `sudo status glance-registry` returned 0
2015-06-16 14:02:55,912 validate_services_by_name DEBUG: glance/0 `sudo status glance-api` returned 0
2015-06-16 14:02:56,905 get_ubuntu_release_from_sentry DEBUG: cinder/0 lsb_release: trusty
2015-06-16 14:02:57,898 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-api` returned 0
2015-06-16 14:02:58,807 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-scheduler` returned 0
2015-06-16 14:02:59,747 validate_services_by_name DEBUG: cinder/0 `sudo status cinder-volume` returned 0
2015-06-16 14:03:00,757 get_ubuntu_release_from_sentry DEBUG: ceph/1 lsb_release: trusty
2015-06-16 14:03:02,012 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-mon-all` returned 0
2015-06-16 14:03:03,214 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-mon id=`hostname`` returned 0
2015-06-16 14:03:04,209 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd-all` returned 0
2015-06-16 14:03:05,218 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==1 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:03:06,266 validate_services_by_name DEBUG: ceph/1 `sudo status ceph-osd id=`initctl list | grep 'ceph-osd ' | awk 'NR==2 { print $2 }' | grep -o '[0-9]*'`` returned 0
2015-06-16 14:03:06,266 test_200_ceph_nova_client_relation DEBUG: Checking ceph:nova-compute ceph relation data...
2015-06-16 14:03:13,175 _validate_dict_data DEBUG: actual: {u'key': u'AQDTK4BVyBmCLRAA2QjM3xtj4qGXQSi7BTRt9g==', u'private-address': u'172.17.107.82', u'ceph-public-address': u'172.17.107.82', u'auth': u'none'}
2015-06-16 14:03:13,175 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'auth': 'none'}
2015-06-16 14:03:13,175 test_201_nova_ceph_client_relation DEBUG: Checking nova-compute:ceph ceph-client relation data...
2015-06-16 14:03:19,049 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.89'}
2015-06-16 14:03:19,049 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>}
2015-06-16 14:03:19,049 test_202_ceph_glance_client_relation DEBUG: Checking ceph:glance client relation data...
2015-06-16 14:03:25,735 _validate_dict_data DEBUG: actual: {u'key': u'AQDUK4BVoAFrNBAAZwDAol1/iRWPpEO7qBjXeA==', u'private-address': u'172.17.107.83', u'ceph-public-address': u'172.17.107.83', u'auth': u'none'}
2015-06-16 14:03:25,735 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'auth': 'none'}
2015-06-16 14:03:25,736 test_203_glance_ceph_client_relation DEBUG: Checking glance:ceph client relation data...
2015-06-16 14:03:30,406 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.86', u'broker_req': u'{"api-version": 1, "ops": [{"replicas": 3, "name": "glance", "op": "create-pool"}]}'}
2015-06-16 14:03:30,406 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>}
2015-06-16 14:03:30,406 test_204_ceph_cinder_client_relation DEBUG: Checking ceph:cinder ceph relation data...
2015-06-16 14:03:37,079 _validate_dict_data DEBUG: actual: {u'key': u'AQDRK4BVSNwqOBAAyXYNnH0eNXqGJCWrkQ5NoA==', u'private-address': u'172.17.107.84', u'ceph-public-address': u'172.17.107.84', u'auth': u'none'}
2015-06-16 14:03:37,079 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'key': <bound method OpenStackAmuletUtils.not_null of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>, 'auth': 'none'}
2015-06-16 14:03:37,079 test_205_cinder_ceph_client_relation DEBUG: Checking cinder:ceph ceph relation data...
2015-06-16 14:03:43,617 _validate_dict_data DEBUG: actual: {u'private-address': u'172.17.107.85', u'broker_req': u'{"api-version": 1, "ops": [{"replicas": 3, "name": "cinder", "op": "create-pool"}]}'}
2015-06-16 14:03:43,617 _validate_dict_data DEBUG: expected: {'private-address': <bound method OpenStackAmuletUtils.valid_ip of <charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils object at 0x2b8ba8b89150>>}
2015-06-16 14:03:43,617 test_300_ceph_config DEBUG: Checking ceph config file data...
2015-06-16 14:03:43,617 validate_config_data DEBUG: Validating config file data (mds in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 14:03:44,506 validate_config_data DEBUG: Validating config file data (global in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 14:03:45,420 validate_config_data DEBUG: Validating config file data (mon in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 14:03:46,350 validate_config_data DEBUG: Validating config file data (osd in /etc/ceph/ceph.conf on ceph/0)...
2015-06-16 14:03:47,255 test_302_cinder_rbd_config DEBUG: Checking cinder (rbd) config file data...
2015-06-16 14:03:47,255 validate_config_data DEBUG: Validating config file data (DEFAULT in /etc/cinder/cinder.conf on cinder/0)...
2015-06-16 14:03:48,172 test_304_glance_rbd_config DEBUG: Checking glance (rbd) config file data...
2015-06-16 14:03:48,172 validate_config_data DEBUG: Validating config file data (DEFAULT in /etc/glance/glance-api.conf on glance/0)...
2015-06-16 14:03:49,086 test_306_nova_rbd_config DEBUG: Checking nova (rbd) config file data...
2015-06-16 14:03:49,086 validate_config_data DEBUG: Validating config file data (libvirt in /etc/nova/nova.conf on nova-compute/0)...
2015-06-16 14:03:50,040 test_400_ceph_check_osd_pools DEBUG: Checking pools on ceph units...
2015-06-16 14:03:51,102 test_400_ceph_check_osd_pools DEBUG: ceph/0 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 14:03:51,102 test_400_ceph_check_osd_pools DEBUG: ceph/0 has the expected pools.
2015-06-16 14:03:52,255 test_400_ceph_check_osd_pools DEBUG: ceph/1 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 14:03:52,256 test_400_ceph_check_osd_pools DEBUG: ceph/1 has the expected pools.
2015-06-16 14:03:53,404 test_400_ceph_check_osd_pools DEBUG: ceph/2 `sudo ceph osd lspools` returned 0 0 data,1 metadata,2 rbd,3 cinder,4 glance,
2015-06-16 14:03:53,404 test_400_ceph_check_osd_pools DEBUG: ceph/2 has the expected pools.
2015-06-16 14:03:53,404 test_400_ceph_check_osd_pools DEBUG: Pool list on all ceph units produced the same results (OK).
2015-06-16 14:03:53,404 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool original samples...
2015-06-16 14:03:54,536 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 0 objects, 0 kb used
2015-06-16 14:03:54,536 create_cinder_volume DEBUG: Creating volume (demo-vol|1GB)
2015-06-16 14:03:56,119 resource_reaches_status DEBUG: Create volume status wait: expected, actual status = available, available
2015-06-16 14:04:06,129 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool samples after volume create...
2015-06-16 14:04:07,274 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 3 objects, 1 kb used
2015-06-16 14:04:07,275 delete_resource DEBUG: Deleting OpenStack resource <Volume: cffc1b22-c537-4b37-92f8-dfe611d51307> (cinder volume)
2015-06-16 14:04:07,481 delete_resource DEBUG: cinder volume delete check: 0 [1:1] <Volume: cffc1b22-c537-4b37-92f8-dfe611d51307>
2015-06-16 14:04:11,509 delete_resource DEBUG: cinder volume: expected, actual count = 0, 0
2015-06-16 14:04:21,520 test_410_ceph_cinder_vol_create DEBUG: Checking ceph cinder pool after volume delete...
2015-06-16 14:04:22,650 _take_ceph_pool_sample DEBUG: Ceph cinder pool (ID 3): 1 objects, 0 kb used
2015-06-16 14:04:22,651 _validate_pool_samples DEBUG: Ceph pool object count samples (OK): [0, 3, 1]
2015-06-16 14:04:22,651 _validate_pool_samples DEBUG: Ceph pool disk usage size samples (OK): [0, 1, 0]
2015-06-16 14:04:22,651 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool original samples...
2015-06-16 14:04:23,751 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 0 objects, 0 kb used
2015-06-16 14:04:23,751 create_cirros_image DEBUG: Creating glance image (cirros-image-1)...
2015-06-16 14:04:23,751 create_cirros_image DEBUG: AMULET_HTTP_PROXY: http://squid.internal:3128/
2015-06-16 14:04:36,149 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool samples after image create...
2015-06-16 14:04:37,314 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 5 objects, 12977 kb used
2015-06-16 14:04:37,315 delete_resource DEBUG: Deleting OpenStack resource <Image {u'status': u'active', u'created_at': u'2015-06-16T14:04:24', u'virtual_size': None, u'name': u'cirros-image-1', u'deleted': False, u'container_format': u'bare', u'min_ram': 0, u'disk_format': u'qcow2', u'updated_at': u'2015-06-16T14:04:26', u'properties': {}, u'min_disk': 0, u'protected': False, u'checksum': u'ee1eca47dc88f4879d8a229cc70a07c6', u'owner': u'cbf2f95d18654d7e93bb30c692a75be9', u'is_public': True, u'deleted_at': None, u'id': u'aac1409e-8763-4ceb-96d7-3f17dfcfe264', u'size': 13287936}> (glance image)
2015-06-16 14:04:38,063 delete_resource DEBUG: glance image: expected, actual count = 0, 0
2015-06-16 14:04:48,074 test_412_ceph_glance_image_create_delete DEBUG: Checking ceph glance pool samples after image delete...
2015-06-16 14:04:49,207 _take_ceph_pool_sample DEBUG: Ceph glance pool (ID 4): 1 objects, 0 kb used
2015-06-16 14:04:49,207 _validate_pool_samples DEBUG: Ceph pool object count samples (OK): [0, 5, 1]
2015-06-16 14:04:49,208 _validate_pool_samples DEBUG: Ceph pool disk usage size samples (OK): [0, 12977, 0]
2015-06-16 14:04:49,208 check_commands_on_units DEBUG: Checking exit codes for 8 commands on 3 sentry units...
2015-06-16 14:04:50,292 check_commands_on_units DEBUG: ceph/0 `sudo ceph -s` returned 0 (OK)
2015-06-16 14:04:51,387 check_commands_on_units DEBUG: ceph/0 `sudo ceph health` returned 0 (OK)
2015-06-16 14:04:52,454 check_commands_on_units DEBUG: ceph/0 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 14:04:53,544 check_commands_on_units DEBUG: ceph/0 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 14:04:54,645 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 14:04:55,703 check_commands_on_units DEBUG: ceph/0 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 14:04:56,835 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 14:04:57,925 check_commands_on_units DEBUG: ceph/0 `sudo ceph osd pool get data pg_num` returned 0 (OK)
2015-06-16 14:04:59,148 check_commands_on_units DEBUG: ceph/1 `sudo ceph -s` returned 0 (OK)
2015-06-16 14:05:00,241 check_commands_on_units DEBUG: ceph/1 `sudo ceph health` returned 0 (OK)
2015-06-16 14:05:01,392 check_commands_on_units DEBUG: ceph/1 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 14:05:02,691 check_commands_on_units DEBUG: ceph/1 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 14:05:03,827 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 14:05:04,927 check_commands_on_units DEBUG: ceph/1 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 14:05:06,200 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 14:05:07,376 check_commands_on_units DEBUG: ceph/1 `sudo ceph osd pool get data pg_num` returned 0 (OK)
2015-06-16 14:05:08,520 check_commands_on_units DEBUG: ceph/2 `sudo ceph -s` returned 0 (OK)
2015-06-16 14:05:09,707 check_commands_on_units DEBUG: ceph/2 `sudo ceph health` returned 0 (OK)
2015-06-16 14:05:10,840 check_commands_on_units DEBUG: ceph/2 `sudo ceph mds stat` returned 0 (OK)
2015-06-16 14:05:12,008 check_commands_on_units DEBUG: ceph/2 `sudo ceph pg stat` returned 0 (OK)
2015-06-16 14:05:13,198 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd stat` returned 0 (OK)
2015-06-16 14:05:14,281 check_commands_on_units DEBUG: ceph/2 `sudo ceph mon stat` returned 0 (OK)
2015-06-16 14:05:15,357 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd pool get data size` returned 0 (OK)
2015-06-16 14:05:16,454 check_commands_on_units DEBUG: ceph/2 `sudo ceph osd pool get data pg_num` returned 0 (OK)
juju-test.conductor.015-basic-trusty-icehouse RESULT : PASS
juju-test.conductor DEBUG : Tearing down osci-sv07 juju environment
juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?