My three-unit Ceph cluster (Juju 2.0-beta4 on trusty) looks deployed and happy:
Every 2.0s: juju status                                 Fri Apr 15 12:16:13 2016

[Services]
NAME  STATUS  EXPOSED  CHARM
ceph  active  false    cs:trusty/ceph-260

[Relations]
SERVICE1  SERVICE2  RELATION  TYPE
ceph      ceph      mon       peer

[Units]
ID      WORKLOAD-STATUS  JUJU-STATUS  VERSION      MACHINE  PORTS  PUBLIC-ADDRESS  MESSAGE
ceph/0  active           idle         2.0-beta4.1  0               10.10.103.212   Unit is ready and clustered
ceph/1  active           idle         2.0-beta4.1  1               10.10.132.205   Unit is ready and clustered
ceph/2  active           idle         2.0-beta4.1  2               10.10.35.243    Unit is ready and clustered

[Machines]
ID  STATE    DNS            INS-ID                                               SERIES  AZ
0   started  10.10.103.212  juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-0  trusty
1   started  10.10.132.205  juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-1  trusty
2   started  10.10.35.243   juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-2  trusty
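For context, the deployment itself is nothing exotic; it follows the shape of the ceph charm's README. A rough sketch of what that looks like (the monitor-secret and osd-devices values below are placeholders rather than the exact settings used; the fsid is the one visible in the ceph -s output further down):

# Sketch only: monitor-secret must be a real key, e.g. generated with
# `ceph-authtool /dev/stdout --name=mon. --gen-key`, and osd-devices
# must name block devices that actually exist on the machines.
cat > ceph.yaml <<EOF
ceph:
  fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
  monitor-secret: <mon-key>
  osd-devices: /dev/vdb
EOF
juju deploy -n 3 --config ceph.yaml cs:trusty/ceph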
However, the health status on the units shows:
$ juju ssh ceph/0 'sudo ceph -s'
Warning: Permanently added '10.10.103.212' (ECDSA) to the list of known hosts.
sudo: unable to resolve host juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-0
    cluster ecbb8960-0e21-11e2-b495-83a88f44db01
     health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
     monmap e1: 3 mons at {juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-0=10.10.103.212:6789/0,juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-1=10.10.132.205:6789/0,juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-2=10.10.35.243:6789/0}, election epoch 6, quorum 0,1,2 juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-2,juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-0,juju-ca086f65-a3f2-4262-8e21-c6d55d4de67c-machine-1
     osdmap e4: 3 osds: 0 up, 0 in
      pgmap v5: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
Connection to 10.10.103.212 closed.
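The "sudo: unable to resolve host" warning looks like the usual missing /etc/hosts entry for the machine's hostname and should, as far as I know, be cosmetic. The real symptom is the osdmap line: 3 OSDs are known but 0 are up and 0 are in, so the monitors have quorum while no OSD daemon has ever checked in, which is why all 192 placement groups are stuck creating. Some standard checks to narrow that down (plain Ceph and system commands, nothing charm-specific):

# Is anything registered in the OSD/CRUSH tree?
$ juju ssh ceph/0 'sudo ceph osd tree'

# Per-PG detail on what is stuck:
$ juju ssh ceph/0 'sudo ceph health detail'

# Did any ceph-osd daemon ever start and write a log?
$ juju ssh ceph/0 'ls -l /var/log/ceph/'

# Does the block device configured as osd-devices actually exist on the unit?
$ juju ssh ceph/0 'lsblk'

Why would all three OSDs stay down when every unit reports "Unit is ready and clustered"?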