arosales@x230:~$ juju run-action spark/0 sparkpi
Action queued with id: b119d946-9932-4da1-83c1-f333b3c1908c
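For context, the `sparkpi` action runs Spark's stock SparkPi example, which estimates π by Monte Carlo sampling: draw random points in the unit square and count the fraction that land inside the quarter circle. A minimal single-machine sketch of the same computation (plain Python, no Spark, for illustration only):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159
```

The distributed version partitions the sampling across executors and reduces the counts, which is why the log below shows a 10-task `reduce` job.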
arosales@x230:~$ juju show-action-output b119d946-9932-4da1-83c1-f333b3c1908c
status: running
timing:
  enqueued: 2017-01-24 17:28:00 +0000 UTC
  started: 2017-01-24 17:28:04 +0000 UTC
arosales@x230:~$ juju show-action-output b119d946-9932-4da1-83c1-f333b3c1908c
results:
  meta:
    composite:
      direction: asc
      units: secs
      value: "18"
    raw: "17/01/24 17:28:05 INFO SparkContext: Running Spark version 1.5.1\n17/01/24
17:28:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable\n17/01/24 17:28:05 INFO
SecurityManager: Changing view acls to: root\n17/01/24 17:28:05 INFO SecurityManager:
Changing modify acls to: root\n17/01/24 17:28:05 INFO SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions: Set(root);
users with modify permissions: Set(root)\n17/01/24 17:28:06 INFO Slf4jLogger:
Slf4jLogger started\n17/01/24 17:28:06 INFO Remoting: Starting remoting\n17/01/24
17:28:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.31.0.195:42150]\n17/01/24
17:28:06 INFO Utils: Successfully started service 'sparkDriver' on port 42150.\n17/01/24
17:28:06 INFO SparkEnv: Registering MapOutputTracker\n17/01/24 17:28:06 INFO
SparkEnv: Registering BlockManagerMaster\n17/01/24 17:28:06 INFO DiskBlockManager:
Created local directory at /tmp/blockmgr-45eb93fa-3bf1-49aa-b87f-1fc8433e1d1e\n17/01/24
17:28:06 INFO MemoryStore: MemoryStore started with capacity 530.0 MB\n17/01/24
17:28:06 INFO HttpFileServer: HTTP File server directory is /tmp/spark-9e5f25f3-4869-4f02-8a30-8adaa5504f5f/httpd-0523c4cc-fe60-4f48-9852-e9e88e303b08\n17/01/24
17:28:06 INFO HttpServer: Starting HTTP Server\n17/01/24 17:28:07 INFO Utils:
Successfully started service 'HTTP file server' on port 33897.\n17/01/24 17:28:07
INFO SparkEnv: Registering OutputCommitCoordinator\n17/01/24 17:28:07 INFO Utils:
Successfully started service 'SparkUI' on port 4040.\n17/01/24 17:28:07 INFO
SparkUI: Started SparkUI at http://172.31.0.195:4040\n17/01/24 17:28:07 INFO
SparkContext: Added JAR file:/usr/lib/spark/lib/spark-examples.jar at http://172.31.0.195:33897/jars/spark-examples.jar
with timestamp 1485278887277\n17/01/24 17:28:07 WARN MetricsSystem: Using default
name DAGScheduler for source because spark.app.id is not set.\n17/01/24 17:28:07
INFO RMProxy: Connecting to ResourceManager at ip-172-31-17-241.us-west-2.compute.internal/172.31.17.241:8032\n17/01/24
17:28:07 INFO Client: Requesting a new application from cluster with 3 NodeManagers\n17/01/24
17:28:07 INFO Client: Verifying our application has not requested more than
the maximum memory capability of the cluster (8192 MB per container)\n17/01/24
17:28:07 INFO Client: Will allocate AM container, with 896 MB memory including
384 MB overhead\n17/01/24 17:28:07 INFO Client: Setting up container launch
context for our AM\n17/01/24 17:28:07 INFO Client: Setting up the launch environment
for our AM container\n17/01/24 17:28:07 INFO Client: Preparing resources for
our AM container\n17/01/24 17:28:08 INFO Client: Uploading resource file:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.7.1.jar
-> hdfs://ip-172-31-17-241.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1485212793368_0004/spark-assembly-1.5.1-hadoop2.7.1.jar\n17/01/24
17:28:10 INFO Client: Uploading resource file:/tmp/spark-9e5f25f3-4869-4f02-8a30-8adaa5504f5f/__spark_conf__4210535578219634354.zip
-> hdfs://ip-172-31-17-241.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1485212793368_0004/__spark_conf__4210535578219634354.zip\n17/01/24
17:28:10 INFO SecurityManager: Changing view acls to: root\n17/01/24 17:28:10
INFO SecurityManager: Changing modify acls to: root\n17/01/24 17:28:10 INFO
SecurityManager: SecurityManager: authentication disabled; ui acls disabled;
users with view permissions: Set(root); users with modify permissions: Set(root)\n17/01/24
17:28:10 INFO Client: Submitting application 4 to ResourceManager\n17/01/24
17:28:10 INFO YarnClientImpl: Submitted application application_1485212793368_0004\n17/01/24
17:28:11 INFO Client: Application report for application_1485212793368_0004
(state: ACCEPTED)\n17/01/24 17:28:11 INFO Client: \n\t client token: N/A\n\t
diagnostics: N/A\n\t ApplicationMaster host: N/A\n\t ApplicationMaster RPC port:
-1\n\t queue: default\n\t start time: 1485278890271\n\t final status: UNDEFINED\n\t
tracking URL: http://ip-172-31-17-241.us-west-2.compute.internal:20888/proxy/application_1485212793368_0004/\n\t
user: root\n17/01/24 17:28:12 INFO Client: Application report for application_1485212793368_0004
(state: ACCEPTED)\n17/01/24 17:28:13 INFO Client: Application report for application_1485212793368_0004
(state: ACCEPTED)\n17/01/24 17:28:14 INFO Client: Application report for application_1485212793368_0004
(state: ACCEPTED)\n17/01/24 17:28:14 INFO YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@172.31.40.6:39620/user/YarnAM#1275690249])\n17/01/24
17:28:14 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
Map(PROXY_HOSTS -> ip-172-31-17-241.us-west-2.compute.internal, PROXY_URI_BASES
-> http://ip-172-31-17-241.us-west-2.compute.internal:20888/proxy/application_1485212793368_0004),
/proxy/application_1485212793368_0004\n17/01/24 17:28:14 INFO JettyUtils: Adding
filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter\n17/01/24
17:28:15 INFO Client: Application report for application_1485212793368_0004
(state: RUNNING)\n17/01/24 17:28:15 INFO Client: \n\t client token: N/A\n\t
diagnostics: N/A\n\t ApplicationMaster host: 172.31.40.6\n\t ApplicationMaster
RPC port: 0\n\t queue: default\n\t start time: 1485278890271\n\t final status:
UNDEFINED\n\t tracking URL: http://ip-172-31-17-241.us-west-2.compute.internal:20888/proxy/application_1485212793368_0004/\n\t
user: root\n17/01/24 17:28:15 INFO YarnClientSchedulerBackend: Application application_1485212793368_0004
has started running.\n17/01/24 17:28:15 INFO Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 44025.\n17/01/24
17:28:15 INFO NettyBlockTransferService: Server created on 44025\n17/01/24 17:28:15
INFO BlockManagerMaster: Trying to register BlockManager\n17/01/24 17:28:15
INFO BlockManagerMasterEndpoint: Registering block manager 172.31.0.195:44025
with 530.0 MB RAM, BlockManagerId(driver, 172.31.0.195, 44025)\n17/01/24 17:28:15
INFO BlockManagerMaster: Registered BlockManager\n17/01/24 17:28:15 INFO EventLoggingListener:
Logging events to hdfs:///var/log/spark/apps/application_1485212793368_0004\n17/01/24
17:28:19 INFO YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-172-31-19-146.us-west-2.compute.internal:38182/user/Executor#-1516525117])
with ID 2\n17/01/24 17:28:19 INFO YarnClientSchedulerBackend: Registered executor:
AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-172-31-8-49.us-west-2.compute.internal:45356/user/Executor#92588209])
with ID 1\n17/01/24 17:28:19 INFO YarnClientSchedulerBackend: SchedulerBackend
is ready for scheduling beginning after reached minRegisteredResourcesRatio:
0.8\n17/01/24 17:28:19 INFO BlockManagerMasterEndpoint: Registering block manager
ip-172-31-19-146.us-west-2.compute.internal:37178 with 530.0 MB RAM, BlockManagerId(2,
ip-172-31-19-146.us-west-2.compute.internal, 37178)\n17/01/24 17:28:19 INFO
BlockManagerMasterEndpoint: Registering block manager ip-172-31-8-49.us-west-2.compute.internal:39241
with 530.0 MB RAM, BlockManagerId(1, ip-172-31-8-49.us-west-2.compute.internal,
39241)\n17/01/24 17:28:19 INFO SparkContext: Starting job: reduce at SparkPi.scala:36\n17/01/24
17:28:19 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output
partitions\n17/01/24 17:28:19 INFO DAGScheduler: Final stage: ResultStage 0(reduce
at SparkPi.scala:36)\n17/01/24 17:28:19 INFO DAGScheduler: Parents of final
stage: List()\n17/01/24 17:28:19 INFO DAGScheduler: Missing parents: List()\n17/01/24
17:28:19 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at
map at SparkPi.scala:32), which has no missing parents\n17/01/24 17:28:19 INFO
MemoryStore: ensureFreeSpace(1888) called with curMem=0, maxMem=555755765\n17/01/24
17:28:19 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated
size 1888.0 B, free 530.0 MB)\n17/01/24 17:28:19 INFO MemoryStore: ensureFreeSpace(1202)
called with curMem=1888, maxMem=555755765\n17/01/24 17:28:19 INFO MemoryStore:
Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B,
free 530.0 MB)\n17/01/24 17:28:19 INFO BlockManagerInfo: Added broadcast_0_piece0
in memory on 172.31.0.195:44025 (size: 1202.0 B, free: 530.0 MB)\n17/01/24 17:28:19
INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861\n17/01/24
17:28:19 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1]
at map at SparkPi.scala:32)\n17/01/24 17:28:19 INFO YarnScheduler: Adding task
set 0.0 with 10 tasks\n17/01/24 17:28:19 INFO TaskSetManager: Starting task
0.0 in stage 0.0 (TID 0, ip-172-31-19-146.us-west-2.compute.internal, PROCESS_LOCAL,
2144 bytes)\n17/01/24 17:28:19 INFO TaskSetManager: Starting task 1.0 in stage
0.0 (TID 1, ip-172-31-8-49.us-west-2.compute.internal, PROCESS_LOCAL, 2144 bytes)\n17/01/24
17:28:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-172-31-19-146.us-west-2.compute.internal:37178
(size: 1202.0 B, free: 530.0 MB)\n17/01/24 17:28:21 INFO BlockManagerInfo: Added
broadcast_0_piece0 in memory on ip-172-31-8-49.us-west-2.compute.internal:39241
(size: 1202.0 B, free: 530.0 MB)\n17/01/24 17:28:21 INFO TaskSetManager: Starting
task 2.0 in stage 0.0 (TID 2, ip-172-31-19-146.us-west-2.compute.internal, PROCESS_LOCAL,
2144 bytes)\n17/01/24 17:28:21 INFO TaskSetManager: Finished task 0.0 in stage
0.0 (TID 0) in 1237 ms on ip-172-31-19-146.us-west-2.compute.internal (1/10)\n17/01/24
17:28:21 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, ip-172-31-19-146.us-west-2.compute.internal,
PROCESS_LOCAL, 2144 bytes)\n17/01/24 17:28:21 INFO TaskSetManager: Finished
task 2.0 in stage 0.0 (TID 2) in 74 ms on ip-172-31-19-146.us-west-2.compute.internal
(2/10)\n17/01/24 17:28:21 INFO TaskSetManager: Starting task 4.0 in stage 0.0
(TID 4, ip-172-31-19-146.us-west-2.compute.internal, PROCESS_LOCAL, 2144 bytes)\n17/01/24
17:28:21 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 48 ms
on ip-172-31-19-146.us-west-2.compute.internal (3/10)\n17/01/24 17:28:21 INFO
TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, ip-172-31-19-146.us-west-2.compute.internal,
PROCESS_LOCAL, 2144 bytes)\n17/01/24 17:28:21 INFO TaskSetManager: Finished
task 4.0 in stage 0.0 (TID 4) in 30 ms on ip-172-31-19-146.us-west-2.compute.internal
(4/10)\n17/01/24 17:28:21 INFO TaskSetManager: Starting task 6.0 in stage 0.0
(TID 6, ip-172-31-19-146.us-west-2.compute.internal, PROCESS_LOCAL, 2144 bytes)\n17/01/24
17:28:21 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 25 ms
on ip-172-31-19-146.us-west-2.compute.internal (5/10)\n17/01/24 17:28:21 INFO
TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, ip-172-31-19-146.us-west-2.compute.internal,
PROCESS_LOCAL, 2144 bytes)\n17/01/24 17:28:21 INFO TaskSetManager: Finished
task 6.0 in stage 0.0 (TID 6) in 24 ms on ip-172-31-19-146.us-west-2.compute.internal
(6/10)\n17/01/24 17:28:21 INFO TaskSetManager: Starting task 8.0 in stage 0.0
(TID 8, ip-172-31-19-146.us-west-2.compute.internal, PROCESS_LOCAL, 2144 bytes)\n17/01/24
17:28:21 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 28 ms
on ip-172-31-19-146.us-west-2.compute.internal (7/10)\n17/01/24 17:28:21 INFO
TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, ip-172-31-19-146.us-west-2.compute.internal,
PROCESS_LOCAL, 2144 bytes)\n17/01/24 17:28:21 INFO TaskSetManager: Finished
task 8.0 in stage 0.0 (TID 8) in 22 ms on ip-172-31-19-146.us-west-2.compute.internal
(8/10)\n17/01/24 17:28:21 INFO TaskSetManager: Finished task 9.0 in stage 0.0
(TID 9) in 24 ms on ip-172-31-19-146.us-west-2.compute.internal (9/10)\n17/01/24
17:28:21 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1586
ms on ip-172-31-8-49.us-west-2.compute.internal (10/10)\n17/01/24 17:28:21 INFO
DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 1.614 s\n17/01/24
17:28:21 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took
1.812573 s\n17/01/24 17:28:21 INFO YarnScheduler: Removed TaskSet 0.0, whose
tasks have all completed, from pool \nPi is roughly 3.142128\n17/01/24 17:28:21
INFO SparkUI: Stopped Spark web UI at http://172.31.0.195:4040\n17/01/24 17:28:21
INFO DAGScheduler: Stopping DAGScheduler\n17/01/24 17:28:21 INFO YarnClientSchedulerBackend:
Interrupting monitor thread\n17/01/24 17:28:21 INFO YarnClientSchedulerBackend:
Shutting down all executors\n17/01/24 17:28:21 INFO YarnClientSchedulerBackend:
Asking each executor to shut down\n17/01/24 17:28:21 INFO YarnClientSchedulerBackend:
Stopped\n17/01/24 17:28:21 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint
stopped!\n17/01/24 17:28:21 INFO MemoryStore: MemoryStore cleared\n17/01/24
17:28:21 INFO BlockManager: BlockManager stopped\n17/01/24 17:28:21 INFO BlockManagerMaster:
BlockManagerMaster stopped\n17/01/24 17:28:21 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
OutputCommitCoordinator stopped!\n17/01/24 17:28:21 INFO SparkContext: Successfully
stopped SparkContext\n17/01/24 17:28:21 INFO RemoteActorRefProvider$RemotingTerminator:
Shutting down remote daemon.\n17/01/24 17:28:21 INFO RemoteActorRefProvider$RemotingTerminator:
Remote daemon shut down; proceeding with flushing remote transports.\n17/01/24
17:28:21 INFO ShutdownHookManager: Shutdown hook called\n17/01/24 17:28:21 INFO
ShutdownHookManager: Deleting directory /tmp/spark-9e5f25f3-4869-4f02-8a30-8adaa5504f5f\n"
    start: 2017-01-24T17:28:04Z
    stop: 2017-01-24T17:28:22Z
  output: '{''status'': ''completed''}'
status: completed
timing:
  completed: 2017-01-24 17:28:22 +0000 UTC
  enqueued: 2017-01-24 17:28:00 +0000 UTC
  started: 2017-01-24 17:28:04 +0000 UTC
arosales@x230:~$
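Note that the π estimate itself appears only inside the `raw` log text ("Pi is roughly 3.142128"), not as a structured result field; the structured `composite` value reports the runtime in seconds. A small Python sketch for pulling the estimate out of the raw log (the regular expression and the truncated sample string are illustrative, not part of the charm):

```python
import math
import re

# Excerpt of the action's raw log, as shown in the transcript above.
raw_log = (
    "17/01/24 17:28:21 INFO YarnScheduler: Removed TaskSet 0.0, whose "
    "tasks have all completed, from pool \n"
    "Pi is roughly 3.142128\n"
    "17/01/24 17:28:21 INFO SparkUI: Stopped Spark web UI"
)

# SparkPi prints its result as "Pi is roughly <value>"; grab the number.
match = re.search(r"Pi is roughly ([0-9.]+)", raw_log)
estimate = float(match.group(1))

print(estimate)                         # 3.142128
print(abs(estimate - math.pi) < 0.01)   # True: close to math.pi
```

The same pattern works against the full `raw` value captured from `juju show-action-output`.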