Android Open Source Project Analysis: Volley

Abstract: Volley is a networking framework that Google introduced at Google I/O 2013 to simplify common network operations.

1. Introduction

  Volley is a networking framework that Google introduced at Google I/O 2013 to simplify common network operations. It provides the following conveniences:
   - asynchronous download of JSON, images, and other payloads;
   - request scheduling;
   - request prioritization;
   - response caching;
   - multi-level request cancellation;
   - integration with the Activity lifecycle (all pending requests can be cancelled when an Activity finishes), as sketched below.
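
  A minimal sketch of the lifecycle integration, assuming a hypothetical DemoActivity (any Object can serve as the tag; here the Activity itself is used):

import android.app.Activity;
import android.os.Bundle;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;

public class DemoActivity extends Activity {
    private RequestQueue mQueue;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mQueue = Volley.newRequestQueue(this);

        StringRequest request = new StringRequest("http://www.eebbk.com",
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) { /* handle result */ }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) { /* handle error */ }
                });
        // Tag the request so it can be cancelled as a group later.
        request.setTag(this);
        mQueue.add(request);
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Cancel every request carrying this Activity as its tag; their callbacks will not fire.
        mQueue.cancelAll(this);
    }
}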

2. Getting the Source

  git clone https://android.googlesource.com/platform/frameworks/volley

3. Basic Usage Flow

3.1. Creating the request queue (RequestQueue)

RequestQueue requestQueue = Volley.newRequestQueue(context);

3.2. Creating a request (Request)

  Request is an abstract template class; its subclasses include StringRequest, ImageRequest, and the JsonRequest templates, and you can also subclass Request to implement your own response type. Taking StringRequest as an example:

StringRequest stringRequest = new StringRequest("http://www.eebbk.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Success callback
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Error callback
            }
        });
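
  The other built-in subclasses follow the same pattern and only change the type handed to the listener; for instance a JsonObjectRequest (the URL path is a placeholder, and the usual com.android.volley and org.json imports are omitted as in the snippet above):

JsonObjectRequest jsonRequest = new JsonObjectRequest(Request.Method.GET,
        "http://www.eebbk.com/api.json", null,
        new Response.Listener<JSONObject>() {
            @Override
            public void onResponse(JSONObject response) {
                // Parsed JSON body
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Error callback
            }
        });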

3.3. Adding the request to the queue

requestQueue.add(stringRequest);

  From this point on, Volley handles everything asynchronously; when the request completes, the Response.Listener or Response.ErrorListener set on the Request is called back.
  The following sections analyze how Volley processes each of these steps.

4. Flow Analysis

4.1. Creating the request queue

  Volley.newRequestQueue is called on the main thread to create the request queue:

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

@SuppressLint("NewApi")
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}

4.1.1. Creating the RequestQueue object

RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);

  The RequestQueue is constructed with the local disk cache DiskBasedCache and the abstract network access object Network; in three-level-cache terms, this wires up the L2 (disk) cache together with the network access object.
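
  Because these are just constructor arguments, the same pieces can be assembled by hand, for example to enlarge the disk cache. A sketch, assuming a context variable is in scope (the directory name and the 10 MB limit are arbitrary choices; DiskBasedCache's single-argument constructor uses a smaller default size):

File cacheDir = new File(context.getCacheDir(), "volley");
// Second argument: maximum disk cache size in bytes.
Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(cache, network);
queue.start();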

4.1.1.1. The RequestQueue constructors

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

  The key work in the constructors is creating the NetworkDispatcher array, which will hold the network worker threads, and the ExecutorDelivery object, which posts results back to the main thread.

4.1.1.2. RequestQueue.start: starting the worker threads

public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

  start() creates the cache worker thread CacheDispatcher plus a pool of network worker threads, one NetworkDispatcher per slot in the dispatcher array.
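
  For reference, the stop() called at the top of start() simply quits any dispatchers that are already running; in this version it looks roughly like:

public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}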

4.1.2. The network access object Network

  Back in Volley.newRequestQueue, the Network object is created like this:

Network network = new BasicNetwork(stack);

  Network is an interface with a single method, which performs a network request:

public interface Network {
    public NetworkResponse performRequest(Request<?> request) throws VolleyError;
}
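
  Because Network is an interface, the network layer can be swapped out, e.g. for a canned implementation in tests. A minimal sketch (FakeNetwork is a hypothetical class, not part of Volley); it could then be passed to the RequestQueue constructor in place of BasicNetwork:

import java.util.Collections;
import com.android.volley.Network;
import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.VolleyError;

/** Hypothetical Network implementation that returns a fixed body without touching the wire. */
public class FakeNetwork implements Network {
    @Override
    public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        byte[] body = "hello".getBytes();
        return new NetworkResponse(200, body,
                Collections.<String, String>emptyMap(), false);
    }
}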

4.1.2.1 BasicNetwork

  The implementation BasicNetwork is constructed with an HttpStack, another interface:

public BasicNetwork(HttpStack httpStack, ByteArrayPool pool) {
    mHttpStack = httpStack;
    mPool = pool;
}

public BasicNetwork(HttpStack httpStack) {
    this(httpStack, new ByteArrayPool(DEFAULT_POOL_SIZE));
}

4.1.2.2 HttpStack

String userAgent = "volley/0";
try {
    String packageName = context.getPackageName();
    PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
    userAgent = packageName + "/" + info.versionCode;
} catch (NameNotFoundException e) {
}

if (stack == null) {
    if (Build.VERSION.SDK_INT >= 9) {
        stack = new HurlStack();
    } else {
        // Prior to Gingerbread, HttpUrlConnection was unreliable.
        // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
        stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
    }
}

  HttpStack has two implementations, HurlStack and HttpClientStack, and the actual network I/O happens in these classes: HurlStack is built on HttpURLConnection, HttpClientStack on HttpClient.
  On SDK level 9 (Gingerbread) and above, HurlStack (HttpURLConnection) is used; below 9, HttpClientStack (HttpClient) is used. The blog post linked in the comment explains the reason: before Gingerbread HttpURLConnection had reliability bugs, whereas on newer platforms it is the lighter client and gained transparent response caching, so it is preferred there.
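
  The stack can also be chosen explicitly through the two-argument overload; for example, forcing HttpURLConnection regardless of the SDK check:

RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());

  HurlStack additionally has constructors that accept a UrlRewriter and an SSLSocketFactory, which is the usual hook for customizing HTTPS handling.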

4.2. Adding a request

4.2.1. Enqueueing a request: RequestQueue.add

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

Key points:
  add() first checks the waiting map mWaitingRequests to see whether an identical request (same cache key) is already in flight; if so, the new request is parked in a waiting queue, otherwise it goes onto the cache queue mCacheQueue.
  mWaitingRequests exists to de-duplicate requests: when an identical request is already being processed, the newcomer is recorded, and when the in-flight request finishes, the waiting requests are replayed so that their callbacks fire without issuing the same request again.
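
  Whether a request takes the cache path at all is controlled by the request itself; for example, caching can be switched off so that add() puts it straight onto mNetworkQueue (listener and errorListener stand for the callbacks from section 3.2):

StringRequest request = new StringRequest("http://www.eebbk.com", listener, errorListener);
// Skip DiskBasedCache entirely for this request.
request.setShouldCache(false);
requestQueue.add(request);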

mCacheQueue is defined in RequestQueue:

private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

  In RequestQueue.start() it is handed to mCacheDispatcher:

mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);

4.2.2. The cache worker thread CacheDispatcher

  The thread's run() method:

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache; this walks the whole cache directory.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache; on a miss, hand the
            // request over to the network queue.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network and
            // fetch it again.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; convert the cache entry into a response
            // object for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            // Soft-expiry check: if the entry is still fresh, post the result to
            // the main thread via mDelivery; otherwise deliver it and also
            // re-dispatch the request to the network.
            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }

        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}

  Key points:
    mCacheQueue here is the cache queue created in RequestQueue; once the thread starts, it blocks in mCacheQueue.take(), waiting for requests to be inserted.
    If a request hits the local cache and the entry has not expired, the data is delivered straight to the main thread; otherwise the request is put onto the network queue.
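
    The two expiry decisions are based on Cache.Entry, which stores a hard expiry time (ttl) and a soft refresh time (softTtl); in this version the checks look roughly like:

// Inside Cache.Entry (abridged)
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}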

  As shown earlier, RequestQueue.start() also created network worker threads waiting on the network queue mNetworkQueue:

NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
        mCache, mDelivery);

4.2.3. The network worker thread NetworkDispatcher

  The thread's run() method:

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        try {
            // Take a request from the queue, blocking until one is available.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request; go back to waiting for the next one.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread, turning the raw
            // NetworkResponse into the Response object the main thread will use.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back: mark the request as delivered and hand
            // the result to the main thread.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}

  Key points:
    mQueue here is the network queue mNetworkQueue created in RequestQueue; once the thread starts, it blocks in mNetworkQueue.take(), waiting for requests to be inserted.
    From the CacheDispatcher analysis we know that a request lands on mNetworkQueue when the cache misses or the cached entry has expired.
  mNetwork.performRequest(request) then drives the concrete HttpStack implementation to download the data. When the download finishes, the response is written to the local DiskBasedCache if the request allows caching, and the result is handed to the main thread through the ExecutorDelivery object mDelivery.

4.2.4. Delivering to the main thread: ExecutorDelivery

  The RequestQueue constructor created the ExecutorDelivery object and bound it to the main thread's Looper.

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

  The ExecutorDelivery constructor:

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

  The constructor creates the Executor mResponsePoster, which simply posts every Runnable to the supplied Handler.

  When the cache thread or a network thread needs to report a success or failure to the main thread, it calls ExecutorDelivery.postResponse or ExecutorDelivery.postError:

@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

@Override
public void postError(Request<?> request, VolleyError error) {
    request.addMarker("post-error");
    Response<?> response = Response.error(error);
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
}

  Both paths wrap the result in a ResponseDeliveryRunnable (a Runnable subclass), which the Handler then posts to the main thread's Looper for execution.

4.3. Returning the result

4.3.1 Delivering the response: ResponseDeliveryRunnable

  As analyzed above, once the ResponseDeliveryRunnable has been posted via Handler.post, its run() method executes on the main thread.

@Override
public void run() {
    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response (e.g. a soft-expired cache hit),
    // add a marker, otherwise we're done and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it. For example,
    // a soft-expired cache hit still needs to be forwarded to the network thread.
    if (mRunnable != null) {
        mRunnable.run();
    }
}

4.3.2 Request callbacks on the main thread

  In the abstract Request class, deliverResponse and deliverError are defined as:

abstract protected void deliverResponse(T response);

public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}

  Taking the earlier Request subclass StringRequest as an example, it overrides deliverResponse:

@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response);
}

  Returning to the StringRequest sample code:

StringRequest stringRequest = new StringRequest("http://www.eebbk.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Success callback
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Error callback
            }
        });

  onResponse or onErrorResponse is therefore invoked according to the outcome of the request.
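
  Putting the two halves together, a custom Request subclass only has to implement parseNetworkResponse (run on a NetworkDispatcher thread) and deliverResponse (run on the main thread). A minimal sketch that hands the raw response bytes to the listener (RawRequest is a hypothetical class, not part of Volley):

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

/** Hypothetical request type that returns the raw response body. */
public class RawRequest extends Request<byte[]> {
    private final Response.Listener<byte[]> mListener;

    public RawRequest(String url, Response.Listener<byte[]> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<byte[]> parseNetworkResponse(NetworkResponse response) {
        // Worker thread: wrap the body and build the cache entry from the HTTP headers.
        return Response.success(response.data,
                HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(byte[] response) {
        // Main thread: forward the result to the caller's listener.
        mListener.onResponse(response);
    }
}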

4.4 Cleanup: Request.finish

  In ResponseDeliveryRunnable, once delivery is complete, Request.finish is called:

void finish(final String tag) {
    if (mRequestQueue != null) {
        mRequestQueue.finish(this);
    }
    ...
}

  This in turn executes RequestQueue.finish:

void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}

  Here mWaitingRequests comes back into play: if identical requests were parked earlier, they are all moved onto the cache queue. Since the cache has just been primed by the finished request, CacheDispatcher can serve them quickly from the local cache and complete the whole flow without hitting the network again.

5. Framework Overview

Official flow diagram
  The diagram shows that Volley is organized into three layers: the main thread, the cache thread, and a pool of network threads. Requests flow downward through blocking queues, and the worker threads push results back up to the main thread through its Looper.
  A request is set up on the main thread and inserted into the CacheDispatcher's blocking queue; on a cache miss it is picked up by one of the NetworkDispatcher threads for download. Whether the cache thread hits or a network thread finishes downloading, the result is delivered to the main thread through ExecutorDelivery.

6. Summary

  Volley is a good study in how to build asynchronous worker threads around message queues, and in how to abstract concepts such as the request (Request), the network implementation (HttpStack), and caching.
  Volley also provides NetworkImageView and ImageLoader to simplify loading network images into an ImageView. Although Volley does not implement an in-memory (L1) cache itself, ImageLoader exposes a cache interface, so a class such as LruCache can easily be plugged in, as sketched below.
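
  A minimal sketch of plugging LruCache in as the L1 memory cache, e.g. inside onCreate (the 8 MB budget and names are arbitrary; LruCache is android.util.LruCache or its support-library equivalent, and requestQueue is the queue created in section 3.1):

ImageLoader.ImageCache memoryCache = new ImageLoader.ImageCache() {
    // Budget is measured in bytes, so sizeOf() must report the bitmap size.
    private final LruCache<String, Bitmap> mCache =
            new LruCache<String, Bitmap>(8 * 1024 * 1024) {
                @Override
                protected int sizeOf(String key, Bitmap value) {
                    return value.getRowBytes() * value.getHeight();
                }
            };

    @Override
    public Bitmap getBitmap(String url) {
        return mCache.get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        mCache.put(url, bitmap);
    }
};

ImageLoader imageLoader = new ImageLoader(requestQueue, memoryCache);
// A NetworkImageView in the layout then only needs the URL and the loader:
// networkImageView.setImageUrl("http://www.eebbk.com/logo.png", imageLoader);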

7. Related Articles

  http://blog.csdn.net/guolin_blog/article/details/17482095
  http://blog.csdn.net/guolin_blog/article/details/17482165
  http://blog.csdn.net/guolin_blog/article/details/17612763
  http://blog.csdn.net/guolin_blog/article/details/17656437