An HTTP message consists of a header and an optional body. The message header of an HTTP request consists of a request line and a collection of header fields. The message header of an HTTP response consists of a status line and a collection of header fields. All HTTP messages must include the protocol version. Some HTTP messages can optionally enclose a content body.
HttpCore defines the HTTP message object model to closely follow this definition and provides extensive support for serialization (formatting) and deserialization (parsing) of HTTP message elements.
An HTTP request is a message sent from the client to the server. The first line of that message includes the method to apply to the resource, the identifier of the resource, and the protocol version in use.
HttpRequest request = new BasicHttpRequest("GET", "/", HttpVersion.HTTP_1_1);
System.out.println(request.getRequestLine().getMethod());
System.out.println(request.getRequestLine().getUri());
System.out.println(request.getProtocolVersion());
System.out.println(request.getRequestLine().toString());
stdout >
GET
/
HTTP/1.1
GET / HTTP/1.1
An HTTP response is a message sent by the server back to the client after having received and interpreted a request message. The first line of that message consists of the protocol version followed by a numeric status code and its associated textual phrase.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK");
System.out.println(response.getProtocolVersion());
System.out.println(response.getStatusLine().getStatusCode());
System.out.println(response.getStatusLine().getReasonPhrase());
System.out.println(response.getStatusLine().toString());
stdout >
HTTP/1.1
200
OK
HTTP/1.1 200 OK
An HTTP message can contain a number of headers describing properties of the message such as the content length, content type, and so on. HttpCore provides methods to retrieve, add, remove, and enumerate such headers.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie", "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie", "c2=b; path=\"/\", c3=c; domain=\"localhost\"");
Header h1 = response.getFirstHeader("Set-Cookie");
System.out.println(h1);
Header h2 = response.getLastHeader("Set-Cookie");
System.out.println(h2);
Header[] hs = response.getHeaders("Set-Cookie");
System.out.println(hs.length);
stdout >
Set-Cookie: c1=a; path=/; domain=localhost
Set-Cookie: c2=b; path="/", c3=c; domain="localhost"
2
There is an efficient way to obtain all headers of a given type using the HeaderIterator interface.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie", "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie", "c2=b; path=\"/\", c3=c; domain=\"localhost\"");
HeaderIterator it = response.headerIterator("Set-Cookie");
while (it.hasNext()) {
    System.out.println(it.next());
}
stdout >
Set-Cookie: c1=a; path=/; domain=localhost
Set-Cookie: c2=b; path="/", c3=c; domain="localhost"
HttpCore also provides convenience methods to parse header values into individual header elements.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie", "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie", "c2=b; path=\"/\", c3=c; domain=\"localhost\"");
HeaderElementIterator it = new BasicHeaderElementIterator(
        response.headerIterator("Set-Cookie"));
while (it.hasNext()) {
    HeaderElement elem = it.nextElement();
    System.out.println(elem.getName() + " = " + elem.getValue());
    NameValuePair[] params = elem.getParameters();
    for (int i = 0; i < params.length; i++) {
        System.out.println(" " + params[i]);
    }
}
stdout >
c1 = a
 path=/
 domain=localhost
c2 = b
 path=/
c3 = c
 domain=localhost
HTTP headers are tokenized into individual header elements only on demand. HTTP headers received over an HTTP connection are stored internally as an array of characters and parsed lazily only when you access their properties.
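For instance, asking a header for its elements triggers the parsing at that point; a brief sketch, reusing the response object from the examples above:

Header header = response.getFirstHeader("Set-Cookie");
// parsing into header elements happens here, on demand
HeaderElement[] elements = header.getElements();
System.out.println(elements.length);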
HTTP messages can carry a content entity associated with the request or response. Entities can be found in some requests and in some responses, as they are optional. Requests that use entities are referred to as entity-enclosing requests. The HTTP specification defines two entity-enclosing methods: POST and PUT. Responses are usually expected to enclose a content entity. There are exceptions to this rule, such as responses to the HEAD method and 204 No Content, 304 Not Modified, and 205 Reset Content responses.
HttpCore distinguishes three kinds of entities, depending on where their content originates:
streamed: The content is received from a stream, or generated on the fly. In particular, this category includes entities being received from a connection. Streamed entities are generally not repeatable.
self-contained: The content is in memory or obtained by means that are independent from a connection or other entity. Self-contained entities are generally repeatable.
wrapping: The content is obtained from another entity.
This distinction is important for connection management with incoming entities. For an application that creates entities and only sends them using the HttpCore framework, the difference between streamed and self-contained is of little importance. In that case, we suggest you consider non-repeatable entities as streamed, and those that are repeatable as self-contained.
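The kind of a particular entity can be inspected at runtime through the HttpEntity interface; a minimal sketch, assuming an already obtained entity instance:

HttpEntity entity = <...>

System.out.println(entity.isRepeatable()); // true if the content can be read more than once
System.out.println(entity.isStreaming());  // true if the content is consumed from an underlying stream
System.out.println(entity.isChunked());    // true if chunked transfer coding is preferred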
An entity can be repeatable, meaning its content can be read more than once. This is only possible with self-contained entities (like ByteArrayEntity or StringEntity).
Since an entity can represent both binary and character content, it has support for character encodings (needed for the latter, i.e. character content).
An entity is created when executing a request that encloses content, or when a request has succeeded and the response body is used to send the result back to the client.
To read the content from the entity, one can either retrieve the input stream via the HttpEntity#getContent() method, which returns a java.io.InputStream, or supply an output stream to the HttpEntity#writeTo(OutputStream) method, which returns once all content has been written to the given stream.
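As an illustration, the content of a self-contained entity could be written out to a byte buffer as follows (a sketch; the use of ByteArrayOutputStream here is merely illustrative):

HttpEntity entity = <...>

ByteArrayOutputStream buffer = new ByteArrayOutputStream();
entity.writeTo(buffer); // returns once all content has been written
byte[] content = buffer.toByteArray();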
The EntityUtils class exposes several static methods that make it easier to read the content or information from an entity. Instead of reading the java.io.InputStream directly, one can retrieve the complete content body as a string or byte array by using the methods of this class.
When the entity has been received with an incoming message, the HttpEntity#getContentType() and HttpEntity#getContentLength() methods can be used to read the common metadata, such as the Content-Type and Content-Length headers (if they are available). Since the Content-Type header can contain a character encoding for text mime-types like text/plain or text/html, the HttpEntity#getContentEncoding() method is used to read this information. If the headers are not available, a length of -1 will be returned, and null for the content type. If the Content-Type header is available, a Header object will be returned.
When creating an entity for an outgoing message, this metadata has to be supplied by the creator of the entity.
StringEntity myEntity = new StringEntity("important message", Consts.UTF_8);
System.out.println(myEntity.getContentType());
System.out.println(myEntity.getContentLength());
System.out.println(EntityUtils.toString(myEntity));
System.out.println(EntityUtils.toByteArray(myEntity).length);
stdout >
Content-Type: text/plain; charset=UTF-8
17
important message
17
In order to ensure proper release of system resources one must close the content stream associated with the entity.
HttpResponse response = <...>

HttpEntity entity = response.getEntity();
if (entity != null) {
    InputStream instream = entity.getContent();
    try {
        // do something useful
    } finally {
        instream.close();
    }
}
Please note that the HttpEntity#writeTo(OutputStream) method is also required to ensure proper release of system resources once the entity has been fully written out. If this method obtains an instance of java.io.InputStream by calling HttpEntity#getContent(), it is also expected to close the stream in a finally clause.
When working with streaming entities, one can use the EntityUtils#consume(HttpEntity) method to ensure that the entity content has been fully consumed and the underlying stream has been closed.
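The resource-release example above could thus be sketched using EntityUtils#consume(HttpEntity) instead of closing the stream manually:

HttpResponse response = <...>

HttpEntity entity = response.getEntity();
try {
    // do something useful with the entity
} finally {
    // ensures the content is fully consumed and the underlying stream is closed
    EntityUtils.consume(entity);
}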
There are a few ways to create entities. HttpCore provides the following implementations:
Exactly as the name implies, this basic entity (BasicHttpEntity) represents an underlying stream. In general, use this class for entities received from HTTP messages.
This entity has an empty constructor. After construction, it represents no content, and has a negative content length.
One needs to set the content stream, and optionally the length. This can be done with the BasicHttpEntity#setContent(InputStream) and BasicHttpEntity#setContentLength(long) methods respectively.
BasicHttpEntity myEntity = new BasicHttpEntity();
myEntity.setContent(someInputStream);
myEntity.setContentLength(340); // sets the length to 340
ByteArrayEntity is a self-contained, repeatable entity that obtains its content from a given byte array. Supply the byte array to the constructor.
ByteArrayEntity myEntity = new ByteArrayEntity(new byte[] {1,2,3}, ContentType.APPLICATION_OCTET_STREAM);
StringEntity is a self-contained, repeatable entity that obtains its content from a java.lang.String object. It has three constructors: the first simply takes a java.lang.String object; the second also takes a character encoding for the data in the string; the third allows the mime type to be specified.
StringBuilder sb = new StringBuilder();
Map<String, String> env = System.getenv();
for (Map.Entry<String, String> envEntry : env.entrySet()) {
    sb.append(envEntry.getKey())
        .append(": ")
        .append(envEntry.getValue())
        .append("\r\n");
}

// construct without a character encoding (defaults to ISO-8859-1)
HttpEntity myEntity1 = new StringEntity(sb.toString());

// alternatively construct with an encoding (mime type defaults to "text/plain")
HttpEntity myEntity2 = new StringEntity(sb.toString(), Consts.UTF_8);

// alternatively construct with an encoding and a mime type
HttpEntity myEntity3 = new StringEntity(sb.toString(),
        ContentType.create("text/plain", Consts.UTF_8));
InputStreamEntity is a streamed, non-repeatable entity that obtains its content from an input stream. Construct it by supplying the input stream and the content length. Use the content length to limit the amount of data read from the java.io.InputStream. If the length matches the amount of data available on the input stream, all data will be sent. Alternatively, a negative content length will read all data from the input stream.
InputStream instream = getSomeInputStream();
InputStreamEntity myEntity = new InputStreamEntity(instream, 16);
FileEntity is a self-contained, repeatable entity that obtains its content from a file. Use it mostly to stream large files of different types, where you need to supply the content type of the file: for instance, sending a zip file would require the content type application/zip, while an XML file would require application/xml.
HttpEntity entity = new FileEntity(staticFile, ContentType.create("application/java-archive"));
This is the base class for creating wrapped entities. The wrapping entity holds a reference to a wrapped entity and delegates all calls to it. Implementations of wrapping entities can derive from this class and need to override only those methods that should not be delegated to the wrapped entity.
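As an illustration of the wrapping approach, the following sketch shows a hypothetical GzipDecompressingEntity that decorates another entity and transparently decompresses its content with java.util.zip.GZIPInputStream; only the methods affected by the decompression are overridden, everything else is delegated to the wrapped entity:

class GzipDecompressingEntity extends HttpEntityWrapper {

    public GzipDecompressingEntity(final HttpEntity entity) {
        super(entity);
    }

    @Override
    public InputStream getContent() throws IOException {
        // decorate the wrapped entity's content stream with gzip decompression
        return new GZIPInputStream(wrappedEntity.getContent());
    }

    @Override
    public long getContentLength() {
        // the length of the decompressed content is not known in advance
        return -1;
    }

}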
BufferedHttpEntity is a subclass of HttpEntityWrapper. Construct it by supplying another entity. It reads the content from the supplied entity and buffers it in memory.
This makes it possible to make a repeatable entity from a non-repeatable one. If the supplied entity is already repeatable, it simply passes calls through to the underlying entity.
myNonRepeatableEntity.setContent(someInputStream);
BufferedHttpEntity myBufferedEntity = new BufferedHttpEntity(myNonRepeatableEntity);
HTTP connections are responsible for HTTP message serialization and deserialization. One should rarely need to use HTTP connection objects directly. There are higher level protocol components intended for execution and processing of HTTP requests. However, in some cases direct interaction with HTTP connections may be necessary, for instance, to access properties such as the connection status, the socket timeout or the local and remote addresses.
It is important to bear in mind that HTTP connections are not thread-safe. We strongly recommend limiting all interactions with HTTP connection objects to one thread. The only method of the HttpConnection interface and its sub-interfaces which is safe to invoke from another thread is HttpConnection#shutdown().
HttpCore does not provide full support for opening connections because the process of establishing a new connection - especially on the client side - can be very complex when it involves one or more authenticating and/or tunneling proxies. Instead, blocking HTTP connections can be bound to any arbitrary network socket.
Socket socket = <...>

DefaultBHttpClientConnection conn = new DefaultBHttpClientConnection(8 * 1024);
conn.bind(socket);
System.out.println(conn.isOpen());
HttpConnectionMetrics metrics = conn.getMetrics();
System.out.println(metrics.getRequestCount());
System.out.println(metrics.getResponseCount());
System.out.println(metrics.getReceivedBytesCount());
System.out.println(metrics.getSentBytesCount());
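Connection properties such as the socket timeout and the local and remote addresses mentioned above can be inspected and adjusted on the bound connection as well; a brief sketch, reusing the conn instance from the previous example:

conn.setSocketTimeout(5000); // socket timeout in milliseconds
System.out.println(conn.getSocketTimeout());
System.out.println(conn.getLocalAddress() + ":" + conn.getLocalPort());
System.out.println(conn.getRemoteAddress() + ":" + conn.getRemotePort());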
HTTP connection interfaces, both client and server, send and receive messages in two stages. The message head is transmitted first. Depending on properties of the message head, a message body may follow it. Please note it is very important to always close the underlying content stream in order to signal that the processing of the message is complete. HTTP entities that stream out their content directly from the input stream of the underlying connection must ensure they fully consume the content of the message body for that connection to be potentially re-usable.
An over-simplified process of request execution on the client side may look like this:
Socket socket = <...>

DefaultBHttpClientConnection conn = new DefaultBHttpClientConnection(8 * 1024);
conn.bind(socket);
HttpRequest request = new BasicHttpRequest("GET", "/");
conn.sendRequestHeader(request);
HttpResponse response = conn.receiveResponseHeader();
conn.receiveResponseEntity(response);
HttpEntity entity = response.getEntity();
if (entity != null) {
    // Do something useful with the entity and, when done, ensure all
    // content has been consumed, so that the underlying connection
    // can be re-used
    EntityUtils.consume(entity);
}
An over-simplified process of request handling on the server side may look like this:
Socket socket = <...>

DefaultBHttpServerConnection conn = new DefaultBHttpServerConnection(8 * 1024);
conn.bind(socket);
HttpRequest request = conn.receiveRequestHeader();
if (request instanceof HttpEntityEnclosingRequest) {
    conn.receiveRequestEntity((HttpEntityEnclosingRequest) request);
    HttpEntity entity = ((HttpEntityEnclosingRequest) request).getEntity();
    if (entity != null) {
        // Do something useful with the entity and, when done, ensure all
        // content has been consumed, so that the underlying connection
        // could be re-used
        EntityUtils.consume(entity);
    }
}
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, 200, "OK");
response.setEntity(new StringEntity("Got it"));
conn.sendResponseHeader(response);
conn.sendResponseEntity(response);
Please note that one should rarely need to transmit messages using these low level methods and should normally use the appropriate higher level HTTP service implementations instead.
HTTP connections manage the process of the content transfer using the HttpEntity interface. HTTP connections generate an entity object that encapsulates the content stream of the incoming message. Please note that HttpServerConnection#receiveRequestEntity() and HttpClientConnection#receiveResponseEntity() do not retrieve or buffer any incoming data. They merely inject an appropriate content codec based on the properties of the incoming message. The content can be retrieved by reading from the content input stream of the enclosed entity using HttpEntity#getContent(). The incoming data will be decoded automatically and completely transparently to the data consumer. Likewise, HTTP connections rely on the HttpEntity#writeTo(OutputStream) method to generate the content of an outgoing message. If an outgoing message encloses an entity, the content will be encoded automatically based on the properties of the message.
Default implementations of HTTP connections support three content transfer mechanisms defined by the HTTP/1.1 specification:
Content-Length delimited: The end of the content entity is determined by the value of the Content-Length header. Maximum entity length: Long#MAX_VALUE.
Identity coding: The end of the content entity is demarcated by closing the underlying connection (end of stream condition). For obvious reasons the identity encoding can only be used on the server side. Maximum entity length: unlimited.
Chunk coding: The content is sent in small chunks. Maximum entity length: unlimited.
The appropriate content stream class will be created automatically depending on properties of the entity enclosed with the message.
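For instance, on the sending side the choice between a Content-Length delimited and a chunk coded transfer can be influenced through the properties of the entity itself; a minimal sketch, assuming content of unknown length (someInputStream) and an outgoing response object:

BasicHttpEntity entity = new BasicHttpEntity();
entity.setContent(someInputStream); // length left unset, i.e. unknown
entity.setChunked(true);            // hint that chunk coding is preferred
response.setEntity(entity);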
HTTP connections can be terminated either gracefully by calling HttpConnection#close() or forcibly by calling HttpConnection#shutdown(). The former tries to flush all buffered data prior to terminating the connection and may block indefinitely; the HttpConnection#close() method is not thread-safe. The latter terminates the connection without flushing internal buffers and returns control to the caller as soon as possible without blocking for long; the HttpConnection#shutdown() method is thread-safe.
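A common pattern, sketched below rather than prescribed by the library, is to attempt a graceful close first and fall back to a forced shutdown if that fails; conn is assumed to be an HttpConnection instance:

try {
    conn.close();
} catch (IOException ex) {
    // graceful close failed; terminate the connection forcibly
    try {
        conn.shutdown();
    } catch (IOException ignore) {
    }
}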
All HttpCore components potentially throw two types of exceptions: IOException in case of an I/O failure such as a socket timeout or a socket reset, and HttpException that signals an HTTP failure such as a violation of the HTTP protocol. Usually I/O errors are considered non-fatal and recoverable, whereas HTTP protocol errors are considered fatal and cannot be automatically recovered from.
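The following sketch shows how the two exception types are typically told apart on the client side, reusing the conn and request objects from the over-simplified client example above:

try {
    conn.sendRequestHeader(request);
    HttpResponse response = conn.receiveResponseHeader();
    // process the response ...
} catch (IOException ex) {
    // I/O failure (socket timeout, connection reset, ...); usually non-fatal,
    // the request may be retried on a fresh connection
} catch (HttpException ex) {
    // HTTP protocol violation; considered fatal and not automatically recoverable
}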
An HTTP protocol interceptor is a routine that implements a specific aspect of the HTTP protocol. Usually protocol interceptors are expected to act upon one specific header or a group of related headers of the incoming message, or populate the outgoing message with one specific header or a group of related headers. Protocol interceptors can also manipulate content entities enclosed with messages; transparent content compression / decompression being a good example. Usually this is accomplished by using the 'Decorator' pattern, where a wrapper entity class is used to decorate the original entity. Several protocol interceptors can be combined to form one logical unit.
An HTTP protocol processor is a collection of protocol interceptors that implements the 'Chain of Responsibility' pattern, where each individual protocol interceptor is expected to work on the particular aspect of the HTTP protocol it is responsible for.
Usually the order in which interceptors are executed should not matter as long as they do not depend on a particular state of the execution context. If protocol interceptors have interdependencies and therefore must be executed in a particular order, they should be added to the protocol processor in the same sequence as their expected execution order.
Protocol interceptors must be implemented as thread-safe. Similarly to servlets, protocol interceptors should not use instance variables unless access to those variables is synchronized.
HttpCore comes with a number of the most essential protocol interceptors for client and server HTTP processing.
RequestContent is the most important interceptor for outgoing requests. It is responsible for delimiting the content length by adding the Content-Length or Transfer-Encoding header based on the properties of the enclosed entity and the protocol version. This interceptor is required for correct functioning of client side protocol processors.
ResponseContent is the most important interceptor for outgoing responses. It is responsible for delimiting the content length by adding the Content-Length or Transfer-Encoding header based on the properties of the enclosed entity and the protocol version. This interceptor is required for correct functioning of server side protocol processors.
RequestConnControl is responsible for adding the Connection header to outgoing requests, which is essential for managing persistence of HTTP/1.0 connections. This interceptor is recommended for client side protocol processors.
ResponseConnControl is responsible for adding the Connection header to outgoing responses, which is essential for managing persistence of HTTP/1.0 connections. This interceptor is recommended for server side protocol processors.
RequestDate is responsible for adding the Date header to outgoing requests. This interceptor is optional for client side protocol processors.
ResponseDate is responsible for adding the Date header to outgoing responses. This interceptor is recommended for server side protocol processors.
RequestExpectContinue is responsible for enabling the 'expect-continue' handshake by adding the Expect header. This interceptor is recommended for client side protocol processors.
RequestTargetHost is responsible for adding the Host header. This interceptor is required for client side protocol processors.
RequestUserAgent is responsible for adding the User-Agent header. This interceptor is recommended for client side protocol processors.
Usually HTTP protocol processors are used to pre-process incoming messages prior to executing application specific processing logic and to post-process outgoing messages.
HttpProcessor httpproc = HttpProcessorBuilder.create()
        // Required protocol interceptors
        .add(new RequestContent())
        .add(new RequestTargetHost())
        // Recommended protocol interceptors
        .add(new RequestConnControl())
        .add(new RequestUserAgent("MyAgent-HTTP/1.1"))
        // Optional protocol interceptors
        .add(new RequestExpectContinue(true))
        .build();
HttpCoreContext context = HttpCoreContext.create();
HttpRequest request = new BasicHttpRequest("GET", "/");
httpproc.process(request, context);
Send the request to the target host and get a response.
HttpResponse response = <...>

httpproc.process(response, context);
Please note the BasicHttpProcessor class does not synchronize access to its internal structures and therefore may not be thread-safe.
Protocol interceptors can collaborate by sharing information - such as a processing state - through an HTTP execution context. HTTP context is a structure that can be used to map an attribute name to an attribute value. Internally, HTTP context implementations are usually backed by a HashMap. The primary purpose of the HTTP context is to facilitate information sharing among various logically related components. HTTP context can be used to store a processing state for one message or several consecutive messages. Multiple logically related messages can participate in a logical session if the same context is reused between consecutive messages.
HttpProcessor httpproc = HttpProcessorBuilder.create()
        .add(new HttpRequestInterceptor() {
            public void process(
                    HttpRequest request,
                    HttpContext context) throws HttpException, IOException {
                String id = (String) context.getAttribute("session-id");
                if (id != null) {
                    request.addHeader("Session-ID", id);
                }
            }
        })
        .build();
HttpCoreContext context = HttpCoreContext.create();
HttpRequest request = new BasicHttpRequest("GET", "/");
httpproc.process(request, context);
HttpService is a server side HTTP protocol handler based on the blocking I/O model that implements the essential requirements of the HTTP protocol for server side message processing, as described by RFC 2616.
HttpService relies on the HttpProcessor instance to generate mandatory protocol headers for all outgoing messages and apply common, cross-cutting message transformations to all incoming and outgoing messages, whereas HTTP request handlers are expected to take care of application specific content generation and processing.
HttpProcessor httpproc = HttpProcessorBuilder.create()
        .add(new ResponseDate())
        .add(new ResponseServer("MyServer-HTTP/1.1"))
        .add(new ResponseContent())
        .add(new ResponseConnControl())
        .build();
HttpService httpService = new HttpService(httpproc, null);
The HttpRequestHandler interface represents a routine for processing of a specific group of HTTP requests. HttpService is designed to take care of protocol specific aspects, whereas individual request handlers are expected to take care of application specific HTTP processing. The main purpose of a request handler is to generate a response object with a content entity to be sent back to the client in response to the given request.
HttpRequestHandler myRequestHandler = new HttpRequestHandler() {

    public void handle(
            HttpRequest request,
            HttpResponse response,
            HttpContext context) throws HttpException, IOException {
        response.setStatusCode(HttpStatus.SC_OK);
        response.setEntity(
                new StringEntity("some important message", ContentType.TEXT_PLAIN));
    }

};
HTTP request handlers are usually managed by an HttpRequestHandlerMapper that matches a request URI to a request handler. HttpCore includes a very simple implementation of the request handler mapper based on a trivial pattern matching algorithm: UriHttpRequestHandlerMapper supports only three formats: *, <uri>* and *<uri>.
HttpProcessor httpproc = <...>

HttpRequestHandler myRequestHandler1 = <...>
HttpRequestHandler myRequestHandler2 = <...>
HttpRequestHandler myRequestHandler3 = <...>

UriHttpRequestHandlerMapper handlerMapper = new UriHttpRequestHandlerMapper();
handlerMapper.register("/service/*", myRequestHandler1);
handlerMapper.register("*.do", myRequestHandler2);
handlerMapper.register("*", myRequestHandler3);
HttpService httpService = new HttpService(httpproc, handlerMapper);
Users are encouraged to provide more sophisticated implementations of HttpRequestHandlerMapper - for instance, based on regular expressions.
When fully initialized and configured, the HttpService can be used to execute and handle requests for active HTTP connections. The HttpService#handleRequest() method reads an incoming request, generates a response and sends it back to the client. This method can be executed in a loop to handle multiple requests on a persistent connection. The HttpService#handleRequest() method is safe to execute from multiple threads. This allows processing of requests on several connections simultaneously, as long as all the protocol interceptors and request handlers used by the HttpService are thread-safe.
HttpService httpService = <...>
HttpServerConnection conn = <...>
HttpContext context = <...>

boolean active = true;
try {
    while (active && conn.isOpen()) {
        httpService.handleRequest(conn, context);
    }
} finally {
    conn.shutdown();
}
HttpRequestExecutor is a client side HTTP protocol handler based on the blocking I/O model that implements the essential requirements of the HTTP protocol for client side message processing, as described by RFC 2616.
The HttpRequestExecutor relies on the HttpProcessor instance to generate mandatory protocol headers for all outgoing messages and apply common, cross-cutting message transformations to all incoming and outgoing messages. Application specific processing can be implemented outside HttpRequestExecutor once the request has been executed and a response has been received.
HttpClientConnection conn = <...>

HttpProcessor httpproc = HttpProcessorBuilder.create()
        .add(new RequestContent())
        .add(new RequestTargetHost())
        .add(new RequestConnControl())
        .add(new RequestUserAgent("MyClient/1.1"))
        .add(new RequestExpectContinue(true))
        .build();
HttpRequestExecutor httpexecutor = new HttpRequestExecutor();

HttpRequest request = new BasicHttpRequest("GET", "/");
HttpCoreContext context = HttpCoreContext.create();
httpexecutor.preProcess(request, httpproc, context);
HttpResponse response = httpexecutor.execute(request, conn, context);
httpexecutor.postProcess(response, httpproc, context);

HttpEntity entity = response.getEntity();
EntityUtils.consume(entity);
Methods of HttpRequestExecutor are safe to execute from multiple threads. This allows execution of requests on several connections simultaneously, as long as all the protocol interceptors used by the HttpRequestExecutor are thread-safe.
The ConnectionReuseStrategy interface is intended to determine whether the underlying connection can be re-used for processing of further messages after the transmission of the current message has been completed. The default connection re-use strategy attempts to keep connections alive whenever possible. Firstly, it examines the version of the HTTP protocol used to transmit the message. HTTP/1.1 connections are persistent by default, while HTTP/1.0 connections are not. Secondly, it examines the value of the Connection header. The peer can indicate whether it intends to re-use the connection on the opposite side by sending Keep-Alive or Close values in the Connection header. Thirdly, the strategy makes the decision whether the connection is safe to re-use based on the properties of the enclosed entity, if available.
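A minimal client-side sketch using the default implementation, DefaultConnectionReuseStrategy, to decide whether the connection may be kept alive; the response, context and conn objects are assumed to come from the HttpRequestExecutor example above:

ConnectionReuseStrategy connStrategy = new DefaultConnectionReuseStrategy();
// decide, based on the protocol version, the Connection header and the
// properties of the enclosed entity, whether the connection can be kept alive
if (!connStrategy.keepAlive(response, context)) {
    conn.close();
}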
Efficient client-side HTTP transport often requires effective re-use of persistent connections. HttpCore facilitates the process of connection re-use by providing support for managing pools of persistent HTTP connections. Connection pool implementations are thread-safe and can be used concurrently by multiple consumers.
By default the pool allows only 20 concurrent connections in total and two concurrent connections per unique route. The two-connection limit is due to the requirements of the HTTP specification. However, in practical terms this can often be too restrictive. One can change the pool configuration at runtime to allow for more concurrent connections, depending on a particular application context.
HttpHost target = new HttpHost("localhost");
BasicConnPool connpool = new BasicConnPool();
connpool.setMaxTotal(200);
connpool.setDefaultMaxPerRoute(10);
connpool.setMaxPerRoute(target, 20);
Future<BasicPoolEntry> future = connpool.lease(target, null);
BasicPoolEntry poolEntry = future.get();
HttpClientConnection conn = poolEntry.getConnection();
Please note that the connection pool has no way of knowing whether or not a leased connection is still being used. It is the responsibility of the connection pool user to ensure that the connection is released back to the pool once it is no longer needed, even if the connection is not reusable.
BasicConnPool connpool = <...>

Future<BasicPoolEntry> future = connpool.lease(target, null);
BasicPoolEntry poolEntry = future.get();
HttpClientConnection conn = poolEntry.getConnection();
try {
    // do something useful with the connection
} finally {
    connpool.release(poolEntry, conn.isOpen());
}
The state of the connection pool can be interrogated at runtime.
HttpHost target = new HttpHost("localhost");
BasicConnPool connpool = <...>

PoolStats totalStats = connpool.getTotalStats();
System.out.println("total available: " + totalStats.getAvailable());
System.out.println("total leased: " + totalStats.getLeased());
System.out.println("total pending: " + totalStats.getPending());

PoolStats targetStats = connpool.getStats(target);
System.out.println("target available: " + targetStats.getAvailable());
System.out.println("target leased: " + targetStats.getLeased());
System.out.println("target pending: " + targetStats.getPending());
Please note that connection pools do not proactively evict expired connections. Even though expired connections cannot be leased to the requester, the pool may accumulate stale connections over time, especially after a period of inactivity. It is generally advisable to force eviction of expired and otherwise idle connections from the pool after an extensive period of inactivity.
BasicConnPool connpool = <...>

connpool.closeExpired();
connpool.closeIdle(1, TimeUnit.MINUTES);
Blocking connections can be bound to any arbitrary socket. This makes SSL support quite straightforward. Any SSLSocket instance can be bound to a blocking connection in order to make all messages transmitted over that connection secured by TLS/SSL.
SSLContext sslcontext = SSLContext.getInstance("TLS");
sslcontext.init(null, null, null);
SocketFactory sf = sslcontext.getSocketFactory();
SSLSocket socket = (SSLSocket) sf.createSocket("somehost", 443);
socket.setEnabledCipherSuites(new String[] {
        "TLS_RSA_WITH_AES_256_CBC_SHA",
        "TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
        "TLS_DHE_DSS_WITH_AES_256_CBC_SHA" });

DefaultBHttpClientConnection conn = new DefaultBHttpClientConnection(8 * 1024);
conn.bind(socket);