Connecting your web application’s server-side logic to a third-party service usually introduces some kind of performance penalty. Avoiding it requires task queues, worker processes, or other mechanisms that let slower work happen in the background while the interface stays responsive for the user.
Turret.IO provides an API for sending user data that’s used to build email distribution lists from an arbitrary number of data points supplied by a web application. A common problem developers face with this usage pattern is that a task queue is almost always necessary to keep the API calls from adding processing time while a visitor is interacting with the site. Even carefully optimized systems can exhibit slower-than-normal response times given the nature of a web request (DNS, network latency, drive failures, unbalanced load, etc.), so relying on a third-party service to respond quickly is simply a bad idea.
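To make the problem concrete, here’s a minimal sketch (not Turret.IO code; the URL and function are hypothetical) of what a synchronous third-party call inside request handling looks like:

import requests

def handle_signup(email):
    # ... store the new user locally ...

    # This call blocks the response until the remote service answers, so DNS
    # lookups, network latency, or a slow endpoint all add directly to the
    # visitor's wait time. A task queue (or the connector described below)
    # keeps this work out of the request path.
    requests.post('https://api.example.com/track',
                  data={'email': email, 'status': 'new'},
                  timeout=5)
    return 'ok'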
Docker to the rescue
Docker is a tool for building and running Linux containers, and it lets us quickly create secure environments without the extra time and cost of a full virtual machine. Our container includes all of the dependencies and configuration necessary to run a self-contained service. Better yet, containers can be linked to one another by name and communicate over specifically exposed ports.
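For example, once the turret_io container from the steps below is running, an application container (my_app_image is just a placeholder here) could be linked to it by name:

> [sudo] docker run -d --link turret_io:turret_io my_app_image

Inside the linked container, Docker sets environment variables describing the connector’s address and exposed ports, so the application can reach it without publishing the port on the host. The rest of this post simply publishes the port and connects over localhost instead.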
We’ve created a dockerized Turret.IO connector specifically for developers who don’t currently need a task queue, simply want to keep things organized by running services in their own containers, or are already using Docker and want better API performance. The connector bootstraps all of the components needed to run a locally available message queue that forwards messages to our message broker over SSL.
Connections to the container complete an order of magnitude faster than requests sent directly to our system, and messages are also queued locally: if our service becomes unavailable for a period of time, the connector re-sends the queued messages once the link is active again. This is especially useful in applications where response time is crucial and even the slightest performance degradation for business intelligence and marketing purposes is unacceptable.
Running the container
Install Docker (if it isn’t already installed)
Start the container
> [sudo] docker run -d -p 5672 --name="turret_io" turret_io_queue
Find the referenced port
> [sudo] docker port turret_io 5672
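The output shows the host port Docker mapped to the container’s port 5672 and will look something like this (the host port is assigned at random):

0.0.0.0:49153

If the reported host port isn’t 5672, add it to the AMQP URL in the next step (e.g. localhost:49153 instead of localhost), or start the container with an explicit mapping (-p 5672:5672) so the ports match.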
Send data using Python
import pika
import TurretIO
u = TurretIO.User('API_KEY', 'API_SECRET')
# Build the message queue payload
payload = u.queue_set('email@company.com', {'contact_name':'John Smith', 'status':'3', 'logins':'35', 'location': 'san francisco, ca'})
# Connect to the container using pika (or another RabbitMQ client)
url = pika.connection.URLParameters('amqp://turret_io_user:turret_io_pass@localhost/turret_io')
conn = pika.BlockingConnection(url)
# Create a channel, bind the exchange and queue to the routing key
chan = conn.channel()
chan.queue_bind(exchange='turret_io_exch', queue='turret_io_push_queue', routing_key='turret_io_rk')
# Publish the payload to the container
chan.basic_publish(exchange='turret_io_exch', routing_key='turret_io_rk', body=payload)
# Close the connection when you're done publishing
conn.close()
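In an application you’d typically wrap the connection and publish in a small helper and call it wherever user data changes. A minimal sketch (the helper name and error handling are ours, not part of the connector):

import pika
import TurretIO

def queue_user_update(email, attributes):
    """Publish a Turret.IO user update to the local connector container."""
    u = TurretIO.User('API_KEY', 'API_SECRET')
    payload = u.queue_set(email, attributes)

    conn = pika.BlockingConnection(pika.connection.URLParameters(
        'amqp://turret_io_user:turret_io_pass@localhost/turret_io'))
    try:
        chan = conn.channel()
        chan.queue_bind(exchange='turret_io_exch', queue='turret_io_push_queue', routing_key='turret_io_rk')
        chan.basic_publish(exchange='turret_io_exch', routing_key='turret_io_rk', body=payload)
    finally:
        conn.close()

# e.g. after a visitor updates their profile:
queue_user_update('email@company.com', {'contact_name': 'John Smith', 'status': '3'})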
Stop the container
> [sudo] docker stop turret_io
Remove the container (deletes all data)
> [sudo] docker rm turret_io
(the exchange and queue binding is intentionally left verbose — we realize not everybody uses pika, but other clients should work in a similar manner)
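For instance, here’s an untested sketch of the same publish using kombu, assuming the same credentials, exchange, and routing key as above:

from kombu import Connection
import TurretIO

u = TurretIO.User('API_KEY', 'API_SECRET')
payload = u.queue_set('email@company.com', {'status': '3'})

with Connection('amqp://turret_io_user:turret_io_pass@localhost/turret_io') as conn:
    producer = conn.Producer()
    producer.publish(payload, exchange='turret_io_exch', routing_key='turret_io_rk')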
In our tests on a fresh container, we’ve seen requests (connection + publish) complete in roughly 30ms on an underpowered system running inside a virtual machine, more than a 4x improvement over a plain API call to the same endpoint.
The Docker image will be available by the end of August 2014.
Happy docking!
Sign up for Turret.IO – the only email marketing platform specifically for developers