Thursday, May 31, 2012

Getting CAPTCHA to do useful work



How do we get CAPTCHA to digitize books? The solution is simple and elegant: show two words, one already known and the other to be digitized. Users do not know which is which. If the answer for the control (known) word is correct, we can trust the answer for the second word with high probability.

This is done by many sites, and they have digitized about 2.5 million books per year this way. See the TED talk by Luis von Ahn for details.

They are also trying to use language learners to translate the web: the system shows learners samples to translate and combines their translations to create the final version.

This, in my opinion, is a truly creative solution.

Maybe we can use the same idea to tag images. For example, show a picture and ask how many elephants are in it. It is not easy for a program to answer such arbitrary questions about pictures. Now play the same trick with controls and without controls as above. (We'll need some thinking to get it right, but it seems doable.)

Friday, May 25, 2012

Debugging the Hadoop Task Tracker, Job Tracker, Data Node, or Name Node

Hadoop's conf/hadoop-env.sh has the following environment variables:

  1. HADOOP_NAMENODE_OPTS
  2. HADOOP_SECONDARYNAMENODE_OPTS
  3. HADOOP_DATANODE_OPTS
  4. HADOOP_BALANCER_OPTS
  5. HADOOP_JOBTRACKER_OPTS
  6. HADOOP_TASKTRACKER_OPTS


You can use them to enable the remote debugger so that you can connect to and debug any of the above servers. Unfortunately, Hadoop tasks are started in separate JVMs by the task tracker, so you cannot use this method to debug your map or reduce functions, as they run in those separate JVMs.

To debug the task tracker, follow these steps.

1. Edit conf/hadoop-env.sh to include the following:

export HADOOP_TASKTRACKER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n"

2. Start Hadoop (bin/start-dfs.sh and bin/start-mapred.sh)
3. With suspend=n as above, the task tracker starts normally and listens for a debugger on port 5000; if you want it to block at startup until a debugger attaches, use suspend=y instead
4. Connect to the server using an Eclipse "Remote Java Application" debug configuration (host and port 5000) and add your breakpoints
5. Run a map reduce Job 
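
If what you actually want to debug is your own map or reduce code (which, as noted above, runs in child JVMs), one common workaround is to run the job with the local job runner so everything executes in a single JVM that an ordinary IDE debugger can step through. Below is a minimal sketch assuming the Hadoop 1.x API; MyMapper and MyReducer are placeholders for your own classes, and "input"/"output" are illustrative local paths.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalDebugRun {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "local");   // run MapReduce in-process
        conf.set("fs.default.name", "file:///");   // read from the local file system

        Job job = new Job(conf, "local-debug-run");
        job.setJarByClass(LocalDebugRun.class);
        job.setMapperClass(MyMapper.class);        // placeholder: your mapper
        job.setReducerClass(MyReducer.class);      // placeholder: your reducer
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("input"));
        FileOutputFormat.setOutputPath(job, new Path("output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Since the whole job runs inside the single JVM started by your IDE, ordinary breakpoints in the mapper and reducer work, and no remote debugging is needed.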

Friday, May 18, 2012

How to scale Complex Event Processing (CEP) Systems?

What is CEP?

Complex event processing (CEP) systems query events on the fly without storing them.
  • For an introduction and definition of CEP, please refer to the CEP Blog and Wikipedia.
  • If you need comprehensive coverage of CEP, read the paper "Processing flows of information: from data stream to complex event processing" [1] (or the slide deck).
In CEP, we think in terms of event streams. An event stream is a logical sequence of events that become available over time. For example, a stock event stream consists of events that notify changes to a stock's price. Users provide queries to the CEP engine, which implements the CEP logic, and the engine matches those queries against events arriving through the event streams.

CEP differs from other paradigms like simple event processing and filtering by its support for temporal queries, which reason in terms of concepts like time windows and before/after relationships. For example, a typical CEP query might say:

"If the IBM stock value increases by more than 10% within an hour, notify me."

Such a CEP query has a few defining characteristics.
  1. CEP queries generally keep running and keep emitting events whenever incoming events match the condition given in the query.
  2. A CEP query operates on the fly and stores only a minimal number of events.
  3. CEP engines generally respond to matched conditions within milliseconds.

What is Scaling CEP?

There are many CEP engine implementations (see the CEP Players list 2012). However, CEP engines mostly run on a single large box, scaling up vertically. Scaling CEP engines horizontally, across many machines, is still an open problem. The remainder of this post discusses what horizontally scaling a CEP engine means and some potential answers.

We use the term scaling to describe the ability of a CEP system to handle larger workloads or more complex queries by adding more resources. Scaling CEP has several dimensions.
  1. Handling a large number of queries
  2. Handling queries that need a large working memory
  3. Handling a complex query that might not fit within a single machine
  4. Handling a large number of events

How to Scale CEP?

Let us consider each dimension separately.

Handling Large Number of Queries

This is the easiest of the four, since we can use a shared-nothing architecture. Here we run multiple CEP engine instances (each instance runs on a single host), each running a subset of the queries.
  1. A trivial implementation sends all events (streams) to all the CEP engines.
  2. A more optimized implementation can use a publish/subscribe message broker network (like Narada Broker). Here each CEP engine analyzes its deployed queries and subscribes to the required event streams in the broker network (see the sketch after this list). Generally, we map each event stream to a topic in the publish/subscribe system.
  3. A third option is to delegate event distribution to a stream processing system (e.g. Apache S4 or Storm). For instance, links [4] and [5] describe such a scenario for running Esper within Storm.
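
As a rough sketch of option 2 (illustrative only, not any particular broker's API): assign each query to one of the engine instances, then have every engine subscribe only to the streams its own queries reference. The query names and stream names below are made up.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative only: assign queries to CEP engine instances and compute, per
// engine, the set of stream topics it must subscribe to in the broker network.
public class QueryPartitioner {
    static int engineFor(String queryId, int engineCount) {
        return Math.abs(queryId.hashCode() % engineCount);
    }

    public static void main(String[] args) {
        int engineCount = 3;
        Map<String, List<String>> queryToStreams = new HashMap<String, List<String>>();
        queryToStreams.put("ibm-price-spike", Arrays.asList("StockStream"));
        queryToStreams.put("late-shipments", Arrays.asList("OrderStream", "ShipmentStream"));

        Map<Integer, Set<String>> subscriptions = new HashMap<Integer, Set<String>>();
        for (Map.Entry<String, List<String>> e : queryToStreams.entrySet()) {
            int engine = engineFor(e.getKey(), engineCount);
            if (!subscriptions.containsKey(engine)) {
                subscriptions.put(engine, new HashSet<String>());
            }
            subscriptions.get(engine).addAll(e.getValue());
        }
        // Each engine subscribes only to the topics in subscriptions.get(engineId).
        System.out.println(subscriptions);
    }
}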

Queries that need large working memory

For instance, a long-running complex query that needs to maintain a large window, and all the events in that window, will need a large working memory. A potential answer to this problem is to use a distributed cache to store the working memory. Reference [6] describes such a scenario.

Handling a large number of events and handling a complex query that might not fit within a single machine

We will handle both scenarios together, as they are two sides of the same coin. In both cases, we have trouble fitting a single query into a single host such that it can support the given event rate.
To handle such scenarios, we have to distribute the query across many computers.

We can do this by breaking the query into several steps in a pipeline, where each step matches events against some conditions and republishes the matching events to steps further down the pipeline. Then we can deploy different steps of the pipeline on different machines.

For example, let's consider the following query. It matches if there are two IBM stock events within 30 seconds, both with a price greater than 70, where the price has increased by more than 10%.

select a1.symbol, a1.price, a2.price from every 
    a1=StockStream[price > 70 and symbol == 'IBM'] -> 
    a2=StockStream[price > 70 and symbol == 'IBM']
        [a2.price > 1.1*a1.price][within.time=30]


As shown by the figure, we can break the query into three nodes, and each node republishes its matching events to the next node (Option 1).

CEP Query as a Pipeline

However, queries often have other properties that allow further optimization. For example, although the last step of matching the price increase is stateful, the other two steps are stateless. Stateful operations remember information after processing an event, so earlier events affect the processing of later events, while stateless operations depend only on the event being processed.

Therefore, we can place multiple instances of those stateless steps using a shared-nothing architecture. For example, we can break the query into five nodes, as shown by the bottom part of the picture (Option 2).

Another favorable fact is that CEP processing generally involves filtering, so the number of events reduces as we progress through the pipeline. Therefore, pushing stateless filter-like operations (e.g. matching against symbol == 'IBM') to the early parts of the pipeline and scaling them in a shared-nothing manner allows us to scale the system to much higher event rates. For example, let's say the StockQuote event stream generates 100,000 events per second, but only 5% of them are about IBM. Then only 5,000 events make it past the first filter, which is much easier to handle than 100,000 events.
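
To make the pipeline idea concrete, here is a minimal sketch (not tied to any particular framework) of a stateless filter stage: it consumes raw stock events, applies the symbol and price filter, and forwards only the matching events to the next stage. The StockEvent type and the "next" hook are assumptions for illustration; in a real deployment the hook would publish to a broker topic or a stream processing bolt.

import java.util.function.Consumer;

// Hypothetical event type used only for illustration.
class StockEvent {
    final String symbol;
    final double price;
    StockEvent(String symbol, double price) { this.symbol = symbol; this.price = price; }
}

// A stateless pipeline stage: filters events and forwards matches to the next stage.
class FilterStage implements Consumer<StockEvent> {
    private final Consumer<StockEvent> next;
    FilterStage(Consumer<StockEvent> next) { this.next = next; }

    @Override
    public void accept(StockEvent e) {
        // Stateless check: nothing is remembered between events, so many copies of
        // this stage can run in parallel over partitions of the stream (shared-nothing).
        if ("IBM".equals(e.symbol) && e.price > 70) {
            next.accept(e);
        }
    }
}

Because the stage keeps no state, any number of copies can each process a slice of the incoming stream; only the final, stateful pattern-matching step needs to see all the surviving events.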

However, it is worth noting that the above method only works for some queries. For example, if a query consists of a single stateful operation, like a window-based pattern, we cannot use this method.
Unfortunately, there is no framework that does this out of the box (let me know if I am wrong). So if you want to do this, you will have to code it yourself. If you choose to do that, using a pub/sub network or a stream processing system should remove most of the complexity.

Please share your thoughts!
  1. Alessandro Margara and Gianpaolo Cugola. 2011. Processing flows of information: from data stream to complex event processing. In Proceedings of the 5th ACM International Conference on Distributed Event-Based Systems (DEBS '11).
  2. http://www.thetibcoblog.com/2009/08/21/cep-versus-esp-an-essay-or-maybe-a-rant/
  3. http://www.slideshare.net/TimBassCEP/mythbusters-event-stream-processing-v-complex-event-processing-presentation
  4. Run Esper with Storm - http://stackoverflow.com/questions/9164785/how-to-scale-out-with-esper
  5. http://tomdzk.wordpress.com/2011/09/28/storm-esper/
  6. Distributed cache to scale CEP - http://magmasystems.blogspot.com/2008/02/cep-engines-and-object-caches.html

Thursday, May 3, 2012

How to measure the Performance of a Server?

I have repeated the following too many times in the last few years and decided to write it up. If I have missed something, please add a comment.

Understanding Server Performance

Characteristic Performance Graphs of a Server

The graphs above capture the characteristic behavior of a server. As the graphs show, server performance is gauged by measuring latency and throughput against concurrency.
  • Latency measures the end-to-end processing time. In a messaging environment, teams determine latency by measuring the time between sending a request and receiving the response. Latency is measured from the client machine and includes the network overhead as well.
  • Throughput measures the number of messages a server processes during a specific time interval (e.g. per second). Throughput is calculated by measuring the time taken to process a set of messages and then using the following equation.

Throughput = number of completed requests / time to complete the requests

It is worth noting that these two values are often loosely related, but we cannot directly derive one measurement from the other.
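For example, if 10 concurrent clients together complete 10,000 requests in 50 seconds, the throughput is 10,000 / 50 = 200 requests per second, regardless of what the average latency of an individual request was.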

As shown by the figure, a server has an initial range where throughput increases at a roughly linear rate and latency either remains constant or increases linearly. As concurrency increases further, this approximately linear relationship decays and system performance rapidly degrades. Performance tuning attempts to modify the relationship between concurrency and throughput and/or latency, and to maintain a linear relationship for as long as possible.

For more details about latency and throughput, read the following online resources:
  1. Understanding Latency versus Throughput
  2. Latency vs Throughput
Unlike static server capacity measurements (e.g. CPU processing speed, memory size), performance is a dynamic measurement. Latency and throughput are strongly influenced by concurrency and work unit size. A larger work unit size usually negatively influences both latency and throughput. Concurrency is the number of aggregate work units (e.g. messages, business processes, transformations, or rules) processed in parallel (e.g. per second). Higher concurrency tends to increase latency (wait time) and decrease throughput (units processed).

To visualize server performance across the range of possible workloads, we draw a graph of latency or throughput against concurrency or work unit size as shown by the above graph.

Doing the Performance Test

The goal of running a performance test is to draw a graph like the one above. To do that, you have to run the performance test multiple times with different concurrency levels, and for each test measure latency and throughput.

Following are some of the common steps and a checklist.

Workload and client Setup

  1. Each concurrent client simulates a different user. Each runs in a separate thread and executes a series of operations (requests) against the server.
  2. The first step is finding a workload. If there is a well-known benchmark for the particular server you are using, use that. If not, create a benchmark by simulating real user operations as closely as possible.
  3. Messages generated by the test must not be identical. Otherwise, caching may come into play and give overly optimistic results. The best method is to capture and replay a real workload. If that is not possible, generate a randomized workload. Use a known data set whenever it makes sense.
  4. We measure latency and throughput from the client. For each test run we need to measure the following (see the sketch after this list).
  5. The end-to-end time taken by each operation. Latency is the AVERAGE of all end-to-end latencies.
  6. For each client, we collect the test start time, the test end time, and the number of completed messages. Throughput is the SUM of the throughput measured at each client.
  7. To measure time in Java, you can use System.nanoTime() or System.currentTimeMillis(); with other programming languages, use the equivalent methods.
  8. For each test run, it is best to take readings for about 10,000 messages. For example, with concurrency 10, each client should send at least 1000 messages. Even if there are many clients, each client should at least send 200 messages.
  9. There are many tools that can run performance tests, for example JMeter, LoadUI, javabench, and ab. Use them when applicable.
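
As a rough illustration of points 4-7, here is a minimal sketch of a single test client; request() is a placeholder for whatever operation your client actually performs, and the request count is arbitrary.

// Minimal sketch of one concurrent client: measures the end-to-end latency of every
// request and records what is needed to compute average latency and this client's
// throughput.
public class TestClient implements Runnable {
    static final int REQUESTS_PER_CLIENT = 1000;
    long totalLatencyNanos;
    long startMillis;
    long endMillis;

    public void run() {
        startMillis = System.currentTimeMillis();
        for (int i = 0; i < REQUESTS_PER_CLIENT; i++) {
            long t0 = System.nanoTime();
            request();                              // send one request, wait for response
            totalLatencyNanos += System.nanoTime() - t0;
        }
        endMillis = System.currentTimeMillis();
    }

    double averageLatencyMillis() {
        return totalLatencyNanos / 1e6 / REQUESTS_PER_CLIENT;
    }

    double throughputPerSecond() {                  // completed requests / time taken
        return REQUESTS_PER_CLIENT * 1000.0 / (endMillis - startMillis);
    }

    void request() { /* placeholder for the real operation */ }
}

The overall latency reported for a run is the average of averageLatencyMillis() across all clients, and the overall throughput is the sum of throughputPerSecond() across all clients.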

Experimental Setup

  1. You may need to tune the server for best performance, with settings like enough heap memory, open file limits, etc.
  2. Do not run the client and the server on the same machine (they interfere with each other and the results are affected).
  3. You need at least a 1Gbps network to avoid the network interfering with the results.
  4. Generally, you should not run more than 200 clients from the same machine. In some cases, you might need multiple machines to run the clients.
  5. Note down and report the environment (memory, CPU, number of cores, operating system of each machine) with the results. It is a good practice to measure CPU usage and memory while the test is running. You can use JConsole (if the server is Java based), and on a Linux machine you can run the “watch cat /proc/loadavg” command to track the load average. CPU usage is a very unreliable metric as it changes very fast; load average, however, is a very reliable metric.

Running the test

  1. Make sure you restart the server between every two test runs.
  2. When you start the server, first send a few hundred requests to warm it up before starting the real test.
  3. Automate as much as possible. Ideally, running one command should run the test, collect the results, verify the results, and print the summary/graphs.
  4. Make sure nothing else is running on the machines at the same time.
  5. After a test run has finished, check the logs and results to make sure the operations were really successful.

Verifying your results

  1. You must catch and print errors at both the server and the client. If there are too many errors, your results might be useless. It is also a good idea to verify the results at the client side.
  2. Often, performance tests are the first time you stress your system, and more often than not you will run into errors. Leave time to fix them.
  3. You might want to use a profiler to detect any obvious performance bottlenecks before you actually run the tests. (Java profilers: JProfiler, YourKit)

Analyze your results and write it up

  1. Data cleanup (optional) – it is a common practice to remove outliers. The general method is to remove anything that is more than 3 standard deviations from the mean, or to remove the 1% of the data furthest from the mean (see the sketch after this list).
  2. Draw the graphs. Make sure you have a title, captions for both the X and Y axes with units, and a legend if you have more than one dataset in the same graph (you can use Excel, OpenOffice, or gnuplot). The rule of thumb is that a reader should be able to understand the graph without reading the text.
  3. Optionally, draw 90% or 95% confidence intervals (Error bars)
  4. Try to interpret what results mean
    • Generalize
    • Understand the trends and explain them
    • Look for any odd results and explain them
    • Make sure to have a conclusion
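
A minimal sketch of the "3 x stddev" cleanup mentioned in step 1, assuming the latency readings have already been loaded into a list:

import java.util.List;
import java.util.stream.Collectors;

// Drop any latency reading that is more than three standard deviations from the mean.
public class OutlierFilter {
    static List<Double> removeOutliers(List<Double> latencies) {
        double mean = latencies.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = latencies.stream()
                .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
        double stddev = Math.sqrt(variance);
        return latencies.stream()
                .filter(v -> Math.abs(v - mean) <= 3 * stddev)
                .collect(Collectors.toList());
    }
}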
Hope this was useful. If you enjoyed this post you might also like Mastering the 4 Balancing Acts in Microservices Architecture and Distributed Caching Woes: Cache Invalidation



Wednesday, May 2, 2012

Scaling Distributed Queues: A Short Survey

The following is part of the related-work survey from the paper "Andes: a highly scalable persistent messaging system". However, it says nothing about Andes itself, only about how different distributed queue implementations work. I will write about Andes later.

What is a Distributed Queue?

A distributed queue is a FIFO data structure that is accessed by entities in a distributed environment. A distributed queue works as follows.
  1. There are two types of users (publishers and subscribers)
  2. A user creates a queue (or queues may be created on demand)
  3. Subscribers subscribe to a queue
  4. Publishers send (publish) messages to the queue
  5. Each published message is delivered to one of the subscribers of the queue
Distributed queues provide strict or best-effort support for in-order delivery, where subscribers receive messages in the order they were published. (It is very hard to enforce this across all subscribers, so implementations often enforce it only within each subscriber. For example, if messages m1, m2 ... m100 are published in order, each subscriber will see a subset of the messages in ascending order, but there is no guarantee about the global order seen across subscribers.)
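
To make the publisher and subscriber roles concrete, here is a minimal sketch using the standard JMS API. The broker URL and queue name are made up, and ActiveMQ is used only because it is one of the brokers discussed below; any JMS provider would do.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Minimal JMS sketch: one publisher sends to a queue, one subscriber receives.
public class QueueExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("StockOrders");   // illustrative queue name

        // Publisher
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("order-1"));

        // Subscriber (one of possibly many competing consumers)
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println("Received: " + (received == null ? "nothing" : received.getText()));

        connection.close();
    }
}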

What does Scaling Distributed Queues mean?

Scaling is handling a larger workload by adding more resources. The workload can increase in many ways, and we call those the different dimensions of scale. There are three main dimensions.
  1. Scaling to handle a large number of queues
  2. Scaling to handle a queue that has a large workload
  3. Scaling to handle large messages

Distributed Queue Implementations

There are many distributed queue implementations in JMS servers like ActiveMQ, HornetQ, etc. The focus of our discussion is how they can scale.

There are four choices.
  1. Master-Slave topology – a queue is assigned to a master node, and all changes to the queue are also replicated to a slave node. If the master fails, the slave can take over. (e.g. Qpid, ActiveMQ, RabbitMQ)
  2. Queue Distribution – queues are created and live in a single node, and all nodes know about all the queues in the system. When a node receives a request for a queue that does not live on that node, it routes the request to the node that has the queue. (e.g. RabbitMQ)
  3. Cluster Connections – clients define cluster connections giving a list of broker nodes, and messages are distributed across those nodes based on a defined policy (e.g. a fault tolerance policy or a load balancing policy). This also supports message redistribution: if a broker has no subscriptions, the messages it receives are rerouted to other brokers that do have subscriptions. It is worth noting that the server side (brokers) plays a minor role in this setup. (e.g. HornetQ)
  4. Broker networks – the brokers are arranged in a topology, and messages are propagated through the topology until they reach a subscriber. Usually this uses a consumer priority mode, where brokers closer to the point of origin are more likely to receive the messages. The challenge is how to load balance those messages. (e.g. ActiveMQ)
Replication in distributed queues is inefficient, as delivering messages in order requires state to be replicated immediately.

With cluster connections and broker networks, in-order message delivery is only a best-effort guarantee. If a subscriber has failed or a subscription has been deleted, the broker nodes are forced to either drop the messages or redistribute them out of order to the other brokers in the network.

None of the above modes handles scaling for large messages.

Summary

Topology | Pros | Cons | Supporting Systems
Master-Slave | Supports HA | No scalability | Qpid, ActiveMQ, RabbitMQ
Queue Distribution | Scales to a large number of queues | Does not scale to a large number of messages for a single queue | RabbitMQ
Cluster Connections | Supports HA | Might not support in-order delivery; logic runs on the client side and takes local decisions | HornetQ
Broker/Queue Networks | Load balancing and distribution | Fair load balancing is hard | ActiveMQ

Authentication and Authorization Choices in WSO2 Platform

The following diagram came out of a chat with Prabath. It shows most of the public APIs of the WSO2 Identity Server and the typical design and deployment choices when implementing authentication and authorization with the WSO2 platform.

Authentication and Authorization Choices in WSO2 Platform


Each server in the WSO2 platform is built using the Carbon platform. We use the term “Carbon server” to denote any Carbon based server like ESB, AS, BPS.

The techniques explained here are applicable across most WSO2 products. In the following figure, circles with branching paths show the different options.

As shown by the figure, a Carbon server may receive two types of messages: messages with credentials (like passwords) and messages with tokens. When a server receives a message with credentials, it first authenticates the request and optionally authorizes the action. When the server receives a message with tokens, there is generally no authentication step; the token is validated directly against permissions and the request is either granted or denied.

Authentication

Authentication needs a user store that holds information about users and an "enforcement point" that verifies credentials against the user store.
Carbon servers support two user stores.
  1. Database based user store
  2. LDAP based user store
It is a common deployment pattern for multiple Carbon servers in a single deployment to point to the same user store, which provides a single point to control and manage users.

We can configure any Carbon server to authenticate incoming requests. It supports many options, like HTTP Basic Authentication over SSL, WS-Security UsernameTokens, SAML Web SSO, etc. This authentication is done against the users that reside in the user store.

Also, each Carbon server has a Web Service called the Authentication Admin Web Service, which exposes authentication to the outside as a Web Service. A client can invoke the Authentication Admin Web Service to log in, obtain an HTTP cookie, and reuse that cookie to make authenticated calls to the Carbon server.
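
As a rough sketch of that login-and-reuse-the-cookie flow (based on the commonly documented WSO2 client pattern; the stub class, service URL, and credentials here are assumptions and may differ across Carbon versions):

import org.apache.axis2.context.ServiceContext;
import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;

public class LoginSketch {
    public static void main(String[] args) throws Exception {
        // The client usually needs to trust the server certificate, e.g. by setting the
        // javax.net.ssl.trustStore / trustStorePassword system properties (not shown).
        String serviceUrl = "https://localhost:9443/services/AuthenticationAdmin"; // assumed URL
        AuthenticationAdminStub stub = new AuthenticationAdminStub(serviceUrl);
        stub._getServiceClient().getOptions().setManageSession(true);

        boolean loggedIn = stub.login("admin", "admin", "localhost"); // assumed credentials

        // The session cookie returned by the login call; reuse it on later admin calls by
        // setting it as the Cookie (HTTPConstants.COOKIE_STRING) on those stubs' options.
        ServiceContext ctx = stub._getServiceClient()
                .getLastOperationContext().getServiceContext();
        String sessionCookie = (String) ctx.getProperty(HTTPConstants.COOKIE_STRING);

        System.out.println("Logged in: " + loggedIn + ", cookie: " + sessionCookie);
    }
}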

Authorization

In authorization scenarios, a Carbon server receives a request that is either already authenticated or includes a token. In either case, we want to check whether the authenticated user has enough permissions to carry out a given action.
Using XACML terminology, we can define three roles in such a scenario. (XACML includes other roles, which we will ignore in this discussion.)
  1. The PEP (Policy Enforcement Point) intercepts requests and makes sure only authorized requests are allowed to proceed.
  2. The PDP (Policy Decision Point) stores the policies and verifies whether a given user has enough permissions.
  3. The PAP (Policy Administration Point) lets users define and change permissions.
Carbon servers support the Policy Enforcement Point (PEP) role using a WSO2 ESB mediator, an Apache Axis2 handler, or custom user code.
For the Policy Decision Point (PDP), we support three ways to define permissions.
  1. Database based permission stores – permissions are stored in the Database
  2. XACML – permissions are described using XACML specification
  3. KDC - Permissions are provided as Kerberos Tokens
We support policy administration (PAP) through WSO2 Identity Server, which enables users to edit the permission definitions through the management console.
This gives rise to several scenarios.
  1. If the database based permission store is used, we can configure any Carbon server to connect to the permission database directly and load the permissions into memory. It then authorizes user actions using those permission descriptions. Carbon servers also have an Authorization Admin Web Service that lets users check the permissions of a given user remotely.
  2. If XACML based authorization is used, there must be an Identity Server that acts as the PDP (Policy Decision Point). Each Carbon server (acting as the PEP, Policy Enforcement Point) invokes an Entitlement Service available in the Identity Server to check permissions. The Entitlement Service is available as a Web Service or a Thrift service.
  3. If a Carbon server receives a Kerberos token, it talks to the configured Kerberos server and verifies the token. WSO2 IS comes bundled with an Apache KDC out of the box.
More information about the WSO2 Identity Server can be found at http://wso2.com/products/identity-server/, and if there are any missing features, please drop a note to the WSO2 architecture list.