Weighted Round Robin (WRR)

Before talking about Weighted Round Robin (WRR), I would like to introduce some background knowledge related to it:

Fair queuing: a family of scheduling algorithms used in some process and network schedulers. It keeps a separate packet queue for each traffic flow, as opposed to the traditional approach of one FIFO queue for all flows. The purpose is to achieve fairness when a limited resource is shared, for instance to prevent flows with large packets from getting more throughput than other flows (or processes). A traffic flow therefore cannot take more than its fair share of the link capacity, so the share of each queue is commonly defined as (link capacity / number of queues). Fair queuing groups packets into classes and shares the service capacity between them.

Generalized Processor Sharing (GPS): built on fair queuing, but with a weight assigned to every single queue of the fair queuing system. With these weights added, the scheme is called Generalized Processor Sharing.

Weighted Round Robin (WRR): the simplest approximation of GPS. The difference between the two is that GPS serves infinitesimal amounts of data from each non-empty queue, while WRR serves a number of packets from each non-empty queue.

Normally, WRR is applied as a load-balancing solution. For instance, when we deploy a couple of services to several servers and some machines are more powerful than others, we want the better machines to handle more requests than the less powerful ones, so we need a way to route more requests to them.

So how do we implement a WRR algorithm? We can follow the two steps below:

1. Get the greatest common divisor of all the server weights. A simple recursive method would be as below:

public static int gcd(int a, int b) {
    // When either argument is 0, the other one is the GCD.
    if (a == 0 || b == 0) return a + b;
    return gcd(b, a % b);
}

2. Pick the most weighted server node when handling a new request:
2.1 Define global variables: currentIndex = -1, currentWeight = 0, maxWeight = the greatest weight of all the server nodes, gcd = the greatest common divisor of all the server weights.
2.2 Write a method to get the next server node to use, based on the variables above:
public String getNextServer(Server[] servers) {
    while (true) {
        currentIndex = (currentIndex + 1) % servers.length;
        // A full round has finished; lower the current weight by the GCD.
        if (currentIndex == 0) {
            currentWeight = currentWeight - gcd;
            if (currentWeight <= 0) {
                currentWeight = maxWeight;
                if (currentWeight == 0) return null;
            }
        }
        // Serve the first node whose weight is at least the current weight.
        if (servers[currentIndex].getWeight() >= currentWeight) {
            return servers[currentIndex].getServerName();
        }
    }
}
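
For completeness, below is a minimal, self-contained sketch of how the variables from step 2.1 could be initialized. The Server class and the WrrLoadBalancer wrapper are my own assumptions for illustration; only gcd() and getNextServer() come from the steps above.

// Hypothetical Server class assumed by getNextServer() above (illustration only).
class Server {
    private final String serverName;
    private final int weight;

    Server(String serverName, int weight) {
        this.serverName = serverName;
        this.weight = weight;
    }

    public String getServerName() { return serverName; }
    public int getWeight() { return weight; }
}

// Sketch of a wrapper class holding the global variables from step 2.1.
class WrrLoadBalancer {
    private int currentIndex = -1;
    private int currentWeight = 0;
    private final int maxWeight; // greatest weight of all server nodes
    private final int gcd;       // greatest common divisor of all server weights

    WrrLoadBalancer(Server[] servers) {
        int max = 0, g = 0;
        for (Server s : servers) {
            max = Math.max(max, s.getWeight());
            g = gcd(g, s.getWeight()); // gcd(0, x) == x
        }
        this.maxWeight = max;
        this.gcd = g;
    }

    public static int gcd(int a, int b) {
        if (a == 0 || b == 0) return a + b;
        return gcd(b, a % b);
    }

    // The getNextServer(Server[] servers) method from step 2.2 goes here unchanged.
}

With servers weighted 4, 2 and 1, seven consecutive calls to getNextServer would return the first server four times, the second server twice and the third server once.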

Fundamentals of AtomicInteger

Before mentioning AtomicInteger, I need to clarify why i++ or ++i is not thread safe. Each of these statements is actually executed as several steps: read i, compute i + 1, then assign the result back to i. Apparently this is not an atomic operation and can cause thread safety issues. In this case we need AtomicInteger, which uses CAS (Compare and Swap) to ensure thread safety.
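
As a quick illustration (a sketch of my own, not taken from any library documentation), the following program usually prints a final value below 20000 for the plain int counter because concurrent increments get lost, while the AtomicInteger counter always ends up at 20000:

import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    static int plainCounter = 0;                              // not thread safe
    static AtomicInteger atomicCounter = new AtomicInteger(); // thread safe via CAS

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                plainCounter++;                  // read-modify-write, updates can be lost
                atomicCounter.incrementAndGet(); // atomic CAS-based increment
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("plainCounter  = " + plainCounter);        // often < 20000
        System.out.println("atomicCounter = " + atomicCounter.get()); // always 20000
    }
}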

AtomicInteger holds its value in a volatile int field, so reads always return the latest value instead of a cached old one. It then uses sun.misc.Unsafe to manipulate the memory directly when changing the value. Let's check the code below:

Code of AtomicInteger
public final int getAndSet(int newValue) {
    for (;;) {
        int current = get();
        if (compareAndSet(current, newValue))
            return current;
    }
}

public final boolean compareAndSet(int expect, int update) {
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}

Code of Unsafe (native code)
jboolean sun::misc::Unsafe::compareAndSwapInt (jobject obj, jlong offset, jint expect, jint update) {
    jint *addr = (jint *)((char *)obj + offset);
    return compareAndSwap (addr, expect, update);
}

static inline bool compareAndSwap (volatile jint *addr, jint old, jint new_val) {
    jboolean result = false;
    spinlock lock;
    if ((result = (*addr == old)))
        *addr = new_val;
    return result;
}

1. First, the current value is obtained by invoking the get() method; the volatile keyword ensures the value is the latest one.
2. Then compareAndSet is invoked to do the CAS operation.
3. From the Unsafe code we can see that it compares the value stored at the computed memory address with the expected value; if they are equal, it writes the new value to that address.
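
The same retry pattern can be used from application code. As a small sketch of my own (the getAndAdd helper below is hypothetical, not part of the JDK source shown above), this is how an add operation can be built on top of get() and compareAndSet():

import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    // Sketch: atomically add delta using only get() and compareAndSet(),
    // mirroring the retry loop inside getAndSet() above.
    static int getAndAdd(AtomicInteger atomic, int delta) {
        for (;;) {
            int current = atomic.get();                // read the latest value (volatile)
            int next = current + delta;
            if (atomic.compareAndSet(current, next)) { // succeeds only if unchanged
                return current;                        // return the previous value
            }
            // Another thread changed the value in between; just retry.
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(10);
        System.out.println(getAndAdd(counter, 5)); // prints 10
        System.out.println(counter.get());         // prints 15
    }
}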

So finally we see that AtomicInteger is implemented by manipulating memory directly through native methods, which can be more efficient than using a lock.

CAP theory

Professor Eric Brewer presented the famous CAP conjecture at the ACM Symposium on Principles of Distributed Computing in July 2000, and it was later formally proven by Seth Gilbert and Nancy Lynch of MIT.

The theorem tells us that a distributed system cannot satisfy all three of the following basic requirements at the same time: Consistency, Availability and Partition tolerance.

Consistency means that data remains consistent across multiple copies. Under this requirement, the different copies of the data are consistent after an update operation. For instance, when a piece of data has two copies, both copies should be the same after an update.

Availability means the service must be available all the time: any request from any user must be responded to within a limited time, where the limited time is a bound predefined during system design.

Partition tolerance means that when the network splits and some nodes get disconnected from the others (a network partition), the system can still provide consistent and available service.


[Figure: the CAP theorem]

Application of CAP theorem

  • Giving up C: this normally means giving up strong consistency and keeping eventual consistency, not abandoning consistency entirely. But eventual consistency introduces a time window during which the data is not consistent across the different copies, and system designers must take care of this window.
  • Giving up A: this means that when a network partition or some other failure occurs, users may have to wait some time before being serviced.
  • Giving up P: this can be achieved by placing all the data on a single node. Although this does not guarantee that the system works well, it avoids the failures brought by network partitions. Nevertheless, it makes the system unscalable.




Yvone is my wife, who is a traditional Chinese girl. Small and cute!

We have to live in different cities at the moment. I miss her a lot!

Mathematical concepts in algorithms

Since algorithms relate tightly to mathematics, before we get into algorithms I strongly recommend reviewing some mathematical concepts that algorithms may refer to. OK, let's get started:

1. Big O notation. Big O notation is widely used in algorithm evaluation. What exactly is it? Big O notation is used for describing limiting behavior, which in other words is asymptotic analysis. For example, assume the function f(n) = n^2 + 2n + 1; as n approaches infinity, f(n) grows at the same rate as n^2. In this situation we write f(n) = O(n^2).
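
As a quick sketch of my own (the countPairs function below is hypothetical), counting the basic operations of two nested loops produces exactly this kind of polynomial, and only the n^2 term matters for Big O:

public class BigODemo {
    // Roughly n*n + 2n + 1 basic operations for input size n,
    // so the running time is O(n^2): the n^2 term dominates as n grows.
    static long countPairs(int[] items) {
        long count = 0;
        for (int i = 0; i < items.length; i++) {       // n iterations
            for (int j = 0; j < items.length; j++) {   // n iterations each
                if (items[i] + items[j] == 0) count++; // ~n*n checks in total
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new int[] {1, -1, 2, -2, 3})); // prints 4
    }
}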



Space complexity

Space complexity provides us an approach to evaluate the extra memory space consumed by our program. For instance, when we exchange the values of two variables a and b, we do it in the following way: var temp = a; a = b; b = temp. From the code we can see that we have to allocate extra temporary memory to store the variable temp. Space complexity is used to evaluate exactly this kind of extra memory usage.

We use S(n) to represent the space cost of our program. Like T(n) in time complexity, S(n) is a function of the input size n: S(n) = O(f(n)). The function f() reflects the relationship between the input size and the extra memory consumed by the code, while O() maps that memory consumption to the space complexity of the code. One last thing: space complexity does not equal the actual extra memory consumed by the code; it is just a measure.
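
To make this concrete, here is a small sketch of my own (the class and method names are made up for illustration) showing the difference between O(1) and O(n) extra space for the same task, reversing an array:

import java.util.Arrays;

public class SpaceDemo {
    // O(1) extra space: one temporary variable, regardless of the input size.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
    }

    // O(n) extra space: allocates a new array proportional to the input size.
    static int[] reversedCopy(int[] a) {
        int[] result = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[a.length - 1 - i];
        }
        return result;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(Arrays.toString(reversedCopy(data))); // [4, 3, 2, 1]
        reverseInPlace(data);
        System.out.println(Arrays.toString(data));                // [4, 3, 2, 1]
    }
}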

PS: please refer to another post of mine to find out what the function O() means.

How to implement printing in a B/S system

1. Use the default print functionality of the browser

2. Use an embedded browser widget (for instance the WebBrowser control in IE; we can find its CLSID by searching the registry with regedit)

3. Use third-party reporting tools and export the page which needs to be printed to PDF/Excel/Word (for instance Crystal Reports, JasperReports, POI)

Frequently-used vim commands

Action | Command
Check current directory | :pwd
Paste from the system clipboard | "+p
Copy to the system clipboard | "+y
Column edit mode | ctrl-q to enter visual block -> select the rows you want to edit -> make the edit -> press Esc
Repeat an input n times | type a number -> then type i -> then type anything -> Esc
Paste command | Ctrl+Shift+R then press P
Apply one command to all opened files | :bufdo followed by the command


Differences between HTML and XHTML

Case sensitive | Yes, XHTML must be written in lower case
Attributes need to be quoted | Yes, in XHTML attributes must be quoted

[To be updated]



Time Complexity

Time complexity is a methodology for us to evaluate the efficiency of our program. It relates to the running time of a piece of code.

We use T(n) to represent the time cost of our program. Normally, T(n) is a function of the input size n, so T(n) = O(f(n)). The function f() reflects the relationship between the input size and the number of times the code runs, and O() maps the running time of the code to its time complexity.

These are the common time complexity results: O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3). Please notice that O(1) means a constant time cost.
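
As a small sketch of my own (hypothetical code for illustration), the two search routines below show how the shape of the code determines the complexity class: the linear scan is O(n) while the binary search is O(log2 n):

public class TimeComplexityDemo {
    // O(n): in the worst case every element is inspected once.
    static int linearSearch(int[] sorted, int target) {
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == target) return i;
        }
        return -1;
    }

    // O(log2 n): the search range is halved on every iteration.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(linearSearch(data, 9)); // 4
        System.out.println(binarySearch(data, 9)); // 4
    }
}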

PS: Please refer to another post of mine to find out what the function O() means.