Tuesday, June 24, 2014

Simple Redis Master/Slave Replication

Redis is an in-memory key-value store designed to be scalable from the ground up. Redis enables this through client-side sharding as well as server-side master/slave replication. In this post I am going to explain a simple 1 master, 2 slave replication setup for Redis on a single host. This setup can easily be moved to multiple hosts with very minor changes if and when the need arises. For this you must have downloaded and compiled Redis; if you haven't done that, please follow my previous post.
The setup I am going to create consists of a Redis master listening on port 6379 and 2 Redis slaves connecting to the master and listening on ports 6380 and 6381 respectively.

You need to follow the steps below to create 2 different configuration files for the slaves.
  1. Open a shell terminal.
  2. Navigate to the Redis home directory.
  3. Issue "cp redis.conf redis-slave1.conf"
  4. Issue "cp redis.conf redis-slave2.conf"
  5. Open the "redis-slave1.conf" file using a text editor, find the line which has the port configuration and change it so that it looks like the following:

          port 6380

  6. Then find the line which has the slaveof configuration and change it so that it looks like the following:

          slaveof 127.0.0.1 6379

  7. Then find the dbfilename configuration and change it to "slave1-dump.rdb" as follows:

          dbfilename slave1-dump.rdb

  8. Repeat steps 5 - 7 with "redis-slave2.conf", but this time the port must be 6381 and dbfilename becomes "slave2-dump.rdb"; the slaveof configuration remains the same.
  9. Now you can start the master followed by the 2 slaves by issuing the following commands:

          nohup ./src/redis-server ./redis.conf > master.log 2>&1 &

          nohup ./src/redis-server ./redis-slave1.conf > slave1.log 2>&1 &

          nohup ./src/redis-server ./redis-slave2.conf > slave2.log 2>&1 &

  10. Now you can connect to the master by issuing "./src/redis-cli", or to any slave by issuing "./src/redis-cli -p <Slave Port>".

The recommended setup is that all writes are done on the master and all reads are performed on the slaves.
As you can see, if you want to move this replication setup to multiple hosts, you just need to change the "127.0.0.1" host IP in your slaves' slaveof configuration.
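If you have the Jedis client library (a widely used Java client for Redis) on the classpath, a quick way to confirm that replication is working is to write a key to the master and read it back from each slave. This is only a sketch; the host and ports match the setup above, and the key name is made up for illustration.

import redis.clients.jedis.Jedis;

public class ReplicationCheck {

    public static void main(String[] args) throws InterruptedException {
        // Write a key to the master on port 6379
        Jedis master = new Jedis("127.0.0.1", 6379);
        master.set("greeting", "Hello Replication");
        master.close();

        // Replication is asynchronous, so give it a moment to propagate
        Thread.sleep(100);

        // Read the same key back from both slaves
        for (int port : new int[] { 6380, 6381 }) {
            Jedis slave = new Jedis("127.0.0.1", port);
            System.out.println("Slave on " + port + " says: " + slave.get("greeting"));
            slave.close();
        }
    }
}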

Sunday, June 22, 2014

Getting Started with Redis on Ubuntu

Redis is an in-memory key-value data store. It comes under the category of NoSQL or non-relational databases, where storing, manipulating and retrieving data doesn't require or use SQL or relations (tables).

Relational databases approach representing a real-world domain inside computers using a relation/table. A relation/table is a set where the following properties are guaranteed.
  1. Every Cell contains one atomic value.
  2. Every Row is uniquely identifiable.
  3. Order of the columns has no significance.
  4. Order of the rows has no significance.
Furthermore, relational databases allow you to define constraints within and among relations. These constraints can be:
  1. Entity Integrity Constraints - Primary Key/Unique Columns 
  2. Referential Integrity Constraints - Foreign Key
On top of this there is a language called Structured Query Language (SQL), which is a standardized way of retrieving and manipulating the data in those relations/tables.

With the growth of high-volume websites such as Twitter and Facebook, and the emergence of IaaS (Infrastructure as a Service) solutions such as Amazon AWS, which use distributed data centers around the globe, the need for NoSQL databases is higher than ever. This is because issuing a single SQL query that covers data spread across multiple data centers in different countries is not possible. Furthermore, relational databases depend a lot on JOINs, and distributed data centers pose a problem for those as well.

Redis, on the other hand, doesn't use SQL or relations/tables. Redis has built-in data structures and algorithms to solve frequently faced design issues.

Redis has Key/Value, List, Hash (Map), Set and Sorted Set data structures; a short tour of each follows the setup steps below.

To get started with Redis on Ubuntu:
  1. Download Redis.
  2. Extract the downloaded archive.
  3. Open a shell terminal.
  4. Navigate into the extracted directory.
  5. Issue the "make" command.
  6. Optionally issue "make test" to verify everything is working fine.
  7. Navigate into the "src" folder.
  8. Issue "chmod +x redis-server" and "chmod +x redis-cli".
  9. Issue "./redis-server".
  10. Now the Redis server will be up and listening on port 6379.
  11. You can issue "./redis-cli" to start a client from which you can issue commands to the Redis server.
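Once the server is up, you can exercise each of the data structures mentioned above from any client. Here is a minimal sketch using the Jedis Java client (assuming it is on the classpath and the server is on the default port); the key names are made up for illustration.

import redis.clients.jedis.Jedis;

public class DataStructuresTour {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);

        // Plain key/value
        jedis.set("site:name", "example");

        // List: push entries and read them back
        jedis.lpush("recent:visitors", "alice", "bob");
        System.out.println(jedis.lrange("recent:visitors", 0, -1));

        // Hash (map): field/value pairs stored under one key
        jedis.hset("user:1", "name", "Alice");
        System.out.println(jedis.hgetAll("user:1"));

        // Set: duplicate members are ignored
        jedis.sadd("tags", "redis", "nosql", "redis");
        System.out.println(jedis.smembers("tags"));

        // Sorted set: members ordered by their score
        jedis.zadd("scores", 10, "alice");
        jedis.zadd("scores", 20, "bob");
        System.out.println(jedis.zrange("scores", 0, -1));

        jedis.close();
    }
}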

Friday, June 13, 2014

Custom Dozer Mapper Factory Bean

As part of my work I had to learn and make use of Dozer, a Java bean to Java bean mapper which makes converting from one type of Java bean (e.g. a Spring HATEOAS Resource) to another Java bean (e.g. an Entity) easy and less error prone. You can learn more about this library from the Dozer page.

The tricky part of the job was the requirement that every JPA entity had a corresponding mutable and non-mutable interface, where the mutable interface extends the non-mutable interface. So the mapping needed to be defined using the mutable interface against the Spring HATEOAS Resource.

<?xml version="1.0" encoding="utf-8"?>
<mappings xmlns="http://dozer.sourceforge.net" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
          http://dozer.sourceforge.net/schema/beanmapping.xsd">

    <mapping>
        <class-a>com.shazin.example.domain.MutableItem</class-a>
        <class-b>com.shazin.example.web.resource.ItemEditResource</class-b>
        <field>
            <a>..</a>
            <b>..</b>
        </field>
        ..
    </mapping>
</mappings>

Furthermore, when mapping the two back using the DozerBeanMapper API inside the service, the following needed to be done, completely avoiding the entity class, because at the service level only interfaces must be used, not entities.

mapper.map(sourceItemResource, MutableItem.class);

This posed an issue because the map method expects the second argument to be the implementation class of the destination bean.

The solution was to write a custom implementation of org.dozer.BeanFactory which provides the correct implementation class based on the interface. So I wrote the following AbstractMapperBeanFactory class.

import java.util.HashMap;
import java.util.Map;

import org.dozer.BeanFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class AbstractMapperBeanFactory implements BeanFactory {

    private static final Logger logger = LoggerFactory
            .getLogger(AbstractMapperBeanFactory.class);

    // Maps a source interface name to its concrete destination class
    private final Map<String, Class<?>> sourceToDestinationMap = new HashMap<>();

    protected AbstractMapperBeanFactory() {
        register(sourceToDestinationMap);
    }

    protected abstract void register(
            Map<String, Class<?>> sourceToDestinationMap);

    @Override
    public final Object createBean(Object source, Class<?> sourceClass,
            String targetBeanId) {
        Class<?> destinationClass = null;
        Object result = null;
        try {
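            // Resolve the class named by targetBeanId via the context class loader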
            Class<?> targetBeanClass = Thread.currentThread()
                    .getContextClassLoader().loadClass(targetBeanId);
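            // Interfaces are looked up in the registered map; concrete classes are used as-is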
            if (targetBeanClass.isInterface()) {
                destinationClass = sourceToDestinationMap.get(targetBeanId);
            } else {
                destinationClass = targetBeanClass;
            }

            if (logger.isDebugEnabled()) {
                logger.debug("Source Object : " + source);
                logger.debug("Source Class : " + sourceClass);
                logger.debug("Target Bean Id : " + targetBeanId);
                logger.debug("Destination Class : " + destinationClass);
                logger.debug("Target Bean Class : " + targetBeanClass);
            }

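            // Warn instead of failing when no implementation has been registered for the interface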
            if (destinationClass == null) {
                logger.warn(String.format(
                        "No matching destination class found for class %s",
                        targetBeanId));
            } else {
                result = destinationClass.newInstance();
            }

        } catch (ClassNotFoundException | InstantiationException
                | IllegalAccessException e) {
            logger.error(String.format(
                    "Error while creating target bean for class %s",
                    targetBeanId), e);
        }
        return result;
    }
}


And for each module we could register the interfaces and their corresponding implementations by subclassing the above abstract class and implementing the register method as follows.

import java.util.Map;

public class CustomMappingBeanFactory extends AbstractMapperBeanFactory {

    @Override
    protected void register(Map<String, Class<?>> sourceToDestinationMap) {
        // Map the MutableItem interface name to its JPA entity implementation
        sourceToDestinationMap.put(MutableItem.class.getName(), ItemEntity.class);
        ..
    }

}

And finally, I made use of the CustomMappingBeanFactory in the mapping configuration file as below.

<?xml version="1.0" encoding="utf-8"?>
<mappings xmlns="http://dozer.sourceforge.net" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
          http://dozer.sourceforge.net/schema/beanmapping.xsd">

    <mapping bean-factory="CustomMappingBeanFactory">
        <class-a>com.shazin.example.domain.MutableItem</class-a>
        <class-b>com.shazin.example.web.resource.ItemEditResource</class-b>
        <field>
            <a>..</a>
            <b>..</b>
        </field>
        ..
    </mapping>
</mappings>
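Putting it all together, the mapper can be pointed at the mapping file and asked for the interface type directly. A rough sketch, assuming the classes from this post and a mapping file named dozer-mapping.xml on the classpath (the file name is an assumption for illustration):

import java.util.Arrays;

import org.dozer.DozerBeanMapper;

public class MappingExample {

    public static void main(String[] args) {
        DozerBeanMapper mapper = new DozerBeanMapper();
        mapper.setMappingFiles(Arrays.asList("dozer-mapping.xml"));

        // The custom bean factory resolves MutableItem to ItemEntity behind the scenes
        ItemEditResource resource = new ItemEditResource();
        MutableItem item = mapper.map(resource, MutableItem.class);
        System.out.println(item);
    }
}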