
Interface Forwarding from One Interface to Another Using `MASQUERADE`

[Interface Forwarding] from eth1 to eth0 on the EDGE node.

We add a route on all the slaves, which sit on a private network, so they can communicate with an external server directly through an EDGE node using interface forwarding.
NOTE: The testing below was done on RHEL 6.6.
What we are trying to do:
  1. All the slave nodes send their data to the Edge node on a private interface.
  2. The Edge node takes the data arriving on the private interface and forwards it over its external interface.
NOTE: Below, "slaves" refers to all the nodes communicating with the EDGE node; in this case the EDGE node is the master and acts like a router.
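For reference, here is the traffic path this configuration creates (addresses are taken from the ifconfig output below):

slave eth0 (192.168.0.8)  -->  edge eth1 (192.168.0.11)  --MASQUERADE-->  edge eth0 (172.14.14.214)  -->  external server (172.14.14.174)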

Datanodes ifconfig

Slaves run only on the private network.
  1. 192.168.0.8 aka eth0, the private interface.
Here is the ifconfig.
[root@slave-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.8  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1efe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:131581 errors:0 dropped:0 overruns:0 frame:0
          TX packets:148636 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11583580 (11.0 MiB)  TX bytes:35866144 (34.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:245626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245626 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:286415155 (273.1 MiB)  TX bytes:286415155 (273.1 MiB)
Edge Node ifconfig
  1. 172.14.14.214 aka eth0, the external interface.
  2. 192.168.0.11 aka eth1, the private interface.
Here is the ifconfig.
[root@edge-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:172.14.14.214  Bcast:172.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:908442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:235173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:77363514 (73.7 MiB)  TX bytes:33167098 (31.6 MiB)

eth1      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.11  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:210510 errors:0 dropped:0 overruns:0 frame:0
          TX packets:177170 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:61583138 (58.7 MiB)  TX bytes:16125613 (15.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:13799253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13799253 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:27734863794 (25.8 GiB)  TX bytes:27734863794 (25.8 GiB)

[root@edge-node ~]#

Configuration.

  1. Create forwarding rules on the Edge node.
  2. Create a route on all the slaves.
  3. Update /etc/hosts on slave nodes.

1. Create forwarding rules on the Edge node.

  1. If you haven’t already enabled forwarding in the kernel, do so.
  2. Open /etc/sysctl.conf and set net.ipv4.ip_forward = 1 (uncomment the line if it is commented out).
  3. Then execute sysctl -p as root (or sudo sysctl -p).
  4. Add the following rules to iptables.
Commands.
[root@edge-node ~]# iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE  
[root@edge-node ~]# iptables -A FORWARD --in-interface eth1 -j ACCEPT
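To confirm forwarding is active and to keep these rules across a reboot, something like the following should work on RHEL 6 (a sketch, assuming the stock iptables init script is in use):
[root@edge-node ~]# cat /proc/sys/net/ipv4/ip_forward      # should print 1 after sysctl -p
1
[root@edge-node ~]# iptables -t nat -L POSTROUTING -n -v   # verify the MASQUERADE rule is present
[root@edge-node ~]# service iptables save                  # persist the rules to /etc/sysconfig/iptables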

2. Create a route on all the slaves.

Here is the command to add the route on the slaves.
[root@datanode ~]# route add -net 172.0.0.0 netmask 255.0.0.0 gw 192.168.0.11 eth0
We are telling the slave that all traffic destined for 172.x.x.x must use 192.168.0.11 as its gateway, which is the private interface on the Edge node.
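A route added with the route command does not survive a reboot. On RHEL 6 it can be made persistent with an interface route file, roughly as below (a sketch; the file name assumes the slave's private interface is eth0):
[root@datanode ~]# cat /etc/sysconfig/network-scripts/route-eth0
172.0.0.0/8 via 192.168.0.11 dev eth0
[root@datanode ~]# route -n     # verify: expect a UG entry for 172.0.0.0 via 192.168.0.11 on eth0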

3. Update /etc/hosts on slave nodes.

Then we update the /etc/hosts file with the direct IP of the external server, 172.14.14.174, since the slave nodes should now be able to reach the external server directly.
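A minimal sketch of the /etc/hosts entry; the hostname external-server.example.com is a placeholder, so use whatever name the slaves actually resolve for the external server:
[root@slave-nodes ~]# grep 172.14.14.174 /etc/hosts
172.14.14.174   external-server.example.com   external-server
With the route and hosts entry in place, a quick ping confirms connectivity: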
[root@slave-nodes ~]# ping  172.14.14.174
PING 172.14.14.174 (172.14.14.174) 56(84) bytes of data.
64 bytes from 172.14.14.174: icmp_seq=1 ttl=127 time=1.02 ms
64 bytes from 172.14.14.174: icmp_seq=2 ttl=127 time=1.04 ms
64 bytes from 172.14.14.174: icmp_seq=3 ttl=127 time=1.03 ms
^C
--- 172.14.14.174 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2892ms
rtt min/avg/max/mdev = 1.020/1.033/1.049/0.038 ms
[root@slave-nodes ~]# ping  172.14.14.141
PING 172.14.14.141 (172.14.14.141) 56(84) bytes of data.
64 bytes from 172.14.14.141: icmp_seq=1 ttl=127 time=0.968 ms
64 bytes from 172.14.14.141: icmp_seq=2 ttl=127 time=1.01 ms
64 bytes from 172.14.14.141: icmp_seq=3 ttl=127 time=3.73 ms
^C
--- 172.14.14.141 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2350ms
rtt min/avg/max/mdev = 0.968/1.906/3.732/1.291 ms
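To confirm the traffic is really being masqueraded, watch the Edge node's external interface while a slave pings; the packets should leave with the Edge node's external address as the source (a sketch, assuming tcpdump is installed):
[root@edge-node ~]# tcpdump -n -i eth0 icmp and host 172.14.14.174
# ICMP requests should show 172.14.14.214 (the Edge node's eth0) as the source, not 192.168.0.x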
