Archive for the ‘asa’ tag
RichardF commented on an article I wrote last November and mentioned the prompt command in the ASA. I never set aside any time to research it, but I finally took the time today while waiting for a maintenance window.
This is one of those little things in life that make me happy. Since the active ASA always has the same hostname and IP address, I find it hard to keep track of which firewall I'm actually connected to. That "configurations are no longer in sync" message you get when you conf t on the standby firewall really irks me. With the prompt command, I can see which firewall I'm on and what state it's in.
Here are the options you can use.
firewall(config)# prompt ?
configure mode commands/options:
context Display the context in the session prompt (multimode only)
domain Display the domain in the session prompt
hostname Display the hostname in the session prompt
priority Display the priority in the session prompt
state Display the traffic passing state in the session prompt
Note that the command is similar to the service timestamps in IOS where you can stack options. I wound up setting my prompts to "hostname priority state" so I can see that information without having to do a show failover. If you run contexts, I'm sure that would be a good one to include as well. I imagine adding "domain" may make the prompt too long for use, though. Heh.
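For reference, here's what the finished config line looks like on my box (hostname swapped out for a generic firewall, of course):

```
firewall(config)# prompt hostname priority state
```

With that in place, the prompt reads something like firewall/pri/act# on the primary active unit and firewall/sec/stby# on the secondary standby, if memory serves.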
Send any candy hearts questions my way.
I was exploring commands on the ASA a while back and discovered that you can run commands on the standby unit from the active. Read the rest of this entry »
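If you don't want to wait for the full entry, the command in question is failover exec, which (if I'm remembering the syntax correctly) lets you run a command on the peer unit from the active one:

```
firewall# failover exec mate show failover
```

You can substitute active or standby for mate to target a specific unit.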
A buddy asked for some help on configuring a pair of ASAs in active/passive mode, and, by pure coincidence, my newest project is to set up the same. I've done it many times, but it's one of those things that you don't really do every day (unless you're a VAR or something). These things always get covered in rust very quickly in my head, but, once I get one or two details back to the surface, it all comes flooding back. I'd better take the time to jot down the details. Read the rest of this entry »
We're working on a data center design for a customer, and they've dropped in two ISP links – each with its own managed router and public IP space off one of the Ethernet interfaces. The idea is that they want to use the Internet links in an active-passive setup without getting their own IP addresses, to avoid running BGP with the ISPs. To top it off, the headend of their control is an ASA cluster, so we wind up with two interfaces on the Internet to treat with a low security level. Oh, the joys of doing network design.
Your first thought is probably to use the old-fashioned floating static route, where you have a weighted route that takes over if the primary route is withdrawn from the routing table. That only works if the next-hop of that route is no longer available…like when a serial interface goes down and the next-hop isn't directly connected any more. This is Ethernet, though, so the firewall doesn't know or care if a host on the network isn't there any more. This config has another problem, too. What about a scenario where the ISP's router is up, but its interfaces are down? How about if there are routing issues farther upstream? You surely don't want to send traffic to a provider's router if the provider is having issues, right?
If you've ever tried to do something similar on an IOS router, then you've probably done IP SLA. An ASA has the same functionality, but it's just called SLA monitoring. You wind up with a config that is very similar to the IP SLA stuff on IOS routers, actually. I wrote a terrible blog post about it a few years back, and several other bloggers talk about it as well, but the idea is that you have a process, called an SLA monitor on the ASA, that monitors an IP address by pinging it. You then create a track object that watches the monitor's status. This track object is applied to a static route, and, if the SLA monitor fails, the route is removed from the routing table. We've all done something like this with HSRP tracking, so this shouldn't be totally foreign.
Let's take a look at the test network that I've used to simulate the setup at the customer site.
The test is to have INSIDE1 communicate with TARGET. Each ISP knows where TARGET is through a huge EIGRP AS, but we want to detect any routing problems on ISP1. If we find a problem, we want to roll over to ISP2 on the BACKUP interface. What do we monitor, though? We can monitor the IP of the ISP's router at the data center, but we'd miss any issues upstream. Let's monitor the IP of the second router on ISP1, which is 10.0.0.2. In the real world, we'd find a host somewhere deep on the Intertubes that we think won't go down very often. In our test, 10.0.0.2 is the closest thing we can find to that.
Let's create a beautiful symphony of ICMP generation. First, we create the SLA monitor.
sla monitor 1
type echo protocol ipIcmpEcho 10.0.0.2 interface OUTSIDE
sla monitor schedule 1 life forever start-time now
I think you can see that we are creating an ICMP echo process that will ping 10.0.0.2 on the OUTSIDE interface. The third line is what controls the start and stop of the process; in this case, we start now and don't ever finish thanks to the word forever. We can't use the SLA monitor directly on our routes, so let's create a track object.
track 100 rtr 1 reachability
Now we have track object 100 that looks to SLA monitor 1 for reachability. We apply this to the route just like we do on IOS. We'll go ahead and add the weighted route as well.
route OUTSIDE 0.0.0.0 0.0.0.0 192.0.2.1 1 track 100
route BACKUP 0.0.0.0 0.0.0.0 192.0.2.129 240
Now the default will go through 192.0.2.1 until 10.0.0.2 is unreachable. If that happens, the route is removed from the routing table, and the weighted default route will take over. That's all you need. Of course, I would create another track object for ISP2 so you can at least get a syslog message or SNMP trap if a problem happens over there, but you can probably get away with just the one.
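To check on things afterward, the usual show commands apply. A quick sketch of what I'd run (output omitted since it varies by version):

```
firewall# show track 100
firewall# show sla monitor operational-state 1
firewall# show route
```

The first two tell you whether the tracked object and its monitor are up; the last shows which default route is currently installed.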
If you've ever done IP SLA on a router, you might call me out on the fact that there's some stuff missing. If you don't force the ICMP packets toward ISP1's router, the state of the SLA monitor will keep flopping; you flip to ISP2, the SLA check is healthy again, you flip back, the SLA check dies again…ad nauseam. That's not the case for the ASA, actually. Even though the default route has rolled over to the backup, the monitoring process continues to send requests to the old gateway.
Sometimes I like it when my gear knows what I'm trying to do; this is one of those times.
Send any stray ICMP packets questions my way.
A few years ago, I developed a Perl-based application that takes a template file and pukes out standardized access rules for new hosts as they’re added to the network. This works great for making sure that each host can be managed properly. This solution, however, is not very flexible. If I need to remove a host’s access, I may have to take out 20 rules individually. That’s not really cool, so, at the suggestion of a coworker, I’m working on a solution that uses objects, object-groups, and nested object-groups. This should minimize the configured rules and allow new host rules to be added and removed by simply adding hosts to object-groups.
Example time. Let’s say you have a bunch of RFC1918 addresses behind your firewall that all need HTTP access to one network on the InterTubes. First thing to do is to create the objects that will be involved; in this case, that’s all the networks and/or ranges. To be more specific, 192.0.2.0/24 is the public IP to which the hosts need access. The internal hosts are 192.168.0.0/24 and the IP range 10.0.0.1-25. Yes, I know the names are terrible.
object network NET1
 subnet 192.0.2.0 255.255.255.0
object network NET2
 subnet 192.168.0.0 255.255.255.0
object network NET3
 range 10.0.0.1 10.0.0.25
Now, we can use some Snort-like configuration to create object-groups that include the objects we just created. In this case, we’re creating an InterWebs-based object-group and another for local addresses.
object-group network REMOTE-NETS
 network-object object NET1
object-group network LOCAL-NETS
 network-object object NET2
 network-object object NET3
Now we can use these object-groups to create ACLs. You’ve done this before, right?
access-list TEST-ACL extended permit tcp object-group LOCAL-NETS object-group REMOTE-NETS eq www
To be sure it worked as expected, let’s take a look at the ACLs. The format sucks because the lines are so long; sorry about that.
firewall# show access-list TEST-ACL
access-list TEST-ACL; 7 elements; name hash: 0x5329ed72
access-list TEST-ACL line 1 extended permit tcp object-group LOCAL-NETS object-group REMOTE-NETS eq www 0x1abfa4a0
access-list TEST-ACL line 1 extended permit tcp 192.168.0.0 255.255.255.0 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0x50797e0c
access-list TEST-ACL line 1 extended permit tcp host 10.0.0.1 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0xa2159c9d
access-list TEST-ACL line 1 extended permit tcp 10.0.0.2 255.255.255.254 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0x93f1c362
access-list TEST-ACL line 1 extended permit tcp 10.0.0.4 255.255.255.252 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0x512fc827
access-list TEST-ACL line 1 extended permit tcp 10.0.0.8 255.255.255.248 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0x7b11e96f
access-list TEST-ACL line 1 extended permit tcp 10.0.0.16 255.255.255.248 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0xc302aa0e
access-list TEST-ACL line 1 extended permit tcp 10.0.0.24 255.255.255.254 192.0.2.0 255.255.255.0 eq www (hitcnt=0) 0x2ea75962
Cool. Everything looks great, and everyone should have the access they need. If a new host with the IP of 172.16.0.28 comes online inside the network, you add a new nested object-group that includes that host. Access is automagically updated, so there’s no need for more ACL lines. Another method is to add the new host directly to the LOCAL-NETS object-group, but that’s going to limit the ways to address that box and related hosts in an ACL. I suggest you just add the new object to the object-group.
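For the hypothetical 172.16.0.28 host, that might look something like this (NEWHOST1 is just a name I made up):

```
object network NEWHOST1
 host 172.16.0.28
object-group network LOCAL-NETS
 network-object object NEWHOST1
```

Any ACL referencing LOCAL-NETS picks up the new host automatically.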
As a bonus, you can also nest object-groups into each other. For example, we can create an object-group that includes our LOCAL-NETS and REMOTE-NETS object-groups.
object-group network ALL-NETS
 group-object LOCAL-NETS
 group-object REMOTE-NETS
I don’t know where you’d ever use that specific object-group, but you could use this technique in other ways. I’m looking to create object-groups for each interface of the firewall and creating a super-object (my term) to allow the standard access stuff. You could do the same for office networks; each office has its own object-group for access that is also nested in an object that provides basic access to the TubeWebs or something. Use your imagination. :)
Send any questions my way.
This article is based on an ASA 5505 running 8.3.1. Most of the config above should be portable to any 8.x except for declaring the objects. In other versions of 8.x, you may have to add hosts directly to the object-group. Running on 7.x and below may be a different story.
I’ll start off with a warning. I’ve been running 8.3.1 on my home 5505 for a few hours now. Not only is this not really enough time for a thorough review, it’s also not the environment to test enterprise-level configurations. There are also a lot of details missing that I just don’t know about yet, so please do some research on your own to figure out what’s going to break if you upgrade your ASA.
If you haven’t heard, Cisco has released version 8.3.1 of their ASA operating system. I’m excited about this for only one reason – Smart Tunnels with tunnel policies.
If you’ve never heard of Smart Tunnels, you’re probably not alone. I don’t know why they’re not more popular than they are, but I dig them. A user connects to a URL, logs in, and a little applet loads on the machine that is used to proxy traffic through the ASA. It doesn’t proxy all your traffic, though; only traffic from applications that you define is sent through the tunnel. There is a huge problem that I can’t stand, though. What if you need to SSH through the firewall and to your local LAN at the same time? The smart tunnel applet doesn’t care or even know what you want to do; it tunnels all the traffic from the application. Not good, eh?
The big change to this in 8.3.1 is the addition of tunnel policies to the smart tunnels. According to the release notes, you can now dictate which connections do and don’t go through the smart tunnel. Now, I can configure the tunnel so that some traffic goes through the ASA to get to the production gear, but other traffic pukes out the NIC normally. I know a lot of users who are going to like not having to log in and out all day.
Note: I may do an article on smart tunnels once everything slows down a bit. It’s a solid way to implement a clientless VPN that doesn’t require administrative access on the machine to run.
The big feature that everyone is talking about, though, is the change to the way NAT is done. Back in the day (that means earlier this morning), if I wanted to configure a static NAT, I’d do something like this to create a static and a service NAT to two different boxes.
firewall(config)# static (inside,outside) 192.0.2.1 192.168.1.100
firewall(config)# static (inside,outside) tcp interface ssh 192.168.1.101 ssh
Now, you create an object and give that object all the attributes. I think Cisco calls this auto-NAT. I have no idea what the auto part means. In our example, we would do something like this.
firewall(config)# object network TESTHOST1
firewall(config-network-object)# host 192.168.1.100
firewall(config-network-object)# nat (inside,outside) static 192.0.2.1
firewall(config)# object network TESTHOST2
firewall(config-network-object)# host 192.168.1.101
firewall(config-network-object)# nat (inside,outside) static interface service tcp ssh ssh
I would say that the new configuration is easier to parse with your eyes, except that the ASA breaks it up into two parts. If you do a show run and look for our configuration, you have to look in two places. The first part declares the object name and the host/subnet/IP range with which it’s associated. The second part, which comes after the ACLs, declares the NAT stuff.
object network TESTHOST1
 host 192.168.1.100
object network TESTHOST2
 host 192.168.1.101
[SNIP a billion lines of ACL]
pager lines 24
logging enable
logging timestamp
logging buffer-size 8192
logging buffered informational
logging asdm informational
logging host inside x.x.x.x
flow-export destination inside x.x.x.x 12345
mtu outside 1500
mtu guests 1500
mtu inside 1500
icmp unreachable rate-limit 1 burst-size 1
icmp permit x.x.x.0 255.255.255.0 inside
asdm image disk0:/asdm-631.bin
asdm history enable
arp timeout 14400
object network TESTHOST1
 nat (inside,outside) static 192.0.2.1
object network TESTHOST2
 nat (inside,outside) static interface service tcp ssh ssh
It may be simpler to configure, but it’s not simpler to figure out later. I’d rather have single lines of static statements; at least I can use regex on those efficiently.
There is a bright side to the new NAT thing, though. Because the NAT statements are configured in the object, you can now reference the real IP of the host in ACLs instead of the NATted IP. This will help those of us who use firewalls with 488249284 interfaces and that many NATs for each host. If we wanted to allow access to the SSH host in the example, we would write an ACL that allows access to 192.168.1.101 instead of finding the NATted address on that interface and building the rules to that address.
Speaking of ACLs, you can actually create a global access-group. Instead of creating an ACL with rules and an access-group to bind it to an interface, you can build one single ACL and configure an access-group with the global directive to apply that ACL to all interfaces. A few quick tests show that you can have both interface and global access-groups configured simultaneously and that the interface ACLs are evaluated first. I need to do some more testing to figure out exactly how these work together.
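For the curious, here’s a minimal sketch of a global access-group (the ACL name and rule are mine, not from a production box):

```
access-list GLOBAL-IN extended permit tcp any any eq www
access-group GLOBAL-IN global
```

Note there’s no in interface part; the global keyword replaces the interface binding.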
Everyone should upgrade, right? Nope. I don’t ever upgrade to something cool just because it’s cool. I also don’t like to have to buy more hardware to go up a minor revision. Take a look at the memory requirements for 8.3.1; every model up to the 5510 requires more than the base amount to upgrade. I got lucky since my 5505 has 512MB in it already, but I would hate to have to justify quadrupling (!) the RAM in a 5540 just for some cool features.
Send any rotten tomatoes questions to me.
My biggest complaint about modern firewalls is their lack of the ability to create rules based on URLs or HTTP streams; you have to open access between IP addresses. Yes, I know there are other means to do that, but I want my ASA/PIX/FWSM to do it without making me do so much work.
Anyway, the fact that you have to use IPs brings up some interesting problems. Let’s say you have a server in a DMZ that needs to query Google for some content. Since you’re a hard-ass network guy like I am, you tell the admin that they have to provide the data flow they want to use — source IP, destination IP, protocol, port. They come back and tell you that they need their server to connect via HTTP to 22.214.171.124. You put in the rules as given, but the IP has suddenly changed on you.
Google (and lots of other big sites) uses some tricks to keep the load down on their servers and to help with availability, and one such trick is to use round robin DNS, which rotates the A record so everyone doesn’t slam the same boxes. You can query google.com once and get an address, but, when you query it again, you may get a different address. That means that when your new rules don’t work, you have to check the logs, see what got denied, open that up, rinse, and repeat. That sucks.
An easier way might be to create an object-group that includes IPs as you discover them. You put in rules based on an object-group, then, when it fails, you just add to the object-group so you don’t have to put in any more rules. The problem is that you’ll spend a lot of time building up a good baseline. If only there were a way to get a list of IP addresses that Google uses. Hmmm. *segue*
Have you ever heard of SPF netblock records? SPF is an email security mechanism that allows an email server to verify that an email message is coming from an authorized email source. In other words, when a mail server receives mail, it can check to see if the sending server is actually allowed to send mail on behalf of the source domain. It’s supposed to cut down on spam and whatnot, but I don’t follow it closely enough to know if it’s working. The moral of the story is that it involves a list of IP addresses that an organization maintains; Google happens to be a participant in SPF.
If you query for the TXT record _netblocks.google.com, you get back a text record that looks like this.
[jac@holland ~]$ dig +short txt _netblocks.google.com
“v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all”
This record includes all the IP addresses that Google says are authorized to send email from google.com. That’s a lot of IP addresses, isn’t it? It stands to reason that this list might also be the definitive list of Google production IPs.
My company has used this TXT record in the past to open access to Google. We had an app that needed to query Google Maps, and one of our engineers was tired of nickel-and-diming it to death, so he found the SPF block and put them all in. Works like a champ.
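Turning the netblocks into config is mechanical once you convert the prefix lengths to masks (a /19 is 255.255.224.0, a /20 is 255.255.240.0, and so on). A hypothetical sketch, with made-up documentation prefixes standing in for the real entries:

```
object-group network GOOGLE-NETS
 network-object 198.51.100.0 255.255.255.0
 network-object 203.0.113.0 255.255.255.0
```

Then you reference GOOGLE-NETS in your ACL and add to the group as the record changes.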
There are always dangers when you rely on information from somebody else, though, right? Google’s usually pretty good about stuff like this, but what if you did the same for another company who only half-heartedly kept their records up-to-date? You may only have half of their IPs in your object-group. You might even wind up opening access to or from a cable modem system or from another company who bought the IP addresses.
I’ll also note that there aren’t that many domains using this technique, so finding SPF netblock records may be a challenge. It’s worth doing a simple query, though; it might save you some time.
Send any carved pumpkins questions my way.
I can’t believe I haven’t talked about object-groups yet. I had a whole other blog entry written up, and, when I went to link things over, I realized I couldn’t find an intro to it. Here it goes.
Welcome to the modern world. A world of wonder. A world of quickly-advancing technology. A world where clusters of machines sit behind load balancers for scalability and availability. A world where those clusters need access to other clusters. A world where your firewall rulebase gets so big that it’s unreadable without some help.
Enough with the drama already. I would say I hate the cheesy stuff, but I think my whole blog is nothing but cheesy stuff, right? To the point. Enterprise firewall configurations can get quite large with ACLs applied in different directions to different interfaces. Our ACL entries number in the 6000 range, but the firewall we’re running says we’re only at 5% utilization in the ACE memory space. That means that our not-top-of-the-line firewall is designed to handle 120k lines of ACLs. That can be quite a handful to configure by hand. There may be an easier-to-maintain solution, though.
Let’s say you have a cluster of servers behind your CSM that all need to access a database. Since there’s a nice ASA, FWSM, or PIX between the servers and database (as there should be), you have to open up access for this connection. Let’s say that you have four servers with the IPs of 192.168.100.101-104 that need access to 10.10.10.1 on the mySQL port (TCP/3306).
access-list LIST1 permit tcp host 192.168.100.101 host 10.10.10.1 eq 3306
access-list LIST1 permit tcp host 192.168.100.102 host 10.10.10.1 eq 3306
access-list LIST1 permit tcp host 192.168.100.103 host 10.10.10.1 eq 3306
access-list LIST1 permit tcp host 192.168.100.104 host 10.10.10.1 eq 3306
Where are your remarks? Why don’t you document something for once in your life?
Anyway, that’s easy, right? Four configuration lines isn’t so bad, but some of the server admins come to you one day and tell you that the company actually marketed the new web app and that they are adding 37 more servers to the cluster. Now the 37 new servers need the same rules, right? The server dudes also tell you that, since the app has grown so much, the DBAs have set up a split-read-write scenario where the current database handles the reads and a new database handles the writes. That’s 78 new rules (37 to the old and 41 for the new). That’s a lot of rules.
Object-groups to the rescue. An object-group is a logical group of objects (duh!) that you can use to create ACEs. You can create a group of hosts, a group of networks, or a group of ports. For our example, let’s create an object-group that includes all the hosts in the new huge cluster.
object-group network CLUSTER1
 description The Huge Cluster (that's what she said)
 network-object host 192.168.100.101
 network-object host 192.168.100.102
 ...
 network-object host 192.168.100.141
What do we do with it, though? You treat it (almost) just like it was a host in an ACL. Remember we wanted to open access to the old database on TCP/3306, right?
access-list LIST1 permit tcp object-group CLUSTER1 host 10.10.10.1 eq 3306
If you do a show access-list LIST1 now, you’ll see that a new rule has been added for each object in the object-group. It should look something like this.
access-list LIST1 permit tcp object-group CLUSTER1 host 10.10.10.1 eq 3306
access-list LIST1 extended permit tcp host 192.168.100.101 host 10.10.10.1 eq 3306 (hitcnt=0)
access-list LIST1 extended permit tcp host 192.168.100.102 host 10.10.10.1 eq 3306 (hitcnt=0)
...
access-list LIST1 extended permit tcp host 192.168.100.141 host 10.10.10.1 eq 3306 (hitcnt=0)
Notice that the firewall created 41 rules for you out of your one configured line, but the new rules are indented. The indentation means that a rule was generated automagically instead of by hand. Since you can only take out rules that you put in by hand, you can’t take out the line allowing 192.168.100.123 access; it’s an all-or-nothing scenario. Be aware of that.
You can use object-group for ports, too. Let’s add to our example and say that the cluster will need to access the memcached instance on the database server as well. Those processes run on TCP ports 15000 – 15100.
First we build an object-group for the ports we need.
object-group service DBPORTS tcp
 description mySQL and memcached ports
 port-object eq 3306
 port-object range 15000 15100
Now let’s apply it to the ACL.
access-list LIST1 permit tcp object-group CLUSTER1 host 10.10.10.1 object-group DBPORTS
What does the ACL look like now? Well, it’s a Duesenberg.
access-list LIST1 permit tcp object-group CLUSTER1 host 10.10.10.1 object-group DBPORTS
access-list LIST1 extended permit tcp host 192.168.100.101 host 10.10.10.1 eq 3306 (hitcnt=0)
access-list LIST1 extended permit tcp host 192.168.100.101 host 10.10.10.1 eq 15000 (hitcnt=0)
...
access-list LIST1 extended permit tcp host 192.168.100.101 host 10.10.10.1 eq 15099 (hitcnt=0)
access-list LIST1 extended permit tcp host 192.168.100.101 host 10.10.10.1 eq 15100 (hitcnt=0)
...
access-list LIST1 extended permit tcp host 192.168.100.141 host 10.10.10.1 eq 3306 (hitcnt=0)
access-list LIST1 extended permit tcp host 192.168.100.141 host 10.10.10.1 eq 15000 (hitcnt=0)
...
access-list LIST1 extended permit tcp host 192.168.100.141 host 10.10.10.1 eq 15099 (hitcnt=0)
access-list LIST1 extended permit tcp host 192.168.100.141 host 10.10.10.1 eq 15100 (hitcnt=0)
That’s a lot of ACL entries for one configuration line, isn’t it? Let’s see. 102 ports times 41 servers is 4182 lines in the ACL. You can see how it might be to your advantage to use object-groups at times.
Send any candy corn questions my way.
Wow. A new entry. Everyone sit down before you pass out.
I’ve got a real-world example for you today. We have an ASA 5540 installed at a business unit with interfaces in multiple networks, including one containing the production servers and another containing the accounting servers. The production network sits on a 7600 that’s not ours, so, to avoid IP conflicts, we are statically NATting connections into that network. The 7600 has many, many VLANs, and, since the firewall and production servers are on different VLANs, there’s an interface VLAN between us. Sounds pretty straightforward, but it just wasn’t working when we tried to connect between the interfaces.
When we tried to connect from the accounting servers to the production gear, the firewall saw the SYN, built the outbound connection, sent the packet on, and waited. Nothing back. SYN timeout. The vendor on the production side checked the routing. Fine. Checked the ACLs. None installed. When the (other) vendor ran TCPDump on the production servers, they saw the SYN landing and the SYN-ACK leaving, but it never got to the ASA. We even looked at the inline IDS and still didn’t see the SYN-ACK hitting the firewall. It was simply not getting passed on.
I got tired of walking people through stuff over the phone, so I drove up there to see what I could find. When I checked the ARP table on the 7600, I noticed that the entry for the statically NATted IP we were serving was conveniently incomplete. For those who don’t know, that means that the 7600 was ARPing for the address, but nothing was answering for it. Obviously, our ASA should be answering, right? To make the situation a little more dire, I did a debug arp (or something close) on the firewall and generated an ARP request; the firewall saw the request but just ignored it. Ugh!
If you couldn’t tell by the title, it turns out that the solution was to enable proxy ARP. It’s off by default for good reason, but here’s how to enable it.
no sysopt noproxyarp PRODUCTIONINTERFACE
Enabling proxy ARP, however, could be a security issue. Any time you use the word “proxy”, there is a potential to spoof addresses, and, in this case, an attacker could (potentially) use the firewall to discover hosts that are on the other side of it. That wouldn’t be good.
A more-secure solution is to use static ARP entries. In our case, we added a static ARP entry on the 7600 that points our NATted IP to the MAC address of the firewall. Now, when you ping the IP, the 7600 doesn’t ARP; it already has the MAC in the ARP table, so it just sends the packet on. Since we only have one static translation in this case, it’s no big deal, but, if we had a whole class-C of addresses to NAT, there would be a management problem.
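On the 7600 side, a static ARP entry is one line of IOS config; the IP and MAC below are made up for illustration:

```
router(config)# arp 192.0.2.50 0019.aabb.ccdd arpa
```

Multiply that by a class-C worth of NATs and you can see the management problem.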
A part of me wants to do the simple thing and enable proxy ARP, but the vast majority of articles, blogs, forums, lists, etc., that I’ve read say to turn it off for security and efficiency purposes. The more I think about it, though, the more I wonder why proxy ARP needs to be enabled to make static NATs work. I looked back at an old PIX running 6.x, and proxy ARP is on by default. The same holds true for an FWSM running 2.x. I’m going to have to ask Cisco what’s up with that.
Send any misconfigured subnet masks questions my way.
Here’s a simple one since I haven’t updated in a while. I have my ASA 5505 at home and want to forward TCP/80 traffic arriving at my public IP to my webserver at 10.10.10.10. There are two steps here — forward the port and open the ACL.
To forward the port, I would use the static directive, but there are two ways to do that. I can either set up a one-to-one NAT or a port redirection. In the one-to-one NAT, you have an outside address that’s mapped directly to an inside address, and any traffic to that IP is passed to the inside host (if it passes ACLs, of course). One of the limitations, though, of using this setup is that you can’t use that IP as your PAT address, and, since I only have one IP, no other inside hosts would have an outside address to which to be NATted. The other method — port redirection — is a much better solution. In this setup, I actually forward a protocol/port on an outside address to a protocol/port on an inside address. Since there are other ports available on that outside address, the address is still available for other hosts to use as a NAT address.
In an enterprise, I would probably use an address out of my pool for the port forwarding, but, since I only have one address at home, I’ve got another decision to make. I can configure the static statement with an IP address or I can use the reserved word interface to indicate the IP that is on an interface. This is a great feature, actually, since my outside IP could potentially change without notice. I’m going to use that feature, too.
static (inside,outside) tcp interface 80 10.10.10.10 80
This is pretty simple, but I’ll explain. The ASA will take any request that comes in on TCP/80 (HTTP) on its outside interface’s IP and forward it to TCP/80 of 10.10.10.10. If my webserver ran on TCP/81 on my box, I could just change the last 80 to 81 to make it work.
The port is redirecting, but I still need to open the ACL. When that’s done, everything should work as expected.
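A sketch of that second step, assuming pre-8.3 behavior where the ACL matches the translated (outside) address; OUTSIDE-IN is a name I made up, and 203.0.113.10 stands in for my outside interface IP:

```
access-list OUTSIDE-IN extended permit tcp any host 203.0.113.10 eq www
access-group OUTSIDE-IN in interface outside
```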