Archive for the ‘security’ tag
Oh, my. Another Junos post. Somebody stop me before I get my JNCIA!
This isn’t hard stuff at all. I’m sure there are a couple of cool tricks I don’t know yet, but let’s try anyway. I’m working on an SRX240 here running 11.1 and some change.
Let’s put interfaces ge-0/0/0.0 and lo0.0 in OSPF area 0. If you know the Junos configuration hierarchy, this will be very easy for you. Even if you don’t, you can stare at the config for a little bit and see what we’re doing.
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0
This is the only OSPF configuration you need, but guess what? It won’t work. Since a Junos device is also a firewall, it will drop OSPF packets as they come into the interface; you have to declare that you do indeed want to accept OSPF packets. You do this by creating a security zone, putting the right interfaces in the right zone, and then enabling OSPF on that zone.
We’ll create a zone called INSIDE for our purposes here. Note that there are about a billion more steps (I counted) to fully configure your security zones, but that’s way beyond our scope here.
set security zones security-zone INSIDE interfaces ge-0/0/0.0
set security zones security-zone INSIDE interfaces lo0.0
set security zones security-zone INSIDE host-inbound-traffic protocols ospf
You can also allow OSPF on specific interfaces like this. These commands will also put those interfaces in the right security zone.
set security zones security-zone INSIDE interfaces ge-0/0/0.0 host-inbound-traffic protocols ospf
set security zones security-zone INSIDE interfaces lo0.0 host-inbound-traffic protocols ospf
I’m not sure if you need to do this to lo0.0, but it won’t hurt.
Now you can see your OSPF neighbors come up and start exchanging routing information. That is, of course, assuming you did everything else right.
Send any blog deadlines questions my way.
I don't usually cover news from Cisco, but they've changed some certification stuff around again, and I thought I would bring it up. This time they've changed the CCNA Voice, CCVP, and CCSP, so, if you're on those tracks, be careful what you're studying!
Circle 28 February 2011 on your calendars. That's when the CCNA Voice track gets a shakeup. The IIUC (640-460) exam will be no more, and passing CVOICE (642-436) will no longer be a valid way to get the cert. After the big day, you'll have to take ICOMM (640-461). This seems to be a much broader exam, replacing the separate enterprise and commercial focuses of CVOICE and IIUC, respectively. Look out for both CME- and CUCM-based topics, including a troubleshooting section.
Wendell Odom's blog at NetworkWorld
The CCVP is now known as the CCNP Voice. There are still five exams to get the certification, so it's not that different. The QoS exam is gone, but the new CVOICE (642-437) exam includes QoS, so keep studying those queueing methods. The TUC exam is replaced by TVOICE (642-427), which, on the surface, seems to be just an update. The CIPT1 (642-447) and CIPT2 (642-457) exams also look like they're simply updated, but you'll have to ask a Voice guy since I don't really know the differences here. The last exam is CAPPS (642-467), which covers Unity, VPIM, and Presence. Fun stuff.
Wendell's blog again
Like the Voice track, the CCSP gets a name change and is now known as the CCNP Security. There are still four tests like the old track, but the content is updated. You have to take the SECURE (642-637), FIREWALL (642-617), VPN (642-647), and IPS (642-627). Word on the street is that the new VPN exam eliminates the inconsistencies with VPN deployment methods taught in SNAF and SNAA.
Wendell's blog again
Can someone explain why CCSP and CCNP Security are both still listed on the professional cert page at Cisco, but the CCNP Voice gets a "formerly known as" moniker?
I’ll start off with a warning. I’ve been running 8.3.1 on my home 5505 for a few hours now. Not only is this not really enough time for a thorough review, it’s also not the environment to test enterprise-level configurations. There are also a lot of details missing that I just don’t know about yet, so please do some research on your own to figure out what’s going to break if you upgrade your ASA.
If you haven’t heard, Cisco has released version 8.3.1 of their ASA operating system. I’m excited about this for only one reason – Smart Tunnels with tunnel policies.
If you’ve never heard of Smart Tunnels, you’re probably not alone. I don’t know why they’re not more popular than they are, but I dig them. A user connects to a URL, logs in, and a little applet loads on the machine that is used to proxy traffic through the ASA. It doesn’t proxy all your traffic, though; only traffic from applications that you define is sent through the tunnel. There is a huge problem that I can’t stand, though. What if you need to SSH through the firewall and to your local LAN at the same time? The smart tunnel applet doesn’t care or even know what you want to do; it tunnels all the traffic from the application. Not good, eh?
The big change to this in 8.3.1 is the addition of tunnel policies to the smart tunnels. According to the release notes, you can now dictate which connections do and don’t go through the smart tunnel. Now, I can configure the tunnel so that some traffic goes through the ASA to get to the production gear, but other traffic pukes out the NIC normally. I know a lot of users who are going to like not having to log in and out all day.
Note: I may do an article on smart tunnels once everything slows down a bit. It’s a solid way to implement a clientless VPN that doesn’t require administrative access on the machine to run.
The big feature that everyone is talking about, though, is the change to the way NAT is done. Back in the day (that means earlier this morning), if I wanted to configure a static NAT, I’d do something like this to create a static and a service NAT to two different boxes.
firewall(config)#static (inside,outside) 192.0.2.1 192.168.1.100
firewall(config)#static (inside,outside) tcp interface ssh 192.168.1.101 ssh
Now, you create an object and give that object all the attributes. I think Cisco calls this auto-NAT. I have no idea what the auto part means. In our example, we would do something like this.
firewall(config)#object network TESTHOST1
firewall(config-network-object)#host 192.168.1.100
firewall(config-network-object)#nat (inside,outside) static 192.0.2.1
firewall(config)#object network TESTHOST2
firewall(config-network-object)#host 192.168.1.101
firewall(config-network-object)#nat (inside,outside) static interface service tcp ssh ssh
I would say that the configuration is easier to parse with your eyes if the ASA didn’t break up the configuration into two parts. If you were to do a show run and look for our configuration, you would have to look in two places. The first part declares the object name and the host/subnet/IP range with which it’s associated. The next part, which comes after the ACLs, declares the NAT stuff.
object network TESTHOST1
 host 192.168.1.100
object network TESTHOST2
 host 192.168.1.101
[SNIP a billion lines of ACL]
pager lines 24
logging enable
logging timestamp
logging buffer-size 8192
logging buffered informational
logging asdm informational
logging host inside x.x.x.x
flow-export destination inside x.x.x.x 12345
mtu outside 1500
mtu guests 1500
mtu inside 1500
icmp unreachable rate-limit 1 burst-size 1
icmp permit x.x.x.0 255.255.255.0 inside
asdm image disk0:/asdm-631.bin
asdm history enable
arp timeout 14400
object network TESTHOST1
 nat (inside,outside) static 192.0.2.1
object network TESTHOST2
 nat (inside,outside) static interface service tcp ssh ssh
It may be simpler to configure, but it’s not simpler to figure out later. I’d rather have single lines of static statements; at least I can use regex on those efficiently.
There is a bright side to the new NAT thing, though. Because the NAT statements are configured in the object, you can now reference the real IP of the host in ACLs instead of the NATted IP. This will help those of us who use firewalls with 488249284 interfaces and that many NATs for each host. If we wanted to allow access to the SSH host in the example, we would write an ACL that allows access to 192.168.1.101 instead of finding the NATted address on that interface and building the rules to that address.
Speaking of ACLs, you can actually create a global access-group. Instead of creating an ACL with rules and an access-group to bind to an interface, you can build one single ACL and configure an access-group with the global directive to basically apply that ACL to all interfaces. A few quick tests show that you can have both interface and global access-group configured simultaneously and that interface ACLs will be executed first. I need to do some more testing to figure out exactly how these work together.
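Here’s a minimal sketch of what that looks like. The ACL name is my own invention, and this is based on my few quick tests, not on exhaustive documentation reading, so treat it as illustrative.

```
! Hypothetical global ACL: applied to traffic arriving on every interface
access-list GLOBAL-IN extended permit tcp any host 192.168.1.101 eq ssh
access-list GLOBAL-IN extended deny ip any any log
!
! The global keyword replaces the usual "in interface <name>" binding
access-group GLOBAL-IN global
```

Remember that in my testing the per-interface ACLs were evaluated before the global one, so a permit or deny on an interface ACL wins.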
Everyone should upgrade, right? Nope. I don’t ever upgrade to something cool just because it’s cool. I also don’t like to have to buy more hardware to go up a minor revision. Take a look at the memory requirements for 8.3.1; every model up to the 5510 requires more than the base amount to upgrade. I got lucky since my 5505 has 512MB in it already, but I would hate to have to justify quadrupling (!) the RAM in a 5540 just for some cool features.
Send any rotten tomatoes questions to me.
A coworker sent over a link today that got me thinking about an old adage that I’ve been sharing for years. The link actually has nothing to do with the philosophy but did trigger a random spewing of words from my brain.
Here’s what I tell everyone. When I deliver these lines, I usually picture myself as Socrates talking to a bunch of Greeks in togas.
There’s a line. On one end of the line is security; on the other end is convenience. You have to figure out where the best place for your users/application/system/etc. to sit on the line to be both secure and convenient enough to function.
I usually follow that up with an extreme example.
What’s the most convenient configuration for a public webserver? One solution would be to have it cabled to an Internet switch in front of a firewall with every network service enabled and all security software disabled in case it interferes with operation. Quite convenient, but not very secure.
What’s the most secure configuration? The server is powered down, disassembled, all parts shredded to bits, and the bits put into a dozen different boxes that are shipped to the ends of the world. Nobody’s going to get unauthorized access to that, but it’s not very convenient, is it?
In both cases, being too far to one side actually interferes with functionality. How long will it be before the convenient server gets owned by a script kiddie and no longer functions? How long before someone wants to access the secure server and finds it doesn’t function at all? We should probably make a compromise, right?
This is nothing new. We’ve all been saying this for years, right?
What’s my point? I don’t think I have one, really. I guess I just wanted to refresh this in everyone’s mind today.
I’m at training for the ISCW test this week, and this topic came up yesterday. Since it came up last week at the office, I figure it was a sign from $deity that it was time for a blog entry.
An admin in another business unit was trying to set up command access for some of his techs. He was going through a couple of routers and assigning commands to privilege levels so that his techs could access them. He was having a boat load of problems, though, and couldn’t get it to work.
He was trying to allow his guys to run a show ip route, but they also wanted to run show ip route x.x.x.x. He was assigning commands to privilege level 7 then giving his techs’ user accounts the same privilege.
Router(config)#privilege exec all level 7 show ip route
Router(config)#username user1 privilege 7 secret his.password
For some reason, this wasn’t working, though. The user could log into the router, but they couldn’t get authorized to run the subcommands as expected. I blamed it on his non-standard 7600 running a non-standard IOS version (sorry, I can’t give any more detail without revealing too much about the company), but I came across a much easier way to do it today in class with role-based views.
A view is a set of commands that can be assigned to users, and, to give a user access to those commands, you make them a member of that view. You’ll see that in a second. You also have a superview, which is a set of views, so a user can be a member of multiple views.
There are some prerequisites to using views. First of all, you have to have the enable secret set. You should already have that on a production router, but, if you’re working in a lab or something, you may have issues. You also need to have AAA enabled. That’s beyond the scope here, but I’m sure you can figure it out.
To configure a view, you must first be in the root view. How do you do that? Just run enable view.
You’ll enter the enable secret, and nothing special will happen, but now you can use the parser view command to create a new view. This takes you into the view submode which is where you list what commands you want to let users run. You also set a secret (password) so you can call up the view later.
Let’s create a view called “TechView” for my guy. We’ll give members of that view access to the “show ip route” commands to include all the subcommands. We’ll put the user “tech1” in that view, too.
Router(config)#parser view TechView
Router(config-view)#secret view.pass
Router(config-view)#command exec include all show ip route
Router(config)#username tech1 view TechView secret tech.pass
Every time that “tech1” logs in, that user will have access to all the show ip route commands. If you have a user who is not in that view but wants access to it, they can run the enable view TechView command and enter the secret. On the console, you’ll see a message saying that user has switched to the view. If the user does a show parser view, they can see what view they’re in.
Router#enable view TechView
Password:
Router#
*Mar 1 00:09:04.047: %PARSER-6-VIEW_SWITCH: successfully set to view 'TechView'.
Router#sh parser view
Current view is 'TechView'
Send any test vouchers questions my way.
It looks like one of those Russian b*%*#rds got me some time last week. I don’t know how long the site was down for sure, but I would guess that he first got access on Thursday, 22 October. Since we’re talking about WordPress here, I just restored back to 15 October to be safe, and it looks like we’re back in business.
As a precaution, I’ve reset some passwords and deleted a whole mess of accounts. I tried to leave the ones that look familiar to me like Blindhog and LBSources, but, if I killed your account, I apologize. I’m afraid you’ll have to sign up again for the sake of security.
Thanks to Drew at GoDaddy for walking me through the restore process! An unsolicited huzzah to those guys for having such great tools so they don’t have to do any more work than necessary!
Send any stagnant accounts questions my way.
My biggest complaint about modern firewalls is their inability to create rules based on URLs or HTTP streams; you have to open access between IP addresses. Yes, I know there are other means to do that, but I want my ASA/PIX/FWSM to do it without making me do so much work.
Anyway, the fact that you have to use IPs brings up some interesting problems. Let’s say you have a server in a DMZ that needs to query Google for some content. Since you’re a hard-ass network guy like I am, you tell the admin that they have to provide the data flow they want to use — source IP, destination IP, protocol, port. They come back and tell you that they need their server to connect via HTTP to 126.96.36.199. You put in the rules as given, but the IP has suddenly changed on you.
Google (and lots of other big sites) uses some tricks to keep the load down on their servers and to help with availability, and one such trick is to use round robin DNS, which rotates the A record so everyone doesn’t slam the same boxes. You can query google.com once and get an address, but, when you query it again, you may get a different address. That means that when your new rules don’t work, you have to check the logs, see what got denied, open that up, rinse, and repeat. That sucks.
An easier way might be to create an object-group that includes IPs as you discover them. You put in rules based on an object-group, then, when it fails, you just add to the object-group so you don’t have to put in any more rules. The problem is that you’ll spend a lot of time building up a good baseline. If only there were a way to get a list of IP addresses that Google uses. Hmmm. *segue*
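On an ASA, that interim approach might look something like this. The names and addresses here are made up for illustration; the point is that new discoveries only touch the object-group, never the ACL itself.

```
! Hypothetical object-group: add Google addresses as you discover them
object-group network GOOGLE-WEB
 network-object host x.x.x.x
 network-object host x.x.x.x
!
! The rule references the group, so it never has to change
access-list DMZ-OUT extended permit tcp host 192.168.1.100 object-group GOOGLE-WEB eq www
access-group DMZ-OUT in interface dmz
```

Every failed connection still means a log check and another network-object line, though, which is why a ready-made list of their IPs would be so handy.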
Have you ever heard of SPF netblock records? SPF is an email security mechanism that allows an email server to verify that an email message is coming from an authorized email source. In other words, when a mail server receives mail, it can check to see if the sending server is actually allowed to send mail on behalf of the source domain. It’s supposed to cut down on spam and whatnot, but I don’t follow it closely enough to know if it’s working. The moral of the story is that it involves a list of IP addresses that an organization maintains; Google happens to be a participant in SPF.
If you query for the TXT record _netblocks.google.com, you get back a text record that looks like this.
[jac@holland ~]$ dig +short txt _netblocks.google.com
“v=spf1 ip4:188.8.131.52/19 ip4:184.108.40.206/19 ip4:220.127.116.11/20 ip4:18.104.22.168/18 ip4:22.214.171.124/17 ip4:126.96.36.199/20 ip4:188.8.131.52/16 ip4:184.108.40.206/20 ip4:220.127.116.11/20 ip4:18.104.22.168/16 ?all”
This record includes all the IP addresses that Google says are authorized to send email from google.com. That’s a lot of IP addresses, isn’t it? It might make sense that this list might also be the definitive list of Google production IPs.
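If you want to turn that TXT record into something you can paste into an object-group, a few lines of scripting will do it. This is a quick sketch of my own (the function name and sample record are made up; the sample uses RFC 5737 documentation addresses, not real Google blocks):

```python
# Sketch: extract the ip4: netblocks from an SPF TXT record string,
# e.g. the output of `dig +short txt _netblocks.google.com`.

def spf_netblocks(txt_record):
    """Return the CIDR blocks named by ip4: mechanisms in an SPF record."""
    return [term[len("ip4:"):]                 # strip the "ip4:" prefix
            for term in txt_record.strip('"').split()
            if term.startswith("ip4:")]

# Sample record using documentation addresses only
record = '"v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.0/24 ?all"'
print(spf_netblocks(record))  # ['192.0.2.0/24', '198.51.100.0/24']
```

From there it’s a one-liner to print network-object statements for each block.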
My company has used this TXT record in the past to open access to Google. We had an app that needed to query Google maps, and one of our engineers was tired of nickel and diming it to death, so he found the SPF block and put them all in. Works like a champ.
There are always dangers when you rely on information from somebody else, though, right? Google’s usually pretty good about stuff like this, but what if you did the same for another company who only half-heartedly kept their records up-to-date? You may only have half of their IPs in your object-group. You might even wind up opening access to or from a cable modem system or from another company who bought the IP addresses.
I’ll also note that there aren’t that many domains using this technique, so finding SPF netblock records may be a challenge. It’s worth doing a simple query, though; it might save you some time.
Send any carved pumpkins questions my way.
There’s a lot of noise on the Internet. I’m not talking about certain news sites, either; I’m talking about stuff like port scans or attempts on weak services from all sorts of bad people on the Internet. A large chunk of that noise can be filtered by the edge routers, taking some of the load off of the network and firewalls.
Here are a few things that we filter inbound on our Internet links. Your mileage will vary.
- Packets from RFC 1918 space — You should never see a packet from 10/8, 172.16/12, or 192.168/16.
- Packets from your IP space — Why would you receive packets from yourself from the Internet?
- SSH, telnet, cmd, rlogin, RDP, etc. – You should be doing all your admin stuff from the internal network or from a VPN, right?
- Windows ports — For God’s sake, drop these at the edge.
- Packets to your network services subnets — If you use public addresses for things like your FWSM or CSM sync networks, no one should ever talk to those subnets.
- SNMP, SNMPTrap — No monitoring from the Internet!
- SMTP to non-MX hosts — If you have a lot of hosts, you probably have email run amongst them. Only the MX hosts should accept connections from the Internet.
- TCP/UDP small services — whois, finger, chargen, etc., are just waiting to be used for something bad.
- DNS, RNDC — You may have some name caching servers or hidden masters somewhere that shouldn’t be reachable from the Desolate Plains of the Internet™.
- Syslog — No logging from the Internet. Use a VPN tunnel or something if you really need it.
- NTP — You’re not a time service, are you?
That should cut out a significant amount of noise for you. Remember to allow stuff, too. You may want to end your ACLs with an old-fashioned permit ip any any log to see what else is coming through and maybe block some of that noise, too.
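As a starting point, a trimmed-down version of that list in IOS might look like this. The ACL name, interface, and the x.x.x.0 placeholder for your own public space are all things you’d swap for your environment; this only covers a few of the items above.

```
! Hypothetical edge ACL sketch -- adapt the names and blocks to your network
ip access-list extended EDGE-IN
 deny   ip 10.0.0.0 0.255.255.255 any          ! RFC 1918
 deny   ip 172.16.0.0 0.15.255.255 any         ! RFC 1918
 deny   ip 192.168.0.0 0.0.255.255 any         ! RFC 1918
 deny   ip x.x.x.0 0.0.0.255 any               ! your own public space
 deny   tcp any any range 135 139              ! Windows ports
 deny   tcp any any eq 445                     ! Windows ports
 deny   udp any any eq snmp                    ! no monitoring from outside
 permit ip any any log                         ! see what else shows up
!
interface GigabitEthernet0/0
 ip access-group EDGE-IN in
```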
Stretch at Packetlife has a lively little write-up on the Australian government’s attempt to implement a nation-wide web filtering service.
Setting aside the myriad of technical barriers to implementing such a system, the most obvious question is, “who decides what gets blocked?” When a corporation implements a web filter, it does so in accordance with corporate policy — policy that is set by the owner of the network. But the Internet doesn’t belong to any one entity, be it governmental or commercial, so such an authority simply doesn’t exist at this scale. In a very Orwellian sense, this filtering initiative appears to want to create that authority out of thin air.
I don’t know enough about the specifics down under to weigh in very heavily, but I would never support any service that filters web content from my house.
I’ve seen a thousand firewalls in my time, and nearly all of them are poorly configured. The biggest culprit? No outbound filtering. I guess a lot of people think that firewalls are there to protect the network from the Internet, but that’s only part of it. The firewall is to protect every segment from every other segment — all segments both inbound and outbound.
I guess that way back in the day that was true. You had your well-behaved network behind a firewall, and the only threat was from the evil hackers of the Internet. That’s not true any more, though. What about viruses? Or spyware? You don’t want those things spreading out from your network, do you? Think about liability, too. If you run a corporate network and an employee starts illegally downloading stuff from Kazaa, the company is liable for that, and the first step is to block any unneeded traffic from getting out.
Forget that your workstation sits on the inside of the firewall and remember that the intern down in development has a machine there. You know — the guy who “learned all about computers” in school. Use your firewall to protect the Internet from him!
One note: Outbound filtering doesn’t keep the baddies out completely. One of the first rules on your list will be to allow all users to TCP/80 on any host so everyone can surf. Any worm worth its salt these days will use TCP/80 for all its communications to take advantage of that hole, so you need to keep your antivirus and antispyware software updated to protect yourself and everyone else.
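A bare-bones outbound policy on an ASA might start like this. The inside subnet and ACL name are invented for the example; the idea is simply “permit the few things users need, log-deny the rest.”

```
! Hypothetical outbound sketch: web and DNS out, everything else dropped and logged
access-list INSIDE-OUT extended permit tcp 192.168.1.0 255.255.255.0 any eq www
access-list INSIDE-OUT extended permit tcp 192.168.1.0 255.255.255.0 any eq https
access-list INSIDE-OUT extended permit udp 192.168.1.0 255.255.255.0 any eq domain
access-list INSIDE-OUT extended deny ip any any log
access-group INSIDE-OUT in interface inside
```

That deny-with-log at the end is where you’ll find the intern’s Kazaa traffic.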