Friday, December 8, 2017

ASEv2 subnet sizing

When Microsoft rolled out the original App Service Environments the recommended subnet size was 64 addresses (aka /26).  The ASEv1 series was limited to 50 workers (minus update domain overhead, etc).  With the introduction of ASEv2 they now support up to 100 workers, so the natural question is whether you need to use larger subnets - and the answer is yes.

In an ASE environment each App Service Plan (a container of apps) maps to at least one worker, which is really a VM.  Each worker consumes one IP address, so even if you follow the general guideline of leaving 20% or more free capacity for scaling and other events, 100 workers still puts you in the ballpark of 80 IP addresses.  On top of that, the ASEv2 itself consumes 7 IP addresses (with an ILB) between the hidden front end servers, file servers, and fault-tolerant instances of the small/medium/large images.  And depending on how many additional IP addresses you attach, you'll consume even more.

If you never plan to exceed roughly ~30 App Service Plans in your ASEv2 then you can probably get away with a /26, but you're doing so knowing that you risk scaling or capacity issues down the road.  If you really want to cover your bases properly, use a /25 (128 addresses) subnet.
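A rough sketch of the arithmetic, using Python's stdlib ipaddress module.  The constants are assumptions pulled from the numbers above: Azure reserves 5 addresses in every subnet, the ILB ASEv2 overhead is the 7 infrastructure IPs mentioned earlier, and 20% is the free-capacity guideline.  This is back-of-the-envelope sizing, not an official capacity formula.

```python
import ipaddress

AZURE_RESERVED = 5   # Azure holds back 5 addresses in every subnet
ASE_OVERHEAD = 7     # front ends, file servers, fault-tolerant image instances (ILB ASE)
HEADROOM = 0.20      # keep ~20% free for scaling and platform events

def max_app_service_plans(cidr: str) -> int:
    """Rough ceiling on App Service Plans (1 worker each) for a given subnet."""
    subnet = ipaddress.ip_network(cidr)
    usable = subnet.num_addresses - AZURE_RESERVED - ASE_OVERHEAD
    return int(usable * (1 - HEADROOM))

for cidr in ("10.0.0.0/26", "10.0.0.0/25"):
    print(cidr, "->", max_app_service_plans(cidr), "plans")
```

By this math a /26 tops out around 41 plans with headroom, so it works for a ~30-plan environment but leaves no room to grow toward the 100-worker ASEv2 limit; a /25 comfortably covers it.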

Monday, February 20, 2017

365 now supports SHA256 signed tokens from your ADFS


Not sure when they're going to cut off the old SHA-1, but it doesn't hurt to get updated early.   It's an easy change which shouldn't have any negative impact on your production environment.   Instructions link below:

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-federation-sha256-guidance

Thursday, February 9, 2017

Palo Alto NTSTATUS: NT_STATUS_ACCESS_DENIED - Access denied

Being able to transparently tie in a particular user to traffic passing through your firewall is a great feature (and fairly common in the current gen of firewalls) - provided you set it up right.

I followed the instructions at
https://live.paloaltonetworks.com/t5/Configuration-Articles/How-to-Configure-Agentless-User-ID/ta-p/62122
and set up the dedicated LDAP user on my Windows 2012 R2 domain and assigned it to Distributed COM Users, Server Operators, and Event Log Readers.  Then I set up the WMI permissions and started seeing Access Denied next to my discovered domain controllers.  I then SSH'd into the Palo to check the mp-log and useridd.log and ran into the NT_STATUS_ACCESS_DENIED error.  After some troubleshooting I realized what I'd messed up - I had misread the instructions for the WMI edit.  I had drilled down past CIMv2 into its child namespaces, when the instructions intended for me to stop at CIMv2 itself before editing the Security properties.

After fixing my mistake, the access denied message went away.

Thursday, January 26, 2017

Ubiquiti UniFi - an SME's best friend - resistance is futile

It's often difficult in small to medium IT shops to get enough budget to build a network that's stable enough to let you sleep at night.  For the most part you either pay a premium for Cisco Catalyst, Juniper, etc. and then spend hours learning how to use them properly, or you wind up buying small business gear like the SG300, Netgear, or Linksys and pray daily for uptime while accepting lower performance.  It's kind of like buying a SonicWall instead of a Cisco ASA or a Palo Alto firewall.

A colleague of mine recently introduced me to Ubiquiti Networks, which has been around for a little over a decade and has a decent following.  Their approach to network design places a high emphasis on a dedicated controller machine or Cloud Key, which in turn manages every other UniFi device in your network.  You define all your VLANs, WAP networks, and other settings in the controller and then 'adopt' your other devices.  The controller handles all the upgrades and provisioning of adopted devices and provides statistics on clients, bandwidth usage, and types of hardware.
One console to rule them all.

Bandwidth hogs can't hide.

Basic switching - and yes, it has STP.


The built-in Map function is pretty nifty as well.  It allows you to upload a floor layout and then define a map scale.  You then drag and drop the devices from inventory and the map updates to show you hotspot coverage, topology and other useful network management data.  And yes, this is all without buying an additional software package!


Wireless coverage map - labels removed


I was able to replace the whole wireless network for a 16,000 sq ft facility for just under $1k.

My deployment:
a) 1 UniFi Cloud Key (~$95 on Amazon) - powered over PoE and has a smaller footprint than a dedicated controller machine.
b) 1 UniFi 24-port PoE 250W switch (~$365 on Amazon)
c) multiple UniFi AP-AC-Pro wireless access points (~$129 each on Amazon).  All PoE based, with ridiculous indoor range compared to the Cisco WAP551 units that we used to have.

Implementation:
Note:  Make sure you have working DHCP on your network to make configuring the devices easier.

1) Rack mounted the switch, plugged in the cloud key, ran cabling to WAPs from the switch.
2) Configured the Cloud Key - set up multiple wireless networks (limit 4).  The WAPs auto switch between 2.4 and 5 GHz using the same wireless network name so both client types work.  I set each wireless network to its own VLAN and enabled RADIUS authentication on the more secure one.
3) 'Adopted' the switch and the WAPs through the cloud controller interface.  Then I hit the 'upgrade' button next to each to get the latest firmware.

--------------- And that was all it took -------------

Flat out, the stuff works.  Wireless handoff from WAP to WAP and all my client devices worked without a hitch.  I'd definitely recommend them if you're doing a greenfield deployment or if you're just looking to upgrade your small to medium sized network.


Wednesday, January 25, 2017

Extending your on-premise AD (hybrid 365) into the Azure Cloud

Sure, if your on-premise Active Directory is already being synchronized with Office 365 then you've most likely already been exposed to the benefits of single sign-on.  And perhaps you've even spun up your own Azure subscription and set your synchronized Azure AD as the authentication provider so your team can assign Azure admin roles to your on-premise credentials.  There's one more nifty thing you can do, which is to use Azure AD Domain Services to extend your AD into Azure and provide domain services to the VMs inside your subscription (aka domain join, single sign-on inside the VM, etc).

The other alternative would be to spin up some servers, build out a site-to-site VPN, dcpromo the boxes, set up the AD site(s), and then manage it old school.  On the upside you'll have more control over your AD and it'll be a complete replica of your on-prem setup.  The downside is that you'll have more boxes to patch, more replication traffic to pay for, and possibly split FSMO roles.  There isn't a wrong answer; it just depends on whether you feel your datacenter is more secure than Azure and what your company's needs are.  In my case, I decided to explore the ADDS route.

Enabling my Azure AD instance started out pretty straightforward, got a little murky with the virtual networks, and then took some patience for password sync.  I used Microsoft's documentation at https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-getting-started   (Make sure to use the 'synched tenant' instructions for password sync)

Steps a through e below cover just setting up the basic ADDS.  The steps after that explain how I got it integrated with a Resource Management virtual network and VMs using Peering.

a) Created the AAD DC Administrators group - this is a special group that is automatically carried into your new ADDS domain, so you'll want to put your admin accounts in here.
b) ADDS currently only works with the old type of virtual network and not the newer Resource Manager one.  So I had to create a legacy virtual network.
c) After enabling ADDS it took around 15 minutes to provision.  I chose the 'yourcompany.onmicrosoft.com' domain name and connected it to my new legacy virtual network.  Once provisioned, it popped out a new DNS IP.
d) I then edited the legacy virtual network and specified the IP address for the new ADDS.  This made it the new default DNS service for that virtual network.  Note:  After another hour, a second DNS IP showed up in the ADDS view.  It doesn't matter what you name them in the virtual network.
e) I then ran the powershell script in the link above to force a full sync in my AAD instance.  The first two variables have to be edited by hand before you run the script.  If you're not sure what your connectors are called, just open the Synchronization Service Manager and view the Connectors tab. (Hint - the one that ends in 'AAD' is your $azureadConnector)

f) I created a new virtual network in the 'new' Azure portal - making sure that the IP range did not overlap the IP range of the legacy virtual network.  (10.10.0.0/24 vs 10.20.0.0/24 and not 10.0.0.0/8 and 10.20.0.0/16 which would have collided).
g) Now to get both virtual networks to play nicely, you can either do a VPN and/or gateway, or you can just do virtual network peering, which merges the two together much like joining two switches with a cable in a Layer 2 fashion.  From the 'new' Azure portal, under Virtual Networks, I selected the virtual network (ARM type) that I created earlier and then Peerings.


h) I clicked Add at the top of the blade, gave the peering connector a name, chose Resource manager (important), assigned it the same subscription as everything else, and then chose the Classic virtual network from the selector.


i) Then I went back in and updated the DNS settings for the ARM virtual network.  Remember, out of the box each virtual network defaults to the Azure-provided DNS.  I was not able to join a VM to ADDS until I changed it to use the DNS servers for ADDS.  (It's possible it would have eventually worked without this step, but that depends on how much time you have to wait it out.)



j) I provisioned a new machine, booted it up, and then joined the yourcompany.onmicrosoft.com domain using the on-premise credentials that I'd put in the AAD DC Administrators group.
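The non-overlap rule from step f is easy to sanity-check programmatically before you create the second virtual network.  A quick sketch using Python's stdlib ipaddress module, with the example ranges from step f (the ranges are illustrative, not anything Azure-specific):

```python
import ipaddress

def ranges_overlap(a: str, b: str) -> bool:
    """True if the two CIDR ranges share any addresses (peering would be rejected)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The safe pairing from step f - disjoint /24s, fine to peer:
print(ranges_overlap("10.10.0.0/24", "10.20.0.0/24"))  # False

# The colliding pairing - the /8 contains the /16, so peering would fail:
print(ranges_overlap("10.0.0.0/8", "10.20.0.0/16"))    # True
```

Running this against your planned address spaces before provisioning saves you from tearing down and recreating a virtual network later.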




Wednesday, August 17, 2016

The return of UNCHardenedPath problems.

Last week we rolled out some new GPO security settings which made our Windows 10 machines stop being able to process group policy changes.  First we noticed the GPP drive maps had stopped working, and when we ran gpupdate /force manually it failed, citing that it couldn't access gpt.ini for
31B2F340-016D-11D2-945F-00C04FB984F9 (aka the Default Domain Policy).
While researching it we found many articles on how Windows 10 has UNC hardening enabled by default and how the various patches (MS15-011, MS15-014) had affected many users in GPO environments.  We weren't using user filtering, and all of our GPOs had Authenticated Users listed with Read and Apply permissions, so that wasn't it.  So for testing, we added the registry keys to disable mutual authentication on a laptop.

New-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths" -Name "\\*\SYSVOL" -Value "RequireMutualAuthentication=0" -PropertyType "String"

New-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths" -Name "\\*\NETLOGON" -Value "RequireMutualAuthentication=0" -PropertyType "String"

We were able to run gpupdate /force successfully after that, but we didn't like that solution because it meant we'd have to manually update a lot of machines - even login scripts were broken at this point.  And it just didn't make sense that Microsoft would have implemented all these security controls if they didn't work, so we continued researching.  We found the next clue at the end of Sean Greenbaum's post - patch MS16-075 / KB 3161561, released in June, which had reportedly caused issues for users trying to access SYSVOL shares.

The workaround listed was to set the SmbServerNameHardeningLevel to 0 under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
on the domain controller servers.  That registry key corresponds to the GPO security policy
Ensure 'Microsoft network server: Server SPN target name validation level' is set to 'Accept if provided by client' or higher
which was one of the settings that we'd changed the week before.  Setting that to Off changes SmbServerNameHardeningLevel to 0.  Once that change was made on the Domain Controller GPO and applied, all of our client issues were resolved.

Ultimately this came down to insufficient testing on our part, and it is one of the risks of hardening existing systems.

References:
https://blogs.technet.microsoft.com/askpfeplat/2015/02/22/guidance-on-deployment-of-ms15-011-and-ms15-014/

https://blogs.technet.microsoft.com/askpfeplat/2016/07/05/who-broke-my-user-gpos/

https://social.technet.microsoft.com/Forums/en-US/6a20e3f6-728a-4aa9-831a-6133f446ea08/gpos-do-not-apply-on-windows-10-enterprise-x64?forum=winserverGP

https://community.spiceworks.com/topic/1389891-windows-10-and-sysvol-netlogon

Friday, June 17, 2016

Veeam error after Hyper-V migration


In general I find Veeam Backup & Replication 9 performs brilliantly once it's configured.  But sometimes infrastructure changes can really throw it for a loop.  I recently had to shuffle several VMs around between Hyper-V hosts using the built-in Move command, and afterwards Veeam started throwing errors on some of the VMs.  (Task failed: failed to expand object.  Error: Cannot find VM on host...)


The main thing they all had in common was that they were configured to use alternate guest OS credentials (which Veeam uses to take the internal snapshots).  In the Veeam GUI these all appear to be tagged by VM name, but what I suspect is that on the back end Veeam latches onto either the GUID or the host server name, so Moving the VMs made it treat them as new entities.


The fix is a relatively straightforward but manual process: remove the affected VMs from the job, add them back from their new host servers, set the right credentials for them (under Guest Processing, Credentials...), then hit OK, then Finish.  That fixes that particular error so you won't see it again on the next run.



Tuesday, February 9, 2016

Configuring LDAP auth from Palo Alto PA-500 firewalls to Windows 2012 R2 AD servers


For the most part this is covered in the Palo Alto admin guides, but if, like me, you just wind up owning one of these at work and you don't have a bunch of time to decipher them, you might find this useful.  Especially since configuring Palo Altos is a lot like object-oriented programming: you have to 'build' out all your components and then chain them together, which makes troubleshooting more fun.

LDAP Config (using PAN-OS release 7.x):


Step 1 -

Device Tab -> Server Profiles -> LDAP.  From here Add a new Server Profile, give it a meaningful name like domain-ldap, and populate the server list.
Enter your Base DN.
Enter your Bind DN - in my case I created a dedicated service account and entered it in UPN format as 'accountname@domainname.com'.  Then enter the account's password so the firewall can bind to the directory.


For AD LDAP, go ahead and uncheck the Require SSL/TLS checkbox.

And Commit your changes

Step 2

Now go to the Authentication Profile (also on the Device Tab) and click Add.
Give it a meaningful name like ldap-authprofile.
Then choose the Server Profile that we created in step 1 from the drop down list.
The Login Attribute should be sAMAccountName.  (no, I don't know if that's case sensitive).
Important - Fill in the User Domain with the NETBIOS name of your domain.  Yes, I know it's 2016 and we're still stuck with it.  It'll make a difference later on if you try to do Group Filtering.
If you're setting up an Allow list then click the Advanced Tab and enter in the LDAP strings for your groups.



And Commit your changes

Optional Step 3 - Group filtering/search

If you're using Group Filtering, make sure to go under User Identification, then to the Group Mappings setup tab and Add those groups in.
Click Add, then choose the Server Profile that we created in Step 1.
Go to the Group Include List Tab, and drill down to your group.
Note:  if you can't drill down, then you don't have a working LDAP connection.  Check your settings and make sure your AD controllers are listening.  Also, keep in mind that the traffic will be coming from the MGT port on the Palo Alto, which may have a different IP.


Click Ok. Commit your changes.

At this point you should have a fully functional LDAP Authentication Profile which you can feed into other objects like Authentication Sequences, GlobalProtect Gateways, etc.

Troubleshooting tips:
The default caching period is about an hour.  If you're doing testing you'll want to force that cache to empty out.  From a console/ssh connection - run
debug user-id refresh group-mapping all
to refresh the LDAP cache.

PAN-OS 7.x also has a new feature to help you troubleshoot authentication from a command line. Details here:
http://dsg0.com/t/palo-atlo-networks-user-authentication-test-through-cli/273

Good Luck!