Wednesday, April 16, 2014

Good RID(ance, I mean issuance)

As we know, a SID is composed of several components, among them a 96-bit (12-byte) domain identifier and the relative identifier, or RID, of a particular object. The RID is 30 bits long, which means you have approximately 1 billion RIDs. So while you may think it unlikely that you will run out of RIDs, according to http://TechNet.microsoft.com/en-us/library/jj574229.aspx you can run into this if you have accidentally used scripts or provisioning tools (like FIM) to shoot yourself in the foot and create gobs and gobs of users, let some end user go out of control creating waaaay too many groups, increased the RID Block Size too far, or done lots of DC demotions and promotions, cleanups, forest recoveries, or RID pool invalidations.

In short, most of you are more likely to encounter this in a test or dev environment where you destroy and create many, many users as part of your testing with FIM.

So Windows Server 2012 to the rescue.
1) It adds a bit: you can unlock a 31st bit for the RID, which takes you to roughly 2 billion RIDs.
2) You get warnings in the event log every time you consume another 10% of the RID space that remained at your last warning.
3) There is now a safety mechanism: you can't increase the RID Block Size beyond 15,000. Previously there was no limit, and you could have allocated the entire RID space to one domain controller in a single transaction.
4) There are also brakes. When you are within one percent of having only 10% of your global RID space left you get warned, and there is also an artificial ceiling so that you can fix whatever is chewing up your RIDs before you run out. A quick way to check where you stand is sketched below.
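
If you want to see how much of the global RID space a domain has already issued, one way is to read the rIDAvailablePool attribute off the RID Manager$ object (dcdiag /test:RidManager /v will show you similar information). Here is a minimal PowerShell sketch, assuming the ActiveDirectory module (RSAT) is installed and you have rights to read the System container:

# Minimal sketch: report how much of the global RID space this domain has issued.
# Assumes the ActiveDirectory module is available and you can read CN=RID Manager$.
Import-Module ActiveDirectory

$domainDN   = (Get-ADDomain).DistinguishedName
$ridManager = Get-ADObject "CN=RID Manager`$,CN=System,$domainDN" -Properties rIDAvailablePool

# rIDAvailablePool packs two 32-bit numbers into one 64-bit value:
# the high half is the top of the global RID space, the low half is the next RID to be issued.
[int64]$pool = $ridManager.rIDAvailablePool
$maxRid      = $pool -shr 32
$issued      = $pool - ($maxRid -shl 32)

"{0:N0} of {1:N0} RIDs issued ({2:P1})" -f $issued, $maxRid, ($issued / $maxRid)

The warnings described above fire on their own, but it doesn't hurt to glance at this number in a lab where you churn through test users.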

In short good RID(ance I mean issuance).

Wednesday, November 13, 2013

FIM Deprecated Features FIM TEAM user group meeting

So in 1 hr and 20 min I will present on:

November 13, 2013 21:00 UTC
See when this is in your timezone
David Lundell
Impact of deprecated features. This session will go over various deprecated features that the FIM product group has announced are to be eliminated in future releases, such as XMA v1 (ECMA v1), transaction properties, multi-mastery and equal precedence, with advice on planning for and working around their future absence.

Friday, October 4, 2013

DirSync w/ domain if NetBios and FQDN don't match

If one of your AD domains has a NetBIOS domain name that doesn't match the leftmost part of your FQDN, you need to have the Replicating Directory Changes permission granted to your AD MA account. This is documented in a few places, including my book. However, DirSync misses this step. Normally, DirSync does a very good job of installing and configuring everything you need without requiring you to be an expert in FIM, but this is one thing it misses.
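
If you're not sure whether a domain is affected, comparing the NetBIOS name to the leftmost DNS label is quick to script. A rough PowerShell sketch using the ActiveDirectory module (the warning text is just illustrative):

# Flag any domain in the forest whose NetBIOS name differs from the leftmost label of its FQDN.
Import-Module ActiveDirectory

foreach ($domainName in (Get-ADForest).Domains) {
    $domain   = Get-ADDomain -Identity $domainName
    $dnsLabel = $domain.DNSRoot.Split('.')[0]
    if ($domain.NetBIOSName -ne $dnsLabel) {
        Write-Warning "$($domain.DNSRoot): NetBIOS name is '$($domain.NetBIOSName)' - grant Replicating Directory Changes on the Configuration partition to the AD MA account"
    }
}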

For example, if the FQDN is Exchange.loc but the NetBIOS name of the domain is Snappy, then you would use this command to solve the issue:

DSACLS "CN=Configuration,dc=exchange,dc=loc" /G "exchange\Grp_DirChanges:CA;Replicating Directory Changes;"

Declarative or Bust!

Michael Pearn from down under wrote about his experience trying to use just Declarative Sync Rules.

His experience -- especially the religious debates -- is similar to my own. It made me recall my presentation at TEC 2012, the FIM 2010 R2 Showdown: Classic vs. Declarative

The vast majority of old hands at the presentation declared for Classic both before and after the presentation. During the presentation I attempted to count anything you could do without code as declarative, whether it came from a sync rule or not, especially if it was a new feature. But the crowd wouldn't let me claim anything configured in the sync engine as declarative. In Michael's post, by contrast, only classic code counts as not declarative.

Michael found that he needed classic code for Advanced Join rules, for doing anything with multi-valued attributes other than just flowing them, and for "Converting Binary values to ISO8601 Datetime." In his example he could have modified the SQL query that gets the data from HR and avoided the need for the Advanced Join rule.
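
On that last item, a real rules extension would do the conversion in C# or VB.NET inside its import flow, but the heart of it is a single .NET call. Here it is sketched in PowerShell with a made-up FILETIME value, covering both the case where the attribute arrives as a 64-bit number and the case where it arrives as a byte array:

# Sketch: converting an AD FILETIME value (e.g. pwdLastSet, accountExpires) to an ISO 8601 string.
$fileTime = 130500478290000000                  # hypothetical value: 100-ns ticks since 1601-01-01 UTC
[DateTime]::FromFileTimeUtc($fileTime).ToString('yyyy-MM-ddTHH:mm:ss.000')

# If the attribute shows up as binary instead of a number, reassemble the Int64 first.
$bytes = [BitConverter]::GetBytes($fileTime)    # stand-in for the binary attribute value
[DateTime]::FromFileTimeUtc([BitConverter]::ToInt64($bytes, 0)).ToString('yyyy-MM-ddTHH:mm:ss.000')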

In addition to Michael's list, here are some other things that you may need classic code to do:

  • Advanced Filtering scenario to cause disconnections
  • Changing the Metaverse object type when a connector joins to it
  • Join Resolution Rules
  • Manual Precedence Import Flows
  • Provision objects with Auxiliary classes
  • Decide on a case by case basis whether to deprovision as a Disconnector, Explicit Disconnector or just delete the object.

But what does the FIM Sync engine bring us beyond what we had in ILM 2007 FP1?

  • A lot less need for an MV deletion rules extension, since we can indicate that disconnection from any of a list of MAs should trigger MV deletion, or that deletion should happen when the object is disconnected from all MAs (ignoring this one and that one)
  • Many, many attribute transformations can be done with Sync Rules and don't need code
  • A way to do fairly sophisticated provisioning logic without code (Transition Sets, MPRs, Workflows, OutBound Sync Rules)
  • R2: A way to do some basic provisioning logic with filter based Outbound Sync rules that performs pretty decently and doesn't require a detour through the FIM Service
  • New ways to trigger deprovisioning (Transition Sets, MPRs, Workflows, OutBound Sync Rules)
  • OU creation w/o code (that sure is nice, and no worries about the OU being a connector to the MV object that first needed it)
  • DN Rename w/o code (also very nice)

In my presentation I contrasted the new sync rules with the classic config and code:

[Slides comparing the declarative sync rule approach with the equivalent classic configuration and code]

Finally, here were my recommendations, which many disagreed with, but it sounds like Michael would agree:

[Slide: recommendations]

There are some high-volume scenarios where sync rules are too slow, and there are still some customers who get everything they want out of the classic sync engine and don't want to pay for CALs.

Wednesday, September 11, 2013

Windows 2012 R2 and Windows 8.1 RTM now on MSDN and Technet

One of my fellow MVPs and Insight teammates, Alessandro Cardoso (he runs one of our practices down under), announced on his blog that Windows Server 2012 R2 and Windows 8.1 RTM are now on MSDN and TechNet.

He goes on to mention the salient points around 2012 R2 for virtualization, so I thought I would discuss some of the benefits for Active Directory and ADFS.

One key thing is that ADFS on Windows Server 2012 R2 doesn't require IIS, so now it can and should be installed on domain controllers.
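
Since IIS is no longer in the picture, getting the role onto a box is just a feature install; a quick sketch (the farm configuration itself is a separate step via the AD FS configuration wizard or Install-AdfsFarm):

# Sketch: add the AD FS role on Windows Server 2012 R2 (no IIS prerequisite anymore).
Install-WindowsFeature ADFS-Federation -IncludeManagementTools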

But the most exciting aspect is the enhancement to security, specifically mobile security. With the advent of Workplace Join (or Join to Workplace), mobile devices (iOS and Windows) can be registered with the directory and participate in SSO.

One of the best enhancements to ADFS is the ability to "Set [multi-factor authentication] requirement for all extranet access or conditionally based on the user’s identity, network location or a device that is used to access protected resources." http://technet.microsoft.com/en-us/library/dn280949.aspx
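
The extranet case, for example, can be expressed as a global additional-authentication claim rule. This is a sketch of the commonly used rule that triggers MFA whenever the request lacks the inside-corporate-network claim; the same thing can be set per relying party or through the Authentication Policies UI:

# Sketch: require multi-factor authentication for requests arriving from outside the corporate network.
Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules @'
c:[type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"]
 => issue(type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
          value = "http://schemas.microsoft.com/claims/multipleauthn");
'@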

Thursday, August 15, 2013

MS13-066 causes ADFS 2.0 problems

Microsoft put out a release the day before yesterday (8/13/13) to fix a security vulnerability in ADFS 2.0.

It caused an outage for SSO with Office365 for a customer of ours (they had the servers set to auto update).

http://technet.microsoft.com/en-us/security/bulletin/ms13-066

http://support.microsoft.com/kb/2843639

http://support.microsoft.com/kb/2843638

At the moment we recommend NOT installing these updates.
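
If your servers auto-update like our customer's did, it is worth checking whether the patches have already landed and, if so, backing them out. A quick sketch (a reboot may still be needed, and as noted below the rollback only got us part of the way):

# Sketch: see whether the problem updates are installed, and remove them if so.
Get-HotFix -Id KB2843638, KB2843639 -ErrorAction SilentlyContinue

wusa.exe /uninstall /kb:2843638 /quiet /norestart
wusa.exe /uninstall /kb:2843639 /quiet /norestart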

We saw the following error repeated for every authentication attempt:

Event ID 111 Federation service encountered an error while processing the ws-trust request.

Exception Details:

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation ---> System.TypeLoadException: Could not load type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=3.5.0.0

It goes on with the stack trace.

Uninstalling these updates has partially restored service: Outlook and Lync now work, but Outlook Web Access and SharePoint Online still aren't working.

In fact they pulled the patch later in the day yesterday.

Monday, July 8, 2013

Is the Password dead? Gotta eat what you kill!

At last year's Cloud Identity Summit in Vail I heard a lot about how the password is dead. I expect to hear a lot more this year.

Most of it fit into one of several categories:

  1. Complaints about why passwords should be dead
    1. In other words, all of the various problems with passwords -- and there are many
  2. Schemes to have various applications depend on someone else's password
    1. While this is helpful it doesn't kill the password
  3. Schemes for authentication that don't quite apply.

Last year, when talking about DMZs, Gunnar Peterson said, "You have to eat what you kill," meaning you have to provide replacement functionality.

As I was recently reminded by a business analyst co-worker, you always have to start with the requirements. So what are the requirements for a password replacement? Well, we need to consider the requirements from several viewpoints:

  1. The consumer end-user
  2. The Business To Consumer (B2C) website developers and admins
  3. The corporate end-user
  4. Those developing apps principally for consumption by corporate users
  5. Corporate IT Security
  6. Legal departments responsible for reducing the liability of #2 and #4

The password killer that best meets the expectations of all of these groups should become the most widely adopted.

So in the next several posts I will explore what each of these viewpoints wants in a password killer.

Then I plan on evaluating all of the password killers I find against these criteria.