Wednesday, December 24, 2014

'Twas the night before Christmas

'Twas the night before Christmas, when all through the internet
Not an identity was stirring, not even a Passport .NET
The user accounts requests were submitted with care
Hoping that their access would soon be there

The users were nestled all snug in their beds
While visions of being able to do their jobs danced in their heads
The servers and computers were in sleep mode
Awaiting someone to move a mouse and send the wake up code

An urgent email pinging my iPhone created a vibration
I sprang to my Surface to see what was the perturbation.
Opening up Windows 8.1, I signed in to the computer
I ran AD Users and Computers and Event Viewer

User accounts had been created and added to groups
All while I had slept after eating my soups
As I looked at my network, what should appear?
But a brand new Identity Management System so nice and clear

On Sync Engine, on Management Agent! Now MPRs and Workflows!
On Metaverse on Sync Rules!  On PowerShell and Data flows!
To the web service! To Self Service Password Resets!
Provision, Deprovision and Synchronize all the sets!


OK, OK, so maybe I am just a bit eager for the release of Microsoft Identity Manager (due out in the first half of 2015).

Friday, December 12, 2014

Speaking at 2015 Redmond Summit (Jan 27-29 '15)

I will be speaking at the 2015 Redmond Summit: Where Identity Meets Enterprise Mobility.
This summit is put on by my friends at Oxford Computer Group.

I will be speaking on Password Sync vs. ADFS. Then the next day I will speak on the Business track about How Identity Management Impacts the Bottom Line.

See you there!
 
January 27-29, 2015 in Redmond, WA on the Microsoft Campus

Join OCG, Microsoft, and industry experts for two and a half days of networking and talks on the latest thinking on identity and enterprise mobility. If you’re overwhelmed by devices, have a hybrid environment, wish to simplify access, or manage identity in an increasingly complex digital world, then you won’t want to miss this event. Sessions will assess and look in detail at the largest release of new identity products in Microsoft’s history, including Enterprise Mobility Suite, Intune, Azure Active Directory, Hybrid Identity, and more! Discover how other organizations have tackled the same problems you face through case studies and get technical insight from Microsoft product managers and engineers. Registration is $800 per delegate. Find out more and register!

Thursday, December 4, 2014

What AD Attributes are indexed? ANR? Tuple? PowerShell

Import-Module ActiveDirectory
# searchFlags is a bit mask, and the matching rule 1.2.840.113556.1.4.803 does a
# bitwise AND: bit 0 (1) = indexed, bit 2 (4) = ANR, bit 5 (32) = tuple index
Write-Host "Tuple Index Enabled Attributes"
Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) -SearchScope OneLevel -LDAPFilter "(searchFlags:1.2.840.113556.1.4.803:=32)" -Property objectClass, name, whenChanged, whenCreated, lDAPDisplayName | Out-GridView
Write-Host "ANR Enabled Attributes"
Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) -SearchScope OneLevel -LDAPFilter "(searchFlags:1.2.840.113556.1.4.803:=4)" -Property objectClass, name, whenChanged, whenCreated, lDAPDisplayName | Out-GridView
Write-Host "Indexed Enabled Attributes"
Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) -SearchScope OneLevel -LDAPFilter "(searchFlags:1.2.840.113556.1.4.803:=1)" -Property objectClass, name, whenChanged, whenCreated, lDAPDisplayName | Out-GridView

The above script is something I use to quickly see what is indexed in an AD environment.
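
If you just want to check a single attribute instead of gridding the whole schema, you can pull searchFlags and test the bits directly with -band. A minimal sketch (sAMAccountName is just an example; substitute any lDAPDisplayName):

Import-Module ActiveDirectory
$attr = Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) -LDAPFilter "(lDAPDisplayName=sAMAccountName)" -Properties searchFlags
$flags = $attr.searchFlags
[PSCustomObject]@{
    Attribute  = "sAMAccountName"
    Indexed    = [bool]($flags -band 1)   # bit 0 = indexed
    ANR        = [bool]($flags -band 4)   # bit 2 = ambiguous name resolution
    TupleIndex = [bool]($flags -band 32)  # bit 5 = tuple (medial search) index
}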

Friday, October 24, 2014

SQL Maintenance for FIM and any other databases


An easy way to take care of your FIM databases is to "use Ola Hallengren's script (http://ola.hallengren.com/scripts/MaintenanceSolution.sql). Download the script, adjust the backup paths and run the script on each instance of SQL Server. It will automatically create several jobs, some for maintaining the system databases and some for maintaining the user databases. You will need to create schedules for each of the jobs." -- FIM Best Practices Volume 1

I love using Ola's script for index maintenance because it is so much smarter than the Database Maintenance wizard, which wants to spend lots of time rebuilding indexes that only needed to be reorganized and messing with indexes that were just fine or too small to matter. A table with fewer than 1,000 pages is usually too small to matter. Under 5% fragmentation, why bother. Under 20%, a reorganize will usually solve it. Over 20%, you should usually rebuild.

A benefit of using a smart index maintenance solution is that your transaction log backups won't be as large as they would be if you rebuilt all indexes.
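
If you want to see where your indexes stand against those thresholds before scheduling the jobs, sys.dm_db_index_physical_stats will tell you. A minimal sketch using Invoke-Sqlcmd; the server name is a placeholder, and FIMService is just one of the databases you might point it at:

Import-Module SQLPS -DisableNameChecking
$query = @"
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.page_count,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.page_count < 1000 THEN 'Too small to matter'
            WHEN ips.avg_fragmentation_in_percent < 5  THEN 'Leave it alone'
            WHEN ips.avg_fragmentation_in_percent < 20 THEN 'Reorganize'
            ELSE 'Rebuild' END    AS Recommendation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;
"@
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "FIMService" -Query $query | Format-Table -AutoSize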

Friday, October 3, 2014

Mistaken Identity

Years ago, a few months into an Identity Management project, I walked into the client site and the PM told me his account had been deactivated: an employee with the same last name and same first initial had been terminated, and they termed the PM's account by mistake.

Ironic.

A few years before that I visited a client whose VP of HR had his account disabled when they let the janitor go. Again same last name but this time the same first name.

What went wrong?

In both cases the AD account was linked to the wrong employee record.

How did that happen?

In the first example, they had been diligently entering the employeeID into the employeeID field in AD long before Identity Management. The helpdesk had a tool to query the HR database to look up an employee ID. Apparently, the day this PM was hired, HR was a little slow or the helpdesk made a mistake. Either way, they plugged the wrong employeeID into his AD account. So when the other gentleman was termed, the script they ran (this was before we turned on FIM) disabled his account too.

Garbage in, garbage out. While FIM was not the "perpetrator," it would have done the same thing acting on the wrong data.

In the VP of HR example, the initial joining was done using MIIS (a FIM predecessor) based on first name and last name. Somehow, in the intervening years, no one noticed that the wrong job title had been pushed into AD.

So how can you avoid this? You can't entirely, but you can reduce the # of occurrences. The first step is to understand the data you are given. The second step is to question the validity of the data -- especially if a human was involved. If the whole process has been automated then any errors should be consistent throughout. A firm hiring George Cludgy (instead of Clooney) would have that data flow from HR out to AD and everywhere else with the correct employeeID. The name itself might be wrong but at least it would be consistent. However, if a human gets involved to do data entry, even though they look it up, you have a chance for errors. So you can't take the presence of an employeeID in AD for granted. You must question its validity and confirm it.

I prefer to get dumps of HR and AD and use PowerShell to match them up. Just kidding: while PowerShell actually can do some matching, this really is a job for SQL.

Then, by running queries in my database before setting up FIM, I can get a good idea of the matches and non-matches. I can then get the client to confirm the matches and fix the non-matches.

Steps:
1) Look at and understand the data
2) Question its validity
   Did humans input the data?
3) Export from AD using csvde
4) Get an export of the employees
5) Load 3 and 4 into a SQL database
6) Write some queries joining based on employeeID (if present) -- see the sketch after this list
7) Look at the matches and come up with some additional ways to verify, such as including first name and last name
8) Use a nickname database to handle the David vs. Dave issues
9) Use Fuzzy Lookups from SSIS to generate possible matches
10) Get the client to validate your matches, especially the possible matches
11) Get the client to work on the non-matches (these accounts may end up getting disabled if no match can be found)
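
To give a flavor of step 6, here is a sketch of the matching queries. The table and column names (HR_Employees, AD_Users and so on) are hypothetical -- adjust them to however you loaded the extracts in step 5:

Import-Module SQLPS -DisableNameChecking
$query = @"
-- Matches on employeeID, flagging rows where the names disagree
SELECT hr.EmployeeID, hr.FirstName, hr.LastName,
       ad.sAMAccountName, ad.GivenName, ad.Surname,
       CASE WHEN hr.FirstName = ad.GivenName
             AND hr.LastName  = ad.Surname THEN 'Match'
            ELSE 'Verify - names disagree' END AS Verdict
FROM HR_Employees hr
JOIN AD_Users ad ON ad.EmployeeID = hr.EmployeeID;

-- AD accounts with no employeeID match (candidates for steps 7-11)
SELECT ad.sAMAccountName, ad.GivenName, ad.Surname
FROM AD_Users ad
LEFT JOIN HR_Employees hr ON hr.EmployeeID = ad.EmployeeID
WHERE hr.EmployeeID IS NULL;
"@
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "IdMMatching" -Query $query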

Tuesday, September 16, 2014

Phoenix MVP Roadshow: Transform the Datacenter, Wed Sept 24, 4 PM - 8 PM

Register Now! to attend the MVP Roadshow, Sept 24th, 4 PM - 8 PM

I will be presenting on why we want to get to Active Directory based on Windows Server 2012 R2 and how to get there. My fellow MVPs will be covering the rest of the agenda. I also created an IT Clue game to play in small groups where the objective is to figure out who stole the data and how it could have been prevented.

Presented by: MVP David Lundell, MVP Jason Helmick, MVP Rory Monaghan, MVP Tom Ziegmann

IT professionals face many challenges in their struggle to deliver the infrastructure, applications, and services that their organizations need. Common issues include limited budgets, datacenter infrastructure complexity, and technical expertise to support a wide variety of changing goals. New features in the Windows Server and Microsoft Azure platform can help address these problems by increasing resource utilization and by simplifying administration.

This "Transform Your Datacenter MVP Roadshow" will focus on specific approaches, methods, and features that attendees can use to ultimately improve the services delivered to their users. We'll begin by examining the issues that often prevent or delay infrastructure upgrades, and look at ways in which IT professionals can use modern approaches to overcome them. Methods include leveraging cloud services where they make sense, and migrating from older OSes, such as Windows Server 2003.

Next, we'll examine specific features in the Windows Server 2012 R2 platform that can help simplify and improve datacenter infrastructure. Powerful features include updates such as iSCSI, SMB 3.0, Scale-Out File Server, data de-duplication, NIC teaming, and additional methods for improving your datacenter environment. We'll also focus on virtualization features in the latest version of Hyper-V, including features for achieving low-cost high availability, improved performance and scalability, and simplified administration.

Finally, we'll discuss ways in which you can take advantage of features in Windows Server 2012 R2 and services in Microsoft Azure to simplify and optimize your datacenter. Topics include identifying the best candidate applications and services for moving to or integrating with the cloud, and methods of making these transformations.

Overall, the focus will be on technical details and features that are available now to help IT pros optimize and transform their datacenter environments. We hope you'll be able to join us!

Agenda
4:00 – 4:30    Registration and Welcome/Dinner
               (Post/share whoppers, challenges, and questions through Twitter and paper)
4:30 – 5:00    IT Clue game – in small groups
5:00 – 5:35    To Upgrade or not to Upgrade?
  • Why you really need to upgrade from Windows Server 2003 or 2008! (Server Platform)
  • Demo: Combating Configuration Drift with PowerShell
  • Desired State Configuration Q&A
  • Why you really need to upgrade your Active Directory from Windows Server 2003 or 2008 to 2012 R2!
  • Q&A
5:50 – 6:00    10 minute break
6:00 – 7:00    Upgrading to Windows Server 2012 R2
  • How to upgrade from Windows Server 2003
  • How to upgrade from Windows Server 2008
  • Q&A
  • How to upgrade AD from Windows Server 2003
  • How to upgrade AD from Windows Server 2008
  • Q&A
7:00 – 8:00    Datacenter – Dealing with Application Compatibility and Delivery
  • Discussion and Demos for strategizing Application Migration
  • Discussion and Demos of App-V for Application Delivery

IT Clue game – someone stole the data
Wrap up

ADUC Common Queries: Days Since Last Logon

Recently, a client asked me how Active Directory Users and Computers (ADUC) performs the Days Since Last Logon query found in the Find dialog box's Common Queries option.

LastLogon is not replicated, so to really get it you have to query every single DC. I was reasonably certain the query didn't use LastLogon but rather LastLogonTimestamp, which was created "to help identify inactive computer and user accounts." Assuming default settings, "the lastLogontimeStamp will be 9-14 days behind the current date."

However, I couldn't find any documentation confirming that, so I had to test it. For all I knew it could have been querying all the DCs to get an accurate LastLogon.

So when I ran the query yesterday, the 15th of September, 120 days previous was 5/18. On the domain controller I was querying, the lastLogon of the account in question was 5/20, but the LastLogonTimeStamp was 5/14. So I knew that if the ADUC query showed the account in question, it was using LastLogonTimeStamp, because if it used LastLogon (whether querying all of the DCs or just the one) the account wouldn't show up.

Sure enough, the account showed up. Conclusion: ADUC's Days Since Last Logon query uses LastLogonTimeStamp, as I expected.
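
If you want to run the same check without the Find dialog, you can reproduce the query in PowerShell. A minimal sketch using the same 120-day window; since lastLogonTimestamp is replicated (just 9-14 days stale), querying one DC is enough, which is the same tradeoff ADUC makes:

Import-Module ActiveDirectory
# Accounts whose replicated last logon is at least 120 days old
$cutoff = (Get-Date).AddDays(-120).ToFileTime()
Get-ADUser -LDAPFilter "(lastLogonTimestamp<=$cutoff)" -Properties lastLogonTimestamp |
    Select-Object Name, @{Name="LastLogonTimestamp"; Expression={[DateTime]::FromFileTime($_.lastLogonTimestamp)}}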

Friday, July 4, 2014

Happy Independence Day -- Using PowerShell for Reporting

Unfortunately, my Independence Day is not free -- I am working. It just so happens I need to report on when computer objects are getting migrated to a new AD forest: Day 1: 4, Day 2: 30, Day 3: 25, etc.

Now I could have taken the data, imported it into SQL, and then busted out some awesome queries in no time flat. But my buddy Craig Martin keeps insisting how awesome this PowerShell stuff is. So I decided to give it a try; plus, if I can get it to work, it will be faster to run this repeatedly from PowerShell rather than needing to import it into SQL Server. I am actually a big believer in using the right tool for the job. Otherwise you end up blaming the tool for failing you when you should have picked a different tool, one better suited for your task.

When working in a language of which I am not yet the master, I like to start small and build, so that I don't create 15 billion places to troubleshoot my code. So we start with Get-ADComputer. I made certain that my filter, SearchBase, SearchScope and properties give me what I want:

 Get-ADComputer -filter * -searchscope subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -Resultsetsize 4000 -Properties whenCreated

whenCreated gives me the complete date and time, but I want to group and count by day. So I needed to transform whenCreated into a date with no time. The .Date property will work for that, but I struggled with how to get it into the pipeline for further processing. Eventually I discovered that I can use the @ symbol to denote a hash table and tell the Select-Object cmdlet to transform it with an expression and give the result a new name. (Thanks Don Jones)
 Get-ADComputer -filter * -searchscope subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -Resultsetsize 4000 -Properties whencreated  | Select-Object -Property Name,@{Name="DateCreated"; Expression = {$_.WhenCreated.Date}} 

 I later discovered I could do the same thing with the Group-Object cmdlet, which simplifies the command. So I tack on:  | Group-Object  @{Expression = {$_.WhenCreated.Date}}  -NoElement 
to get:

 Get-ADComputer -filter * -searchscope subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -Resultsetsize 4000 -Properties whenCreated | Group-Object  @{Expression = {$_.WhenCreated.Date}}  -NoElement 

But then in sorting, if I want a true sort by date rather than a textual sort, I once again need to use an expression, because the Group-Object cmdlet has transformed my DateTime values into strings. So I tack on:
| Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}

So all together with a little message at the beginning:
Write-host "Daily totals of computer migrations"
Get-ADComputer -filter * -searchscope subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -Resultsetsize 4000 -Properties whencreated  | Group-Object  @{Expression = {$_.WhenCreated.Date}}  -NoElement | Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}

Tuesday, July 1, 2014

8 Time MVP

Today I received notification that for the 8th time (2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014) I have been honored by Microsoft as a Microsoft Most Valuable Professional (MVP). According to the MVP web site there are currently 10 Identity Management MVPs in the world, and only three in North America.

Looking forward to the ongoing journey with this product set and the wonderful friends I have made along the way: product group members (past and present), MVPs (past and present), readers (book, blog, Twitter) and other Identity Management professionals.

Tuesday, June 24, 2014

Projects and Heisenberg's Uncertainty Principle

Is it done yet? What's the status? How much longer? If I get asked these questions too often on a project, I take a moment to explain Heisenberg's Uncertainty Principle, which states that you can't know both the position and velocity of an electron, because in measuring the one you alter the other.

The old saying goes "a watched pot never boils," especially if you keep sticking a new thermometer into the heating pot of water every two seconds. Observations change the system. Frequent observations can change it even more.

On a project, when you get asked for status (or position), it alters your velocity. If you get asked often enough, your velocity slows and then halts -- which isn't the kind of change leaders are looking for.

An article in the Wall Street Journal reveals that even interruptions as short as two seconds can lead to errors.

So observation affects the system. That doesn't mean that we can go without measuring, just that leaders, managers and project managers all need to keep in mind that the demand for constant updates alters the velocity (usually slowing) of the people in the system.

Thursday, May 1, 2014

To Farm, or not to Farm, that is the question --

  • Whether 'tis nobler in the mind to suffer
  • the slings and arrows of outrageous fortune
  • Or to take Farms against a sea of patches
  • and by opposing end them? To die, to sleep --

Today I will be "moderating" the debate about using SharePoint Farms vs. Stand-Alone as the foundation for the FIM Portal. In this corner we have Paul Williams of Microsoft sharing knowledge from his hard-fought victories with FIM and painful experiences with Farms. In the other corner we have Spencer Harbar, SharePoint MVP, applying his years of SharePoint expertise to the FIM world, providing a definitive guide to installing the FIM 2010 R2 SP1 portal on SharePoint 2013.

Spencer points out that "farm deployment aspects are generally not well understood by FIM practitioners which leads to a number of common deployment and operational challenges." Point conceded. I saw much of the same thing with regards to MIIS and ILM when it came to deploying SQL Server.

Spencer argues that "the general idea is to build a small dedicated SharePoint instance purely for the purposes of hosting the FIM Portal [and FIM Service] and nothing else (although it could also host the Password Registration and Reset web sites)" and that by deploying a farm instead of Stand-Alone, the "craptastic demoware default configuration,"  you can avoid "a bunch of unnecessary goop."   Note: Assuming Spencer knows, but just to clarify for everyone, the Password Portals use IIS and do not need or use SharePoint.

An example of the "unnecessary goop" is the SharePoint Search Service Application, which when installed then requires us to turn off the SharePoint Indexing job. A benefit of avoiding "a bunch of stuff we don’t want" is that it "minimizes the attack surface available." Minimizing attack surface is a good thing.

Spencer opines that "the Standalone Server Type... is evil, pure and simple." He also decries the use of "Embedded SQL."

Paul shares some compelling experience-based evidence for using Stand-Alone instead of a farm, stating that a farm gives you a "serious headache when patching ... more operational maintenance" (more SQL Server databases to manage instead of the "optimised WID [embedded SQL] files that are totally expendable") "and more complexity around backup and restore (or rebuild) and patching SharePoint itself," not to mention when you need to "[modify] web.config."

Patching needs to be explored further. According to Paul, you must "patch ... nodes sequentially," which "takes quite a bit longer than a standalone node" because "the FIM installer isn’t optimised for a farm," which would normally "deploy the solution pack once"; instead we have one installer for the FIM Service and Portal, meaning the patch is the same. Since you need to patch the FIM Service on each node, you must run the patch on each node, which will also see the FIM Portal, "retract the solution pack and deploy the new one," which in turn causes "all application pools related to SharePoint to be recycled." Since "the retraction and redeployment is global (to SharePoint)," that means "that downtime affects all nodes in the farm – you can’t drop one out of the NLB array, patch, add back, drop the next, etc." Whereas if you do Stand-Alone you can "drop one out of the NLB array, patch, add back, drop the next, etc."

I know that with some of the pre-R2 updates I have been able to run the patch on the first node of the farm, installing FIM Service and Portal, and then on the second node just installing the patch for the FIM Service, since the Portal bits had already been updated. I need to double check whether this is still the case (since then most of our installs have been stand-alone).

Paul continues with the woes of the Language Packs: they "comprise some file system files for the FIM Service and SharePoint solution packs," which for a farm means repeated downtime for the whole farm as each node is Language Packed. If you need Language Packs, then a farm is still bad news for downtime, even if the method I have used still applies for the service pack and hotfixes.

Pros for SharePoint Stand-Alone for FIM

  • Setup is simple (it creates the first site collection, plus database and site for you)
  • Don't have to have a separate SQL instance (which you must make highly available to avoid a single point of failure) to manage, back up, etc.
  • Can patch one server at a time without taking down the whole NLB array of FIM servers (also each node is faster to patch)

Pros for SharePoint Farm for FIM

  • Can get a much smaller attack surface by not installing "unnecessary goop"
  • Avoid the overhead of running the Windows Internal Database/SQL Express Edition (aka Embedded SQL) on each node (overhead that we haven't seen cause FIM performance issues)
  • Can deploy pure SharePoint items and CSS files once instead of to each node

Perhaps there are ways to get the most of the best of both worlds.

  1. Install one Single Server SharePoint Farm for each FIM Portal node
    1. Upside: You avoid the painful patching process and Language Pack process
    2. Upside: Done right, you have the smaller attack surface (you would get complete control)
    3. Downside: More complex installation, but you could use the very complete scripts from Spencer to do this
    4. Downside: Shoot, where do I want to put all those databases? I could put them on the SQL Server that will host the other FIM databases
  2. Separate the FIM Service from the FIM Portal
    1. Upside: This way, the patching and Language Packs that impact the portal should only need to be done once, but you still have downtime for the whole farm
    2. Upside: Smaller attack surface
    3. Upside: Pure SharePoint items and CSS files get deployed and configured only once
    4. Downside: more VMs/machines to manage and more FIM Server licenses to buy
  3. Install Stand-Alone and find a way to reduce the "attack surface" by eliminating some of the "unnecessary goop"
    1. This has most of the upsides and few of the downsides if we can find a way to do it
    2. Spencer: This is where I would love to have your expert opinion: How to reduce the attack surface on SharePoint Stand-Alone.

Conclusion: Initially, working with Brad Turner, I went with Farms, but when I saw the Language Pack issues I thought Stand-Alone. Also, when trying to keep it simple for non-SharePoint admins, I thought Stand-Alone. As always there are trade-offs, and I want to see more discussion before we settle on a single answer or even a definitive decision tree for which one to choose. For now I lean towards each FIM Portal and Service node having its own SharePoint Stand-Alone instance, but I would love to advance the state of the art with better security and possibly performance.

All: Give me your thoughts on one vs. the other or on the additional options.

Note: Ross Currie also provides a guide resulting from his hard-fought battle to get FIM on SharePoint 2013.

Note: Paul and Spencer AFAIK have never actually carried out a debate on this topic.

Wednesday, April 30, 2014

MIM's the word -- New name for FIM

Last week the product group announced the new name for FIM, and MIM's the word: Microsoft Identity Manager.

Of course as a good futurist I had made enough guesses that I got this one right, even though as an honest man I must admit I also had it wrong -- Azure is not part of the name.

Fortunately, they didn't go with APE nor AILMENT, nor MIME, nor MIAMI, nor MICE, nor MAIM, nor WIMP. MIM's the word!

Hopefully, many of my readers have been entertained by my speculation. It has been fun. So now back to real work ... what will it be called in the release after the next one?

Hmm...

  • Hybrid Identity Manager (HIM) -- Too sexist
  • Hybrid Identity Provisioning Engine (HIPE) -- Hype -- nah
  • Hybrid Identity Access Engine -- (HIAE) -- pronounced Hi yah! I could go for that one!

Friday, April 18, 2014

Mailbag: Learning FIM, SQL and IIS

Recently, a reader reached out to me for advice on learning FIM, SQL and IIS, as well as guidance on setting up a lab (more advice on that part in a later post).

First think for a moment about your best learning styles for technology. Do you need to read the concepts and architecture first and then do it? Do you need to watch a video and then read, and then do it? Do you need to try it and then go back and read? Do you need an instructor? Sometimes you have to learn through experimentation. In the early days of ILM 2 Beta there wasn't much info so we had to experiment. Brad Turner and I spent many days in a lab configuring and trying things out to see what was the best practice.

Fortunately there are a fair number of videos, articles, virtual labs and classes on all three subjects. In general I find the virtual labs to be a great way to get some quick hands-on knowledge without having to labor endlessly to set up your own lab. Not that you won't get something out of that experience, but sometimes you need to pick up tidbits or try something out before deciding you need to set up a more permanent lab to experiment with.

FIM, SQL and IIS rely on Windows Server, Active Directory and networking. It is surprising how many issues get resolved through knowledge of basic networking and its troubleshooting tools. Understand how client applications use DNS to find what they are looking for and SPNs to authenticate through Kerberos. If you are shaky or want a refresher, start with those topics.

For FIM I would start with the Ramp Up training. It provides you with video, lab manuals and the virtual labs. Of course I also recommend my book. There is also another FIM book by Kent Nordstrom. Beyond that here is a great list of resources: http://social.technet.microsoft.com/wiki/contents/articles/399.forefront-identity-manager-resources.aspx#Learning_FIM_TwentyTen

SQL: this is more in the context of what you need to know about SQL to support FIM. Start with a presentation I gave a few years ago at The Experts Conference on the care and feeding of the databases, as it gives you some perspective on what you need to know about SQL to support FIM: configuring overall memory for SQL, TempDB configuration, index management, backups, transaction logs, and recovery models. The last chapter of FIM Best Practices Volume 1 covers how to intelligently automate your SQL maintenance.
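
As a taste of the first item, here is a minimal sketch of capping SQL Server memory so it leaves room for the OS (and anything else on the box). The 8192 MB figure and server name are placeholders; size them for your own hardware:

Import-Module SQLPS -DisableNameChecking
$query = @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query $query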

If you want to start learning SQL queries, try http://www.ilmbestpractices.com/files/I_Dream_in_SQL.zip or take the Microsoft course.

IIS: Again this is in the context of what you need to know about IIS to support FIM.
Overview of IIS 8 (Windows 8 and Windows Server 2012) http://technet.microsoft.com/en-us/library/hh831725.aspx

Overview of IIS 7 http://technet.microsoft.com/en-us/library/cc753734(v=WS.10).aspx

Great post comparing how IIS 6 through 8 deal with SSL.

Intro to IIS 8 virtual lab http://go.microsoft.com/?linkid=9838455



Thursday, April 17, 2014

New name for FIM?

Did you know that if you subscribe to Azure AD Premium you also get licenses for FIM? Well, if that isn't a hand tipper I don't know what is. I think we can safely assume the next version of FIM will have Azure in the name. Safe or not, I am going to speculate that it will.

Azure Identity Manager (AIM) -- I would be ok with this
Azure Role Based Access Manager (ARBAM) -- Explosive sounding name
Azure Provisioning Engine (APE) -- Please no!!
Azure Identity Technology (AIT) -- pronounced 8 or aight. Nah.
Azure Identity Sync Lifecycle Engine (AISLE) -- Certainly when people walk down the aisle they have an identity-changing event.
Azure Identity Lifecycle Management Engine Next Technology (AILMENT) -- I really hope not; we want to cure ailments, not install one for you.

My Official guess -- Azure Identity Enhancements (AIE)

Unless we have already seen the new product name -- Azure Active Directory Premium (AADP).
Maybe the on-premise version will have a slightly different name
Azure Active Directory Premium On Premises Edition (AAD POPE)

The above has been pure speculation. I have no inside knowledge on the name.

Hints of FIM's Future: Azure Active Directory (AAD) Sync

For years I have been trying to predict the future of Identity Management, but every time I look in my crystal ball it is just too cloudy to see anything. In fact, anytime I look in my crystal ball on just about any technology topic, the only thing it shows me is clouds! I was beginning to think it was broken.

But then, yesterday, I watched Andreas Kjellman present at the FIM user group. Andreas unveiled AADSync, the Azure Active Directory Sync that will replace DirSync for syncing from your Active Directory to the cloud. I finally got it! My crystal ball wasn't broken!

AADSync is built on the next generation of the Sync Engine. 80% of the scenarios for syncing with Azure (Office 365) will be handled with a wizard, including multi-forest. For more advanced scenarios you will be able to use a significantly upgraded function library to do "declarative provisioning" with sync rules. In fact, no code for rules extensions will be permitted.

What does this mean for FIM?

I speculate that eventually FIM will follow this path. Since this next version seems to support the same connector framework, I think we will continue to see connector development as well as continued cloud capabilities a la Azure Access Enhancements and Azure AD Premium.

Thanks to the user group sponsor -- The FIM Team -- and to host Carol Wapshere for putting it together and eventually providing the recording, found here: http://thefimteam.com/fim-team-user-group/

AADSync is available now in Preview.

Wednesday, April 16, 2014

Good RID(ance, I mean issuance)

As we know, a SID is composed of several components, among them a 96-bit (12-byte) domain identifier and the relative identifier, or RID, of a particular object. The usable RID space is 30 bits, which means you have approximately 1 billion RIDs. So while you may think it unlikely that you will run out of RIDs, according to http://TechNet.microsoft.com/en-us/library/jj574229.aspx you can encounter this if you have accidentally used scripts or provisioning tools (like FIM) to shoot yourself in the foot and create gobs and gobs of users, let an end user go out of control creating waaaay too many groups, increased the RID Block size to be too big, or did lots of DC demotion and promotion, cleanups, forest recoveries or invalidated RID pools.

In short, most of you would be more likely to encounter this in a test or dev environment where you destroy and create many, many users as part of your testing with FIM.

So Windows Server 2012 comes to the rescue:
1) It adds a bit, so now you can unlock that bit and have 31 bits for the RID, or 2 billion RIDs.
2) You get warnings in the event log whenever you consume 10% of the space left since your last warning.
3) There is now a safety mechanism: you can't increase the RID Block size to higher than 15,000. Previously there was no limit, and you could have allocated the entire RID space in one transaction to one domain controller.
4) There are also brakes. When you are within 1 percent of having only 10% of your global RID space left, you get warned, and there is also an artificial ceiling so that you can fix whatever is chewing up your RIDs before you run out.
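
You don't have to wait for the event log to see where you stand. The RID Manager object's rIDAvailablePool attribute packs the global pool ceiling and the count of RIDs issued into one large integer; a minimal sketch to decode it:

Import-Module ActiveDirectory
# High 32 bits = ceiling of the global RID space; low 32 bits = RIDs issued
$dn = "CN=RID Manager`$,CN=System,$((Get-ADDomain).DistinguishedName)"
$pool = (Get-ADObject -Identity $dn -Properties rIDAvailablePool).rIDAvailablePool
$ceiling = [math]::Floor($pool / [math]::Pow(2, 32))
$issued  = $pool - ($ceiling * [math]::Pow(2, 32))
"{0:N0} RIDs issued out of {1:N0} ({2:P2} used)" -f $issued, $ceiling, ($issued / $ceiling)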

In short good RID(ance I mean issuance).