Friday, October 24, 2014

SQL Maintenance for FIM and any other databases


An easy way to take care of your FIM databases is to "use Ola Hallengren's script (http://ola.hallengren.com/scripts/MaintenanceSolution.sql). Download the script, adjust the backup paths and run the script on each instance of SQL Server. It will automatically create several jobs: some for maintaining the system databases and some for maintaining the user databases. You will need to create schedules for each of the jobs." -- FIM Best Practices Volume 1
I love using Ola's script for index maintenance because it is so much smarter than the Maintenance Plan Wizard, which wants to spend lots of time rebuilding indexes that only needed to be reorganized and messing with indexes that were just fine or too small to matter. A table with fewer than 1,000 pages is usually too small to matter. Less than 5% fragmentation? Why bother. Less than 20%? A reorganize will usually solve it. Over 20%? You should usually rebuild.
A benefit of using a smart index maintenance solution is that your transaction log backups won't be as large as they would be if you rebuilt every index.
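As a rough sketch of those thresholds (this is not Ola's script, just an illustration of the decision it automates, using the standard sys.dm_db_index_physical_stats DMV), you can see how your indexes stack up before the maintenance job runs:

 -- Illustration only: check fragmentation and suggest an action per index
 -- using the thresholds described above.
 SELECT OBJECT_NAME(ips.object_id)          AS TableName,
        i.name                              AS IndexName,
        ips.page_count,
        ips.avg_fragmentation_in_percent,
        CASE
            WHEN ips.page_count < 1000                  THEN 'Too small to matter'
            WHEN ips.avg_fragmentation_in_percent < 5   THEN 'Leave it alone'
            WHEN ips.avg_fragmentation_in_percent < 20  THEN 'Reorganize'
            ELSE                                             'Rebuild'
        END                                 AS SuggestedAction
 FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
 JOIN sys.indexes AS i
     ON i.object_id = ips.object_id AND i.index_id = ips.index_id
 WHERE ips.index_id > 0  -- skip heaps
 ORDER BY ips.avg_fragmentation_in_percent DESC;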

Friday, October 3, 2014

Mistaken Identity

Years ago, I walked into a client site a few months into an Identity Management project, and the PM told me his account had been deactivated by mistake: an employee with the same last name and the same first initial had been terminated, and they termed the PM's account instead.

Ironic.

A few years before that I visited a client whose VP of HR had his account disabled when they let the janitor go. Again same last name but this time the same first name.

What went wrong?

In both cases the AD account was linked to the wrong employee record.

How did that happen?

In the first example they had been diligently entering the employeeID into the employeeID field in AD long before Identity Management. The helpdesk had a tool to query the HR database to look up an employee ID. Apparently, the day this PM was hired, HR was a little slow or the helpdesk made a mistake. Either way, they plugged the wrong employeeID into his AD account. So when the other gentleman was termed, the script they ran (this was before we turned on FIM) disabled the PM's account too.

Garbage in, garbage out. While FIM was not the "perpetrator" it would have done the same thing acting on the wrong data.

In the VP of HR example, the initial joining was done using MIIS (a FIM predecessor) based on first name and last name. Somehow, in the intervening years, no one noticed that the wrong job title had been pushed into AD.

So how can you avoid this? You can't entirely, but you can reduce the number of occurrences. The first step is to understand the data you are given. The second step is to question the validity of the data -- especially if a human was involved. If the whole process has been automated, then any errors should be consistent throughout. A firm hiring George Cludgy (instead of Clooney) would have that data flow from HR out to AD and everywhere else with the correct employeeID. The name itself might be wrong, but at least it would be consistent. However, once a human gets involved to do data entry, even if they look it up, you have a chance for errors. So you can't take the presence of an employeeID in AD for granted. You must question its validity and confirm it.

I prefer to get dumps of HR and AD and use PowerShell to match them up. Just kidding -- this is a job for SQL. While PowerShell actually can do some matching, this really is a job for SQL.

Then, by running queries in my database before setting up FIM, I can get a good idea of the matches and non-matches. I can then get the client to confirm the matches and fix the non-matches.

Steps:
1) Look at and understand the data
2) Question its validity
   Did humans input the data?
3) Export from AD using csvde
4) Get an export of the employees
5) Load 3 and 4 into a SQL database
6) Write some queries joining based on employeeID (if present) -- see the sketch after this list
7) Look at the matches and come up with some additional ways to verify, such as including first name and last name
8) Use a nickname database to handle the David vs. Dave issues.
9) Use Fuzzy lookups from SSIS to generate possible matches.
10) Get the client to validate your matches, especially the possible matches
11) Get the client to work on the non-matches (these accounts may end up getting disabled if no match can be found)
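
As a rough sketch of step 6 (the table and column names here -- ADExport, HREmployees, FirstName, LastName -- are just assumptions about how the two exports were loaded, not a required schema), the first-pass queries might look something like this:

 -- Matches: AD accounts joined to HR records on employeeID,
 -- flagging any where the names disagree so they can be verified.
 SELECT ad.sAMAccountName,
        ad.employeeID,
        hr.FirstName,
        hr.LastName,
        CASE
            WHEN ad.givenName = hr.FirstName AND ad.sn = hr.LastName
                THEN 'Match on ID and name'
            ELSE 'Match on ID only -- verify the name'
        END AS MatchQuality
 FROM ADExport AS ad
 JOIN HREmployees AS hr
     ON ad.employeeID = hr.employeeID
 WHERE ad.employeeID IS NOT NULL;

 -- Non-matches: AD accounts with no corresponding HR record
 -- (candidates for step 11, and possibly for disabling).
 SELECT ad.sAMAccountName, ad.employeeID, ad.givenName, ad.sn
 FROM ADExport AS ad
 LEFT JOIN HREmployees AS hr
     ON ad.employeeID = hr.employeeID
 WHERE hr.employeeID IS NULL;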

Tuesday, September 16, 2014

Phoenix MVP Roadshow Transform the DataCenter Wed Sept 24 4 PM-8PM

Register Now! to attend the MVP Roadshow, Sept 24th, 4 PM - 8 PM

I will be presenting on why we want to get to Active Directory based on Windows Server 2012 R2 and how to get there. My fellow MVPs will be covering the rest of the agenda. I also created an IT Clue game to play in small groups where the objective is to figure out who stole the data and how it could have been prevented.

Presented by: MVP David Lundell, MVP Jason Helmick, MVP Rory Monaghan, MVP Tom Ziegmann

IT professionals face many challenges in their struggle to deliver the infrastructure, applications, and services that their organizations need.  Common issues include limited budgets, datacenter infrastructure complexity, and technical expertise to support a wide variety of changing goals.  New features in the Windows Server and Microsoft Azure platform can help address these problems by increasing resource utilization and by simplifying administration.
 This "Transform Your Datacenter MVP Roadshow" will focus on specific approaches, methods, and features that attendees can use to ultimately improve the services delivered to their users.  We'll begin by examining the issues that often prevent or delay infrastructure upgrades, and look at ways in which IT professionals can use modern approaches to overcome them.  Methods include leveraging cloud services where they make sense, and migrating from older OS's, such as Windows Server 2003.
 Next, we'll examine specific features in the Windows Server 2012 R2 platform that can help simplify and improve datacenter infrastructure.  Powerful features include updates such as iSCSI, SMB 3.0, Scale-Out File Server, data de-duplication, NIC teaming, and additional methods for improving your datacenter environment.  We'll also focus on virtualization features in the latest version of Hyper-V, including features for achieving low-cost high-availability, improved performance and scalability, and simplified administration. 
 Finally, we'll discuss ways in which you can take advantage of features in Windows Server 2012 R2 and services in Microsoft Azure to simplify and optimize your datacenter.  Topics include identifying the best candidate applications and services for moving to or integrating with the cloud, and methods of making these transformations.
 Overall, the focus will be on technical details and features that are available now to help IT pros optimize and transform their datacenter environments.  We hope you'll be able to join us!

Agenda 
4:00 – 4:30        Registration and Welcome/Dinner
 (Post/share whoppers, challenges, and questions through twitter and paper)
4:30 – 5:00        IT Clue game – in small groups
5:00 – 5:35        To Upgrade or not to Upgrade?
 §  Why you really need to upgrade from Windows Server 2003 or 2008! (Server Platform)
 §  Demo: Combating Configuration Drift with PowerShell
 §  Desired State Configuration Q&A
 §  Why you really need to upgrade your Active Directory from Windows Server 2003 or 2008 to 2012 R2! 
 §  Q&A
5:50 – 6:00        10 minute Break
6:00 – 7:00        Upgrading to Windows Server 2012 R2
 §  How to upgrade from Windows Server 2003
 §  How to upgrade from Windows Server 2008 
 §  Q&A
 §  How to upgrade AD from Windows Server 2003
 §  How to upgrade AD from Windows Server 2008 
 §  Q&A 
 7:00 – 8:00        Datacenter - Dealing with Application Compatibility and Delivery
 §  Discussion and Demos for strategizing Application Migration  
 §  Discussion and Demos of App-V for Application Delivery
  
 IT Clue game -- someone stole the data
 Wrap up

ADUC Common Queries: Days Since Last Logon

Recently a client asked me how Active Directory Users and Computers (ADUC) performs the Days Since Last Logon query found in the Find Dialog box's Common Queries option.

lastLogon is not replicated, so to really get it you have to query every single DC. So I was reasonably certain that the query didn't use lastLogon but rather lastLogonTimestamp, which was created "to help identify inactive computer and user accounts." Assuming default settings, "the lastLogonTimestamp will be 9-14 days behind the current date."

However, I couldn't find any documentation confirming that, so I had to test it. For all I knew it could have been querying all the DCs to get an accurate lastLogon.

So when I ran the query yesterday, the 15th of September, 120 days previous was 5/18. On the domain controller I was querying, the lastLogon of the account in question was 5/20, but the lastLogonTimestamp was 5/14. So I knew that if the ADUC query showed the account in question, it meant the query was using lastLogonTimestamp, because if it were using lastLogon (whether it queried all of the DCs or just that one), the account wouldn't show up.

Sure enough, the account showed up. Conclusion: ADUC's Days Since Last Logon query uses lastLogonTimestamp, as I expected.

Friday, July 4, 2014

Happy Independence Day -- Using PowerShell for Reporting

Unfortunately, my Independence Day is not free -- I am working. It just so happens I need to report on how many computer objects are getting migrated to a new AD forest each day: Day 1: 4, Day 2: 30, Day 3: 25, and so on.

Now I could have taken the data, imported it into SQL, and then busted out some awesome queries in no time flat. But my buddy Craig Martin keeps insisting how awesome this PowerShell stuff is. So I decided to give it a try; plus, if I can get it to work, it will be faster to run this repeatedly from PowerShell rather than needing to import the data into SQL Server. I am actually a big believer in using the right tool for the job. Otherwise you end up blaming the tool for failing you when you should have picked a different tool, one better suited for your task.

When working in a language of which I am not yet the master, I like to start small and build, so that I don't create 15 billion places to troubleshoot my code. So I start with Get-ADComputer and make certain that my filter, search base, search scope, and properties give me what I want:

 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -ResultSetSize 4000 -Properties whenCreated

whenCreated gives me the complete date and time, but I want to group and count by day. So I needed to transform whenCreated into a date with no time. The .Date property will work for that, but I struggled with how to get it into the pipeline for further processing. Eventually I discovered that I can use the @ symbol to denote a hash table and tell the Select-Object cmdlet to transform the property with an expression and give the result a new name. (Thanks Don Jones)
 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -ResultSetSize 4000 -Properties whenCreated | Select-Object -Property Name,@{Name="DateCreated"; Expression = {$_.WhenCreated.Date}}

 I later discovered I could do the same thing with the Group-Object cmdlet, which simplifies the pipeline. So I tack on:  | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement
to get:

 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -ResultSetSize 4000 -Properties whenCreated | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement

But then, to get a true sort by date rather than a textual sort, I once again need an expression, because the Group-Object cmdlet has transformed my DateTime values into strings (the Name property of each group). So I tack on:
| Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}

So all together with a little message at the beginning:
Write-Host "Daily totals of computer migrations"
Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,dc=com" -ResultSetSize 4000 -Properties whenCreated | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement | Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}
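
For comparison, had I gone the SQL Server route after all, the equivalent would be a simple GROUP BY. This is just a sketch -- the staging table and column names (MigratedComputers, WhenCreated) are assumptions about how the export would be loaded, not a real schema:

 -- Rough SQL equivalent of the PowerShell pipeline above, against an
 -- assumed staging table of migrated computer objects.
 SELECT CAST(WhenCreated AS date) AS DateCreated,
        COUNT(*)                  AS ComputersMigrated
 FROM MigratedComputers
 GROUP BY CAST(WhenCreated AS date)
 ORDER BY DateCreated;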

Tuesday, July 1, 2014

8 Time MVP

Today I received notification that for the 8th time (2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014) I have been honored by Microsoft as a Microsoft Most Valuable Professional (MVP). According to the MVP web site there are currently 10 Identity Management MVPs in the world, and only three in North America.

Looking forward to the ongoing journey with this product set and the wonderful friends I have made along the way: product group members (past and present), MVPs (past and present), readers (book, blog, twitter), and other Identity Management professionals.

Tuesday, June 24, 2014

Projects and Heisenberg's Uncertainty Principle

Is it done yet? What's the status? How much longer? If I get asked these questions too often on a project, I take a moment to explain Heisenberg's Uncertainty Principle, which states that you can't know both the position and velocity of an electron because in measuring the one you alter the other.

The old saying goes "a watched pot never boils," especially if you keep sticking a new thermometer into a heating pot of water every two seconds. Observations change the system. Frequent observations can change it even more.

On a project, when you get asked for status (or position), it alters your velocity. If you get asked often enough, your velocity slows and then halts, which isn't the kind of change leaders are looking for.

An article in the Wall Street Journal reveals that even interruptions as short as two seconds can lead to errors.

So observation affects the system. That doesn't mean we can go without measuring, just that leaders, managers, and project managers all need to keep in mind that the demand for constant updates alters the velocity (usually slowing it) of the people in the system.