Tuesday, September 16, 2014

Phoenix MVP Roadshow: Transform the Datacenter, Wed Sept 24, 4 PM – 8 PM

Register Now! to attend the MVP Roadshow, Sept 24th, 4 PM – 8 PM

I will be presenting on why you want to get your Active Directory onto Windows Server 2012 R2 and how to get there. My fellow MVPs will be covering the rest of the agenda. I also created an IT Clue game to play in small groups, where the objective is to figure out who stole the data and how it could have been prevented.

Presented by: MVP David Lundell, MVP Jason Helmick, MVP Rory Monaghan, MVP Tom Ziegmann

IT professionals face many challenges in their struggle to deliver the infrastructure, applications, and services that their organizations need. Common issues include limited budgets, datacenter infrastructure complexity, and the technical expertise needed to support a wide variety of changing goals. New features in the Windows Server and Microsoft Azure platforms can help address these problems by increasing resource utilization and by simplifying administration.

This "Transform Your Datacenter MVP Roadshow" will focus on specific approaches, methods, and features that attendees can use to improve the services delivered to their users. We'll begin by examining the issues that often prevent or delay infrastructure upgrades, and look at ways in which IT professionals can use modern approaches to overcome them. Methods include leveraging cloud services where they make sense and migrating from older OSes, such as Windows Server 2003.

Next, we'll examine specific features in the Windows Server 2012 R2 platform that can help simplify and improve datacenter infrastructure. Powerful updates include iSCSI, SMB 3.0, Scale-Out File Server, data deduplication, NIC teaming, and additional methods for improving your datacenter environment. We'll also focus on virtualization features in the latest version of Hyper-V, including features for achieving low-cost high availability, improved performance and scalability, and simplified administration.

Finally, we'll discuss ways in which you can take advantage of features in Windows Server 2012 R2 and services in Microsoft Azure to simplify and optimize your datacenter. Topics include identifying the best candidate applications and services for moving to or integrating with the cloud, and methods of making these transformations.

Overall, the focus will be on technical details and features that are available now to help IT pros optimize and transform their datacenter environments. We hope you'll be able to join us!

Agenda

4:00 – 4:30    Registration and Welcome/Dinner
  (Post/share whoppers, challenges, and questions through Twitter and on paper)
4:30 – 5:00    IT Clue game – in small groups
5:00 – 5:35    To Upgrade or not to Upgrade?
  §  Why you really need to upgrade from Windows Server 2003 or 2008! (Server Platform)
  §  Demo: Combating Configuration Drift with PowerShell Desired State Configuration
  §  Q&A
  §  Why you really need to upgrade your Active Directory from Windows Server 2003 or 2008 to 2012 R2!
  §  Q&A
5:50 – 6:00    10-minute Break
6:00 – 7:00    Upgrading to Windows Server 2012 R2
  §  How to upgrade from Windows Server 2003
  §  How to upgrade from Windows Server 2008
  §  Q&A
  §  How to upgrade AD from Windows Server 2003
  §  How to upgrade AD from Windows Server 2008
  §  Q&A
7:00 – 8:00    Datacenter – Dealing with Application Compatibility and Delivery
  §  Discussion and Demos for strategizing Application Migration
  §  Discussion and Demos of App-V for Application Delivery
IT Clue game – someone stole the data
Wrap up

ADUC Common Queries: Days Since Last Logon

Recently a client asked me how Active Directory Users and Computers (ADUC) performs the Days Since Last Logon query found in the Find Dialog box's Common Queries option.

LastLogon is not replicated, so to really get it you have to query every single DC. So I was reasonably certain that the query didn't use lastLogon but rather used lastLogonTimestamp, which was created "to help identify inactive computer and user accounts." Assuming default settings, "the lastLogontimeStamp will be 9-14 days behind the current date."
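To illustrate what "really get it" means in practice, here is a minimal sketch (the account name is a hypothetical placeholder) that asks every DC for its unreplicated lastLogon and keeps the newest value:

 # A sketch: ask each DC for its non-replicated lastLogon; keep the newest.
 # "SomeUser" is a hypothetical sAMAccountName; assumes each DC returns a value.
 $newest = (Get-ADDomainController -Filter * | ForEach-Object {
         Get-ADUser "SomeUser" -Server $_.HostName -Properties lastLogon
     } | Measure-Object -Property lastLogon -Maximum).Maximum
 [datetime]::FromFileTime($newest)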

However, I couldn't find any documentation confirming that, so I had to test it. For all I knew it could have been querying all the DCs to get an accurate lastLogon.

So when I ran the query yesterday, the 15th of September, 120 days previous was 5/18. On the domain controller I was querying, the lastLogon of the account in question was 5/20, but the lastLogonTimestamp was 5/14. The query looks for accounts that have not logged on within the window, so an account only shows up if the attribute being checked is older than 5/18. That meant that if the ADUC query showed the account in question, it was using lastLogonTimestamp; if it was using lastLogon (whether querying all of the DCs or just the one), the account wouldn't show up.

Sure enough, the account showed up. Conclusion: ADUC's Days Since Last Logon query uses lastLogonTimestamp, as I expected.
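If you want to script the equivalent check, here is a minimal sketch, assuming default settings and the ActiveDirectory module:

 # A sketch of an equivalent query: accounts whose lastLogonTimestamp is
 # more than 120 days old (mirroring what ADUC appears to do).
 $cutoff = (Get-Date).AddDays(-120)
 Get-ADUser -Filter { lastLogonTimestamp -lt $cutoff } -Properties lastLogonTimestamp |
     Select-Object Name, @{Name="LastLogonTimestamp"; Expression = {[datetime]::FromFileTime($_.lastLogonTimestamp)}}

 # Or let Search-ADAccount do the date math (it also reads lastLogonTimestamp):
 Search-ADAccount -UsersOnly -AccountInactive -TimeSpan (New-TimeSpan -Days 120)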

Friday, July 4, 2014

Happy Independence Day -- Using PowerShell for Reporting

Unfortunately, my Independence Day is not free -- I am working. It just so happens I need to report on when computer objects are getting migrated to a new AD forest: Day 1: 4, Day 2: 30, Day 3: 25, etc.

Now I could have taken the data and imported it into SQL and then busted out some awesome queries in no time flat. But my buddy Craig Martin keeps insisting how awesome this PowerShell stuff is. So I decided to give it a try; plus, if I can get it to work, it will be faster to run this repeatedly from PowerShell than to import it into SQL Server each time. I am actually a big believer in using the right tool for the job. Otherwise you end up blaming the tool for failing you when you should have picked a different tool, one better suited to your task.

When working in a language of which I am not yet the master, I like to start small and build, so that I don't create 15 billion places to troubleshoot my code. So we start with Get-ADComputer, making certain that my filter, search base, search scope, and properties give me what I want:

 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,DC=com" -ResultSetSize 4000 -Properties whenCreated

whenCreated gives me the complete date and time, but I want to group and count by day, so I needed to transform whenCreated into a date with no time. The .Date property will work for that, but I struggled with how to get it into the pipeline for further processing. Eventually I discovered that I can use the @ symbol to denote a hash table and tell the Select-Object cmdlet to transform the value with an expression and give the result a new name. (Thanks, Don Jones)
 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,DC=com" -ResultSetSize 4000 -Properties whenCreated | Select-Object -Property Name,@{Name="DateCreated"; Expression = {$_.WhenCreated.Date}}

I later discovered I could do the same thing with the Group-Object cmdlet, which simplifies the command set. So I tack on: | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement
to get:

 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,DC=com" -ResultSetSize 4000 -Properties whenCreated | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement

But to get a true sort by date rather than a textual sort, I once again need an expression, because the Group-Object cmdlet has transformed my DateTime values into strings. So I tack on:
| Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}

So, all together, with a little message at the beginning:
Write-Host "Daily totals of computer migrations"
Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,DC=com" -ResultSetSize 4000 -Properties whenCreated | Group-Object @{Expression = {$_.WhenCreated.Date}} -NoElement | Sort-Object @{Expression = {[datetime]::Parse($_.Name) }}
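A variant worth noting (a sketch of the same idea, not what I ran): sort on whenCreated before grouping, and the groups come out in date order on their own, since Group-Object emits groups in the order it first encounters each key -- no string parsing required:

 Write-Host "Daily totals of computer migrations"
 # Sorting first makes the groups appear chronologically.
 Get-ADComputer -Filter * -SearchScope Subtree -SearchBase "OU=Workstations,DC=Domain,DC=com" -ResultSetSize 4000 -Properties whenCreated |
     Sort-Object -Property whenCreated |
     Group-Object -Property { $_.WhenCreated.Date } -NoElement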

Tuesday, July 1, 2014

8 Time MVP

Today I received notification that for the 8th time (2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014) I have been honored by Microsoft as a Microsoft Most Valuable Professional (MVP). According to the MVP web site there are currently 10 Identity Management MVPs in the world, and only three in North America.

Looking forward to the ongoing journey with this product set and the wonderful friends I have made along the way: product group members (past and present), MVPs (past and present), readers (book, blog, Twitter), and other Identity Management professionals.

Tuesday, June 24, 2014

Projects and Heisenberg's Uncertainty Principle

Is it done yet? What's the status? How much longer? If I get asked these questions too often on a project, I take a moment to explain Heisenberg's Uncertainty Principle, which states that you can't know both the position and velocity of an electron, because measuring the one alters the other.

The old saying goes "a watched pot never boils," especially if you keep sticking a new thermometer into a heating pot of water every two seconds. Observations change the system. Frequent observations can change it even more.

On a project, when you get asked for status (or position), it alters your velocity. If you get asked often enough, your velocity slows and then halts -- which isn't the kind of change leaders are looking for.

An article in the Wall Street Journal reveals that even interruptions as short as two seconds can lead to errors.

So observation affects the system. That doesn't mean we can go without measuring, just that leaders, managers, and project managers all need to keep in mind that the demand for constant updates alters the velocity (usually slowing it) of the people in the system.

Thursday, May 1, 2014

To Farm, or not to Farm, that is the question --

  • Whether 'tis nobler in the mind to suffer
  • the slings and arrows of outrageous fortune
  • Or to take Farms against a sea of patches
  • and by opposing end them? To die, to sleep --

Today I will be "moderating" the debate about using SharePoint Farms vs. Stand-Alone as the foundation for the FIM Portal. In this corner we have Paul Williams of Microsoft, sharing knowledge from his hard-fought victories with FIM and painful experiences with farms. In the other corner we have Spencer Harbar, SharePoint MVP, applying his years of SharePoint expertise to the FIM world, providing a definitive guide to installing the FIM 2010 R2 SP1 portal on SharePoint 2013.

Spencer points out that "farm deployment aspects are generally not well understood by FIM practitioners which leads to a number of common deployment and operational challenges." Point conceded. I saw much of the same thing with regards to MIIS and ILM when it came to deploying SQL Server.

Spencer argues that "the general idea is to build a small dedicated SharePoint instance purely for the purposes of hosting the FIM Portal [and FIM Service] and nothing else (although it could also host the Password Registration and Reset web sites)" and that by deploying a farm instead of Stand-Alone (the "craptastic demoware default configuration"), you can avoid "a bunch of unnecessary goop." Note: I assume Spencer knows this, but just to clarify for everyone: the Password Portals use IIS and do not need or use SharePoint.

An example of the "unnecessary goop" is the SharePoint Search Service Application, which, when installed, requires us to turn off the SharePoint indexing job. A benefit of avoiding "a bunch of stuff we don’t want" is that it "minimizes the attack surface available." Minimizing attack surface is a good thing.

Spencer opines that "the Standalone Server Type... is evil, pure and simple." He also decries the use of "Embedded SQL."

Paul shares some compelling experience-based evidence for using Stand-Alone instead of a farm, stating that a farm gives you a "serious headache when patching ... more operational maintenance" (more SQL Server databases to manage instead of the "optimised WID [embedded SQL] files that are totally expendable") "and more complexity around backup and restore (or rebuild) and patching SharePoint itself," not to mention when you need to "[modify] web.config."

Patching needs to be explored further. According to Paul, you must "patch ... nodes sequentially," which "takes quite a bit longer than a standalone node" because "the FIM installer isn’t optimised for a farm," which would normally "deploy the solution pack once." Instead we have one installer for the FIM Service and Portal, meaning the patch is the same for both. Since you need to patch the FIM Service on each node, you must run the patch on each node, which will also see the FIM Portal, "retract the solution pack and deploy the new one," which in turn causes "all application pools related to SharePoint to be recycled." Since "the retraction and redeployment is global (to SharePoint)," that means "that downtime affects all nodes in the farm – you can’t drop one out of the NLB array, patch, add back, drop the next, etc." Whereas if you do Stand-Alone, you can "drop one out of the NLB array, patch, add back, drop the next, etc."

I know that with some of the pre-R2 updates I have been able to run the patch on the first node of the farm, installing the FIM Service and Portal, and then on the second node just install the patch for the FIM Service, since the Portal bits had already been updated. I need to double-check whether this is still the case (since then, most of our installs have been stand-alone).

Paul continues with the woes of the language packs: they "comprise some file system files for the FIM Service and SharePoint solution packs," which for a farm means repeated downtime for the whole farm as each node gets its language packs installed. If you need language packs, then a farm is still bad news for downtime, even if the method I have used still applies for the service pack and hotfixes.

Pros for SharePoint Stand-Alone for FIM

  • Setup is simple (it creates the first site collection, plus database and site, for you)
  • You don't need a separate SQL instance (which you must make highly available to avoid a single point of failure) to manage, back up, etc.
  • You can patch one server at a time without taking down the whole NLB array of FIM servers (and each node is faster to patch)

Pros for SharePoint Farm for FIM

  • You get a much smaller attack surface by not installing the "unnecessary goop"
  • You avoid the overhead of running the Windows Internal Database/SQL Express Edition (aka Embedded SQL) on each node (overhead that we haven't seen cause FIM performance issues)
  • You can deploy pure SharePoint items and CSS files once instead of to each node

Perhaps there are ways to get the best of both worlds:

  1. Install one single-server SharePoint farm for each FIM Portal node
    1. Upside: You avoid the painful patching process and Language Pack process
    2. Upside: Done right, you have the smaller attack surface (you get complete control)
    3. Downside: More complex installation, but you could use the very complete scripts from Spencer to do this (a minimal sketch of the farm-creation step follows this list)
    4. Downside: Shoot, where do I want to put all those databases? I could put them on the SQL Server that will host the other FIM databases
  2. Separate the FIM Service from the FIM Portal
    1. Upside: This way the patching and language packs that impact the portal should only need to be applied once, though there is still downtime for the whole farm
    2. Upside: Smaller attack surface
    3. Upside: Pure SharePoint items and CSS files get deployed and configured only once
    4. Downside: More VMs/machines to manage and more FIM Server licenses to buy
  3. Install Stand-Alone and find a way to reduce the "attack surface" by eliminating some of the "unnecessary goop"
    1. This has most of the upsides and few of the downsides, if we can find a way to do it
    2. Spencer: This is where I would love to have your expert opinion: how to reduce the attack surface on SharePoint Stand-Alone
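For option 1, the farm-creation step might look something like this minimal sketch (the database names, SQL server, and Central Admin port are hypothetical placeholders; Spencer's scripts cover far more, such as service accounts and managed paths):

 # A sketch of creating a dedicated single-server SharePoint farm for FIM.
 # All names and the port below are hypothetical placeholders.
 Add-PSSnapin Microsoft.SharePoint.PowerShell
 $passphrase = ConvertTo-SecureString "PlaceholderPassphrase1!" -AsPlainText -Force
 New-SPConfigurationDatabase -DatabaseName "FIM_SP_Config" -DatabaseServer "SQL01" `
     -AdministrationContentDatabaseName "FIM_SP_AdminContent" `
     -Passphrase $passphrase -FarmCredentials (Get-Credential)
 Install-SPHelpCollection -All
 Initialize-SPResourceSecurity
 Install-SPService
 Install-SPFeature -AllExistingFeatures
 New-SPCentralAdministration -Port 9999 -WindowsAuthProvider "NTLM"
 Install-SPApplicationContent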

Conclusion: Initially, working with Brad Turner, I went with farms, but when I saw the Language Pack issues I thought Stand-Alone. Also, when trying to keep it simple for non-SharePoint admins, I thought Stand-Alone. As always there are trade-offs, and I want to see more discussion before we settle on a single answer, or even a definitive decision tree for choosing between them. For now I lean towards each FIM Portal and Service node having its own SharePoint Stand-Alone instance, but I would love to advance the state of the art with better security and possibly performance.

All: Give me your thoughts on one vs. the other or on the additional options.

Note: Ross Currie also provides a guide resulting from his hard-fought battle to get FIM onto SharePoint 2013.

Note: Paul and Spencer AFAIK have never actually carried out a debate on this topic.

Wednesday, April 30, 2014

MIM's the word -- New name for FIM

Last week the product group announced the new name for FIM, and MIM's the word: Microsoft Identity Manager.

Of course, as a good futurist, I had made enough guesses that I got this one right, even though, as an honest man, I must admit I also had it wrong -- Azure is not part of the name.

Fortunately, they didn't go with APE nor AILMENT, nor MIME, nor MIAMI, nor MICE, nor MAIM, nor WIMP. MIM's the word!

Hopefully, many of my readers have been entertained by my speculation. It has been fun. So now back to real work ... what will it be called in the release after the next one?

Hmm...

  • Hybrid Identity Manager (HIM) -- Too sexist
  • Hybrid Identity Provisioning Engine (HIPE) -- Hype -- nah
  • Hybrid Identity Access Engine -- (HIAE) -- pronounced Hi yah! I could go for that one!