Ask the Directory Services Team

Friday Mail Sack: Mothers day pfffft… when is son’s day?


Hi folks, Ned here again. It’s been a little while since the last sack, but I have a good excuse: I just finished writing a poop ton of Windows Server 2012 depth training that our support folks around the world will use to make your lives easier (someday). If I ever open MS Word again it will be too soon, and I’ll probably say the same thing about PowerPoint by June.

Anyhoo, let’s get to it. This week we talk about:

Question

Is it possible to use any ActiveDirectory module cmdlets through invoke-command against a remote non-Windows Server 2012 DC where the module is installed? It always blows up for me as it tries to “locally” (remotely) use the non-existent ADWS with error “Unable to contact the server. This may be because the server does not exist, it is currently down, or it does not have the active directory web services running”


Answer

Yes, but you have to ignore that terribly misleading error and put your thinking cap on: the problem is your credentials. When you invoke-command, you make the remote server run the local PowerShell on your behalf. In this case that remote command has to go off-box to yet another remote server – a DC running ADWS. This means a multi-hop credential scenario. Provide –credential (get-credential) to your called cmdlets inside the curly braces and it’ll work fine.
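For example, a minimal sketch (server and account names hypothetical):

Invoke-Command -ComputerName mgmt01.contoso.com -ScriptBlock {
    Import-Module ActiveDirectory
    # explicit credentials let the second hop to the DC running ADWS succeed
    Get-ADUser -Identity someuser -Credential (Get-Credential)
}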

Question

We are using a USMT /hardlink migration to preserve disk space and increase performance. However, performance is crazy slow and we’re actually running out of disk space on some machines that have very large files like PSTs. My scanstate log shows:

Error [0x000000] Write error 112 for C:\users\ned\Desktop [somebig.pst]. Windows error 112 description: There is not enough space on the disk.[gle=0x00000070]

Error [0x080000] Error 2147942512 while gathering object C:\users\ned\Desktop\somebig.pst. Shell application requested abort![gle=0x00000070]

Answer

These files are encrypted and you are using /efs:copyraw instead of /efs:hardlink. Encrypted files are copied into the store whole instead of hardlink'ing, unless you specify /efs:hardlink. If you had not included /efs, this file would have failed with, "File X is encrypted. Use the /efs option to specify a different way to handle this file".
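For reference, a hardlink command line along these lines (store path and XML file names assumed; /hardlink requires /nocompress):

scanstate.exe C:\MigStore /hardlink /nocompress /efs:hardlink /i:migdocs.xml /i:migapp.xml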

Yes, I realize that we should probably just require that option. But think of all the billable hours we just gave you!

Question

I was using your DFSR pre-seeding post and am finding that robocopy /B slows down my migration compared to not using it. Is that switch required for pre-seeding?

Answer

The /B mode, while inherently slower, ensures that files are copied using the backup API regardless of permissions. It is the safest way, so I took the prudent route when I wrote the sample command. It’s definitely expected to be slower – in my semi-scientific repros, the difference was ~1.75 times slower on average.

However, /B is not required if you are 100% sure you have at least READ permissions to all files. The downside is that a lot of permission failures might end up making things even slower than just going with /B; you will have to test it.

If you are using Windows Server 2012 and have plenty of hardware to back it up, you can use the following options that really make the robocopy fly, at the cost of memory, CPU, and network utilization (and possibly, some files not copying at all):

Robocopy <foo> <bar> /e /j /copyall /xd dfsrprivate /log:<sna.foo> /tee /mt:128 /r:1

For those that have used this before, it will look pretty similar – but note:

  • Adds the /J option (first introduced in the Windows 8 robocopy) - it performs unbuffered IO, which means gigantic files like ISO and VHD really fly and a 1Gbps network is finally heavily utilized. Adds significant memory overhead, naturally.
  • Adds /MT:128 to use 128 simultaneous file copy threads. Adds CPU overhead, naturally.
  • Removes /B and /R:6 in order to guarantee the fastest copy method. Make sure you review the log and recopy any failures individually, as you are now skipping any files that failed to copy on the first try.

 

Question

Recently I came across a user account that keeps locking out (yes, I've read several of your blogs where you say account lockout policies are bad: "Turning on account lockouts is a way to guarantee someone with no credentials can deny service to your entire domain"). We get Event ID 4740 saying the account has been locked out, but the caller computer name is blank:

Log Name:      Security
Event ID:      4740
Level:         Information
Description:
A user account was locked out.

Subject:
   Security ID:    SYSTEM
   Account Name:   someaccount
   Account Domain: somedomain
   Logon ID:       0x3e7

Account That Was Locked Out:
   Security ID:    somesid
   Account Name:   someguy

Additional Information:
   Caller Computer Name:

The 0xC000006A status indicates a bad password attempt. This happens every 5 minutes and eventually results in the account being locked out. We can see that the bad password attempts are coming via COMP1 (which is a proxy server), but we can't work out what is sending the requests to COMP1 because the caller computer name is blank there as well (there should be a computer name).

Are we missing something here? Is there something else we could be doing to track this down? Is the calling computer name being blank indicative of some other problem or just perhaps means the calling device is a non-Microsoft device?

Answer

(I am going to channel my inner Eric here):

A blank computer name is not unexpected, unfortunately. The audit system relies on the sending computers to provide that information as part of the actual authentication attempt. Kerberos does not have a reliable way to provide the remote computer info in many cases. Name resolution info about a sending computer is also easily spoofed. This is especially true with transitive NTLM logons, where we are relying on one computer to provide info for another computer. NTLM provides names but they are also easily spoofed so even when you see a computer name in auditing, you are mainly asking an honest person to tell you the truth.

Since it happens very frequently and predictably, I’d configure a network capture on the sending server to run in a circular fashion, then wait for the lockout and stop the trace. You’d see all of the traffic and know exactly who sent it. If the lockout were longer running and less predictable, I’d configure the capture to trace circularly until that 4740 event writes, then examine the sending IP address and hunt down that machine.
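As a sketch, the in-box netsh tracing (Windows 7 and later) can run such a circular capture; the output path here is hypothetical:

netsh trace start capture=yes filemode=circular maxsize=512 tracefile=C:\temp\lockout.etl

Wait for the next 4740 event, then stop the trace and examine the sending IP:

netsh trace stop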

[And the customer later noted that since it’s a proxy server, it has lots of logs – and they told him the offender]

Question

I am testing USMT 5.0 and finding that if I migrate certain Windows 7 computers to Windows 8 Consumer Preview, Modern Apps won’t start. Some have errors, some just start then go away.

Answer

Argh. The problem here is Windows 7’s built-in manifest that implements microsoft-windows-com-base, which then copies this registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\OLE

If the DCOM permissions are modified in that key, they migrate over and interfere with the ones needed by Modern Apps to run. This is a known issue, already fixed so that we don’t copy those values onto Windows 8 anymore. It was never a good idea in the first place, as any application needing special permissions will just set its own anyway when installed.

And it’s burned us in the past too…

Question

Are there any available PowerShell, WMI, or command-line options for configuring an OCSP responder? I know that I can install the feature with the Add-WindowsFeature, but I'd like to script configuring the responder and creating the array.

Answer

[Courtesy of Jonathan “oh no, feet!” Stephens – Ned]

There are currently no command-line tools or dedicated Windows PowerShell cmdlets available to perform management tasks on the Online Responder. You can, however, use the COM interfaces IOCSPAdmin and IOCSPCAConfiguration to manage the revocation providers on the Online Responder.

  1. Create an IOCSPAdmin object.
  2. The IOCSPAdmin::OCSPCAConfigurationCollection property will return an IOCSPCAConfigurationCollection object.
  3. Use IOCSPCAConfigurationCollection::CreateCAConfiguration to create a new revocation provider.
  4. Make sure you call IOCSPAdmin::SetConfiguration when finished so the Online Responder gets updated with the new revocation configuration.

Because these are COM interfaces, you can call them from VBScript or PowerShell, so you have great flexibility in how you write your script.
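For illustration, a rough PowerShell sketch of that flow – untested, and the ProgID plus exact method signatures are assumptions based on the documented interfaces, so verify against MSDN before relying on it:

$ocsp = New-Object -ComObject CertAdm.OCSPAdmin            # assumed ProgID
$ocsp.GetConfiguration("responder01.contoso.com", $true)   # read the current config
$ocsp.OCSPCAConfigurationCollection | ForEach-Object { $_.Identifier }
# ...create your new revocation provider with CreateCAConfiguration(), then commit:
$ocsp.SetConfiguration("responder01.contoso.com", $true)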

Question

I want to use Windows Desktop Search with DFS Namespaces but according to this TechNet Forum thread it’s not possible to add remote indexes on namespaces. What say you?

Answer

There is no DFSN+WDS remote index integration in any OS, including Windows 8 Consumer Preview. At its heart, this comes down to being a massive architectural change in WDS that just hasn’t gotten traction. You can still point to the targets as remote indexes, naturally.

Question

Certain files – as pointed out here by AlexSemi – that end with invalid characters like a dot or a space break USMT migration. One way to create these files is to use the echo command with a device path, like so:

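A repro sketch of what the screenshot showed (path hypothetical); the \\?\ prefix bypasses Win32 path normalization, so the trailing dot survives:

echo hello> \\?\C:\temp\badfile.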

These files can’t be opened by anything in Windows, it seems.


When you try to migrate, you end up with a fatal “Windows error 2: the system cannot find the file specified” error unless you skip the files using /C.

What gives?

Answer

Quit making invalid files! :-)

USMT didn’t invent CreateFile() so its options here are rather limited… USMT 5.0 handles this case correctly through error control - it skips these files when hardlink’ing because Windows returns that they “don’t exist”. Here is my scanstate log using USMT 5.0 beta, where I used /hardlink and did NOT provide /C:

[screenshot: scanstate log showing the invalid file being skipped]

In the case of non-hardlink, scanstate copies them without their invalid names and they become non-dotted/non-spaced valid files (even in USMT 4.0). To make it copy these invalid files with the actual invalid name would require a complete re-architecting of USMT or the Win32 file APIs. And why – so that everyone could continue to not open them?

Other Stuff

In case you missed it, Windows 8 Enterprise Edition details. With all the new licensing and activation goodness, Enterprise versions are finally within reach of any size customer. Yes, that means you!

Very solid Mother’s Day TV mash up (a little sweary, but you can’t fight something that combines The Wire, 30 Rock, and The Cosbys)

Zombie mall experience. I have to fly to Reading in June to teach… this might be on the agenda

Well, it’s about time - Congress doesn't "like" it when employers ask for Facebook login details

Your mother is not this awesome:

[photo: a Skyrim birthday cake]
That, my friend, is a Skyrim birthday cake

SportsCenter wins again (thanks Mark!)

Don’t miss the latest Between Two Ferns (veeerrrry sweary, but Zach Galifianakis at his best; I just wish they’d add the Tina Fey episode)

But what happens if you eat it before you read the survival tips, Land Rover?!

 

Until next time,

- Ned “demon spawn” Pyle


What's Causing that DFSR Change Storm?


[This is another guest post from our pal Mark in Oz. Even if you don’t care about DFSR, I highly recommend this post; it teaches some very clever log analysis techniques, useful in a variety of troubleshooting scenarios – The Neditor]

Hi there! It’s Mark Renoden – Premier Field Engineer in Sydney, Australia – here again. Today I’m going to talk about an issue where a customer's DFSR environment lost file updates and they’d see regular alerts in SCOM about replication backlog. While I was on site working with them, I came up with a few creative ideas about how to use the DFSR debug logs that led us to the root cause.

The problem at hand was that a large DFS replication backlog would accumulate from time to time. Finding the root cause meant understanding the trend in changes to files in the replica. To do this, we needed to use the debug logs as our data source: to manipulate them so that they would tell the story.

With the aid of some custom scripts and tools, and a lab environment, I’m going to simulate their experience and talk through the investigation.

Test Lab Setup

The test lab I’ve used for this post is pretty simple: I’ve got two file servers, DPS1 and DPS2 configured with a single replication group called RG1 that replicates C:\ReplicatedFolder between the servers. There are 100,000 files in C:\ReplicatedFolder.

Prepare for Battle

In Ned’s previous post Understanding DFSR Debug Logging (Part 1: Logging Levels, Log Format, GUID’s), the various options for debug log verbosity are discussed. For this scenario, the only one I’ll change is the number of debug log files. 1000 is generally a good number to choose for troubleshooting purposes:

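The screenshot showed this being set with WMIC; one way to do it on the DFSR member (the class and value names are real, the count is your choice):

wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set maxdebuglogfiles=1000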

Harvest the Data

After reproducing the symptoms, we want to harvest the logs from the server that has pending replication. When on site at the customer, I just copied DFSR*.log.gz from the %windir%\debug folder, but the best possible practice would be to stop DFSR, copy the logs and then start the service again. This would prevent log rollover while you harvest the logs.

After you copy the logs for investigation, they need to be decompressed. Use your favourite gzip-aware decompression utility for that.

Understand the Log Format

Before we can mine the debug logs for interesting information, we need to look at what we’re dealing with. Opening up one of the log files, I want to find a change and understand the log format –

20120522 12:39:57.764 2840 USNC  2450 UsnConsumer::UpdateIdRecord LDB Updating ID Record:
+       fid                             0x2000000014429
+       usn                             0x693e0ef8
+       uidVisible                      1
+       filtered                        0
+       journalWrapped                  0
+       slowRecoverCheck                0
+       pendingTombstone                0
+       internalUpdate                  0
+       dirtyShutdownMismatch           0
+       meetInstallUpdate               0
+       meetReanimated                  0
+       recUpdateTime                   20120521 01:04:21.513 GMT
+       present                         1
+       nameConflict                    0
+       attributes                      0x20
+       ghostedHeader                   0
+       data                            0
+       gvsn                            {5442ADD7-04C7-486B-B665-2CB036997A67}-v937024
+       uid                             {5442ADD7-04C7-486B-B665-2CB036997A67}-v615973
+       parent                          {8A6CF487-2D5A-456C-A235-09F312D631C8}-v1
+       fence                           Default (3)
+       clockDecrementedInDirtyShutdown 0
+       clock                           20120522 02:39:57.764 GMT (0x1cd37c42d5a9268)
+       createTime                      20120516 00:41:05.011 GMT
+       csId                            {8A6CF487-2D5A-456C-A235-09F312D631C8}
+       hash                            00000000-00000000-00000000-00000000
+       similarity                      00000000-00000000-00000000-00000000
+       name                            file0000021380.txt
+      
20120522 12:39:59.326 2840 USNC  2453 UsnConsumer::UpdateIdRecord ID record updated from USN_RECORD:
+       USN_RECORD:
+       RecordLength:        96
+       MajorVersion:        2
+       MinorVersion:        0
+       FileRefNumber:       0x2000000014429
+       ParentFileRefNumber: 0x4000000004678
+       USN:                 0x693e0ef8
+       TimeStamp:           20120522 12:39:57.764 AUS Eastern Standard Time
+       Reason:              Close Security Change
+       SourceInfo:          0x0
+       SecurityId:          0x0
+       FileAttributes:      0x20
+       FileNameLength:      36
+       FileNameOffset:      60
+       FileName:            file0000021380.txt

What I can see here is a local database update followed by the USN record update that triggered it. If I can gather together all of the date stamps for USN record updates, perhaps I can profile the change behaviour on the file server…

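A reconstruction of the command in the screenshot (log file mask assumed):

findstr /c:"USN_RECORD:" DFSR*.log | findstr /v /c:"+" > USN.csv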

The command above finds every line in every log file that contains USN_RECORD: and then excludes lines that contain a + (thereby eliminating occurrences of + USN_RECORD: as seen in the log excerpt above). Finally, it directs that output into USN.csv.

Let’s open our CSV file in Excel and see what we can do with it.

Graph the Data

I'd ideally like to graph the data that I've got, to make it easy to spot trends. The data I have right now isn't super-easy to work with, so I'm going to sanitize it a bit, and then make a Pivot Table and chart from the sanitized data.

Here is a single column of USN_RECORD: timestamps:

[screenshot: a single column of USN_RECORD: timestamp lines]

I’d like to figure out the rate of change on the file system for files in the replicated folder so I’ll use text to columns. I’m using a fixed width conversion and I’m going to split out my timestamp to the minute (so I can see how many changes per minute I have) and I’ll split USN_RECORD: off the end of the line so that I have something to count:

[screenshot: Text to Columns fixed-width split]

Now I’ve got columns like this:

[screenshot: the timestamp split across several columns]

I delete the columns I don’t need (A and C). My result is a column of timestamps down to the minute and a column of identical values (column B) which I can count to understand the rate of change:

[screenshot: timestamp-to-the-minute column and count column]

To do this, I insert a pivot table. I simply select columns A and B and then choose PivotTable from the Insert menu in Excel.

[screenshot: selecting columns A and B and inserting a PivotTable]

Now I configure my PivotTable Field List as follows:

[screenshot: PivotTable Field List configuration]

After configuring the PivotTable, it looks like this

[screenshot: the configured PivotTable of counts per minute]

All that’s left for me to do is to click on one of the row labels and to select a chart from the Insert menu. The resulting chart tells us quite a lot:

[chart: USN record updates per minute over time]

Here I can see that there are constant changes at roughly 230 per minute, and that for a two-hour window, the changes increase to about 1500 per minute.

Conclusions so far

For the entire duration of the logs, a roughly consistent level of change was occurring. However, for a two-hour window, lots of change was occurring. There are two possibilities here: either the cause of change has become more aggressive during this time or this chart represents two different activities.

We need more investigation …

Back to the Debug Logs

I start by skimming a log that contains the timestamps from the two-hour window where we see many changes, and look at the USN record updates. Skimming through, I can see two different types of change:

20120523 10:54:41.249 2840 USNC  2453 UsnConsumer::UpdateIdRecord ID record updated from USN_RECORD:
+       USN_RECORD:
+       RecordLength:        96
+       MajorVersion:        2
+       MinorVersion:        0
+       FileRefNumber:       0x20000000175FA
+       ParentFileRefNumber: 0x4000000004678
+       USN:                 0xbf0ad430
+       TimeStamp:           20120523 10:54:39.827 AUS Eastern Standard Time
+       Reason:              Close Security Change
+       SourceInfo:          0x0
+       SecurityId:          0x0
+       FileAttributes:      0x20
+       FileNameLength:      36
+       FileNameOffset:      60
+       FileName:            file0000031231.txt

And:

20120523 10:54:41.249 2840 USNC  2085 UsnConsumer::UpdateUsnOnly USN-only update from USN_RECORD:
+    USN_RECORD:
+    RecordLength:        96
+    MajorVersion:        2
+    MinorVersion:        0
+    FileRefNumber:       0x2000000019AD4
+    ParentFileRefNumber: 0x4000000004678
+    USN:                 0xbf0ad4f0
+    TimeStamp:           20120523 10:54:39.843 AUS Eastern Standard Time
+    Reason:              Basic Info Change Close
+    SourceInfo:          0x0
+    SecurityId:          0x0
+    FileAttributes:      0x20
+    FileNameLength:      36
+    FileNameOffset:      60
+    FileName:            file0000038828.txt

Skimming a log that covers a timeframe with a low rate of change, I can only seem to find:

20120522 23:28:54.953 2840 USNC  2453 UsnConsumer::UpdateIdRecord ID record updated from USN_RECORD:
+    USN_RECORD:
+    RecordLength:        96
+    MajorVersion:        2
+    MinorVersion:        0
+    FileRefNumber:       0x2000000022440
+    ParentFileRefNumber: 0x4000000004678
+    USN:                 0x7e7f5188
+    TimeStamp:           20120522 23:28:52.984 AUS Eastern Standard Time
+    Reason:              Close Security Change
+    SourceInfo:          0x0
+    SecurityId:          0x0
+    FileAttributes:      0x20
+    FileNameLength:      36
+    FileNameOffset:      60
+    FileName:            file0000072204.txt

Now I have a theory – Basic Info Change Close events only occur during the two-hour window where there are many changes and there’s an underlying and ongoing security change the rest of the time. I can prove this if I extract the timestamps for Basic Info Change Close changes and similarly, extract timestamps for Close Security Change changes.

Looking back at the log entries, I can see I have a time stamp followed by a series of lines that start with a +. I need to parse the log with something (I chose PowerShell) that takes note of the timestamp line and when a Basic Info Change Close or Close Security Change follows soon after, return the timestamp.

Here’s my PS script:

$files = Get-ChildItem *.log

$processingBlock = $False
$usnBlock = $False

foreach ($file in $files)
{
    $content = Get-Content $file
    foreach ($line in $content)
    {
        # Lines without a leading + carry the timestamp; remember the latest one
        if (!($line.ToString().Contains("+")))
        {
            $outLine = $line
            $processingBlock = $True
        }
        # Inside a block, watch for the start of a USN_RECORD
        if ($processingBlock)
        {
            if ($line.ToString().Contains("+    USN_RECORD:"))
            {
                $usnBlock = $True
            }
        }
        # When the record's Reason matches, emit the remembered timestamp line
        if ($usnBlock)
        {
            if ($line.ToString().Contains("+    Reason:              Basic Info Change Close"))
            {
                $outLine.ToString()
                $processingBlock = $False
                $usnBlock = $False
            }
        }
    }
}

And the same script again, matching Close Security Change instead:

$files = Get-ChildItem *.log

$processingBlock = $False
$usnBlock = $False

foreach ($file in $files)
{
    $content = Get-Content $file
    foreach ($line in $content)
    {
        if (!($line.ToString().Contains("+")))
        {
            $outLine = $line
            $processingBlock = $True
        }
        if ($processingBlock)
        {
            if ($line.ToString().Contains("+    USN_RECORD:"))
            {
                $usnBlock = $True
            }
        }
        if ($usnBlock)
        {
            if ($line.ToString().Contains("+    Reason:              Close Security Change"))
            {
                $outLine.ToString()
                $processingBlock = $False
                $usnBlock = $False
            }
        }
    }
}

I run each of these (they take a while) against the debug log files and then chart the results in exactly the same way as I’ve done above.


First, Basic Info Change Close (look at the time range covered and number plotted):

[chart: Basic Info Change Close updates per minute]

And Close Security Change, below:

[chart: Close Security Change updates per minute]

This confirms the theory – Basic Info Change Close takes place in the two hours where there’s a high rate of change and Close Security Change is ongoing.

Root Cause Discovery

If this is an ongoing pattern where the high rate of change occurs during the same two hours each day, I can capture both activities using Process Monitor.

Once I have a trace, it’s time to filter it and see what’s happening:

[screenshot: Process Monitor filter - Operation is SetBasicInformationFile, then Include]

Here I’ve reset the filter and added Operation is SetBasicInformationFile then Include. I chose SetBasicInformationFile because it looks like a good fit for the USN record updates labelled Basic Info Change Close. After clicking OK, my filtered trace has the answer…

[screenshot: filtered trace results]

As it turns out, the backup window matches nicely with the storm of Basic Info Change Close updates.

Clearly, this is my own little application replicating the behaviour, but in the case of my customer, it was actually their backup application causing this change. They were able to talk to their vendor and configure their backup solution so that it wouldn’t manipulate file attributes during backups.

Now all we need to do is identify the source of Close Security Change updates. Once again, I reset the filter and look for an operation that sounds like a good match. SetSecurityFile looks good.

[screenshot: filter for Operation is SetSecurityFile]

What I found this time is that no entries showed up in Process Monitor at all:

[screenshot: no results in Process Monitor]

What explains this? Either I chose the wrong operation or the filter is broken in some other way. I can’t see any other sensible operation values to filter with so I’ll consider other options. Looking at the filter, I realize that perhaps System is responsible for the change and right now, Procmon filters that activity out. I remove the exclusion of System activity from my filter and see what happens:

[screenshot: filter with the System exclusion removed]

Aha! Now I’ve got something:

[screenshot: SetSecurityFile operations performed by System]

Now I need to understand what System is doing with these files. I right click the path for one of these entries and select “Include C:\ReplicatedFolder\file…”:

[screenshot: Include path context menu]

I also need to remove the filter for SetSecurityFile:

[screenshot: removing the SetSecurityFile filter]

In summary, I’m interested in everything that happened to file0000033459.txt:

[screenshot: all operations against file0000033459.txt]

If I look at operations on the file that took place prior to SetSecurityFile, I can see a CreateFile operation. This is where System obtained a handle to the file. Looking at this entry, adding the Details column to Process Monitor and examining the fine print I find:

[screenshot: CreateFile operation details showing the impersonated account]

System is making this change in the context of the account CONTOSO\ACLingApp that just happens to be the service account of an application used to change permissions on resources in the environment.

Conclusion

The process I've described today is a good example of the need to Understand the System from my earlier post. The Event Logs - and even the debug logs - won’t always tell you the answer straight away. Know what you’re trying to achieve, know how to use the tools in your arsenal, and know how they can be made to produce the outcome you need.

Knowing what I know now, I might have found the root cause by starting with Process Monitor but there’s a chance I’d have missed Close Security Change updates (considering that System is excluded from Process Monitor by default). I may have also missed the Basic Info Change Close updates if the tracing interval wasn’t aligned with the backup window. By mining the debug logs, I was able to establish there were two separate behaviours and the appropriate times to gather Process Monitor logs.

- Mark “Spyglass” Renoden

Windows PowerShell remoting and delegating user credentials


Hey all Rob Greene here again. Yeah, I know, it’s been a while since I’ve written anything for you good people of the Internet.

I recently had an interesting issue with the Active Directory Web Services and the Active Directory Windows PowerShell 2.0 modules in Windows 7 and Windows Server 2008 R2. Let me explain the scenario to you.

We have a group of helpdesk users that need to be able to run certain Windows PowerShell commands to manage users and objects within Active Directory. We do not want to install any of the Active Directory RSAT tools on the helpdesk group’s Windows 7 workstations directly, because these users should not have access to Active Directory console snap-ins [Note: as pointed out in the Comments, you don't have to install all RSAT AD tools if you just want AD Windows PowerShell; now back to the action - the Neditor]. We have written specific Windows PowerShell scripts that the help desk users employ to manage user accounts. We store these scripts on a central server that the users need to be able to access and run remotely.

Hmmm…. Well my mind starts thinking, man this is way too complicated, but hey that’s what our customers like to do sometimes… Make things complicated.

[diagram: Windows 7 helpdesk client → remote admin server with the AD module → domain controller running ADWS]

The basic requirement is that the help desk admins must run some Windows PowerShell scripts on a remote computer that leverages the ActiveDirectory Windows PowerShell cmdlets to manage user accounts in the domain.

So let’s think about the “ask” here:

  • We are going to require Windows PowerShell remoting from the Windows 7 client to the middle tier server where the ActiveDirectory Windows PowerShell modules are installed.

By default, you must connect to the remote server with an administrator-level account when PowerShell remoting, otherwise the remote session will not be allowed to connect. That means the helpdesk users cannot connect to the domain controllers directly.

If you are interested in changing this requirement, the Scripting Guy blog covers two ways of doing so.

  • The middle tier server where the ActiveDirectory Windows PowerShell cmdlets are installed has to connect to a domain controller running the Active Directory Web Service as the PS remoted user account.

Wow, how do we make all this happen?

1. You need to enable Windows PowerShell Remoting on the Remote Admin Server. The simplest way to do this is by launching an elevated Windows PowerShell command prompt and type:

Enable-PSRemoting -Force

To specify HTTPS be used for the remote connectivity instead of HTTP, you can use the following cmdlet (this requires a certificate environment that's outside the scope of this conversation):

Set-WSManQuickConfig –Force -UseSSL

2. On the Remote Admin Server you will also want to make sure that the “Windows Remote Management (WS-Management)” service is started and set to automatic.

If you have done a decent amount of Windows PowerShell scripting you probably got this part.

Alright, the next part is kind of tricky. Since we are delegating the user’s credentials from the Remote Admin Server to the ADWS service, you are probably thinking that we are going to setup some kind of Kerberos delegation here. That would be incorrect. Windows PowerShell remoting does not support Kerberos delegation. You have to use CredSSP to delegate the user account to the Remote Admin Server (which does a logon to the Remote Admin Server) and then it is allowed to interact with the ADWS service on the domain controller.

More information about CredSSP:

MSDN Magazine: Credential Security Support Provider

951608 Description of the Credential Security Support Provider (CredSSP) in Windows XP Service Pack 3
http://support.microsoft.com/kb/951608/EN-US

If you have done some research on CredSSP, you know that it takes the user's name and password and passes it on to the target server. It is not sending a Kerberos ticket or NTLM token for validation, which can be somewhat risky. Just like Windows PowerShell remoting, CredSSP usage is disabled by default and must be enabled. The other key thing to understand about CredSSP is that you have to enable both the “Client” and the “Server” to be able to use it.

NOTE: Although Windows XP Service Pack 3 does have CredSSP in it, the version of Windows PowerShell for Windows XP does not support CredSSP with remote management.

3. On the Remote Admin Server, we need to enable Windows Remote Management to support CredSSP. We do this by typing the command below in an elevated Windows PowerShell command window:

Enable-WSManCredSSP –Role Server -Force

4. On the Windows 7 client, we need to configure the “Windows Remote Management (WS-Management)” service startup to Automatic. Failure to do this will result in the following error being displayed at the next step:

Enable-WSManCredSSP : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination to analyze and configure the WinRM service: “winrm quickconfig”

5. On the Windows 7 client, we need to enable Windows Remote Management to support CredSSP. We do this by typing the command below in an elevated Windows PowerShell command window:

Enable-WSManCredSSP -Role Client -DelegateComputer *.contoso.com -Force

NOTE: “*.contoso.com” is a placeholder for your DNS domain name. Within the client configuration is where you can constrain the CredSSP credentials to certain “Targets” or destination computers. If you want them to only work to a specific computer replace *.contoso.com with the specific servers name.

6. Lastly, when the remote session is created to the target server we need to make sure that the “-Authentication CredSSP” switch is provided. Here are a couple of remote session examples:

Enter-PSSession -ComputerName con-rt-ts.contoso.com -Credential (Get-Credential) -Authentication CredSSP

Invoke-Command –ComputerName con-rt-ts.contoso.com –Credential (Get-Credential) –Authentication CredSSP –ScriptBlock {Import-Module ActiveDirectory; get-aduser administrator}

I hope that you have some new information around Windows PowerShell remoting today to make your Windows PowerShell adventures more successful. This story changes in Windows 8 and Windows Server 2012 for the better, so use this article only with your legacy operating systems.

Rob “Power Shrek” Greene

Monthly Mail Sack: Yes, I Finally Admit It Edition


Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

This week… month, I answer your questions on:

Let’s incentivize our value props!

Question

Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

Looking at the network trace I see the KRB_APP_ERR_SKEW response for the AS REQ which is followed by breaking down of the kerb connection which is immediately followed by reestablishing the kerb connection again and another AS REQ that works just fine and is responded to with a proper AS REP.

My first question is.... Am I missing something?

My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

Answer

Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame  Source  Destination  Packet Data Summary
1      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname:
2      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
3      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM
4      DC      Client       AS Response Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
6      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
7      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
8      DC      Client       TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_AP_ERR_SKEW error – but the error also contains an encrypted copy of the DC’s own time stamp. The client uses that to create a valid time stamp to send back. This doesn’t decrease security in the design because we are still using encryption and requiring knowledge of the secrets, plus there is still only – by default – 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. Which is not feasible with even XP’s 11-year-old cipher suites, much less Windows 8’s.

This isn’t some Microsoft wackiness either – RFC 4120 states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

  1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
  2. Not all third parties honor it.
  3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do, of course, and I know of no regressions.
  4. If the clock of the client computer is faster than the clock time of the domain controller plus the lifetime of the Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
  5. Some non-client logon application scenarios enforce the strict skew tolerance and don’t care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them – event LSASRV 40960 with extended error 0xC0000133 comes to mind in this scenario, as does trying to run DSSite.msc “replicate now” and getting back error 0x576 “There is a time and / or date difference between the client and the server.” I have recent case evidence of Dcpromo enforcing the 5 minutes with Kerberos strictly, even in Windows Server 2008 R2, although I have not personally tried to validate it. I’ve seen it with appliances and firewalls too.

With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.
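If you just want to eyeball a machine’s current offset against a DC, w32tm works well (DC name hypothetical):

w32tm /stripchart /computer:dc01.contoso.com /samples:5 /dataonly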

We have a KB tucked away on this here but it is nearly un-findable.

Awesome question.

Question

I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

Answer

It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around the limitation of Get-AdDomainController for a single domain (single line wrapped).

(get-adforest).domains | foreach {Get-ADDomainController -discover -DomainName $_} | foreach {Get-addomaincontroller -filter * -server $_} | ft hostname

The second is to go directly to the native .NET AD DS forest class to return the domains for the forest, then loop through each one returning the domain controllers (single line wrapped).

[system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

This also led to updated TechNet content. Good work, Internet!

Question

Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on its own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are a separate domain ID and relative ID in a SID.

Answer

The main reason is that a SID needs the domain identifier portion to have contextual meaning. By using the same domain identifier on all security principals from a domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

That also enables a useful technique called “SID compression”, used when we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

DOMAINSID-RID1
DOMAINSID-RID2
DOMAINSID-RID3
DOMAINSID-RID4
DOMAINSID-RID5

With a constant domain identifier portion on all five, I have the option to store the domain SID portion once and reuse it for all the associated RIDs, without using all the memory up with duplicate data:

DOMAINSID-RID1
“-RID2
“-RID3
“-RID4
“-RID5

The consistent domain portion also fixes a big problem: if all of the SIDs held no special domain context, keeping track of where they were issued from would be a much bigger task. We’d need some sort of big master database (“The SID Master”?) in an environment that understood all forests and domains and local computers and everything. Otherwise we’d have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it’s pretty easy for RID masters to take care of that case in each domain.

You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

Question

When I want to set folder and application redirection for our users in a different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from the other domain. Is there a workaround?

Answer

The Object Picker in this case doesn’t allow you to select objects from the other forest – this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

Windows, by default, does not process group policy from user logon across a forest—it automatically uses loopback Replace.  Therefore, you can configure a Folder Redirection policy in the resource domain for users and link that policy to the OU in the domain where the Terminal Servers reside.  Only users from a different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

Question

Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason – you may end up with unusable machines.

Starting in USMT 5.0, a new replacement manifest – Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 only – named “DisplayConfigSettings_Win7Update.man” was added. For the first time in USMT, it migrates:

<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what valid and invalid are in that context and discards/fixes things as necessary. I.e., this is safe only because USMT doesn’t actually do anything but copy some values and rely on the OS to fix things after migration is over.

Question

Our proprietary application is having memory pressure issues, and it manifests when someone runs gpupdate or waits for GP to refresh; sometimes it’s bad enough to cause a crash. I was curious if there was a way to stop the policy refresh from occurring.

Answer

Preventing total refresh only becomes vaguely possible in Vista and later; you could prevent the Group Policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating a myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP – it isn’t.

What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy “group policy refresh interval for computers” and apply it to that one server. You could set the background refresh interval to 45 days (the max). That way it would be far more likely to reboot in the meantime for a Patch Tuesday or whatever and never have a chance to refresh automatically.

2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies – one per user (“group policy refresh interval for users”), one per computer (“group policy refresh interval for computers”). They operate in terms of each boot up or each interactive logon, on a per-computer/per-user basis respectively. I.e., if you log on as a user, you apply your policy. Policy will then not refresh for 45 days for that user if you stay logged on that whole time. If you log off at 22 days and log back on, you apply policy, because that is not a refresh – it’s interactive logon foreground policy application.

Ditto for computers, only replace “logon” with “boot up”. So it will apply the policy at every boot up, but since your computers reboot daily, never again until the next bootup.

After those thoughts… get a better server or a better app. :)

Question

I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

Answer

Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

Graphical method

1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.

[screenshot: VM settings showing the active AVHDX disk]

2. Inspect that disk and you see the parent as well.

[screenshot: Inspect Disk showing the parent disk]

3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:

[screenshot: Edit Disk wizard with the AVHDX selected]

4. Merge the disk to a new copy:

[screenshots: merging the disk to a new copy]

Windows PowerShell method

Much simpler, although slightly counter-intuitive. Just use:

Convert-VHD

For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:

[screenshot: Convert-VHD merging the snapshot chain into DC4-CLONED.VHDX]
Violin!
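A sketch of that command (paths hypothetical; point -Path at the newest AVHDX in the chain):

Convert-VHD -Path 'D:\VMs\DC4\Snapshots\DC4-recent.avhdx' -DestinationPath 'D:\VMs\DC4-CLONED.VHDX' -VHDType Dynamic

Convert-VHD walks the parent chain for you, so the new disk contains the merged contents of the snapshot and every parent beneath it.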

You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and  isn’t usually necessary. The graphical merge option works well there too.

As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

Question

It has always been my opinion that if a DC with a FSMO role went down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It’s also been my opinion that as long as you have more than one DC, and there has not been any data loss, or corruption, it is better to not restore.

What is the Microsoft take on this?

Answer

This is one of those “it depends” scenarios:

1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn’t something most customers test and work out the kinks on until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting that vendor, etc. Oftentimes a restore doesn’t work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted by my MS Support experience here – customers do not call us to say how great their backups worked, only that they have a down DC and they can’t get their backups to restore.

The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with default, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as a myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role, and then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning no matter how the argument turns out!

Question

I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

Answer

Indeed, although it’s not made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:

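A sketch of such a query in PowerShell (output formatting is up to you):

Get-WmiObject -Class Win32_ServerFeature | Sort-Object Id | Format-Table Id, Name -AutoSize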

The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 alone is not present, that means it’s a minshell server. If the “478” ID is also missing, it’s a Core server.

E.g., if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present, as sketched below.
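A rough sketch of that logic in PowerShell, using the IDs described above:

$ids = (Get-WmiObject -Class Win32_ServerFeature).Id
if (($ids -notcontains 99) -and ($ids -contains 478)) { "Minimal Server Interface" }
elseif ($ids -contains 99) { "Full installation" }
else { "Server Core" }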

Other Stuff

Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

Stop worrying so much about the end of the world and think it through.

So awesome:


And so fake :(

If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.


It’s free and exactly 17 times better than the old in-box version:

[screenshot: the new Solitaire app]
OMG Lisa, stop yelling at me! 

Is this the greatest geek advert of all time?


Yes. Yes it is.

When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

[photo: Hetfield in Milan]
[photo: Ride the Lightning Mercedes]

We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

Until next time,

- Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screens until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.


Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!


Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at command prompt is that Windows Server 2012's installation defaults to Server Core and in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.


There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, this presented quite a challenge for many Windows administrators, as Windows PowerShell and command-line utilities were the only methods for managing the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators - who are not comfortable using Windows PowerShell as their only option - the benefit of a reduced attack surface and fewer reboot requirements (think Patch Tuesday), yet keeps GUI management available while they ramp up their Windows PowerShell skills.

clip_image008

"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task, and then remove it when you have completed that management task (understand, however, that switching the Windows Shell on or off requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.

clip_image010

Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: it's wicked fast to install and remove features and roles using Windows PowerShell, and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components:

clip_image011

Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.

clip_image015
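
A quick way to check which features are in that Removed state is to query them directly. A minimal sketch (run from PowerShell on the Server Core installation; the Server-Gui* wildcard matches the two shell features discussed in this post):

# An InstallState of "Removed" means the binaries must be staged from media first
Get-WindowsFeature Server-Gui* | Format-Table Name,InstallState -AutoSize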

To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.

clip_image017

You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with the index of 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with the indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use indexes 2 and 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using indexes 1 or 3 will fail.

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to inform Install-WindowsFeature of the image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceed up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this from Windows PowerShell with the Restart-Computer cmdlet.

clip_image019
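
If you would rather not run Restart-Computer as a second step, the cmdlet's -Restart switch reboots the server for you once staging completes. A sketch using the same assumed media path as above:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4 -Restart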

Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.

clip_image021

It should return to the Configuring Windows features screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then show you the Press Ctrl+Alt+Delete to sign in screen.

clip_image023

Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

ADWS has been released for Windows Server 2008 and Windows Server 2003


Ned here. The beta is over, and the new AD Web Services service introduced in Windows Server 2008 R2 has been released to the world for downlevel OS's. ADWS allows AD PowerShell to connect to domain controllers and do... work. It also allows the new AD Administrative Center - which is a kissing cousin of the AD Users and Computers snap-in - to manage AD objects. If you have only Windows 7 clients with RSAT, or a mix of Win2003, Win2008, and Win2008 R2 DC's, this download is for you:

Download Active Directory Management Gateway Service (Active Directory Web Service for Windows Server 2003 and Windows Server 2008)

For more info on ADAC, take a look here.

I'll talk more about ADAC and ADWS in the coming weeks, but I figured you'd want this sucker sooner than later.

- Ned "I'm an AD" Pyle

Inventorying Computers with AD PowerShell


Hi, Ned here again. Have you ever had to figure out what operating systems are running in your domain environment so that you can plan for upgrades, service pack updates, or support lifecycle transitions? Did you know that you don’t have to connect to any of the computers to find out? It’s easier than you might think, and all possible once you start using AD PowerShell in Windows Server 2008 R2 or Windows 7 with RSAT.

Get-ADComputer

The cmdlet of choice for inventorying computers through AD is Get-ADComputer. This command automatically searches for computer objects throughout a domain, returning all sorts of info.

As I have written about previously, my first step is to fire up PowerShell and import the ActiveDirectory module:

image

Then if I want to see all the details about using this cmdlet, I run:

Get-Help Get-ADComputer -Full

Getting OS information

Basics

Now I want to pull some data from my domain. I start by running the following:

Important note: in all my samples below, the lines are wrapped for readability.

Another important note (thanks dloder): I am going for simplicity and introduction here, so the -Filter and -Property switches are not designed for perfect efficiency. As you get comfortable with AD PowerShell, I highly recommend that you start tuning for less data to be returned - the "filter left, format right" model described here by Don Jones.

Get-ADComputer -Filter * -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion -Wrap –Auto

image

This command is filtering all computers for all their properties. It then feeds the data (using that pipe symbol) into a formatted table. The only attributes that the table contains are the computer name, operating system description, service pack, and OS version. It also automatically sizes and wraps the data. When run, I see:

image

It looks like I have some work to do here – one Windows Server 2003 computer needs Service Pack 2 installed ASAP. And I still have a Windows 2000 server, so I need to move quickly and replace that server.

Server Filtering

Now I start breaking down the results with filters. I run:

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

I have changed my filter to find all the computers that are running “Windows Server something”, using the -like filter. And I stopped displaying the OS version data because it was not providing me anything unique (yet!).

image

Cool, now only servers are listed! But wait… where’d my Windows 2000 server go? Ahhhh… sneaky. We didn’t start calling OS’s “Windows Server” until 2003. Before that it was “Windows 2000 Server”. I need to massage my filter a bit:

Get-ADComputer -Filter {OperatingSystem -Like "Windows *Server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

See the difference? I just added an extra asterisk to surround “Server”.

image

As you can see, my environment has a variety of Windows server versions running. I’m interested in the ones that are running Windows Server 2008 or Windows Server 2008 R2. And once I have that, I might just want to see the R2 servers – I have an upcoming DFSR clustering project that requires some R2 computers. I run these two sets of commands:

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*2008*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*r2*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

image

image

Starting to make sense? Repetition is key; hopefully you are following along with your own servers.

Workstation Filtering

Okeydokey, I think I’ve got all I need to know about servers – now what about all those workstations? I will simply switch from -Like to -Notlike with my previous server query:

Get-ADComputer -Filter {OperatingSystem -NotLike "*server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

And blammo:

image

Family filtering

By now these filters should be making more sense and PowerShell is looking less scary. Let’s say I want to filter by the “family” of operating system. This can be useful when trying to identify computers that started having a special capability in one OS release and all subsequent releases, and where I don’t care about it being server or workstation. An example of that would be BitLocker– it only works on Windows Vista, Windows Server 2008, and later. I run:

Get-ADComputer -Filter {OperatingSystemVersion -ge "6"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemVersion -Wrap -Auto

See the change? I am now filtering on operating system version, to be equal to or greater than 6. This means that any computers that have a kernel version of 6 (Vista and 2008) or higher will be returned:

image

If I just wanted my Windows Server 2008 R2 and Windows 7 family of computers, I can change my filter slightly:

Get-ADComputer -Filter {OperatingSystemVersion -ge "6.1"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemVersion -Wrap -Auto

image

Getting it all into a file

So what we’ve done ‘til now was just use PowerShell to send goo out to the screen and stare. In all but the smallest domains, though, this will soon get unreadable. I need a way to send all this out to a text file for easier sorting, filtering, and analysis.

This is where Export-CSV comes in. With the chaining of an additional pipeline I can find all the computers, select the attributes I find valuable for them, then send them into a comma-separated text file that even retains the weirdo UTF-8 trademark characters that lawyers sometimes make us put in AD.

Hey, what do you call a million lawyers at the bottom of the ocean? A good start! Why don’t sharks eat lawyers? Professional courtesy! What do you have when a lawyer is buried up to his neck in sand? Not enough sand! Haw haw… anyway:

Get-ADComputer -Filter * -Property * | Select-Object Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion | Export-CSV AllWindows.csv -NoTypeInformation -Encoding UTF8

image

Then I just crack open the AllWindows.CSV file in Excel and:

image

What about the whole forest?

You may be tempted to take some of the commands above and tack on the necessary arguments to search the entire forest. This means adding:

-searchbase "" -server <domain FQDN>:3268

That way you wouldn’t have to connect to a DC in every domain for the info – instead you’d just ask a single GC. Unfortunately, this won’t work; none of the operating system attributes are replicated by global catalog servers. Oh well, that’s not PowerShell’s fault. All the data must be pulled from domains individually, but that can be automated – I leave that to you as a learning exercise.
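
If you want a head start on that exercise, here is one possible sketch - it loops over each domain in the forest and writes one CSV per domain (the property list and file naming are just examples):

Import-Module ActiveDirectory
foreach ($domain in (Get-ADForest).Domains) {
    Get-ADComputer -Filter * -Property OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion -Server $domain |
    Select-Object Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion |
    Export-CSV "$domain-Windows.csv" -NoTypeInformation -Encoding UTF8
}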

Conclusion

The point I made above about support lifecycle is no joke: 2010 is a very important year for a lot of Windows products’ support.

Hopefully these simple PowerShell commands make hunting down computers a bit easier for you.

Until next time.

- Ned “bird dog” Pyle

Friday Mail Sack – Big Picture Edition


Hi folks, Ned here again. Here is this week’s sample of interesting questions sent to AskDS.

Question

Is there a way to see information about the available RID pool for a domain?

Answer

Yes, with the attribute: RidAvailablePool

DN path: CN=RID Manager$,CN=System,DC= domain ,DC=com

Global RID space for an entire domain is defined in Ridmgr.h as a large integer with upper and lower parts. The upper part defines the number of security principals that can be allocated per domain (0x3FFFFFFF, or just over 1 billion). The lower part is the number of RIDs that have been allocated in the domain. To view both parts, use the Large Integer Converter command in the Utilities menu in Ldp.exe.

• Sample Value: 4611686014132422708 (Insert in Large Integer Calculator in the Utilities menu of Ldp.exe) 
• Low Part: 2100 (Beginning of next RID pool to be allocated) 
• High Part: 1073741823 (Total number of RIDS that can be created in a domain)
 

This is all (buried) in:

305475  Description of RID Attributes in Active Directory
http://support.microsoft.com/default.aspx?scid=kb;EN-US;305475
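
If you would rather skip Ldp.exe, here is a minimal AD PowerShell sketch that performs the same split (the domain DN is a placeholder - substitute your own; the math masks off the low 32 bits, then divides out the high 32 bits):

Import-Module ActiveDirectory
$rid = Get-ADObject 'CN=RID Manager$,CN=System,DC=contoso,DC=com' -Property ridAvailablePool
[int64]$pool = $rid.ridAvailablePool
$low  = $pool -band 0xFFFFFFFF        # low part: beginning of the next RID pool to be allocated
$high = ($pool - $low) / 4294967296   # high part: total RIDs that can be created in the domain
"Low part (next pool)   : $low"
"High part (total RIDs) : $high"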

Update: and see comments - Rick has a slick alternative.

Question

I have an NT 4.0 and Exchange 5.5 environment… <other stuff>

Answer

We’ve got nothing for you, as those operating systems and applications have not been supported for years – the same way as if you called Ford and asked about getting warranty work on your '96 Taurus. A handful of Premier contract customers pay a significant premium every year for a “Custom Support Agreement” to maintain support on deceased products. If you’re interested in CSA’s (and if you are running Windows 2000 and getting worried that July 13th is approaching fast), contact your TAM.

Otherwise, whatever you can dig up from our KB or the Internet is your best bet. Your best chance to get an NT 4.0 question answered from us is “I am trying to migrate to a later OS and…”

Question

I am setting up DFSR and I’ve been told the following are best practices:

  • Increase the RF staging quota to be at least as large as the 9 largest files on Windows Server 2003 R2 sets.
  • Increase the RF staging quota to be at least as large as the 32 largest files on Windows Server 2008 or Windows Server 2008 R2 READ-WRITE sets.
  • Increase the RF staging quota to be at least as large as the 16 largest files on Windows Server 2008 R2 READ-ONLY sets.

Is there any easy way to find the N largest files with PowerShell? DIR really blows and the Windows Search GUI is taking forever since I don’t index files.

Answer

Try this on for size (ha!):

Get-ChildItem d:\scratch -recurse | Sort-Object length -descending | select-object -first 32 | ft directory,name,length -wrap -auto

There are two portions you need to change. The first is the path and the second is how many items you want to list as the “biggest”.

image

Question

I hear that you’re a big Chicago Cubs fan, Ned. Is it true that they have not won the championship in over 100 years?

Answer

I hate you.

 

Have a great weekend folks,

Ned “the short picture” Pyle


Friday Mail Sack – While the Ned’s Away Edition


Hello Internet! Last week, Ned said there wouldn’t be a Mail Sack this week because he was going to be out of town. Well, the DS team was sitting around during our “Ned is out of our hair for a few days” party and we decided that since this is a Team Blog after all, we’d go ahead and post a Friday Mail Sack. So even though the volume was a little light this week, perhaps due to Ned’s announcement, we put one together all by ourselves.

So without further ado, here is this week’s Ned-less Mail Sack.

Certificate Template Supersedence

Q: I’m using the Certificate Wizard in OCS to generate a certificate request and submit it to my Enterprise CA. My CA isn’t configured to issue certificates based on the Web Server template, but I have duplicated the Web Server template and modified the settings. My new template is configured to supersede the Web Server template.

The request fails. Why doesn’t the CA issue the certificate based on my new template if it supersedes the default Web Server template?

A: While that would be a really cool feature, that’s not how Supersedence works. Supersedence is used when you want to replace certificates that have already been issued with a new certificate with modified settings. In addition, it only works with certificates that are being managed by Windows Autoenrollment.

For example, the Administrator has enabled Autoenrollment in the Computer Configuration of the Default Domain Policy:

image

Further, the Administrator has granted the Domain Computers group permission to Autoenroll for the Corporate Computer template. Appropriately, every Windows workstation and member server in the domain enrolls for a certificate based on this template.

Later, the Administrator decides that she needs to update the template in some fashion – add a new certificate purpose to the Enhanced Key Usage, change a key option, whatever. Our intrepid Admin duplicates her Corporate Computer template and creates a new Better Corporate Computer template. In the properties of this new template, she adds the now obsolete Corporate Computer template to the Superseded Templates list.

image

The Admin clicks Ok to commit the changes and then sits back and waits for all of the workstations and member servers in the domain to update their certificate. So how does that work, exactly?

On each workstation and member server, the Autoenrollment service wakes up about every 8 hours and checks to see if it has any work to do. As this occurs on each Windows computer, Autoenrollment determines it is enabled by policy and so checks Active Directory for a list of templates. It discovers that there is a new template for which this computer has Autoenrollment permissions. Further, this new template is configured to supersede the template on which a certificate it already holds is based.

The Autoenrollment service then archives the current certificate and enrolls for a new certificate based on the superseding template.

In summary, supersedence doesn’t change the behavior of the CA at all, so you can’t use it to control how the CA will respond when it receives a request for a certain template. No, supersedence is merely a hint to tell Autoenrollment on the client that it needs to replace an existing certificate.

Active Directory Web Services

Q: I’m seeing the following warning event recorded in the Active Directory Web Services event log about once a minute.

Log Name:      Active Directory Web Services
Source:        ADWS
Date:          4/8/2010 3:13:53 PM
Event ID:      1209
Task Category: ADWS Instance Events
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      corp-adlds-01.corp.contoso.com
Description:
Active Directory Web Services encountered an error while reading the settings for the specified Active Directory Lightweight Directory Services instance.  Active Directory Web Services will retry this operation periodically.  In the mean time, this instance will be ignored.
Instance name: ADAM_ContosoAddressbook

I can’t find any Microsoft resources to explain why this event occurs, or what it means.

A: Well…we couldn’t find any documentation either, but we were curious ourselves so we dug into the problem. It turns out that event is only recorded if ADWS can’t read the ports that AD LDS is configured to use for LDAP and Secure LDAP (SSL). In our test environment, we deleted those values and restarted the ADWS service, and sure enough, those pesky warning events started getting logged.

The following registry values are read by ADWS:

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port LDAP
Type: REG_DWORD
Data: 1 - 65535 (default: 389)

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port SSL
Type: REG_DWORD
Data: 1 - 65535 (default: 636)

Verify that the registry values described above exist and have the appropriate values. Also verify that the NT AUTHORITY\SYSTEM account has permission to read the values. ADWS runs under the Local System account.
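
A quick way to eyeball those values is to read them with PowerShell; a sketch using the instance name from the sample event above (substitute your own):

# 'ADAM_ContosoAddressbook' is the instance name from the sample event
$params = 'HKLM:\SYSTEM\CurrentControlSet\Services\ADAM_ContosoAddressbook\Parameters'
Get-ItemProperty -Path $params -Name 'Port LDAP','Port SSL'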

Once you've corrected the problem, restart the ADWS service. If you have to recreate the registry values because they've been deleted, restart the AD LDS instance before restarting the ADWS service.

Thanks for sending us this question. We’ve created the necessary internal documentation, and if we see more issues like this we’ll promote it to the Knowledge Base.

Final Note

Well…that’s it for this week. Please keep posting your comments, observations, topic ideas and questions. And fear not, Ned will be back next week.

Jonathan “The Pretender” Stephens

Friday Mail Sack – Tweener Clipart Comics Edition


Hey folks, Ned here again. For those keeping score, you’ve probably noticed the full-on original article content has been a bit thin in the past few weeks. We have some stuff in the draft pipeline so hang in there. In the meantime, here’s a week’s worth of… stuff.

I like to move it, move it.

Question

I am confused on what DFS features are different between Standard Edition and Enterprise Edition versions of Windows Server. This includes DFSN and DFSR.

Answer

There are only two* differences:

DFS Replication – Enterprise edition gives you the ability to use cross-file RDC. Cross-file RDC is a way to replicate files by using a heuristic to determine similar data in existing files on a downstream server, and use that to construct a file locally without the need to request the whole new file over the network from an upstream partner.

http://technet.microsoft.com/en-us/library/cc773238(WS.10).aspx#BKMK_cross_fileRDC_editions

DFS Namespace – A Standard Edition server can host only one root standalone namespace. It can, however, host multiple domain-based namespaces if running Win2003 SP2 or later. Nice bullet points here.

* There was a third difference prior to Windows Server 2003 SP2 and in Windows 2000 SP4 – those Standard Edition servers can only run one DFS root namespace, no matter if domain-based or standalone. Since 2000 is nearly dead and you are not supported running Win2003 non-SP2, don’t worry about it further.

Question

Can I use the miguser.xml and migapp.xml from USMT 3.01 to migrate data using USMT 4.0?

Answer

Yes, but with plenty of caveats. You would not have any errors or anything; the schema and migxml library are compatible. But you are going to miss out on plenty of new features:

  • New applications that were added will not migrate
  • New types of helper functions will not work
  • Updated migration features will not work
  • If you use an old config.xml it will be missing settings.

Plus if you are using miguser.xml, you are not using the new migdocs.xml, which is vastly improved in most scenarios for what it gathers and for performance. It’s a much better idea to use the new XML files and simply recreate any customizations that you had done in 3.01 – if you still need to use them, that is. A lot of 3.01 customizations may be duplication of effort in 4.0.

You can steer a car with your feet, but that doesn’t make it a good idea.

Question

Are there any free tools out there for reporting on AD? Stuff like number of objects, installed OS’s, functional levels, disabled user accounts, locked out users, domains, trusts, groups, etc. The gestalt of AD, basically.

Answer

You can pay for these sorts of tools, of course (rhymes with zest!). If you dig around the intarwebs you will also find some free options. You could of course script any of this you want with AD PowerShell– that’s why we wrote it. One fellow on my team recommends this nice free UNSUPPORTED project that lives on CodePlex called “Active Directory reporting”. It’s a way to use SQL Reporting Server to analyze AD. Feel free to pipe up in the comments with others you like.

Question

Does USMT migrate file information like security & attributes? The “metadata” aspects of NTFS.

Answer

USMT preserves the security (DACL/SACL) as well as the file attributes like hidden, read-only, the create date, etc. So if you have done this:

clip_image001 clip_image001[4]

It will end up migrating the same:

clip_image001[6] clip_image001[8]

Note that if you are using the /NOCOMPRESS option to a non-hard-link store, these permissions and attributes will not be set on that copy of the file. That extra data is stored in the migration catalog. So don’t use the data in an uncompressed store to see if this is working; it is not accurate. When restored, everything will get fixed up by USMT based on the catalog.

Don’t confuse all this with EFS though – that requires use of the /EFS switch to handle.

Question

When I deploy new AD forests, should I continue to use an empty root domain?

Answer

We stopped arbitrarily recommending empty forest roots a while back – but instead of saying that, we just stopped talking about them. Documentation through omission! But if you read between the lines you’ll see that we don’t think they are a great idea anymore. Brian Puhl, the world’s oldest AD admin, wishes they had never deployed an empty root in 1999. Mark Parris and Instan both provide a good comprehensive list of reasons not to use an empty root.

For me, the biggest reason is that it’s a lot more complex without providing a lot more value. Fine-Grained Password Policy takes care of differing security needs since Win2008. The domain does not provide enough admin separation to be considered a full security barricade, but is merely a boundary of functionality – meaning you are now maintaining multiple copies of group policy, multiple SYSVOLs, etc. All with more fragility. Better to have a single domain and arrange your business via OU’s, if possible.

PS: I mean that Brian runs the world’s oldest AD, not that he is old. Well, not that old.

Question

Is there a command-line way to create DFS links (i.e. “folders”)? I need to make a few hundred.

Answer

In 2008/2008R2 & Vista/7 RSAT:

dfsutil.exe link add

In 2003/XP Support Tools:

dfscmd.exe /map
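
For the “few hundred” part, you can feed either tool from a list. A minimal PowerShell sketch, assuming a links.csv with LinkName and Target columns, an existing \\contoso.com\public namespace, and dfsutil’s link add <DfsPath> <TargetPath> syntax:

Import-Csv .\links.csv | ForEach-Object {
    dfsutil.exe link add "\\contoso.com\public\$($_.LinkName)" $_.Target
}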

=====

Finally – the clock is ticking down on Windows 2000 end of life – now just 7 weeks to go. If you have not begun planning your upgrade, migration, or removal of Windows 2000 in your environment, you are officially behind the eight ball. Soon you will be running an OS that does not get security updates. Then it will be immediately owned by some new malware that your AV vendor fails to catch.

Then your boss will be all like

image

and you will be all like

image

and your users will be all like

image

and your week will be all like

image

and your company’s bottom line will be all like

image

and you don’t want that. So get to our Windows 2000 portal and make your move to a supported operating system before it’s too late: Windows 2000 End-of-Support Solution Center. Also, Windows Server 2003 enters extended support the same day, so don’t bother asking for bug fixes after that. Get on Win2008/R2 and we’ll be all ears…

Until next time,

- Ned  “like”  Pyle

Friday Mail Sack – It’s About To Get Real Edition


Hello Terra, it’s Ned here again. Before I get rolling, a big announcement:

On May 16th all the MSDN and TechNet blogs are being migrated to a new platform. This will get us back in line with modern blogging software, and include new features, better search, more user customization, and generally remove a lot of suck. Because AskDS is a very popular blog – thanks to you – we rated extra sandbox testing and migration support, and we believe things are going to go smoothly. The migration will be running for a week (although many sites will be done before then) and during this time commenting will be turned off; just email us through our contact form if you need to chat. You can read more about the new features and track progress on the migration here.

On to this week’s most interesting questions.

Question

What happened to the GPMC scripts in Windows 7 and Win2008 R2?

Answer

Those went buh-bye when Vista came out. They can be downloaded from here if you like and I’ll wager they’ll work fine on 7, but the future of scripting GP is in PowerShell. Recommended reading:

Question

KB832017 (Services Overview and Network Port Requirements...) lists port 5722/TCP as being used for DFSR -- but only on Server 2008 or Server 2008 R2 DCs.  What exactly happens over 5722/TCP?  KB832017 is practically the only time I've ever seen that port mentioned.

Answer

There’s no special reasoning here, it’s a bug. :-) In a simple check to determine if a computer was a member client or member server, we forgot that it might also be a domain controller. So the code ends up specifying a port that was supposed to be reserved for some client code. Amazingly, no Premier contract customer has ever opened a DCR with us asking to have it fixed. I keep waiting…

Nothing else weird happens here, and it will look just like normal DFSR RPC communication in all other respects – because it is normal. :)

5722portcapturemedpyle

You can still change the port with DFSRDIAG STATICRPC <options> if you need to traverse a firewall or something. You are not stuck with this.

Question

I am missing tabs in Active Directory Users and Computers (DSA.MSC) when using the Windows 7 RSAT tools. I found some of your old Vista content about this, but you later said most of this has been fixed. Whiskey Tango Hotel?

Answer

As is often the case with RSAT (a tool designed by committee due to all the various development groups, servicing rules, and other necessities of this suite), there are a series of steps here to make this work. I’ll go through this systematically:

After installing RSAT on a domain-joined Windows 7 client, you add the Role Administration Tools for "AD DS Snap-ins and Command-line Tools":

nedpylersatremotefeature3

You then start DSA.MSC and examine the properties of a user. You notice that some or all of the following tabs are missing:

Published Certificates
Password Replication
Object
Security
Attribute Editor
Environment
Sessions
Remote Control
Remote Desktop Services Profile
Personal Virtual Desktop
UNIX Attributes
Dial-in

1. Enable "Advanced Features" via the View menu. This will show at least the following new tabs:

Published Certificates
Password Replication
Object
Security
Attribute Editor

image

2. If still not seeing tabs:

Environment
Sessions
Remote Control
Personal Virtual Desktop
Remote Desktop Services Profile

Add the following RSAT feature: "Remote Desktop Services Tools". Then restart DSA.MSC and if Advanced View is on, these tabs will appear.

 nedpylersatremotefeature

3. If still not seeing tab:

UNIX Attributes

Add the following RSAT feature: "Server for NIS Tools". Then restart DSA.MSC and if Advanced View is on, this tab will appear.

nedpylersatremotefeature2

4. The "Dial-In" tab will always be missing, as its libraries are not included in RSAT due to a design decision by the networking Product Group. If you need this one added, open a Premier contract support case and file a DCR. We’ve had a number of customers complain about this, but none of them bothered to actually file a design change request so my sympathy wanes. Until they do, there is no possibility of this being changed.

Question

What tools will synchronize passwords from AD to ADAM or ADLDS?

Answer

MIIS/IIFP (now Forefront Identity Management 2010) can do that. We don't have any in-box tools or options for this. [Thanks to our resident ADAM expert Jody Lockridge for this answer. He’s forgotten more about ADAM than I’ll ever know - Ned]

Question

I am trying to script changing user home folders to match the users’ logon ID’s. I’ve tried this:

dsquery.exe user OU=AD_ABC,DC=domain,DC=local | dsmod.exe user -hmdir \\servername\%username%

But this only places the currently logged-on username in all users’ profiles. How can I make this work?

Answer

DSMOD.EXE includes a special token you can use called $username$. It automatically uses the SAM account name passed in from DSQUERY commands and works with the -hmdir, -email, -webpg, and -profile arguments.
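
Applied to the question’s own example (the OU, domain, and server name come straight from the question), the corrected pipeline would be:

dsquery.exe user OU=AD_ABC,DC=domain,DC=local | dsmod.exe user -hmdir \\servername\$username$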

So if I do this to locate all my users and update their home directory:

clip_image002

I get this:

clip_image002[5]

Question

When will the Windows Server 2008 Resource Kit utilities and tools be released?

Answer

Never. If it didn’t happen 3 years ago, it’s not going to happen now. The books do include helpful scripts and such, but the days of providing unsupported out of band reskit binaries are behind us - and it’s for the best. If you want to buy the 2008 books, here’s the place:

2008 Resource Kit -  http://www.microsoft.com/learning/en/us/book.aspx?ID=10345&locale=en-us
2008 GP Resource Kit - http://www.microsoft.com/learning/en/us/book.aspx?ID=9556&locale=en-us

Question

Something something something Auditing something something something.

Answer

While I find Windows security auditing quite interesting and periodically write about it, if you want retroactive answers to every common audit question you need to visit Eric Fitzgerald’s blog “Windows Security Logging and Other Esoterica”. Eric was once the PM of Windows Security auditing and helped design the new audit system in Vista/2008, then he moved on to helping design the Audit Collection Service, and gosh knows what he does now – he’d probably have to kill me after he told me. A million years ago, Eric was also a Support Engineer in my organization, so he knows your pain better than most Windows developers. Many questions I get asked about auditing have already been answered on his blog so give it a look before searching the rest of the Internet. Eric is also a funny, decent guy and a good writer – pick any blog post and you will learn something. I wish he wrote more often.

 

Finally, we had a nice visit this week from Tim Springston – yes, that Tim Springston. Tim’s been working on a new system designed to make it easier for you to open support cases and have them route correctly, so he demo’ed all that to us (and bored us to tears). Make sure you stop by his blog and check it out.

Until next time.

Ned “fingers crossed on the blog migration” Pyle

Friday Mail Sack: Shut Up Laura Edition


Hello again folks, Ned here for another grab bag of questions we’ve gotten this week. This late posting thing is turning into a bad habit, but I’ve been an epileptic octopus here this week with all the stuff going on. Too many DFSR questions though, you guys need to ask other stuff!

Let’s crank.

Question

Is it possible to setup a DFSR topology between branch servers and hub servers, where the branches are an affiliate company that are not a member of our AD forest?

Answer

Nope, the boundary of DFSR replication is the AD forest. Computers in another forest or in a workgroup cannot participate. They can be members of different domains in the same forest. In that scenario, you might explore scripting something like:

robocopy.exe /mot /mir <etc>

Question

I was examining KB 822158 – with the elegant title of “Virus scanning recommendations for Enterprise computers that are running currently supported versions of Windows” - and wanted to make sure these recommendations are correct for potential anti-virus exclusions in DFSR.

Answer

They better be, I wrote the DFSR section! :-)

Question

Is there any way to tell that a user’s password was reset, either by the user or by an admin, when running Win2008 domains?

Answer

Yes – once you have rolled out Win2008 or R2 AD and have access to granular auditing, this becomes two easy events to track once you enable the subcategory User Account Management:

ID      Message
4723    An attempt was made to change an account's password.
4724    An attempt was made to reset an account's password.
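
To enable that subcategory from an elevated command prompt on Win2008 or later:

auditpol.exe /set /subcategory:"User Account Management" /success:enable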

 

Once that is turned on, the 4724 event tells you who changed whose password:

clip_image002

And if you care, the 4738 confirms that it did change:

image 

If a user changes their own password, you get the same events but the Subject Security ID and Account Name change to that user.

Question

Any recommendations (especially books) around how to program for the AD Web Service/AD Management Gateway service?

Answer

Things are a little thin here so far for specifics, but if you examine the ADWS Protocol specification and start boning up on the Windows Communication Foundation you will get rolling.

Windows Communication Foundation
http://msdn.microsoft.com/en-us/library/dd456779(v=VS.100).aspx

WCF Books - http://www.amazon.com/s/ref=pd_lpo_k2_dp_sr_sq_top?ie=UTF8&cloe_id=05ebc737-d598-45a3-9aec-b37cc04e3946&attrMsgId=LPWidget-A1&keywords=windows%20communication%20foundation&index=blended&pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0672329484&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1NQD69FBHSA2RM8PR97K)

[MS-ADCAP]: Active Directory Web Services: Custom Action Protocol Specification
http://msdn.microsoft.com/en-us/library/dd303965(v=PROT.10).aspx

Remember that we don’t do developer support here on AskDS so you should direct your questions over to the AD PowerShell devs if you get stuck in code specifics.

Question

Is their any guidance around using DFSR with satellite link connections?

Answer

Satellite connections create a unique twist to network connectivity – they often have relatively wide bandwidth compared to low-end WAN circuits, but also have comparatively high latency and error levels. When transmitting a packet through a geosynchronous orbit hop, it hits the limitation of the speed of light – how fast you can send a packet 22,000 miles up, down, then reply with a packet up and down again. And when talking about a TCP conversation using RPC, one always uses round trip times as part of the equation. You will be lucky to average 1400 millisecond response times with satellite, compared to a frame relay circuit that might be under 50ms. This also does not account for the higher packet loss and error rates typically seen with satellite ISP’s. Not to mention what happens when it, you know, rains :-). In a few years you can think about using medium and low earth orbit satellites to cut down latency, but those are not commercially viable yet. The ones in place have very little bandwidth.

When it comes to DFSR, we have no specific guidance except to use Win2008 R2 (or if you must, Win2008) and not Win2003 R2. That first version of DFSR uses synchronous RPC for most communications and will not reliably work over satellite’s high latency and higher error rates – Win2008 R2 uses asynchronous RPC. Even Win2008 R2 may perform poorly on the lower bandwidth ranges. Make sure you pre-seed data and do not turn off RDC on those connections.

Other

Totally unrelated, I found this slick MCP business card thing we’re doing now since we stopped handing out the laminates. It’s probably been around for a while now, but hey, new to me. :) If you go to https://www.mcpvirtualbusinesscard.com and provide your MCP ID # and Live ID you can get virtual business cards that link to your transcript.

Then you can have static cards: 

Or get fancy stuff like this javascript version. Mouse over the right side to see what I mean:


Oh yeah, did you know my name is really Edward? They have a bunch of patterns and other linking options if you don't want graphics; give it a look. 

 

Finally, I want to welcome the infamous Laura E. Hunter to the MSFT borg collective. Author and contributor to TechNet Magazine, the AD Cookbook, the AD Field Guide, Microsoft Certified Masters, and a considerable body of ADFS documents, Laura is most famously known for her www.ShutUpLaura.com blog. And now she’s gone blue – welcome to Microsoft, Laura! Now get to work.

Have a nice weekend folks,

- Ned “what does the S stand for Bobby?” Pyle

Using AD Recycle Bin to restore deleted DNS zones and their contents in Windows Server 2008 R2


Ned here again. Beginning in Windows Server 2008 R2, Active Directory supports an optional AD Recycle Bin that can be enabled forest-wide. This means that instead of requiring a System State backup and an authoritative subtree restore, a deleted DNS zone can now be recovered on the fly. However, due to how the DNS service "gracefully" deletes, recovering a DNS zone requires more steps than a normal AD recycle bin operation.

Before you roll with this article, make sure you have gone through my article here on AD Recycle Bin:

The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting

Note: All PowerShell lines are wrapped; they are single lines of text in reality.

Restoring a deleted AD integrated zone

Below are the steps to recover a deleted zone and all of its records. In this example the deleted zone was called "ohnoes.contoso.com" and it existed in the Forest DNS Application partition of the forest “graphicdesigninstitute.com”. In your scenario you will need to identify the zone name and partition that hosted it before continuing, as you will be feeding those to PowerShell. 

1. Start PowerShell as an AD admin with rights to all of DNS in that partition (preferably an Enterprise Admin) on a DC that hosted the zone and is authoritative for it.

2. Load the AD modules with:

Import-Module ActiveDirectory

3. Validate that the deleted zone exists in the Deleted Objects container with the following sample PowerShell command:

get-adobject -filter 'isdeleted -eq $true -and msds-lastKnownRdn -eq "..Deleted-ohnoes.contoso.com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" -property *

Note: the zone name was changed by the DNS service to start with "..Deleted-", which is expected behavior. This behavior means that when you are using this command to validate the deleted zone you will need to prepend whatever the old zone name was with this "..Deleted-" string. Also note that in this sample, the deleted zone is in the forest DNS zones partition of a completely different naming context, just to make it interesting.

4. Restore the deleted zone with:

get-adobject -filter 'isdeleted -eq $true -and msds-lastKnownRdn -eq "..Deleted-ohnoes.contoso.com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" | restore-adobject

Note: the main changes in syntax now are removing the "-property *" argument and pipelining the output of get-adobject to restore-adobject.

5. Restore all child “DNSnode” objects of the recovered zone with:

get-adobject -filter 'isdeleted -eq $true -and lastKnownParent -eq "DC=..Deleted-ohnoes.contoso.com,CN=MicrosoftDNS,DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" | restore-adobject

Note: the "msds-lastKnownRdn" has now been removed and replaced by "lastKnownParent", which is now pointed to the recovered (but still mangled) version of the domain zone. All objects with that as a previous parent will be restored to their old location. Because DNS stores all of its node values as flattened leaf objects, the structure of deleted records will be perfectly recovered.

6. Rename the recovered zone back to its old name with:

rename-adobject "DC=..Deleted-ohnoes.contoso.com,CN=MicrosoftDNS,DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" -newname "ohnoes.contoso.com"

Note: the rename operation here is just being told to remove the old "..Deleted-" string from the name of the zone. I’m using PowerShell to be consistent but you could just use ADSIEDIT.MSC at this point, we’re done with the fancy bits.

7. Restart the DNS service or wait for it to figure out the zone has recovered (I usually had to restart the service in repros, but then once it worked by itself for some reason – maybe a timing issue; a service restart is likely your best bet). The zone will load without issues and contain all of its recovered records.
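
If you want to do that restart from the same PowerShell session, the DNS Server service’s short name is DNS:

Restart-Service -Name DNS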

Special notes

If the deleted zone was the delegated _msdcs zone (or both the primary zone and delegated _msdcs zone were deleted and you now need to get the _msdcs zone back):

a. First restore the primary zone and all of its contents like above.

b. Then restore the _msdcs zone like in step 4 (with no contents).

c. Next, restore all the remaining deleted _msdcs records using the lastKnownParent DN which will now be the real un-mangled domain name of that zone. When done in this order, everything will come back together delegated and working correctly.

d. Rename it like in step 6.

Note: If you failed to do step c before renaming the zone because you want to recover select records, the recovered zone will fail to load. The DNS snap-in will display the zone but selecting the zone will report “the zone data is corrupt”. This error occurs because the “@” record is missing. If this record was not restored prior to the rename simply rename the zone back to “..Deleted-“, restore the “@” record, rename the zone once more and restart the DNS Server service. I am intentionally not giving a PowerShell example here as I want you to try all this out in your lab, and this will get you past the “copy and paste” phase of following the article. The key to the recycle bin is getting your feet wet before you have the disaster!

A couple more points

  • If the zones were deleted outside of DNS (i.e. not using DNS tools) then the renaming steps will be unnecessary and you can just restore it normally. If that happens someone was really being a goof ball.
  • The AD Recycle Bin can only recover DNS zones that were AD-integrated; if the zones were Standard Primary and stored in the old flat file format, I cannot help you.
  • I have no idea why DNS has this mangling behavior and asking around the Networking team didn’t give me any clues. I suspect it is similar to the reasoning behind the “inProgress” zone renaming that occurs when a zone is converted from standard primary to AD Integrated, in order to somehow make the zone invalid prior to deletion, but… it’s being deleted, so who could care? Meh. If someone really desperately has to know, ping me in Comments and I’ll see about a code review at some point. Maybe.

As always, you can also “just” run an authoritative subtree restore with your backups and ntdsutil.exe. If you think my steps looked painful, you should see those. KB’s don’t get much longer.

- Ned “let’s go back to WINS” Pyle

Friday Mail Sack: Barbados Edition


Hello world, Ned here again. I’m back to write this week’s mail sack – just in time to be gone for the next two weeks on vacation and work travel. In the meantime Jonathan and Scott will be running the show, so be sure to spam the heck out of them with whatever tickles you. This week we discuss DFSR, Certificates, PKI, PowerShell, Audit, Infrastructure, Kerberos, NTLM, Active Directory Migration Tool, Disaster Recovery, and not-art.

Catluck en ’ dogluck!

image

Question

I need to understand what the difference between the various AD string type attribute syntaxes are. From http://technet.microsoft.com/en-us/library/cc961740.aspx : String(Octet), String(Unicode), Case-Sensitive String, String(Printable), String(IA5) et al. While I understand each type represents a different way to encode the data in the AD database, it isn't clear to me:

  1. Why so many?
  2. What differences are there in managing/using/querying them?
  3. If an application uses LDAP to update/read an attribute of one string type, is it likely to encounter issues if the same routine is used to update/read a different string type?

Answer

Active Directory has to support data-storage needs for multiple computer systems that may use different standards for representing data. Strings are the most variable data to be encoded because one has to account for different languages, scripts, and characters. Some standards limit characters to the ANSI character set (8-bit) while others specify another encoding type altogether (IA5 or PrintableString for X.509, for example).

Since Active Directory needs to store data suitable for all of these various systems, it needs to support multiple encodings for string data.

Management/query/read/write differences will depend very much on how you access the directory. If you use PowerShell or ADSI to access the directory, some level of automation is involved to properly handle the syntax type. PowerShell leverages the System.String class of the .NET Framework which handles, pretty much invisibly, the various string types.

Also, when we are talking about the 255-character extended ANSI character set, which includes the Latin alphabet used in English and most European Languages, then the various encodings are pretty much identical. You really won't encounter much of a problem until you start working in 2-byte character sets like Kanji or other Eastern scripts.

Question

Is it possible / advisable to run the CA service under an account different from SYSTEM with EFS enabled for some files that should not be read by system or would another solution be more appropriate?

Answer

No, running the CA service under any account other than Network Service is not supported. Users who are not trusted for Administrator access to the server should not be granted those rights.

[PKI and string type answers courtesy of Jonathan Stephens, the “Blaster” in our symbiotic “Master Blaster” relationship – Ned]

Question

Tons of people asking us about this article http://blogs.technet.com/b/activedirectoryua/archive/2010/08/04/conditions-for-kerberos-to-be-used-over-an-external-trust.aspx and if it is true or false or confused or what.

Answer

It’s complicated and we’re getting this ironed out. Jonathan is going to create a whole blog post on how User Kerberos can function perfectly without a Kerberos trust, or with an NTLM trust, or with no trust. It’s all smoke and mirrors basically – you don’t need a trust in all circumstances to use User Kerberos. Heck, you don’t even have to use a domain-joined computer. For now, disregard that article please.

Question

I followed the steps outlined in this blog post: http://blogs.msdn.com/b/ericfitz/archive/2005/08/04/447951.aspx. Works like a champ and I see the data correctly in the Event Viewer. But when I try to use PowerShell 2.0 on one of those Win2003 DC’s with this syntax:

Get-EventLog -logname security -Newest 1 -InstanceId 566 | Where-Object { $_.entrytype -match "Success" } | Format-List

A bunch of the outputs are broken and unreadable (they look like un-translated GUID’s and variables). Like Object Type and Object Name, for example:

image

Answer

Ick, I can repro that myself.

This appears to be an issue in PowerShell 2.0 Get-EventLog cmdlet on Win2003 where an incorrect value is being displayed. You can’t have the issue on Win2008/2008 R2, I verified. Hopefully one of our Premier contract customers will report this issue so we can investigate further and see what the long term fix options are.

In the meantime though, here’s some sample workaround code I banged up using an alternative legacy cmdlet Get-WmiObject to do the same thing (including returning the latest event only, which makes this pretty slow):

Get-WmiObject -query "SELECT * FROM Win32_NTLogEvent Where Logfile = 'Security' and EventCode=566" | sort timewritten -desc | select -first 1

Slower and more CPU intensive, but it works.

image

A better long term solution (for both auditing and PowerShell) is get your DC’s running Win2008 R2.

Question

Do you have suggestions for pros/cons on breaking up a large DFSR replication group? One of our many replication groups has only one replicated folder. Over time that folder has gotten to be a bit large with various folders and shares (hosted as links) nested within. Occasionally there are large changes to the data and the replication backlog obviously impacts the ENTIRE folder. I have thought about breaking the group into several individual replication folders, but then I begin to shudder at the management overhead and monitoring all the various backlogs, etc.

  1. Is there a smooth way to transition an existing replication group with one replicated folder into one with many replicated folders? By "smooth" I mean no disruption to current replication if at all possible, and without re-replicating the data.
  2. What are the major pros/cons on how many replicated folders a given group has configured?

Answer

There’s no real easy answer – any change of membership or replicated folders within an RG means a re-sync of replication; the boundaries are discrete and there’s no migration tool. The fact that a backlog is growing won’t be helped by more or fewer RG/RF combos, though, unless the new RGs/RFs involve totally different servers. Since the DFSR service’s inbound/outbound file transfer model is per server, moving things around locally doesn’t change backlogs significantly*.

So:

  1. No way to do this without total replication disruption (as you must rebuild the RGs/RFs in DFSR from scratch; the only saving grace here is that if you don’t have to move data, you get some pre-seeding for free).
  2. Since each RF still gets its own staging/conflictanddeleted/installing/deleted folders, there’s not much performance reasoning behind rolling a bunch of RFs into a single RG. And no, you cannot use a shared structure. :) The main benefit of an RG is administrative convenience: delegation is configured at the RG level, for example, so if you had a file server admin that ran all the same servers that were replicating… stuff… it would be easier to organize those all as one RG.

* As a regular reader though, I imagine you’ve already seen this, which has some other ways to speed things up; that may help some of the choke ups:

http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

Question

Is there an equivalent of the Add-QADPermission command (from Quest) in AD PowerShell?

Answer

There is not a one-to-one cmdlet. But it can be done:

http://blogs.msdn.com/b/adpowershell/archive/2009/10/13/add-object-specific-aces-using-active-directory-powershell.aspx

It is – to be blunt – a kludge in our current implementation.

Question

I am working on an inter-forest migration that will involve a transitional forest hop. If I have to move the objects a second time to get them from the transition forest into our forest, will I lose the original SID history that is in the sIDHistory attribute?

Answer

You will end up with multiple SID history entries. It’s not an uncommon scenario: customers that have been through multiple acquisitions and mergers end up with multiple SID histories. As far as authorization goes, having more than one entry works fine:

http://msdn.microsoft.com/en-us/library/ms679833(VS.85).aspx

Contains previous SIDs used for the object if the object was moved from another domain. Whenever an object is moved from one domain to another, a new SID is created and that new SID becomes the objectSID. The previous SID is added to the sIDHistory property.

The real issue is user profiles. You have to make sure that ADMT profile translation is performed so that, after users and computers are migrated, the ProfileList registry entries are updated to use the user’s real current SID info. If you do not do this, when you someday need to use USMT to migrate data, it will fail: USMT does not know or care about old SID history, only the SID in the profile and the current user’s real SID.

And then you will be in a world of ****.

image 
Picture courtesy of the IRS

Question

Do you know if there is any problem with creating a DNS record with the name ldap.contoso.com? Or will there be problems with other components of Active Directory if there is a record called “LDAP”?

Answer

Windows certainly will not care and we’ve had plenty of customers use that specific DNS name. We keep a document of reserved names as well, so if you don’t see something in this list, you are usually in good shape from a purely Microsoft perspective:

909264  Naming conventions in Active Directory for computers, domains, sites, and OUs
http://support.microsoft.com/default.aspx?scid=kb;EN-US;909264

This article is also good for winning DNS-related bar bets. If you drink at a pub called “The Geek and Spanner”, I suppose…

image
This is not that pub

Question

I'm currently working on a migration to a Windows Server 2008 R2 AD forest – specifically the Disaster Recovery plan. Is it a good idea to take one of the DCs offline and bring it back online after each successful adprep operation? Or, in case something goes bad, use this offline one to recreate the domain?

Answer

The best solution is to put these plans in place:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

That way no matter what happens under any circumstances (not just adprep), you have a way out. You can’t imagine how many customers we deal with every day that have absolutely no AD Disaster Recovery system in place at all.

Question

How did you make this kind of picture in your DFSR server replacement series?

image

[From a number of readers]

Answer

MS Office to the rescue for a non-artist like me. This is a modified version of the “relaxed perspective” picture format preset.

1. Create your picture, then select it and use the Picture Tools Format ribbon tab.

image

2. Use the arrows to see more of the style options, and you’ll see the one called “Relaxed Perspective, White”. Select that and your picture will now look like a three-dimensional piece of paper.

image

3. I find that the default is a little heavy on the perspective though, so right-click it and select “Format Picture”.

 image 

4. Use the 3-D Rotation menu to adjust the perspective and Y axis.

image

You can get pretty crazy with Office picture formatting.

image
Why yes sir, we do have plastic duck eight-ball clipart. Just the one today?

See you all in a few weeks,

Ned “please don’t audit me, I was kidding” Pyle

Friday Mail Sack: Cluedo Edition


Hello there folks, it's Ned. I’ve been out of pocket for a few weeks and I am moving to a new role here, plus Scott and Jonathan are busy as #$%#^& too, so that all adds up to the blog suffering a bit and the mail sack being pushed a few times. Never fear, we’re back with some goodness and frankness. Heck, Jonathan answered a bunch of these rather than sipping cognac while wearing a smoking jacket, which is his Friday routine. Today we talk certs, group policy, backups, PowerShell, passwords, Uphclean, RODC+FRS+SYSVOL+DFSR, and blog editing. There were a lot of questions in the past few weeks that required some interesting investigations on our part – keep them coming.

Let us adjourn to the conservatory.

Question

Do you know of a way to set user passwords to expire after 30 days of inactivity?

Answer

There is no automatic method for this, but with a bit of scripting it would be pretty trivial to implement. Run this sample command as an admin user (in your test environment first!!!):

Dsquery.exe user -inactive 4 | dsmod.exe user -mustchpwd yes

Dsquery will find all users in the domain that have not logged on for 4 weeks or longer, then pipe that list of DNs into the Dsmod command, which sets the “must change password at next logon” (pwdLastSet) flag on each of those users.

image

You can also use AD PowerShell in Win2008 R2/Windows 7 RSAT to do this.

search-adaccount -accountinactive -timespan 30 -usersonly | set-aduser -changepasswordatlogon $true

The PowerShell method works a little differently; Dsquery only considers inactive accounts that have logged on. Search-adaccount also considers users that have never logged on. This means it will find a few “users” that cannot usually have their password change flags enabled, such as Guest, KRBTGT, and TDO accounts that are actually trusts between domains. If someone wants to post a slick example of bypassing those, please send them along (as the clock ran down here).
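
For the record, here’s one rough pass at that (a sketch, not battle-tested; it skips anything that has never logged on, plus Guest, krbtgt, and the trust accounts, whose names end in $):

search-adaccount -accountinactive -timespan 30 -usersonly |
  where-object { $_.LastLogonDate -and $_.SamAccountName -ne 'Guest' -and $_.SamAccountName -ne 'krbtgt' -and $_.SamAccountName -notlike '*$' } |
  foreach-object { set-aduser $_.DistinguishedName -changepasswordatlogon $true }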

Question

As stated here: http://technet.microsoft.com/en-us/library/cc753609%28WS.10%29.aspx

"You are not required to run the ntdsutil snapshot operation to use Dsamain.exe. You can instead use a backup of the AD DS or AD LDS database or another domain controller or AD LDS server. The ntdsutil snapshot operation simply provides a convenient data input for Dsamain.exe."

I should be able to mount a snapshot and use dsamain to read AD content with only a full backup of AD, but I can't. Using ntdsutil I can list and mount snapshots from AD, but I can't run "dsamain -dbpath full_path_to_ntds.dit".

Answer

You have to extract the .DIT file from the backup.

1. First run wbadmin get versions. In the output, locate your most recent backup and note the Version identifier:

wbadmin get versions

2. Extract the Active Directory files from the backup. Run:

 wbadmin start recovery -version:<version identifier> -itemtype:app -items:AD -recoverytarget:<drive>

3. A folder called Active Directory will be created on the recovery drive. Contained therein you'll find the NTDS.DIT file. To mount it, run:

dsamain -dbpath <recovery folder>\ntds.dit -ldapPort 4321

4. The .DIT file will be mounted, and you can use LDP or ADSIEDIT to connect to the directory on port 4321 and browse it.

Question

I ran into the issue described in KB976922 where "Run only specified Windows Applications" or “Run only allowed Windows applications” is blank when you are mixing Windows XP/Windows Server 2003 and Windows Server 2008/R2/Windows 7 computers. Some forum posts on TechNet state that this was being fixed in Win7 and Win2008 R2, which appears to be untrue. Is this being fixed in SP1 or later or something?

Answer

It’s still broken in Win7/R2 and still broken in SP1. It’s quite likely to remain broken forever, as there are so many workarounds and the technology in question actually dates back to before Group Policy – it was part of Windows 95 (!!!) system policies. Using this policy isn’t very safe. It’s often something that was configured many years ago and lives on through inertia.

Windows 7 and Windows Server 2008 R2 introduced AppLocker to:

  • Help prevent malicious software (malware) and unsupported applications from affecting computers in your environment.
  • Prevent users from installing and using unauthorized applications.
  • Implement application control policy to satisfy security policy or compliance requirements in your organization.

Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 all support Software Restriction Policies (SAFER), which also control applications similarly to AppLocker. Both AppLocker and SAFER replace that legacy policy setting with something less easily bypassed and less limited.

For more information about AppLocker, please review:
http://technet.microsoft.com/en-us/library/dd723678(WS.10).aspx

For more information about SAFER, please review:
http://technet.microsoft.com/en-us/library/bb457006.aspx

I updated the KB to reflect all this too.

Question

Is it possible to store computer certificates in a Trusted Platform Module (TPM) in Windows 7?

Answer

The default Windows Key Storage Provider (KSP) does not use a TPM to store private keys. That doesn't mean that some third party can't provide a KSP that implements the Trusted Computing Group (TCG) 1.2 standard to interact with a TPM and use it to store private keys. It just means that Windows 7 doesn't have such a KSP by default.

Question

It appears that there is a new version of Uphclean available (http://www.microsoft.com/downloads/en/details.aspx?FamilyId=1B286E6D-8912-4E18-B570-42470E2F3582&displaylang=en). What’s new about this version and is it safe to run on Win2003?

Answer

The new 1.6 version only fixes a security vulnerability and is definitely recommended if you are using older versions. It has no other announced functionality changes. As Robin has said previously, Uphclean is otherwise deceased and 2.0 beta will not be maintained or updated. Uphclean has never been an officially supported MS tool, so use is always at your own risk.

Question

My RODCs are not replicating SYSVOL even though there are multiple inbound AD connections showing when DSSITE.MSC is pointed to an affected RODC. Examining the DFSR event log shows:

Log Name: DFS Replication
Source: DFSR
Date: 5/20/2009 10:54:56 AM
Event ID: 6804
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: 2008r2-04.contoso.com
Description:
The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group.

Newly promoted RODCs work fine, and demoting and re-promoting an affected RODC fixes the issue.

Answer

Somebody has deleted the automatically generated "RODC Connection (FRS)" objects for these affected RODCs.

  • This may have been done because the customer saw that the connections were named "FRS" and thought that, with DFSR replicating SYSVOL, they were no longer required.
  • Or they may have created manual connection objects per their own processes and deleted these old ones.

RODCs require a special flag on their connection objects for SYSVOL replication to work. If not present, SYSVOL will not work for FRS or DFSR. To fix these servers:

1. Logon to a writable DC in the affected forest as an Enterprise Admin.

2. Run DSSITE.MSC and navigate to an affected RODC within its site, down to the NTDS Settings object. There may be no connections listed here, or there may be manually created connections.

dssitenedpyle1

3. Create a new connection object. Ideally, it will be named the same as the default (ex: "RODC Connection (FRS)").

dssitenedpyle2

4. Edit that connection in ADSIEDIT.MSC or with the DSSITE.MSC attribute editor tab. Navigate to the "Options" attribute and add the value "0x40". (A scripted version of steps 3 and 4 follows the MSDN excerpt below.)

dssitenedpyle3

dssitenedpyle4

5. Create more connections using these steps as necessary.

6. Force AD replication outbound from this DC to the RODCs, or wait for convergence. When the DFSR service on the RODC sees these connections, SYSVOL will begin replicating again.

More info about this 0x40 flag: http://msdn.microsoft.com/en-us/library/dd340911(PROT.10).aspx

RT (NTDSCONN_OPT_RODC_TOPOLOGY, 0x00000040): The NTDSCONN_OPT_RODC_TOPOLOGY bit in the options attribute indicates whether the connection can be used for DRS replication [MS-DRDM]. When set, the connection should be ignored by DRS replication and used only by FRS replication.

Despite the article only mentioning FRS, the 0x40 value is required for both DFSR and FRS. Other connections for AD replication are still separately required and will exist on the RODC locally.
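
If you would rather script steps 3 and 4 than click through them, something like this should work with the AD PowerShell module (a rough sketch with hypothetical site and server names; double-check the resulting object with ADSIEDIT.MSC before trusting it):

# DNs of the RODC's NTDS Settings object and the source DC's NTDS Settings object (hypothetical)
$rodcNtds  = "CN=NTDS Settings,CN=RODC01,CN=Servers,CN=Branch,CN=Sites,CN=Configuration,DC=contoso,DC=com"
$sourceDsa = "CN=NTDS Settings,CN=HUBDC01,CN=Servers,CN=Hub,CN=Sites,CN=Configuration,DC=contoso,DC=com"

New-ADObject -Name "RODC Connection (FRS)" -Type nTDSConnection -Path $rodcNtds -OtherAttributes @{
    options           = 0x40    # NTDSCONN_OPT_RODC_TOPOLOGY
    fromServer        = $sourceDsa
    enabledConnection = $true
}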

Question

What editor do you use to update and maintain this blog?

Answer

Windows Live Writer 2011 (here). Before this version I was hesitant to recommend it, as the older flavors had idiosyncrasies and were irritating. WLW 2011 is a joy; I highly recommend it. The price is right too: free, with no adware. And it makes adding content easy…

 
Like Peter Elson artwork.

Or the complete 5 minutes and 36 seconds of Lando Calrissian dialog
 

Or Ned

Or ovine-related emoticons.

 

That’s all for this week.

- Ned “Colonel Mustard” Pyle and Jonathan “Professor Plum” Stephens


Friday Mail Sack: The Gang’s All Here Edition


Hi folks, Ned here again with your questions and our answers. This is a pretty long one; looks like everyone is back from vacation, winter storms, and hiding from the boss. Today we talk Kerberos, KCC, SPNs, PKI, USN journaling, DFSR, auditing, NDES, PowerShell, SIDs, RIDs, DFSN, and other random goo.

Rawk!

Question

Is NIC teaming recommended on domain controllers?

Answer

It’s a sticky question – MS does not make a NIC teaming solution, so you are at the mercy of 3rd party vendor software, and if there are any issues, we cannot help other than by breaking the team. So the question you need to answer is: do you trust your NIC vendor’s support?

Generally speaking, we are not huge fans of NIC teaming, as we see customers having frequent driver issues and because a DC probably doesn’t need it. If clients are completely consuming 1Gbit or 10Gbit network interfaces, the DC is probably being overloaded with requests. Doubling that network would make things worse; it’s better to add more DCs. And if the DC is also running Exchange, file server, SQL, etc. you are probably talking about an environment without many users or clients.

A failover NIC solution is probably a better option if your vendor supports it. Meaning that the second NIC is only used if the first one burns out and dies, all on the same network. 

Question

We used to manually create SPNs with IP addresses to allow Kerberos without network name resolution. This worked in Windows XP and 2003 but stopped working in later operating systems. Is this expected?

Answer

Yes, it is. Starting in Windows Vista and forevermore, the OS examines the format of the SPN being requested, and if it is only an IP address, Kerberos is not even attempted. There’s no way to override this behavior. To look at it in practical terms, here I have manually set an IP address SPN:

image

Then I actually try mapping a drive with an IP address (which would have worked in XP in this scenario):

image

No tickets were cached above. And in the network capture below, it’s clear that I am using NTLM:

image

image

This is why, in this previous post – see the “I want to create a startup script via GPO” and “NTLM is not allowed for computer-to-computer communication” sections – I highly discouraged customers from this sort of hacking. What I didn’t realize when I wrote the old post was that I now have the power to control the future with my mind.

image
Actual MRI of my head, proving that I have an orange (i.e. “futurasmic”) brain

Question

I see that the DFSR staging folder can be moved, but can the Conflict and Deleted (\dfsrprivate\conflictanddeleted) folder be relocated?  If so, how?

Answer

It cannot be moved or renamed – this was once planned (and there is even an AD attribute that makes one think the location could be specified) but it never happened in the service code. Regardless of what you put in that attribute, DFSR ignores it and creates a C&D folder at the default location.

For example, here I specified a completely different C&D path using ADSIEDIT.MSC before DFSR even created the folder. Once I started the DFSR service, it ignored my setting and created the conflict folder with defaults:

clip_image002
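
If you want to confirm where the service actually put the Conflict and Deleted folder, the DFSR WMI provider will tell you (a quick sketch; assumes the provider is present, which it is wherever the service is installed):

WMIC.EXE /namespace:\\root\microsoftdfs path DfsrReplicatedFolderConfig get ReplicatedFolderName,ConflictPath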

Question

We are trying to find the best way to issue Active Directory "user" certificates to iPhones and iPads, so these users can authenticate to our third-party VPN appliance using their user certificate. We were thinking that MS NDES could help us with this. Everything I have read says that NDES is used for non-domain "computer or device" enrollment.

Answer

[From Rob Greene, author of previous post iPad / iPhone Certificate Issuance]

Just because the certificate template used by NDES must be of type computer does not mean you cannot build a SCEP protocol message to the NDES server for use by a user account on the iPhone in question.

Keep in mind that the SCEP protocol was designed by Cisco so that their network appliances could enroll for certificates online. Also, remember what NDES stands for: Network Device Enrollment Service.

Realistically, there is no reason why you cannot enroll for a certificate via the SCEP interface with NDES and have a user account use the issued certificate. However, NDES is coded to only allow enrollment of computer-based certificate templates. If you put a user-based template name in the registry for it to issue, it will fail with a not-so-easily deciphered message.

That said, keep in mind that the subject or Subject Alternative Name field identifies the user of the certificate, not the template.

So what you could do is:

  1. Duplicate the computer certificate template.
  2. Then change the subject to “Supply in the Request”
  3. Then give the template a unique name.
  4. Make sure that the NDES account and Administrator have security access to the template for Enroll.
  5. Assign the Template to be issued.
  6. Then you need to assign the template to one of the purposes in the NDES registry (You might want to use the one for both signing and encrypting).  See the blog.

Now you have a certificate with the Client Authentication EKU and a subject/SAN of the user account, so I don’t see why you could not use that for what you need. Not that I have tested this or can test this, mind you…

Question

Is there a “proper” USN journal size relative to replicated data sizes, etc. on the respective volumes housing DFSR data? I've come across USN journal wrap issues (that properly self-heal ... and then occur again a month or so later). I’m hoping to find a happy medium for USN journal sizing versus the size of the volume or the data that resides on that volume.

Answer

I did a quick bit of research - in the history of all MS DFSR support cases, it was necessary to increase the USN journal size for five customers – not exactly a constant need. Our recommendation is not to alter it unless you get multiple 2202 events that can’t be fixed any other way:

Event ID=2202
Severity=Warning
The DFS Replication service has detected an NTFS change journal wrap on volume %2.
A journal wrap can occur for the following reasons:
1.The USN journal on the volume has been truncated. Chkdsk can truncate the
journal if it finds corrupt entries at the end of the journal.
2.The DFS Replication service was not running on this computer for an extended
period of time.
3.The DFS Replication service could not keep up with the rate of file changes
on the volume.
The service has automatically initiated the journal wrap recovery process.

Additional Information:
Volume: %1

Since you are getting multiple 2202 occurrences, I would recommend first figuring out why you are getting the journal wraps. The three reasons listed in the event need to be considered – the first two are avoidable (fix your disk or controller and stop turning the service off) and should be handled without a need to alter the USN journal.

The third one may mean you are not using DFSR as recommended, but that may be unavoidable. In that case, set the USN journal size to 1GB and validate that the issue stops occurring. We have no real formula here (remember, only five customers ever), but if you cannot spare another 512MB on the drive, you have much more important disk capacity problems to consider. If that is still not enough, revisit whether DFSR is the right solution for you – the rate of change would have to be so incredibly rapid that I doubt DFSR could ever realistically keep up and converge. And make sure that nothing else is churning through files elsewhere on that drive – there is only one journal per volume, and it contains entries for all files, even the ones not being replicated!

Just to answer the inevitable question: you use WMI to increase the USN journal size.

On Win2003 R2 only:

1. Determine the volume in question (USN journals are volume specific) and the GUID for that volume by running the following:

WMIC.EXE /namespace:\\root\Microsoftdfs path DfsrVolumeInfo get VolumePath
WMIC.EXE /namespace:\\root\Microsoftdfs path DfsrVolumeInfo get VolumeGUID

This will return (for example:)

VolumePath
\\.\C:
\\.\E:

VolumeGuid
4649C7A1-82D5-11DA-922B-806E6F6E6963
D1EB0B66-9403-11DA-B12E-0003FFD1390B

2a. Raise the USN Journal Size (for one particular volume):

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig.VolumeGuid="%GUID%" set minntfsjournalsizeinmb=%MB SIZE%

where you replace '%GUID%' with the volume GUID and '%MB SIZE%' with a larger USN size in MB. For example:

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig.VolumeGuid="D1EB0B66-9403-11DA-B12E-0003FFD1390B" set minntfsjournalsizeinmb=1024

This will return 'Property Update Successful' for that GUID.

2B. Raise the USN Journal Size (for all volumes)

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig set minntfsjournalsizeinmb=%MB SIZE%

This will return 'Property Update Successful' for ALL the volumes.

3. Restart server for new journal size to take effect in NTFS.

Update 4/15/2011 - On Win2008 or later:

1. Open Windows Explorer.
2. In Tools | Folder Options | View - uncheck 'Hide protected operating system files'.
3. Navigate to each drive's 'system volume information\dfsr\config' folder (you will need to add 'Administrators, Full Control' to prevent an access denied error).
4. In Notepad, open the 'Volume_%GUID%.xml' file for each volume you want to increase.
5. There will be a set of tags that look like this:

<MinNtfsJournalSizeInMb>512</MinNtfsJournalSizeInMb>

6. Stop the DFSR service.
7. Change '512' to the new increased value.
8. Close and save that file, and repeat for any other volumes you want to up the journal size on.
9. Start the DFSR service back up.
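
After the restart, you can sanity-check that NTFS actually picked up the change; the Maximum Size value in the output is in bytes:

fsutil usn queryjournal E: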

Question

There is a list of DFS Namespace events for Server 2000 at http://support.microsoft.com/kb/315919. I was wondering if there is a similar list of Windows 2008 DFS Event Log Messages?

Answer

That event logging system in KB315919 exists only in Win2000 – Win2003 and later OSs don’t have it anymore. That KB is a bit misleading also: these events will never write unless you enable them through registry settings.

Registry Key: HKEY_LOCAL_MACHINE\SOFTWARE\MicroSoft\Windows NT\CurrentVersion\Diagnostics
Value name: RunDiagnosticLoggingDfs 
Value type: REG_DWORD
Value data: 0 (default: no logging), 2 (verbose logging)

Registry Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dfs
Value name: DfsSvcVerbose
Value type: REG_DWORD
Value data: Any one of the below three values:
0 (no debug output)
1 standard debug output
0x80000000 (standard debug output plus additional Dfs volume call info)

Value name: IDfsVolInfoLevel
Value type: REG_DWORD
Value data: Any combination of the following 3 flags:
0x00000001 Error
0x00000002 Warning
0x00000004 Trace

Dave and I scratched our heads, and in our combined personal history of supporting DFSN, neither of us recalled ever turning this on or using those events for anything useful. Not that it matters now; Windows 2000 is as dead as fried chicken.

Question

We currently have inherited auditing settings on a lot of files and folders that live on our two main DFSR servers. The short story is that before the migration to DFSR, the audit settings were apparently added by someone to the majority of the files/folders. This was replicated by DFSR and now is set on both servers. Thankfully we do not have any audit policies turned on for those servers currently.

That is where the question comes in: there may be a time in the relatively near future that we will want to enable some auditing for a subset of files/folders. Any suggestions on how we could remove a lot of the audit entries on these servers, without forcing nearly every file to get processed by DFSR?

Answer

Nope, it’s going to cause an unavoidable backlog as DFSR reconciles all the security changes you just made – the audit security is part of the file just like the discretionary security. Don’t do that until you have a nice big change control window open. Maybe just do some folders at a time.

In the future, using Global Object Access Auditing would be an option (if you have Win2008 R2 on all DFSR servers). Since it is all derived by LSA and not directly stamped, DFSR won’t replicate the files – they are never actually changed. It’s slick:

image

image

http://technet.microsoft.com/en-us/library/dd772630(WS.10).aspx

In theory, you could get rid of the auditing currently in place and just use GOAA someday when you need it. It’s the future of file auditing, in my opinion; using direct SACLs on files should be discouraged forevermore.
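
As a taste of how GOAA works, it is also scriptable with auditpol.exe on Win2008 R2/Windows 7. For example, this (hypothetical user account) sets a global SACL that audits successful and failed writes to any file by that account, without touching a single file’s security descriptor:

auditpol /resourceSACL /set /type:File /user:CONTOSO\bob /success /failure /access:FW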

Question

Does the SID for an object have to be unique across the entire forest? It is pretty clear from existing documentation that the SID does have to be unique within a domain because of the way the RID Master distributes RID pools to the DCs. Does the RID Master in the Forest Root domain actually keep track of all the unique base SIDs of all domains to ensure that there is no accidental duplication of the unique base domain SIDs?

Answer

A SID will be unique within a forest, as each domain has a unique base SID that combines with a RID. That’s why there’s a RID Master per domain. There is no reasonable way for the domain SIDs to ever be duplicated by Windows, although I have seen some third-party products make it happen. All hell broke loose; we don’t plan for the impossible. :) Even if you use ADMT to migrate users with SID history within a forest, it will not be duplicated, as the migration always destroys the old user when it is “moved”.

The RID Masters don’t talk to each other within the forest (any more than they would between different forests, where a duplicate SID would cause just as many problems when you tried to create a trust). The base SID is a random 48-bit number, so there is no reasonable way it could be duplicated by accident in the same environment. It comes down to us relying on the odds of two domains that know of each other ending up with the same SID through pure random chance – highly unlikely math.
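
If you like back-of-the-envelope birthday math, here’s roughly what “highly unlikely” means for, say, 10,000 domains all drawing random 48-bit values (a sketch based on the figure above):

$n = 10000
($n * ($n - 1) / 2) / [math]::Pow(2, 48)   # ~1.8E-07, about a 1 in 5.6 million chance of any collision at all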

You’ll also find no mention of inter-RID Master communication requirements or error messages here:

http://msdn.microsoft.com/en-us/library/cc223751(PROT.13).aspx
http://technet.microsoft.com/en-us/library/cc756394(WS.10).aspx

Question

I have this message in a health report:

“A USN journal loss occurred 2 times in the past 7 days on E:. DFS Replication monitors the USN journal to detect changes made to the replicated folder. Although DFS Replication automatically recovers from this problem, replication stops temporarily for replicated folders stored on this volume. Repeated journal loss usually indicates disk issues. Event ID: 2204”

Is this how the health report indicates a journal wrap, or can I take “loss” literally?

Answer

Ouch. That’s not a wrap; the journal was deleted or irrevocably damaged. I have never actually seen that event in the field, only in a test lab where I deleted my journal intentionally (using the nasty command FSUTIL.EXE USN DELETEJOURNAL). I would suspect either a failing disk or third-party disk management software. It’s CHKDSK and disk diagnostic time for you.

The recovery process for event 2204 is similar to a wrap: the journal gets recreated, then repopulated like a wrap recovery (it uses the same code). You get event 2206 to know that it’s fixed.

Question

How come there is no “Set-SPN” cmdlet in AD PowerShell?

Answer

Ahh, but there is… sort of. We hide service principal name maintenance in the Set-ADUser, Set-ADComputer, and Set-ADServiceAccount cmdlets.

-ServicePrincipalNames <hashtable>
Specifies the service principal names for the account. This parameter sets the ServicePrincipalNames property of the account. The LDAP display name (ldapDisplayName) for this property is servicePrincipalName. This parameter uses the following syntax to add, remove, replace, or clear service principal name values.
    Syntax:
    To add values:
      -ServicePrincipalNames @{Add=value1,value2,...}
    To remove values:
      -ServicePrincipalNames @{Remove=value3,value4,...}
    To replace values:
      -ServicePrincipalNames @{Replace=value1,value2,...}
    To clear all values:
      -ServicePrincipalNames $null

You can specify more than one change by using a list separated by semicolons. For example, use the following syntax to add and remove service principal names.
   @{Add=value1,value2,...};@{Remove=value3,value4,...}

The operators will be applied in the following sequence:
..Remove
..Add
..Replace

The following example shows how to add and remove service principal names.
   -ServicePrincipalNames @{Add="SQLservice/accounting.corp.contoso.com:1456"};@{Remove="SQLservice/finance.corp.contoso.com:1456"}
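
To make that concrete outside the help text, adding an SPN to a computer account looks like this (hypothetical computer name and SPN value):

Set-ADComputer SQL01 -ServicePrincipalNames @{Add="MSSQLSvc/sql01.contoso.com:1456"}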

We do not have any special handling to retrieve SPNs using Get-AdComputer or Get-Aduser (nor any other attributes – they treat all as generic properties). For example:

get-adcomputer name -properties serviceprincipalnames | select-object -expand serviceprincipalnames

image

I used select-object -expand because when you get a really long returned list, PowerShell likes to start truncating the readable output. Also, when I don’t know which cmdlets support which things, I sometimes cheat and use educated guesses:

image

Question

I have posted a TechNet forum question around the frequency of KCC nomination and rebuilding and I was hoping you could reply to it.

“…He had made an update to the Active Directory Schema and as a safety-net had switched off one of our domain controllers whilst he did it. The DC (2008 R2) that was switched off was at the time acting as the automatically determined bridgehead server for the site.

Obviously the next thing that has to happen is for the KCC to run, discover the bridgehead server is still offline, and re-nominate. My colleague thinks that this re-nomination should take up to 2 hours to happen. However, all the documentation I can find suggests that this should be every 15 minutes. His argument is that it is a process of sampling: it realises the problem every 15 minutes but can take up to 2 hours to actually action the change of bridgehead.

Can anyone tell me which of us is right please and if we could have a problem?”

Answer

We are running an exchange program between MS Support and MS Premier Field Engineering and our current guest is AD topology guru Keith Brewer. He replied in exhaustive detail here:

http://social.technet.microsoft.com/Forums/en/winserverDS/thread/0d10914f-c44c-425a-8344-3dfbac3ed955

Attaboy Keith, now you’re doing it our way – when in doubt, use overwhelming force.

Other random goo


Unless it doesn’t.


  • Star Wars on Blu-ray coming in September, now up for pre-order. Damn, I guess I have to get Blu-ray. Hopefully Lucas uses the opportunity to remove all midichlorian references.
  • The 6 Most Insane Cities Ever Planned. This is from Cracked, so as usual… somewhat NSFW due to swearing.
  • Not sure which sci-fi apocalypse is right for you? Use this handy chart.
  • It was an interesting week for Artificial Intelligence and gaming, between Starcraft and Jeopardy.

Until next time.

Ned “and return to Han shooting first!” Pyle

Friday Mail Sack: No Redesign Edition


Hello folks, Ned here again. Today we talk PDCs, DFSN, DFSR, AGPM, authentication, PowerShell, Kerberos, event logs, and other random goo. Let’s get to it.

Question

Is the PDC Emulator required for user authentication? How long can a domain operate without a server that is running the PDC Emulator role?

Answer

It’s not required for direct user authentication unless you are using (unsupported) NT and older operating systems or some Samba flavors. I’ve had customers who didn’t notice their PDCE was offline for weeks or months. Plenty of non-fully routed networks exist where many users have no direct access to that server at all.

However!

It is used for a great many other things:

  • With the PDCE offline, users who have recently changed their passwords are more likely to get logon or access errors. They will also be more likely to stay locked out if using Account Lockout policies.
  • Time can more easily get out of sync, leading to Kerberos authentication errors down the road.
  • The PDCE being offline will also prevent the creation of certain well-known security groups and users when you are upgrading forests and domains.
  • The AdminSDHolder process will not occur when the PDCE is offline.
  • You will not be able to administer DFS Namespaces.
  • It is where group policies are edited (by default).
  • Finally – and not documented by us – I have seen various non-MS applications over the years that were written for NT and which would stop working if there is no PDCE. There’s no way to know which they might be – a great many were home-made applications written by the customers themselves – so you will have to determine this through testing.

But don’t just trust me; I am a major plagiarizer!

How Operations Masters Work (see section “Primary Domain Controller (PDC) Emulator”)
http://technet.microsoft.com/en-us/library/cc780487(WS.10).aspx

Question

The DFSR help file recommends a full mesh topology only when there are 10 or fewer members. Could you kindly let me know the reasons why? We feel that a full mesh will mean more redundancy.

Answer

It’s perfectly ok – from a technical perspective – to make as many connections as you like if using Windows Server 2008 or later. This is not the case with Win2003 R2 (see this old post that applies only to that OS). The main downsides to a lot of connections are:

It’s perfectly ok – from a technical perspective - to make as many connections as you like if using Windows Server 2008 or later. This is not the case with Win2003 R2 (see this old post that applies only to that OS). The main downsides to a lot of connections are:

  • It may lead to replication along slower, non-optimal networks that are already served by other DFSR connections; DFSR does not sense bandwidth or use any site/connection costing. This may itself lead to the networks becoming somewhat slower overall.
  • It will generate slightly more memory and CPU usage on each individual member server (keeping track of all this extra topology is not free).
  • It’s more work to administer. And it’s more complex. And more work + more complex usually = less fun.

Question

I'm trying to set up delegation for Kerberos, but I can't configure it for user or computer accounts using AD Users and Computers (DSA.MSC). I'm logged in as a domain administrator. Every time I try to activate delegation I get the error:

The following Active Directory error occurred: Access is denied.

Answer

It’s possible that someone has removed the user right for your account to delegate. Check your applied domain security policy (using RSOP or GPRESULT or whatever) to see if this has been monkeyed up:

Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment
"Enable computer and user accounts to be trusted for delegation"

The Default Domain Controllers Policy will have the built-in Administrators group set for that user right assignment once you create a domain. The privilege serves no purpose being set on servers other than DCs; they don’t care. Changing the defaults for this assignment isn’t necessary or recommended, for reasons that should now be self-evident.
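
A quick way to check is to log on at the DC and see whether your token actually holds the privilege – it shows up as SeEnableDelegationPrivilege:

whoami /priv | findstr /i SeEnableDelegation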

Question

I want to clear all of my event logs at once on Windows Vista/2008 or later computers. Back in XP/2003 this was pretty easy as there were only 6 logs, but now there are a zillion.

Answer

Your auditors must love you :). Paste this into a batch file and run in an elevated CMD prompt as an administrator:

Wevtutil el > %temp%\eventlistmsft.txt
For /f "delims=;" %%i in (%temp%\eventlistmsft.txt) do wevtutil cl "%%i"

If you run these two commands manually, remember to remove the double percent signs and make them singles; those are being escaped for running in a batch file. I hope you have a system state backup; this is forever!
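
If you’d rather stay in PowerShell, the same idea works as a one-liner (the same “forever” warning applies):

wevtutil el | foreach-object { wevtutil cl "$_" }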

Question

Can AGPM be installed on any DC? Should it be on all DCs? The PDCE?

Answer

[Answer from AGPM guru Sean Wright]

You can install it on any server as long as it’s part of the domain – so a DC, the PDCE, or a regular member server. It just needs to be on one computer.

Question

Is it possible to use the Authentication Mechanism Assurance feature available in Windows Server 2008 R2 with a non-Microsoft PKI implementation? Is it possible to use Authentication Mechanism Assurance with any of the service administration groups, like Domain Admins or Enterprise Admins? If that is possible, what would be the consequences for the built-in Administrator account; would this account be exempt from Authentication Mechanism Assurance, so that administrators would have a route to fix issues that occurred in the environment (i.e. a get-out-of-jail card)?

Answer

[Answer from security guru Rob Greene]

First, some background:

  1. This only works with Smart Card logon. 
  2. This works because the Issuance Policy OID is “added to” msDS-OIDToGroupLink on the OID object in the configuration partition.  There is a msDS-OIDToGroupLinkBl (back link) attribute on the group and on the OID object.
  3. The msDS-OIDToGroupLink attribute on the OID object (in the configuration partition) stores the DN of the group that is going to use it.
  4. Not sure why, but the script expects the groups used in this configuration to be Universal groups. So, regarding the question about administrative groups: none of them are Universal groups except “Enterprise Admins”.

So here are the answers:

Is it possible to use Authentication Mechanism Assurance that is available in Windows Server 2008 R2 with a non-Microsoft PKI implementation?

Yes, however, you will need to create the Issuance Policies that you plan to use by adding them through the Certificate Template properties as described in the TechNet article.

Is it possible to use Authentication Mechanism Assurance with any of Service Administration groups Domain Admins or Enterprise Admins?

This implementation requires that the group be a universal group in order for it to be used.  So the only group of those listed above that is universal is “Enterprise Admins”.  In theory this would work, however in practice it might not be such a great idea.

If that is possible what would be the consequences for built-in administrator account, would this account be exempt from Authentication Mechanism Assurance?

In most cases the built-in Administrator account is special-cased to allow access to certain things even if its access has somehow been limited. However, this isn’t the best way to design the security of administrative accounts if you are concerned about being locked out of the domain. You would have a similar issue if you made these administrative accounts require smart cards for logon: if the CA hierarchy failed to publish a new CRL for some reason, and fixing the CA required a domain-based admin to log on interactively, you would be effectively locked out of your domain as well.

Question

I find references on TechNet to a “rename-computer” PowerShell cmdlet added in Windows 7. But it doesn’t seem to exist.

Answer

Oops. Yeah, it was cut very late but still lives on in some documentation. If you need to rename a computer using PowerShell, the approach I use is:

(get-wmiobject Win32_ComputerSystem).rename("myputer")

That keeps it all on one line without needing to specify an instance first or mess around with variables. You need to be in an elevated prompt logged in as an administrator, naturally.

Then you can run restart-computer and you are good to go.

image

There are a zillion other ways to rename on the command line: shelling out to netdom.exe or wmic.exe, using various WMI syntax, new functions, etc.
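
For instance, shelling out to netdom from an elevated CMD prompt (hypothetical name; /force skips the confirmation prompt, and you still need a restart afterward):

netdom renamecomputer %computername% /newname:myputer /force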

Question

Does disabling a DFS Namespace link target still give the referral back to clients, maybe with an “off” flag or something? We’re concerned that you might still accidentally access a disabled link target somehow.

Answer

[Oddly, this was asked by multiple people this week.]

Disable actually removes the target from referral responses and nothing but an administrator’s decision can enable it. To confirm this, connect through that DFS namespace and then run this DFSUTIL command-line (you may have to install the Win2003 Support Tools or RSAT or whatever, depending on where you run this):

DFSUTIL /PKTINFO

It will not list your disabled link targets at all. For example, here I have two link targets – one enabled, one disabled. As far as DFS referral responses are concerned, the disabled link target does not exist at all.

clip_image002

When I enable that link and flush the PKT cache, now I get both targets:

clip_image002[4]

Question

When DFSR staging fills to the high watermark, what happens to inbound and outbound replication threads? Do we stop replicating until staging is cleared?

Answer

Excellent question, Oz dweller.

  • When you hit the staging quota 90% high watermark, further staging will stop.
  • DFSR will try to delete the oldest files to get down to 60% under the quota.
  • Any files that are on the wire right now being transferred will continue to replicate. Could be one file, could be more.
  • If those files on the wire are ones that the staging cleanup is trying to delete, staging cleanup will not complete (and you get warning 4206).
  • No other files will replicate (even if they were not going to be cleaned out due to “newness”).
  • Once those outstanding active file transfers on the wire complete, staging will be cleaned out successfully.
  • Files will begin staging and replicating again (at least until the next time this happens).

So the importance of staging space for very large files remains: ensure that the quota is at least as large as the N largest files that could be simultaneously replicated inbound/outbound, or you will choke yourself out. From the DFSR performance tuning post:

  • Windows Server 2003 R2: 9 largest files
  • Windows Server 2008: 32 largest files (default registry)
  • Windows Server 2008 R2: 32 largest files (default registry)
  • Windows Server 2008 R2 Read-Only: 16 largest files

If you want to find the 32 largest files in a replicated folder, here’s a sample PowerShell command:

Get-ChildItem <replicatedfolderpath> -recurse | Sort-Object length -descending | select-object -first 32 | ft name,length -wrap -auto
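
And if you want to turn that straight into a minimum staging quota figure (in MB), sum them up:

(Get-ChildItem <replicatedfolderpath> -recurse | Sort-Object length -descending | select-object -first 32 | Measure-Object length -sum).Sum / 1MB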

Question

If I create a domain-based namespace (\\contoso.com\root) and only have member servers as namespace servers, the share can’t be browsed to in Windows Explorer. It is there, I just can’t browse to it.

But if I add a DC as a namespace server it immediately appears. If I remove the DC from the namespace it disappears from view again, but it is still there. Would this be expected behavior? Is this a “supported” way to create a hidden namespace?

Answer

You are seeing some coincidental behavior based on the dual meaning of contoso.com in this scenario:

  • Contoso.com will resolve to a domain controller when using DNS
  • When a DC hosts a namespace share and you are browsing that DC, you are simply seeing all of its shares. One of those shares happens to be a DFS root namespace.
  • When you are browsing a domain-based namespace not hosted on a DC, you are not going to see that share as it doesn’t exist on the DCs.
  • You can see what’s happening here under the covers with a network capture.
  • Users can still access the root and link shares if they type them in, or have them set via logon script, mapped drive, GP Preference item, etc. This is only a browsing issue.

It’s not an “unsupported” way to hide shares, but it’s not necessarily effective in the long term. The way to hide and prevent access to the links and files/folders is through permissions and ABE. This solution is like a share with a $ being considered hidden: it only works as long as people don’t talk about it. :) Not to mention this method is easy for other admins to accidentally “break” through ignorance or through reading blog posts that tell them all the advantages of DFS running on a DC.

PS: Using a $ does work – at least on a Win2008 R2 DFS root server in a 2008 domain namespace:

clip_image002[7]

clip_image002[9]

clip_image002[11]

But only until your users talk about it in the break room…

Other Random Goo

  • The Cubs 2011 schedule is up and you can download the calendar file here. You know you wanna.
  • And in a related story, Kerry Wood has come back with a one year deal! Did you watch him strike out 20 as a rookie in 1998? It was insane. The greatest 1-hitter of all time.
  • IO9.com posted their spring sci-fi book wish list. Which means that I now have eight new books in my Amazon wish list. >_<
  • As a side note, does anyone like the new format of the Gawker Media blogs? I cannot get used to them and had to switch back to the classic view. The intarwebs seem to be on my side in this. I find myself visiting less often too, which is a real shame – hopefully for them this isn’t another scenario like Digg.com, redesigning itself into oblivion.
  • Netflix finally gets some serious competition – Amazon Prime now includes free TV and Movie streaming. Free as in $79 a year. Still, very competitive pricing and you know they will rock the selection.
  • I get really mad watching the news as it seems to be staffed primarily by plastic heads reading copy written by people that should be arrested for inciting to riot. So this Cracked article on 5 BS modern myths is helpful to reduce your blood pressure. As always, it is not safe for work and very sweary.

  • But while you’re there anyway (come on, I know you), check out the kick buttitude of Abraham Lincoln.
  • Finally: why are the Finnish so awesomely insane at everything?
And by everything, I mean only this and rally sport.

 

Have a nice weekend folks.

- Ned “simple and readable” Pyle

Friday Mail Sack: I Have No Idea What to Call This Edition


Hiya folks, Ned here with a slightly late Mail Sack coming your way. Today we discuss reading event logs, PowerShell, FSMO, DFSR, DFSN, GCs, virtualization, RDC, LDAP queries, DPM, SYSVOL migration, and Netmon.

Do it.

Question

Logparser.exe doesn’t seem to read the message body when run against Security event logs on Windows Server 2008 R2:

logparser -i:EVT -o:CSV -resolveSIDs:ON "SELECT * INTO goo.csv FROM security"

Security,97760,2011-03-09 07:57:23,2011-03-09 07:57:23,4689,8,Success Audit event,13313,The name for category 13313 in Source "Microsoft-Windows-Security-Auditing" cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer,Microsoft-Windows-Security-Auditing,S-1-5-21-3366683618-1989269118-3947618792-500|administrator|CONTOSO|0x57e6f4|0x0|0xbc8|C:\Windows\System32\mmc.exe,2008r2-01-f.contoso.com,,A process has exited. Subject: Security ID: S-1-5-21-3366683618-1989269118-3947618792-500 Account Name: administrator Account Domain: CONTOSO Logon ID: 0x57e6f4 Process Information: Process ID: 0xbc8 Process Name: C:\Windows\System32\mmc.exe Exit Status: 0x0 ,

Answer

I am able to reproduce this issue. I can also see LogParser failing to parse some other ‘modern’ events in other logs, like the Application event log. Considering the tool was written in 2005 and only lists its support as Win2003 and XP, this looks like expected behavior.

You can do pretty much everything LogParser does with the event logs using PowerShell 2.0 on the later OSes, so you may not care to run this all down:

Get-WinEvent
http://technet.microsoft.com/en-us/library/dd367894.aspx

It is crazy powerful and can do Xpath, structured XML queries, and hash-table queries.
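
For example, here’s a rough Get-WinEvent equivalent of the LogParser query above, using a hash-table filter (trimmed to 100 events to keep it quick; drop -MaxEvents for everything):

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4689} -MaxEvents 100 | Select-Object TimeCreated, Id, Message | Export-Csv goo.csv -NoTypeInformation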

Even WEVTUTIL.EXE can do much of this, although not with as much output formatting control like PowerShell. Leave logparser to the older OSes.

Question

We’re thinking about virtualizing DFSR and DFSN. Is it supported? Are a lot of customers virtualizing these workloads?

Answer

Totally supported. Like anything virtual though, expect a slight performance hit.

There is a huge amount of virtualization happening. Enough now that you can just assume anything Windows is being run virtualized a lot. Maybe not many by percentage, but when your OS install base is in the hundreds of millions…

The main concern we have in this scenario is one we now see a lot on physical hardware too (Warren can attest to this): the use of el cheapo iSCSI solutions rather than fibre channel and other beefier network fabrics, especially combined with cheap SANs that have poor to non-existent support. You absolutely get what you pay for in this environment. The other thing to keep in mind is that – like all multi-master database systems – you absolutely CANNOT use snapshots with it: http://support.microsoft.com/kb/2517913/

Question

Do cross-forest trusts figure into Infrastructure Master FSMO role placement? I.e., can the IM run on a GC if the other forest’s DCs are not all GCs too? I have two single-domain forests with a cross-forest Kerberos trust.

Answer

  • In the single domain forest it doesn’t matter where it goes at all, as the IM has no work to do until you have multiple domains in that forest.
  • If that single domain forest ever adds a domain, each IM will need to run on a non-GC server unless all DCs in that individual domain are also GCs.
  • The IM doesn’t care about the other forest at all. The forest is a boundary of what the IM is tracking, it does not traverse Kerberos trusts to other forests.
  • One more bit of recent weirdness that we don’t mention often: Once you enable the AD Recycle Bin, the Infrastructure Master stops mattering as a FSMO role and each DC takes on the role of updating themselves in regards to cross-domain object references (see http://msdn.microsoft.com/en-us/library/cc223753(PROT.13).aspx)

Question

When using DFSR and you rename a file, does the whole file get replicated? What about if the same file exists in two different folders: will each one replicate when a user copies files between different folders?

Answer

1. Nope: http://blogs.technet.com/b/askds/archive/2009/04/01/understanding-dfsr-debug-logging-part-9-file-is-renamed-on-windows-server-2003-r2.aspx

2. Not if using at least one server with Enterprise Edition in the replication partnership, so that cross-file similarity can be used:

http://blogs.technet.com/b/askds/archive/2010/08/20/friday-mail-sack-scooter-edition.aspx (see Question “The documentation on DFSR's cross-file RDC is pretty unclear – do I need two Enterprise Edition servers or just one? Also, can you provide a bit more detail on what cross-file RDC does?”)

Proof on this one (as I don’t have an article with debug log example):

Two files in two folders, both identically named, data’ed, secured. They have sequential UID version numbers. Below is the inbound debug log from the server replicating the files (heavily edited for clarity and brevity).

20110308 10:26:38.491 2264 INCO  3282 InConnection::ReceiveUpdates Received: uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe session:8 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csId:{C929D10A-601B-41D8-A620-2D161733473B} csName:badseed <-- the first file starts replicating inbound

20110308 10:26:38.491 2592 MEET  1342 Meet::Install Retries:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed updateType:remote

20110308 10:26:38.491 2592 MEET  4228 Meet::ProcessUid Uid related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  5692 Meet::FindNameRelated Access name conflicting file. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  4647 Meet::GetNameRelated Name related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  3346 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  1992 Meet::Download Start Download updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B} <-- the file download starts

20110308 10:26:38.913 2592 RDCX   769 Rdc::SeedFile::Initialize RDC signatureLevels:1, uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe fileSize(approx):737280 csId:{C929D10A-601B-41D8-A620-2D161733473B} enableSim=1 <-- added the file's signature info to the cross-file RDC similarity table

20110308 10:26:39.131 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloading (refCount==1)

20110308 10:26:39.131 2592 STAG  4107 Staging::OpenForWrite name:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:39.225 2592 INCO  6593 InConnection::LogTransferActivity Received RAWGET uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe connId:{07C54B74-C2FB-4417-8830-3488E368480B} csId:{C929D10A-601B-41D8-A620-2D161733473B} stagedSize:361599 ← file was replicated WITHOUT RDC as we had never seen this file before and had no similar files anywhere

20110308 10:26:39.225 2592 MEET  2163 Meet::Download Done downloading content updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.241 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==1)

20110308 10:26:39.241 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloading (refCount==0) ← done staging file

20110308 10:26:39.241 2592 MEET  2775 Meet::TransferToInstalling Transferring content from staging area into Installing updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  2808 Meet::TransferToInstalling Obtaining fid of the newly installed file updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  2821 Meet::TransferToInstalling Read 733988 bytes, wrote 733988 bytes updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed ← expanded from staging into the Installing folder

20110308 10:26:39.256 2592 MEET  2225 Meet::Download Download Succeeded : true updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:39.256 2592 MEET  4228 Meet::ProcessUid Uid related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  5692 Meet::FindNameRelated Access name conflicting file. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  4647 Meet::GetNameRelated Name related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3346 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3013 Meet::InstallRename Moving contents from Installing to final destination. Attributes:0x20 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3043 Meet::InstallRename File moved. rootVolume:{E6D66386-E6B2-11DF-845F-806E6F6E6963} parentFid:0x2AA00000000E2BD fidInInstalling:0x100000000E2C3 usn:0xb01ec28 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3143 Meet::InstallRename Update database with new contents updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3234 Meet::InstallRename Updating database. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3244 Meet::InstallRename -> DONE Install-rename completed updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B} ← moved the file into the replicated folder, done replicating for all intents and purposes

20110308 10:26:39.256 2592 MEET  1804 Meet::InstallStep Done installing file updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==0)

Now I copy the exact same file into another folder on the upstream server, with same security, attributes, data, and name. Just a different path.

 

20110308 10:26:56.497 2592 RDCX  1311 Rdc::SeedFile::UseSimilar similarrelated (SimMatches=16) uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} (related:

uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}) ← the server recognizes that the new file it was told about has an identical copy already replicated to another folder.

20110308 10:26:56.497 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==1)

20110308 10:26:56.497 2592 RDCX  1510 Rdc::SeedFile::UseRelated "SimilarityRelated" file already staged uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} (related: uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}) ← even better, the file is still staged, so we don’t have to go stage a copy

20110308 10:26:56.497 2592 RDCX  3742 Rdc::FrsSignatureIndexFile::Open Opening FrsSignatureIndexFile OK for write Levels=1..1 uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:56.497 2592 RDCX   467 StreamToIndex RDC generate begin: (0..1), uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:56.513 2592 RDCX   509 StreamToIndex RDC generate end: (0..1), uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:56.513 2592 RDCX  3742 Rdc::FrsSignatureIndexFile::Open Opening FrsSignatureIndexFile OK for read Levels=1..1 uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:56.513 2592 RDCX  2359 Rdc::SeedFile::OpenSeedSigDB Using seed file for uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} seed(type:SimilarityRelated uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe depth=1) ← we then create a new copy of the file using the signature bytes from the old copy. The actual new file is not copied over the wire.

20110308 10:26:56.653 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==0)

← after this it looks just like the first file: it gets expanded into Installing and then moved to the real replicated folder.

Question

Whenever I use LDIFDE or CSVDE to export just users, I also get computers. How do all these other LDAP apps do it? 

image

There should only be 14 users in this test domain but I get 33 entries that include computers.

Answer

There are a number of ways to skin this cat.

Give this LDAP filter a try:

ldifde -f foo.txt -r "(&(!objectclass=computer)(objectclass=user))"

image

See the difference? It is including any objects that have a class of ‘user’ but excluding (with the “!”) any that are also class of ‘computer’. This is necessary because computers are users. :) See the first few lines of one of the computers returned by the original query:

dn: CN=XP-05,CN=Computers,DC=contoso,DC=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: computer
cn: XP-05
distinguishedName: CN=XP-05,CN=Computers,DC=contoso,DC=com
instanceType: 4
whenCreated: 20101201143854.0Z
<snip>

A good alternative from the Comments: (&(objectCategory=person)(objectClass=user))

And another good one: (sAMAccountType=805306368)

(You guys think about this a lot don't you? :P) 
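If you’d rather stay in PowerShell than parse LDIFDE output, here’s a minimal sketch using that last filter straight through System.DirectoryServices (the PageSize setting is there to page past the default 1000-result LDAP limit):

$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(sAMAccountType=805306368)"   # real user accounts only
$searcher.PageSize = 1000                          # keep paging past the 1000-entry limit
$searcher.FindAll() | ForEach-Object { $_.Properties["distinguishedname"] }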

Question

Are DFSR and DPM compatible?

Answer

Yes, as long as your DFSR servers have this KB977381 version (or newer) of DFSR.EXE/DFSRS.EXE installed, they are compatible. The article doesn’t state it, but the filter driver I/O requests that DFSR didn’t understand were DPM’s.

Question

Is it OK to migrate SYSVOL to DFSR before you have all domains in the forest at the Windows Server 2008 domain functional level, or the whole forest at the Windows Server 2008 forest functional level? Do I need to be concerned about site-based policies that might be accessed throughout the forest?

Answer

Per-domain is fine; the individual domains don’t matter to each other at all in regards to SYSVOL migration. Group Policy is completely unaware of the replication type, so site-based policies don’t matter either. The main effect will be that once DFSR is in use, you will hopefully have fewer GP problems caused by replication latency and FRS’ general instability.

Regardless: make sure you are using our latest DFSRS, DFSRMIG and ROBOCOPY hotfixes.

KB972105 All files are conflicted on all domain controllers except the PDC Emulator when a DFSR migration of the SYSVOL share reaches the Redirected state in Windows Server 2008 or in Windows Server 2008 R2 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;972105

KB968429 List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;968429

Netmon Loot

If you use NetMon, make sure you check out all of the sweet experts and parsers that keep coming out of various teams. We don’t advertise these well, but there are some really useful ones these days.

- Ned “Tired” Pyle

Friday Mail Sack: Goat Riding Bambino Edition


Hi folks, Ned here again. I’m trying to get back into the swing of having a mail sack every week but they can be pretty time consuming to write (hey, all this wit comes at a price!) so I am experimenting with making them a little shorter. This week we talk AD PowerShell secrets, USMT and Profile scalability, a little ADUC and DFSR, and some other random awesomeness.

Question

Can you explain how the AD PowerShell cmdlet Get-ADComputer gets IP information? (ex: Get-ADComputer -filter * -Properties IPv4Address). Properties are always AD attributes, but I cannot find an IPv4Address attribute on any computer object, and even after I removed the A records from DNS I still get back the right IP address for each computer.

Answer

That’s an excellent question and you were on the right track. This is what AD PowerShell refers to as an ‘extendedAttribute’ internally, but what a human might call a ‘calculated value’. AD PowerShell special-cases a few useful object properties that don’t exist in AD by using other LDAP attributes that do exist, and then uses that known data to query for the rest. In this case, the dnsHostName attribute is looked up normally, then a DNS request is sent with that entry to get the IP address.

Even if you removed the A record and restarted DNS, you could still be getting the DNS entry from your own local cache. Make sure you flush DNS locally where you are running PowerShell or it will continue to “work”.

To demonstrate, here I run this the first time:

clip_image002

Which queries DNS right after the powershell.exe contacts the DC for the other info (all that buried under SSL here, naturally):

clip_image002[4]

Then I run the identical command again – note that there is no DNS request or response this time as I’m using cached info.

clip_image002[6]

It still tells me the IP address. Now I delete the A record and restart the DNS service, then flush the DNS cache locally where I am running PowerShell, and run the same PowerShell command:

clip_image002[8]

Voila! I have broken it. :)
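By the way, you can approximate the calculated property yourself. This is a sketch of my understanding of the behavior, not the actual cmdlet internals – grab the DNSHostName from AD, then ask DNS for it (the computer name is a placeholder):

$comp = Get-ADComputer "7-01-test"                          # any computer name here
[System.Net.Dns]::GetHostAddresses($comp.DNSHostName) |
    Where-Object { $_.AddressFamily -eq "InterNetwork" }    # keep just the IPv4 answers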

Question

Is there is a limit on the number of profiles that USMT 4.0 can migrate? 3.01 used to have problems with many (20+) profiles, regardless of their size.

Answer

Updated with new facts and fun, Sept 1, 2011

Yes and no. There is no real limit, but depending on the quantity of profiles and their contents, combined with system resources on the destination computer, you can run into issues. If possible you should use hardlink migration, as that is as fast as H… well, it’s really fast.

To demonstrate (and to show erstwhile USMT admins a quick and dirty way to create some stress test profiles):

1. I create 100 test users (steps 1 and 2 are sketched as commands after this walkthrough):

image

image

2. I log them all on and create/load their profiles, using PSEXEC.EXE:

image

image

3. Copy a few different files into each profile. I suggest using a tool that creates random files with random contents. In my case I added a half dozen 10MB files to each profile’s My Documents folder. You can’t use the same files in each profile, as USMT is smart enough to reuse them and you will not get the real user experience.

4. I run the harshest, slowest possible migration I can, where USMT writes to a compressed store on a remote file share, with AES_256 encryption, from an x86 Windows 7 computer with only 768MB of RAM, while cranking all logging to the max:

image

This (amazingly, if you ever used USMT 3.01) takes only 15 minutes and completes without errors. Scanstate memory and CPU usage isn’t very stressful (in one test, I did this with an XP computer that had only 256MB of RAM, using 3DES encryption).

5. I restore them all to another computer – here’s the key: you need plenty of RAM on your destination Windows 7 computer. If you have 100 profiles that all have different contents, our experience shows that 4GB of RAM is required. Otherwise you can run out of OS resources and receive the error “Close programs to prevent information loss. Your computer is low on memory. Save your files and close your programs: USMT: Loadstate”. More on this later.

image

This takes about 30 minutes and there are no issues as long as you have the RAM.

image

6. I bask in the turbulence of my magnificence.
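For the curious, here is roughly what those first two steps looked like as commands. This is a reconstruction of what the screenshots above show, not the exact originals; the domain, computer name, and password are placeholders:

# 1. Create 100 test users in the domain:
1..100 | ForEach-Object { net user "usmtuser$_" "Password1!" /add /domain }

# 2. Log each one on once with PSEXEC so a local profile gets created
#    (PsExec loads the user's profile by default when run with -u/-p):
1..100 | ForEach-Object {
    psexec.exe \\WIN7-TEST -u "CONTOSO\usmtuser$_" -p "Password1!" cmd.exe /c exit
}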

If you do run into memory issues (so far we’ve only seen it with one customer since USMT 4.0 released more than two years ago), you have a few options:

a. Validate your scanstate/loadstate rules – despite what you may think, you might be gathering all profiles and not just fresh ones. Review: http://blogs.technet.com/b/askds/archive/2011/05/05/usmt-and-u-migrating-only-fresh-domain-profiles.aspx. Hopefully that cuts you down to way fewer than 100 per machine. Read that post carefully, as there are some serious gotchas: once you run scanstate on a computer, all profiles are made fresh for any subsequent scanstate runs. The odds that all 100+ profiles are actually active are pretty slim.

b. Get rid of old XP profiles with DELPROF before using USMT at all. This is safer than /uel because, as I mentioned, once you run scanstate that’s it – it has to work perfectly on the first try, as all profiles are now “fresh”. (On Vista+ you instead use http://support.microsoft.com/kb/940017, as I’m sure you remember)

c. Get more RAM.

Question

Is it possible in DSA.MSC to have the Find: Users, Contacts, and Groups default to finding computers or include computers with the user, contacts, and groups? Is there a better way to search for computers?

Answer

The Find tool does not provide for user customization – even starting it over without closing DSA.MSC loses your last setting. ADUC is a cruddy old tool, DSAC.EXE is the (much more flexible) replacement and it will do what you want for remembering settings.

There are a few zillion other ways to find computers also. Not knowing what you are trying to do, I can’t recommend one over the other; but there’s DSQUERY.EXE, CSVDE.EXE, many excellent and free 3rd parties, etc.

Question

If I delete or disable the outbound connection from a writable DFSR replicated folder, I get a warning that the “topology is not fully connected”. Which is good.

image

But if that outbound connection is for a read-only replica, no errors. Is this right?

Answer

It’s an oversight on our part. While technically nothing bad will happen in this case (as read-only servers – of course – do not replicate outbound), you should get this message in all cases (there are also 6020 and 6022 DFSR warning events you can use to track this condition). A read-only replica can be converted to read-write, and you will definitely want an outbound connection for that.
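If you want to catch that condition in monitoring today, here’s a quick sketch that pulls those warnings out of the DFS Replication event log:

# Look for the 6020/6022 warnings mentioned above
Get-WinEvent -FilterHashtable @{ LogName = "DFS Replication"; Id = 6020, 6022 } |
    Select-Object TimeCreated, Id, Message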

We’re looking into this; in the meantime, just don’t do it anywhere. :)

Other Things

Just to make myself feel better: “Little roller up along first. Behind the bag! It gets through Buckner!”

  • If you have parents, siblings, children away at college, nephews, cousins, grandparents, or friends, we have the newest weapon in the war on:
    1. Malware
    2. Your time monopolized as free tech support

Yes, it’s the all new, all web Microsoft Safety Scanner. It even has a gigantic button, so you know it’s gotta be good. Make those noobs mash it and tell you if there are any problems while you go make a sandwich.

  • Finally: thank goodness my wife hasn’t caught this craze yet. She has never met a shoe she didn’t buy.

Have a nice weekend folks.

Ned “86 years between championships? That’s nothing… try 103, you big babies!” Pyle

Friday Mail Sack: Tuesday To You Edition


Hi folks, Ned here again. It’s a long weekend here in the United States, so today I talk to myself about a domain join issue one can only see in Win7/R2 or later, what USMT hard link migrations really do, how to poke LDAP in legacy PowerShell, time zone migration, and an emerging issue for which we need your feedback.

Question

None of our Windows Server 2008 R2 or Windows 7 computers can join the domain – they all show error:

“The following error occurred attempting to join the domain "contoso.com": The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.”

image

Windows Vista, Windows Server 2008, and older operating systems join without issue in the exact same domain while using the same user credentials.

Answer

Not a very actionable error – which service do you mean, Windows!? If you look at the System event log there are no errors or any mention of broken services. Fortunately, all domain join operations are logged in another spot – %systemroot%\debug\netsetup.log. If you crack open that log and look for references to “service” you find:

05/27/2011 16:00:39:403 Calling NetpQueryService to get Netlogon service state.
05/27/2011 16:00:39:403 NetpJoinDomainLocal: NetpQueryService returned: 0x0.
05/27/2011 16:00:39:434 NetpSetLsaPrimaryDomain: for 'CONTOSO' status: 0x0
05/27/2011 16:00:39:434 NetpJoinDomainLocal: status of setting LSA pri. domain: 0x0
05/27/2011 16:00:39:434 NetpManageLocalGroupsForJoin: Adding groups for new domain, removing groups from old domain, if any.
05/27/2011 16:00:39:434 NetpManageLocalGroups: Populating list of account SIDs.
05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: status of modifying groups related to domain 'CONTOSO' to local groups: 0x0
05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: INFO: No old domain groups to process.
05/27/2011 16:00:39:465 NetpJoinDomainLocal: Status of managing local groups: 0x0
05/27/2011 16:00:39:637 NetpJoinDomainLocal: status of setting ComputerNamePhysicalDnsDomain to 'contoso.com': 0x0
05/27/2011 16:00:39:637 NetpJoinDomainLocal: Controlling services and setting service start type.
05/27/2011 16:00:39:637 NetpControlServices: start service 'NETLOGON' failed: 0x422
05/27/2011 16:00:39:637 NetpJoinDomainLocal: initiating a rollback due to earlier errors

Aha – the Netlogon service. Without that service running, you cannot join a domain. What’s 0x422?

c:\>err.exe 0x422

ERROR_SERVICE_DISABLED winerror.h
# The service cannot be started, either because it is
# disabled or because it has no enabled devices associated
# with it.

Nice, that’s our guy. It appears that the service was disabled and the join process is trying to start it. And it almost worked too – if you run services.msc, it will say that Netlogon is set to “Automatic” (and if you look at another machine you have not yet tried to join, it is set to “Disabled” instead of the default “Manual”). The problem here is that the join code is only setting the start state through direct registry edits instead of using Service Control Manager. This is necessary in Win7/R2 because we now always go through the offline domain join code (even when online) and for reasons that I can’t explain without showing you our source code, we can’t talk to SCM while we’re in the boot path or we can have hung startups. So the offline code set the start type correctly and the next boot up would have joined successfully – but since the service is still disabled according to SCM, you cannot start it. It’s one of those “it hurts if I do this” type issues.

And why did the older operating systems work? They don’t support offline domain join and are allowed to talk to the Service Control Manager whenever they like. So they tell him to set the Netlogon service start type, then tell him to start the service – and he does.

The lesson here is that a service set to Manual by default should not be set to disabled without a good reason. It’s not like it’s going to accidentally start in either case, nor will anyone without permissions be able to start it. You are just putting a second lock on the bank vault. It’s already safe enough.
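For completeness, the immediate workaround is simple: set the service back through Service Control Manager before retrying the join (run this from an elevated prompt; note that the space after start= is required by sc.exe):

sc.exe config netlogon start= demand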

Question

USMT is always going on about hard link migrations. I’ve used them and those migrations are fast… but what the heck is it and why do I care?

Answer

A hard link is simply a way for NTFS to point to the same file from multiple spots, always on the same volume. It has nothing to do with USMT (who is just a customer). Instead of making many copies of a file, you are making copies of how you get to the file. The file itself only exists once. Any changes to the file through one path or another are always reflected on the same physical file on the disk. This means that when USMT is storing a hard link “copy” of a file it is just telling NTFS to make another pointer to the same file data and is not copying anything – which makes it wicked fast.

Let’s say I have a file like so:

c:\hithere\bwaamp.txt

If I open it up I see:

image

Really though, it’s NTFS pointing to some file data with some metadata that tells you the name and path. Now I will use FSUTIL.EXE to create a hard link:

C:\>fsutil.exe hardlink create c:\someotherplace\bwaamp.txt c:\hithere\bwaamp.txt
Hardlink created for c:\someotherplace\bwaamp.txt <<===>> c:\hithere\bwaamp.txt

I can use that other path to open the same data (it helps if you don’t think of these as files):

image

I can even create a hard link where the file name is not the same (remember – we’re pointing to file data and giving the user some friendly metadata):

C:\>fsutil.exe hardlink create c:\yayntfs\sneaky!.txt c:\hithere\bwaamp.txt
Hardlink created for c:\yayntfs\sneaky!.txt <<===>> c:\hithere\bwaamp.txt

And it still goes to the same spot.

image

What if I edit this new “sneaky!.txt” file and then open the original “bwaamp.txt”?

image

Perhaps a terrible Visio diagram will help:

hardlink

When you delete one of these representations of the file, you are actually deleting the hard link. When the last one is deleted, you are deleting the actual file data.

It’s magic, smoke and mirrors, hoodoo. If you want a more disk-oriented (aka: yaaaaaaawwwwnnn) explanation, check out this article. Rob and Joseph have never met a File Record Segment Header they didn’t like. I bet they are a real hit at parties…
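If you want to prove the shared-data behavior to yourself from PowerShell, here’s a tiny sketch using the paths created above:

# Write through one hard link...
Add-Content c:\yayntfs\sneaky!.txt "written through the sneaky path"
# ...and the new line shows up through the other, because both names
# point at the same file data.
Get-Content c:\hithere\bwaamp.txt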

Question

How can I use PowerShell to detect if a specific DC is reachable via LDAP? Don’t say AD PowerShell; this environment doesn’t have Windows 7 or 2008 R2 yet! :-)

Answer

One way is to go straight to .NET and use the System.DirectoryServices namespace:

New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=yourdomaindn")

For example:

image
Yay!

image
Boo!

Returning anything but success is a problem you can then evaluate.
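If you want something a little more script-friendly, here’s a sketch that wraps the bind in try/catch so you get a $true/$false answer (this assumes PowerShell 2.0 for try/catch; the server name and DN in the usage line are placeholders):

function Test-LdapBind {
    param([string]$Server, [string]$Dn)
    $entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://${Server}:389/$Dn")
    try {
        $entry.RefreshCache()   # forces the actual LDAP bind to happen
        $true
    }
    catch { $false }
}
Test-LdapBind -Server "yourdc" -Dn "dc=yourdomain,dc=com"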

As always, I welcome more in the Comments. I suspect people have a variety of techniques (third parties, WMI LDAP provider, and so on).

Question

Is USMT supposed to migrate the current time zone selection?

Answer

Nope. Whenever you use timedate.cpl, you are updating this registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation

Windows XP has very different data in that key when compared to Vista and Windows 7:

Windows XP

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="Eastern Daylight Time"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"StandardBias"=dword:00000000
"StandardName"="Eastern Standard Time"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00

Windows 7

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="@tzres.dll,-111"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"DynamicDaylightTimeDisabled"=dword:00000000
"StandardBias"=dword:00000000
"StandardName"="@tzres.dll,-112"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00

The developers from the Time team simply didn’t want USMT to assume anything as they knew there were significant version differences; to do so would have taken an expensive USMT plugin DLL for a task that would likely be redundant to most customer imaging techniques. There are manifests (such as "INTERNATIONAL-TIMEZONES-DL.MAN") that migrate any additional custom time zones to the up-level computers, but again, this does not include the currently specified time zone. Not even when migrating from Win7 to Win7.

But that doesn’t mean that you are out of luck. Come on, this is me! :-)

To migrate the current zone setting from XP to any OS you have the following options:

To migrate the current zone setting from Vista to Vista, Vista to 7, or 7 to 7, you have the following options:

  • Any of the three mentioned above for XP
  • Use this sample USMT custom XML (making sure that nothing else has changed between this blog post and your reading of it). Woo, with fancy OS detection code!

<?xml version="1.0" encoding="utf-8" ?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/currenttimezonesample">
  <component type="Application" context="System">
    <displayName>Copy the currently selected timezone as long as Vista or later OS</displayName>
    <role role="Settings">
      <!-- Check as this is only valid for up-level OS >= Windows Vista -->
      <detects>
        <detect>
          <condition>MigXmlHelper.IsOSLaterThan("NT", "6.0.0.0")</condition>
        </detect>
      </detects>
      <rules>
        <include>
          <objectSet>
            <pattern type="Registry">HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>
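And for Windows 7-to-Windows 7 moves specifically, there’s a lighter option outside USMT entirely – TZUTIL.EXE ships in the box on Windows 7 and Windows Server 2008 R2, so a sketch like this works:

$zone = tzutil.exe /g      # on the old computer: returns e.g. "Eastern Standard Time"
tzutil.exe /s "$zone"      # on the new computer: applies that zone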

Question for our readers

We’ve had a number of cases come in this week with the logon failure:

Logon Process Initialization Error
Interactive logon process initialization has failed.
Please consult the Event Logs for more details.

You may also find an application event if you connect remotely to the computer (interactive logon is impossible at this point):

ID: 4005
Source: Microsoft-Windows-Winlogon
Version: 6.0
Message: The Windows logon process has unexpectedly terminated.

In the cases we’ve seen this week, the problem appeared after restoring a backup made with a specific third-party backup product. The backup was restored to either Hyper-V or VMware guests (but this may be coincidental). After the restore, large portions of the registry were missing and most of our recovery tools (SFC, Recovery Console, diskpart, etc.) would not function. If you have seen this, please email us with the backup product and version you are using. We need to contact this vendor and get this fixed, and your evidence will help. I can’t mention the suspected company name here yet, as if we’re wrong I’d be creating a legal firestorm, but if all the private emails name the same company we’ll have enough justification for them to examine this problem and fix it.

------------

Have a safe weekend, and take a moment to think of what Memorial Day really means besides grilling, racing, and a day off.

Ned “I bet SGrinker has the bratwurst hookup” Pyle
