Ask the Directory Services Team

ADWS has been released for Windows Server 2008 and Windows Server 2003


Ned here. The beta is over, and the new AD Web Service introduced in Windows Server 2008 R2 has been released to the world for downlevel OS's. ADWS allows AD PowerShell to connect to domain controllers and do... work. It also allows the new AD Administrative Center - which is a kissing cousin of the AD Users and Computers snap-in - to manage AD objects. If you have only Windows 7 clients with RSAT, or a mix of Win2003, Win2008, and Win2008 R2 DC's, this download is for you:

Download Active Directory Management Gateway Service (Active Directory Web Service for Windows Server 2003 and Windows Server 2008)

For more info on ADAC, take a look here.

I'll talk more about ADAC and ADWS in the coming weeks, but I figured you'd want this sucker sooner than later.

- Ned "I'm an AD" Pyle


Inventorying Computers with AD PowerShell


Hi, Ned here again. Have you ever had to figure out what operating systems are running in your domain environment so that you can plan for upgrades, service pack updates, or support lifecycle transitions? Did you know that you don’t have to connect to any of the computers to find out? It’s easier than you might think, and all possible once you start using AD PowerShell in Windows Server 2008 R2 or Windows 7 with RSAT.

Get-ADComputer

The cmdlet of choice for inventorying computers through AD is Get-ADComputer. This command automatically searches for computer objects throughout a domain, returning all sorts of info.

As I have written about previously, my first step is to fire up PowerShell and import the ActiveDirectory module:

image

Then if I want to see all the details about using this cmdlet, I run:

Get-Help Get-ADComputer -Full

Getting OS information

Basics

Now I want to pull some data from my domain. I start by running the following:

Important note: in all my samples below, the lines are wrapped for readability.

Another important note (thanks dloder): I am going for simplicity and introduction here, so the -Filter and -Property switches are not tuned for efficiency. As you get comfortable with AD PowerShell, I highly recommend that you start tuning your queries to return less data - the "filter left, format right" model described here by Don Jones.

Get-ADComputer -Filter * -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion -Wrap -Auto

image

This command is filtering all computers for all their properties. It then feeds the data (using that pipe symbol) into a formatted table. The only attributes that the table contains are the computer name, operating system description, service pack, and OS version. It also automatically sizes and wraps the data. When run, I see:

image

It looks like I have some work to do here – one Windows Server 2003 computer needs Service Pack 2 installed ASAP. And I still have a Windows 2000 server out there; I’m going to have to move quickly to replace that one.

Server Filtering

Now I start breaking down the results with filters. I run:

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

I have changed my filter to find all the computers that are running “Windows Server something”, using the –like filter. And I stopped displaying the OS version data because it was not providing me anything unique (yet!).

image

Cool, now only servers are listed! But wait… where’d my Windows 2000 server go? Ahhhh… sneaky. We didn’t start calling OS’s “Windows Server” until 2003. Before that it was “Windows 2000 Server”. I need to massage my filter a bit:

Get-ADComputer -Filter {OperatingSystem -Like "Windows *Server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

See the difference? I just added an extra asterisk to surround “Server”.

image

As you can see, my environment has a variety of Windows server versions running. I’m interested in the ones that are running Windows Server 2008 or Windows Server 2008 R2. And once I have that, I might just want to see the R2 servers – I have an upcoming DFSR clustering project that requires some R2 computers. I run these two sets of commands:

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*2008*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

Get-ADComputer -Filter {OperatingSystem -Like "Windows Server*r2*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

image

image

Starting to make sense? Repetition is key; hopefully you are following along with your own servers.

Workstation Filtering

Okeydokey, I think I’ve got all I need to know about servers – now what about all those workstations? I will simply switch from -Like to -Notlike with my previous server query:

Get-ADComputer -Filter {OperatingSystem -NotLike "*server*"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemServicePack -Wrap -Auto

And blammo:

image

Family filtering

By now these filters should be making more sense and PowerShell is looking less scary. Let’s say I want to filter by the “family” of operating system. This can be useful when trying to identify computers that gained a special capability in one OS release and all subsequent releases, where I don’t care whether it is a server or a workstation. An example of that would be BitLocker – it only works on Windows Vista, Windows Server 2008, and later. I run:

Get-ADComputer -Filter {OperatingSystemVersion -ge "6"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemVersion -Wrap -Auto

See the change? I am now filtering on operating system version, to be equal to or greater than 6. This means that any computers that have a kernel version of 6 (Vista and 2008) or higher will be returned:

image

If I just wanted my Windows Server 2008 R2 and Windows 7 family of computers, I can change my filter slightly:

Get-ADComputer -Filter {OperatingSystemVersion -ge "6.1"} -Property * | Format-Table Name,OperatingSystem,OperatingSystemVersion -Wrap -Auto

image

Getting it all into a file

So what we’ve done ‘til now was just use PowerShell to send goo out to the screen and stare. In all but the smallest domains, though, this will soon get unreadable. I need a way to send all this out to a text file for easier sorting, filtering, and analysis.

This is where Export-CSV comes in. With the chaining of an additional pipeline I can find all the computers, select the attributes I find valuable for them, then send them into a comma-separated text file that is even able to read the weirdo UTF-8 trademark characters that lawyers sometimes make us put in AD.

Hey, what do you call a million lawyers at the bottom of the ocean? A good start! Why don’t sharks eat lawyers? Professional courtesy! What do you have when a lawyer is buried up to his neck in sand? Not enough sand! Haw haw… anyway:

Get-ADComputer -Filter * -Property * | Select-Object Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion | Export-CSV AllWindows.csv -NoTypeInformation -Encoding UTF8

image

Then I just crack open the AllWindows.CSV file in Excel and:

image

What about the whole forest?

You may be tempted to take some of the commands above and tack on the necessary arguments to search the entire forest. This means adding:

-searchbase "" -server <domain FQDN>:3268

That way you wouldn’t have to connect to a DC in every domain for the info – instead you’d just ask a single GC. Unfortunately, this won’t work; none of the operating system attributes are replicated by global catalog servers. Oh well, that’s not PowerShell’s fault. All the data must be pulled from domains individually, but that can be automated – I leave that to you as a learning exercise.
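
To give you a head start on that exercise, here’s a minimal sketch that loops through every domain in the forest (it assumes the ActiveDirectory module and an account that can read each domain):

# A minimal sketch: write one OS-inventory CSV per domain in the forest.
Import-Module ActiveDirectory
foreach ($domain in (Get-ADForest).Domains) {
    Get-ADComputer -Filter * -Server $domain -Property OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion |
        Select-Object Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion |
        Export-CSV "$domain-Windows.csv" -NoTypeInformation -Encoding UTF8
}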

Conclusion

The point I made above about support lifecycle is no joke: 2010 is a very important year for a lot of Windows products’ support.

Hopefully these simple PowerShell commands make hunting down computers a bit easier for you.

Until next time.

- Ned “bird dog” Pyle

Friday Mail Sack – Big Picture Edition


Hi folks, Ned here again. Here are this week’s sample of interesting questions sent to AskDS.

Question

Is there a way to see information about the available RID pool for a domain?

Answer

Yes, with the attribute: RidAvailablePool

DN path: CN=RID Manager$,CN=System,DC=<domain>,DC=com

Global RID space for an entire domain is defined in Ridmgr.h as a large integer with upper and lower parts. The upper part defines the number of security principals that can be allocated per domain (0x3FFFFFFF, or just over 1 billion). The lower part is the number of RIDs that have been allocated in the domain. To view both parts, use the Large Integer Converter command in the Utilities menu in Ldp.exe, or the PowerShell sketch after the sample values below.

• Sample Value: 4611686014132422708 (insert into the Large Integer Converter in the Utilities menu of Ldp.exe) 
• Low Part: 2100 (beginning of the next RID pool to be allocated) 
• High Part: 1073741823 (total number of RIDs that can be created in a domain)
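
If you’d rather skip Ldp.exe, a minimal PowerShell sketch (assuming the Win2008 R2 / Windows 7 RSAT ActiveDirectory module) can do the same math:

# A minimal sketch: read ridAvailablePool and split it into its two parts.
Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
$pool = (Get-ADObject "CN=RID Manager`$,CN=System,$domainDN" -Property ridAvailablePool).ridAvailablePool
$low = $pool % 4294967296            # low part (mod 2^32): beginning of the next RID pool
$high = ($pool - $low) / 4294967296  # high part: total RIDs that can be created
"Next RID pool starts at: $low"
"Maximum RIDs for domain: $high"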
 

This is all (buried) in:

305475  Description of RID Attributes in Active Directory
http://support.microsoft.com/default.aspx?scid=kb;EN-US;305475

Update: and see comments - Rick has a slick alternative.

Question

I have an NT 4.0 and Exchange 5.5 environment… <other stuff>

Answer

We’ve got nothing for you, as those operating systems and applications have not been supported for years - the same way Ford will respond if you call and ask about getting warranty work on your '96 Taurus. A handful of Premier contract customers pay a significant premium every year for a “Custom Support Agreement” to maintain support on deceased products. If you’re interested in CSA’s (and if you are running Windows 2000 and getting worried that July 13th is approaching fast), contact your TAM.

Otherwise, whatever you can dig up from our KB or the Internet is your best bet. Your best chance to get an NT 4.0 question answered from us is “I am trying to migrate to a later OS and…”

Question

I am setting up DFSR and I’ve been told the following are best practices:

  • Increase the RF staging quota to be at least as large as the 9 largest files on Windows Server 2003 R2 sets.
  • Increase the RF staging quota to be at least as large as the 32 largest files on Windows Server 2008 or Windows Server 2008 R2 READ-WRITE sets.
  • Increase the RF staging quota to be at least as large as the 16 largest files on Windows Server 2008 R2 READ-ONLY sets.

Is there any easy way to find the N largest files with PowerShell? DIR really blows and the Windows Search GUI is taking forever since I don’t index files.

Answer

Try this on for size (ha!):

Get-ChildItem d:\scratch -recurse | Sort-Object length -descending | Select-Object -first 32 | ft directory,name,length -wrap -auto

The highlighted portions are what you need to change. The first one is the path and the second is how many items you want to list as the “biggest”.

image

Question

I hear that you’re a big Chicago Cubs fan, Ned. Is it true that they have not won the championship in over 100 years?

Answer

I hate you.

 

Have a great weekend folks,

Ned “the short picture” Pyle

Friday Mail Sack – While the Ned’s Away Edition


Hello Internet! Last week, Ned said there wouldn’t be a Mail Sack this week because he was going to be out of town. Well, the DS team was sitting around during our “Ned is out of our hair for a few days” party and we decided that since this is a Team Blog after all, we’d go ahead and post a Friday Mail Sack. So even though the volume was a little light this week, perhaps due to Ned’s announcement, we put one together all by ourselves.

So without further ado, here is this week’s Ned-less Mail Sack.

Certificate Template Supersedence

Q: I’m using the Certificate Wizard in OCS to generate a certificate request and submit it to my Enterprise CA. My CA isn’t configured to issue certificates based on the Web Server template, but I have duplicated the Web Server template and modified the settings. My new template is configured to supersede the Web Server template.

The request fails. Why doesn’t the CA issue the certificate based on my new template if it supersedes the default Web Server template?

A: While that would be a really cool feature, that’s not how Supersedence works. Supersedence is used when you want to replace certificates that have already been issued with a new certificate with modified settings. In addition, it only works with certificates that are being managed by Windows Autoenrollment.

For example, the Administrator has enabled Autoenrollment in the Computer Configuration of the Default Domain Policy:

image

Further, the Administrator has granted the Domain Computers group permission to Autoenroll for the Corporate Computer template. Appropriately, every Windows workstation and member server in the domain enrolls for a certificate based on this template.

Later, the Administrator decides that she needs to update the template in some fashion – add a new certificate purpose to the Enhanced Key Usage, change a key option, whatever. Our intrepid Admin duplicates her Corporate Computer template and creates a new Better Corporate Computer template. In the properties of this new template, she adds the now obsolete Corporate Computer template to the Superseded Templates list.

image

The Admin clicks Ok to commit the changes and then sits back and waits for all of the workstations and member servers in the domain to update their certificate. So how does that work, exactly?

On each workstation and member server, the Autoenrollment service wakes up about every 8 hours and checks to see if it has any work to do. As this occurs on each Windows computer, Autoenrollment determines it is enabled by policy and so checks Active Directory for a list of templates. It discovers that there is a new template for which this computer has Autoenrollment permissions. Further, this new template is configured to supersede the template on which a certificate it already holds is based.

The Autoenrollment service then archives the current certificate and enrolls for a new certificate based on the superseding template.

In summary, supersedence doesn’t change the behavior of the CA at all, so you can’t use it to control how the CA will respond when it receives a request for a certain template. No, supersedence is merely a hint to tell Autoenrollment on the client that it needs to replace an existing certificate.

Active Directory Web Services

Q: I’m seeing the following warning event recorded in the Active Directory Web Services event log about once a minute.

Log Name:      Active Directory Web Services
Source:        ADWS
Date:          4/8/2010 3:13:53 PM
Event ID:      1209
Task Category: ADWS Instance Events
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      corp-adlds-01.corp.contoso.com
Description:
Active Directory Web Services encountered an error while reading the settings for the specified Active Directory Lightweight Directory Services instance.  Active Directory Web Services will retry this operation periodically.  In the mean time, this instance will be ignored.
Instance name: ADAM_ContosoAddressbook

I can’t find any Microsoft resources to explain why this event occurs, or what it means.

A: Well…we couldn’t find any documentation either, but we were curious ourselves so we dug into the problem. It turns out that event is only recorded if ADWS can’t read the ports that AD LDS is configured to use for LDAP and Secure LDAP (SSL). In our test environment, we deleted those values and restarted the ADWS service, and sure enough, those pesky warning events started getting logged.

The following registry values are read by ADWS:

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port LDAP
Type: REG_DWORD
Data: 1 - 65535 (default: 389)

Key: HKLM\SYSTEM\CurrentControlSet\Services\<ADAM_INSTANCE_NAME>\Parameters
Value: Port SSL
Type: REG_DWORD
Data: 1 - 65535 (default: 636)

Verify that the registry values described above exist and have the appropriate values. Also verify that the NT AUTHORITY\SYSTEM account has permission to read the values. ADWS runs under the Local System account.
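
If you want a quick way to eyeball those values, a sketch like this works (ADAM_ContosoAddressbook is just the instance name from the sample event above):

# A quick sketch: read the LDAP and SSL ports for one AD LDS instance.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\ADAM_ContosoAddressbook\Parameters'
Get-ItemProperty -Path $key | Select-Object 'Port LDAP','Port SSL'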

Once you've corrected the problem, restart the ADWS service. If you have to recreate the registry values because they've been deleted, restart the AD LDS instance before restarting the ADWS service.

Thanks for sending us this question. We’ve created the necessary internal documentation, and if we see more issues like this we’ll promote it to the Knowledge Base.

Final Note

Well…that’s it for this week. Please keep posting your comments, observations, topic ideas and questions. And fear not, Ned will be back next week.

Jonathan “The Pretender” Stephens

Friday Mail Sack – Tweener Clipart Comics Edition


Hey folks, Ned here again. For those keeping score, you’ve probably noticed the full-on original article content has been a bit thin in the past few weeks. We have some stuff in the draft pipeline so hang in there. In the meantime, here’s a week’s worth of… stuff.

I like to move it, move it.

Question

I am confused on what DFS features are different between Standard Edition and Enterprise Edition versions of Windows Server. This includes DFSN and DFSR.

Answer

There are only two* differences:

DFS Replication – Enterprise edition gives you the ability to use cross-file RDC. Cross-file RDC is a way to replicate files by using a heuristic to find similar data in existing files on a downstream server, then using that data to construct the file locally without requesting the whole new file over the network from an upstream partner.

http://technet.microsoft.com/en-us/library/cc773238(WS.10).aspx#BKMK_cross_fileRDC_editions

DFS Namespace – A Standard Edition server can host only one root standalone namespace. It can, however, host multiple domain-based namespaces if running Win2003 SP2 or later. Nice bullet points here.

* There was a third difference prior to Windows Server 2003 SP2 and in Windows 2000 SP4 – those Standard Edition servers can only run one DFS root namespace, no matter if domain-based or standalone. Since 2000 is nearly dead and you are not supported running Win2003 non-SP2, don’t worry about it further.

Question

Can I use the miguser.xml and migapp.xml from USMT 3.01 to migrate data using USMT 4.0?

Answer

Yes, but with plenty of caveats. You would not have any errors or anything; the schema and migxml library are compatible. But you are going to miss out on plenty of new features:

  • New applications that were added will not migrate
  • New types of helper functions will not work
  • Updated migration features will not work
  • If you use an old config.xml it will be missing settings.

Plus if you are using miguser.xml, you are not using the new migdocs.xml, which is vastly improved in most scenarios for what it gathers and for performance. It’s a much better idea to use the new XML files and simply recreate any customizations that you had made in 3.01 – if you still need them, that is. A lot of 3.01 customizations may be duplicated effort in 4.0.

You can steer a car with your feet, but that doesn’t make it a good idea.

Question

Are there any free tools out there for reporting on AD? Stuff like number of objects, installed OS’s, functional levels, disabled user accounts, locked out users, domains, trusts, groups, etc. The gestalt of AD, basically.

Answer

You can pay for these sorts of tools, of course (rhymes with zest!). If you dig around the intarwebs you will also find some free options. You could of course script any of this you want with AD PowerShell– that’s why we wrote it. One fellow on my team recommends this nice free UNSUPPORTED project that lives on CodePlex called “Active Directory reporting”. It’s a way to use SQL Reporting Server to analyze AD. Feel free to pipe up in the comments with others you like.

Question

Does USMT migrate file information like security & attributes? The “metadata” aspects of NTFS.

Answer

USMT preserves the security (DACL/SACL) as well as the file attributes like hidden, read-only, the create date, etc. So if you have done this:

clip_image001 clip_image001[4]

It will end up migrating the same:

clip_image001[6] clip_image001[8]

Note that if you are using the /NOCOMPRESS option to a non-hard-link store, these permissions and attributes will not be set on that copy of the file. That extra data is stored in the migration catalog. So don’t use the data in an uncompressed store to see if this is working, it is not accurate. When restored, everything will get fixed up by USMT based on the catalog.

Don’t confuse all this with EFS though – that requires use of the /EFS switch to handle.

Question

When I deploy new AD forests, should I continue to use an empty root domain?

Answer

We stopped arbitrarily recommending empty forest roots a while back – but instead of saying so, we just stopped talking about them. Documentation through omission! But if you read between the lines you’ll see that we don’t think they are a great idea anymore. Brian Puhl, the world’s oldest AD admin, wishes they had never deployed an empty root in 1999. Mark Parris and Instan both provide a good comprehensive list of reasons not to use an empty root.

For me, the biggest reason is that it’s a lot more complex without providing a lot more value. Fine-Grained Password Policy takes care of differing security needs since Win2008. The domain does not provide enough admin separation to be considered a full security barricade, but is merely a boundary of functionality – meaning you are now maintaining multiple copies of group policy, multiple SYSVOLs, etc. All with more fragility. Better to have a single domain and arrange your business via OU’s, if possible.

PS: I mean that Brian runs the world’s oldest AD, not that he is old. Well, not that old.

Question

Is there a command-line way to create DFS links (i.e. “folders”)? I need to make a few hundred.

Answer

In 2008/2008R2 & Vista/7 RSAT:

dfsutil.exe link add

In 2003/XP Support Tools:

dfscmd.exe /map
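
Since you need to make a few hundred, you could also feed dfsutil.exe from a CSV. A minimal sketch (links.csv and its LinkPath/Target columns are hypothetical names for this example):

# A minimal sketch: bulk-create DFS links from a CSV with columns
# LinkPath (e.g. \\contoso.com\public\folder1) and Target (e.g. \\server1\share1).
Import-Csv .\links.csv | ForEach-Object { dfsutil.exe link add $_.LinkPath $_.Target }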

=====

Finally – the clock is ticking down on Windows 2000 end of life – now just 7 weeks to go. If you have not begun planning your upgrade, migration, or removal of Windows 2000 in your environment, you are officially behind the eight ball. Soon you will be running an OS that does not get security updates. Then it will be immediately owned by some new malware that your AV vendor fails to catch.

Then your boss will be all like

image

and you will be all like

image

and your users will be all like

image

and your week will be all like

image

and your company’s bottom line will be all like

image

and you don’t want that. So get to our Windows 2000 portal and make your move to a supported operating system before it’s too late: Windows 2000 End-of-Support Solution Center. Also, Windows Server 2003 enters extended support the same day, so don’t bother asking for bug fixes after that. Get on Win2008/R2 and we’ll be all ears…

Until next time,

- Ned  “like”  Pyle

Friday Mail Sack – It’s About To Get Real Edition


Hello Terra, it’s Ned here again. Before I get rolling, a big announcement:

On May 16th all the MSDN and TechNet blogs are being migrated to a new platform. This will get us back in line with modern blogging software, and include new features, better search, more user customization, and generally remove a lot of suck. Because AskDS is a very popular blog – thanks to you – we rated extra sandbox testing and migration support, and we believe things are going to go smoothly. The migration will be running for a week (although many sites will be done before then) and during this time commenting will be turned off; just email us through our contact form if you need to chat. You can read more about the new features and track progress on the migration here.

On to this week’s most interesting questions.

Question

What happened to the GPMC scripts in Windows 7 and Win2008 R2?

Answer

Those went buh-bye when Vista came out. They can be downloaded from here if you like and I’ll wager they’ll work fine on 7, but the future of scripting GP is in PowerShell. Recommended reading:

Question

KB832017 (Services Overview and Network Port Requirements...) lists port 5722/TCP as being used for DFSR -- but only on Server 2008 or Server 2008 R2 DCs.  What exactly happens over 5722/TCP?  KB832017 is practically the only time I've ever seen that port mentioned.

Answer

There’s no special reasoning here, it’s a bug. :-) In a simple check to determine if a computer was a member client or member server, we forgot that it might also be a domain controller. So the code ends up specifying a port that was supposed to be reserved for some client code. Amazingly, no Premier contract customer has ever opened a DCR with us asking to have it fixed. I keep waiting…

Nothing else weird happens here, and it will look just like normal DFSR RPC communication in all other respects – because it is normal. :)

5722portcapturemedpyle

You can still change the port with DFSRDIAG STATICRPC <options> if you need to traverse a firewall or something. You are not stuck with this.
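
For example, something like this (the port and member name are illustrative):

dfsrdiag.exe staticrpc /port:12345 /member:dc01.contoso.com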

Question

I am missing tabs in Active Directory Users and Computers (DSA.MSC) when using the Windows 7 RSAT tools. I found some of your old Vista content about this, but you later said most of this has been fixed. Whiskey Tango Hotel?

Answer

As is often the case with RSAT (a tool designed by committee due to all the various development groups, servicing rules, and other necessities of this suite), there are a series of steps here to make this work. I’ll go through this systematically:

After installing RSAT on a domain-joined Windows 7 client, you add the Role Administration Tools for "AD DS Snap-ins and Command-line Tools":

nedpylersatremotefeature3

You then start DSA.MSC and examine the properties of a user. You notice that some or all of the following tabs are missing:

Published Certificates
Password Replication
Object
Security
Attribute Editor
Environment
Sessions
Remote Control
Remote Desktop Services Profile
Personal Virtual Desktop
UNIX Attributes
Dial-in

1. Enable "Advanced Features" via the View menu. This will show at least the following new tabs:

Published Certificates
Password Replication
Object
Security
Attribute Editor

image

2. If still not seeing tabs:

Environment
Sessions
Remote Control
Personal Virtual Desktop
Remote Desktop Services Profile

Add the following RSAT feature: "Remote Desktop Services Tools". Then restart DSA.MSC and if Advanced View is on, these tabs will appear.

 nedpylersatremotefeature

3. If still not seeing tab:

UNIX Attributes

Add the following RSAT feature: "Server for NIS Tools". Then restart DSA.MSC and if Advanced View is on, this tab will appear.

nedpylersatremotefeature2

4. The "Dial-In" tab will always be missing, as its libraries are not included in RSAT due to a design decision by the networking Product Group. If you need this one added, open a Premier contract support case and file a DCR. We’ve had a number of customers complain about this, but none of them bothered to actually file a design change request so my sympathy wanes. Until they do, there is no possibility of this being changed.

Question

What tools will synchronize passwords from AD to ADAM or ADLDS?

Answer

MIIS/IIFP (now Forefront Identity Manager 2010) can do that. We don't have any in-box tools or options for this. [Thanks to our resident ADAM expert Jody Lockridge for this answer. He’s forgotten more about ADAM than I’ll ever know - Ned]

Question

I am trying to script changing user home folders to match the users’ logon ID’s. I’ve tried this:

dsquery.exe user OU=AD_ABC,DC=domain,DC=local | dsmod.exe user -hmdir \\servername\%username%

But this only places the currently logged on username in all users profile. How can I make this work?

Answer

DSMOD.EXE includes a special token you can use called $username$. It automatically uses the SAM account name passed in from DSQUERY commands and works with the -hmdir, -email, -webpg, and -profile arguments.
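
In other words, take your original command and swap %username% for the token, something like:

dsquery.exe user OU=AD_ABC,DC=domain,DC=local | dsmod.exe user -hmdir \\servername\$username$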

So if I do this to locate all my users and update their home directory:

clip_image002

I get this:

clip_image002[5]

Question

When will the Windows Server 2008 Resource Kit utilities and tools be released?

Answer

Never. If it didn’t happen 3 years ago, it’s not going to happen now. The books do include helpful scripts and such, but the days of providing unsupported out of band reskit binaries are behind us - and it’s for the best. If you want to buy the 2008 books, here’s the place:

2008 Resource Kit -  http://www.microsoft.com/learning/en/us/book.aspx?ID=10345&locale=en-us
2008 GP Resource Kit - http://www.microsoft.com/learning/en/us/book.aspx?ID=9556&locale=en-us

Question

Something something something Auditing something something something.

Answer

While I find Windows security auditing quite interesting and periodically write about it, if you want ready answers to every common audit question you need to visit Eric Fitzgerald’s blog "Windows Security Logging and Other Esoterica”. Eric was once the PM of Windows Security auditing and helped design the new audit system in Vista/2008, then he moved on to helping design the Audit Collection Service, and gosh knows what he does now – he’d probably have to kill me after he told me. A million years ago, Eric was also a Support Engineer in my organization, so he knows your pain better than most Windows developers. Many questions I get asked about auditing have already been answered on his blog, so give it a look before searching the rest of the Internet. Eric is also a funny, decent guy and a good writer – pick any blog post and you will learn something. I wish he wrote more often.

 

Finally, we had a nice visit this week from Tim Springston – yes, that Tim Springston. Tim’s been working on a new system designed to make it easier for you to open support cases and have them route correctly, so he bored us to tears demo’ing all that to us. Make sure you stop by his blog and check it out.

Until next time.

Ned “fingers crossed on the blog migration” Pyle

Friday Mail Sack: Shut Up Laura Edition


Hello again folks, Ned here for another grab bag of questions we’ve gotten this week. This late posting thing is turning into a bad habit, but I’ve been an epileptic octopus here this week with all the stuff going on. Too many DFSR questions though, you guys need to ask other stuff!

Let’s crank.

Question

Is it possible to setup a DFSR topology between branch servers and hub servers, where the branches are an affiliate company that are not a member of our AD forest?

Answer

Nope, the boundary of DFSR replication is the AD forest. Computers in another forest or in a workgroup cannot participate. They can be members of different domains in the same forest. In that scenario, you might explore scripting something like:

robocopy.exe /mot /mir <etc>
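
Fleshed out a bit, that might look like the line below (the paths and values are illustrative; /MOT takes the number of minutes between re-scans of the source):

robocopy.exe \\hubserver\data \\branchserver\data /mir /mot:5 /r:3 /w:5 /log:c:\robo.log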

Question

I was examining KB 822158 – with the elegant title of “Virus scanning recommendations for Enterprise computers that are running currently supported versions of Windows” - and wanted to make sure these recommendations are correct for potential anti-virus exclusions in DFSR.

Answer

They better be, I wrote the DFSR section! :-)

Question

Is there any way to tell that a user’s password was reset, either by the user or by an admin, when running Win2008 domains?

Answer

Yes – once you have rolled out Win2008 or R2 AD and have access to granular auditing, this becomes two easy events to track once you enable the subcategory User Account Management:

ID      Message
4723    An attempt was made to change an account's password.
4724    An attempt was made to reset an account's password.
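
You can flip that subcategory on from an elevated prompt with auditpol.exe (or through Advanced Audit Policy in Group Policy):

auditpol.exe /set /subcategory:"User Account Management" /success:enable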

 

Once that is turned on, the 4724 event tells you who changed whose password:

clip_image002

And if you care, the 4738 confirms that it did change:

image 

If a user changes their own password, you get the same events but the Subject Security ID and Account Name change to that user.
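
If you’d rather not click through Event Viewer, a quick Get-WinEvent sketch pulls the latest of these remotely (DC01 is a placeholder name; this assumes remote event log access):

# A quick sketch: the ten most recent password resets from a DC's Security log.
Get-WinEvent -ComputerName DC01 -FilterHashtable @{LogName='Security'; Id=4724} -MaxEvents 10 |
    Format-List TimeCreated, Message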

Question

Any recommendations (especially books) around how to program for the AD Web Service/AD Management Gateway service?

Answer

Things are a little thin here so far for specifics, but if you examine the ADWS Protocol specification and start boning up on the Windows Communication Foundation you will get rolling.

Windows Communication Foundation
http://msdn.microsoft.com/en-us/library/dd456779(v=VS.100).aspx

WCF Books - http://www.amazon.com/s/ref=pd_lpo_k2_dp_sr_sq_top?ie=UTF8&cloe_id=05ebc737-d598-45a3-9aec-b37cc04e3946&attrMsgId=LPWidget-A1&keywords=windows%20communication%20foundation&index=blended&pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0672329484&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1NQD69FBHSA2RM8PR97K)

[MS-ADCAP]: Active Directory Web Services: Custom Action Protocol Specification
http://msdn.microsoft.com/en-us/library/dd303965(v=PROT.10).aspx

Remember that we don’t do developer support here on AskDS so you should direct your questions over to the AD PowerShell devs if you get stuck in code specifics.

Question

Is there any guidance around using DFSR with satellite link connections?

Answer

Satellite connections create a unique twist to network connectivity – they often have relatively wide bandwidth compared to low-end WAN circuits, but also have comparatively high latency and error levels. When transmitting a packet through a geosynchronous orbit hop, it hits the limitation of the speed of light – how fast you can send a packet 22,000 miles up, down, then reply with a packet up and down again. And when talking about a TCP conversation using RPC, one always uses round trip times as part of the equation. You will be lucky to average 1400 millisecond response times with satellite, compared to a frame relay circuit that might be under 50ms. This also does not account for the higher packet loss and error rates typically seen with satellite ISP’s. Not to mention what happens when it, you know, rains :-). In a few years you can think about using medium and low earth orbit satellites to cut down latency, but those are not commercially viable yet. The ones in place have very little bandwidth.

When it comes to DFSR, we have no specific guidance except to use Win2008 R2 (or if you must, Win2008) and not Win2003 R2. That first version of DFSR uses synchronous RPC for most communications and will not reliably work over satellite’s high latency and higher error rates – Win2008 R2 uses asynchronous RPC. Even Win2008 R2 may perform poorly on the lower bandwidth ranges. Make sure you pre-seed data and do not turn off RDC on those connections.

Other

Totally unrelated, I found this slick MCP business card thing we’re doing now since we stopped handing out the laminates. It’s probably been around for a while now, but hey, new to me. :) If you go to https://www.mcpvirtualbusinesscard.com and provide your MCP ID # and Live ID you can get virtual business cards that link to your transcript.

Then you can have static cards: 

Or get fancy stuff like this javascript version. Mouse over the right side to see what I mean:


Oh yeah, did you know my name is really Edward? They have a bunch of patterns and other linking options if you don't want graphics; give it a look. 

 

Finally, I want to welcome the infamous Laura E. Hunter to the MSFT borg collective. Author of and contributor to TechNet Magazine, the AD Cookbook, the AD Field Guide, Microsoft Certified Masters, and a considerable body of ADFS documents, Laura is most famously known for her www.ShutUpLaura.com blog. And now she’s gone blue – welcome to Microsoft, Laura! Now get to work.

Have a nice weekend folks,

- Ned “what does the S stand for Bobby?” Pyle

Using AD Recycle Bin to restore deleted DNS zones and their contents in Windows Server 2008 R2


Ned here again. Beginning in Windows Server 2008 R2, Active Directory supports an optional AD Recycle Bin that can be enabled forest-wide. This means that instead of requiring a System State backup and an authoritative subtree restore, a deleted DNS zone can now be recovered on the fly. However, due to how the DNS service "gracefully" deletes, recovering a DNS zone requires more steps than a normal AD recycle bin operation.

Before you roll with this article, make sure you have gone through my article here on AD Recycle Bin:

The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting

Note: All PowerShell lines are wrapped; they are single lines of text in reality.

Restoring a deleted AD integrated zone

Below are the steps to recover a deleted zone and all of its records. In this example the deleted zone was called "ohnoes.contoso.com" and it existed in the Forest DNS Application partition of the forest “graphicdesigninstitute.com”. In your scenario you will need to identify the zone name and partition that hosted it before continuing, as you will be feeding those to PowerShell. 

1. Start PowerShell as an AD admin with rights to all of DNS in that partition (preferably an Enterprise Admin) on a DC that hosted the zone and is authoritative for it.

2. Load the AD modules with:

Import-Module ActiveDirectory

3. Validate that the deleted zone exists in the Deleted Objects container with the following sample PowerShell command:

get-adobject -filter 'isdeleted -eq $true -and msds-lastKnownRdn -eq "..Deleted-ohnoes.contoso.com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" -property *

Note: the zone name was changed by the DNS service to start with "..Deleted-", which is expected behavior. This behavior means that when you are using this command to validate the deleted zone you will need to prepend whatever the old zone name was with this "..Deleted-" string. Also note that in this sample, the deleted zone is in the forest DNS zones partition of a completely different naming context, just to make it interesting.

4. Restore the deleted zone with:

get-adobject -filter 'isdeleted -eq $true -and msds-lastKnownRdn -eq "..Deleted-ohnoes.contoso.com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" | restore-adobject

Note: the main changes in syntax now are removing the "-property *" argument and pipelining the output of get-adobject to restore-adobject.

5. Restore all child “DNSnode” objects of the recovered zone with:

get-adobject -filter 'isdeleted -eq $true -and lastKnownParent -eq "DC=..Deleted-ohnoes.contoso.com,CN=MicrosoftDNS,DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com"' -includedeletedobjects -searchbase "DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" | restore-adobject

Note: the "msds-lastKnownRdn" has now been removed and replaced by "lastKnownParent", which is now pointed to the recovered (but still mangled) version of the domain zone. All objects with that as a previous parent will be restored to their old location. Because DNS stores all of its node values as flattened leaf objects, the structure of deleted records will be perfectly recovered.

6. Rename the recovered zone back to its old name with:

rename-adobject "DC=..Deleted-ohnoes.contoso.com,CN=MicrosoftDNS,DC=ForestDnsZones,DC=graphicdesigninstitute,DC=com" -newname "ohnoes.contoso.com"

Note: the rename operation here is just being told to remove the old "..Deleted-" string from the name of the zone. I’m using PowerShell to be consistent but you could just use ADSIEDIT.MSC at this point, we’re done with the fancy bits.

7. Restart the DNS service or wait for it to figure out the zone has recovered (I usually had to restart the service in repros, but then once it worked by itself for some reason – maybe a timing issue; a service restart is likely your best bet). The zone will load without issues and contain all of its recovered records.

Special notes

If the deleted zone was the delegated _msdcs zone (or both the primary zone and delegated _msdcs zone were deleted and you now need to get the _msdcs zone back):

a. First restore the primary zone and all of its contents like above.

b. Then restore the _msdcs zone like in step 4 (with no contents).

c. Next, restore all the remaining deleted _msdcs records using the lastKnownParent DN which will now be the real un-mangled domain name of that zone. When done in this order, everything will come back together delegated and working correctly.

d. Rename it like in step 6.

Note: If you failed to do step c before renaming the zone because you want to recover select records, the recovered zone will fail to load. The DNS snap-in will display the zone but selecting the zone will report “the zone data is corrupt”. This error occurs because the “@” record is missing. If this record was not restored prior to the rename simply rename the zone back to “..Deleted-“, restore the “@” record, rename the zone once more and restart the DNS Server service. I am intentionally not giving a PowerShell example here as I want you to try all this out in your lab, and this will get you past the “copy and paste” phase of following the article. The key to the recycle bin is getting your feet wet before you have the disaster!

A couple more points

  • If the zones were deleted outside of DNS (i.e. not using DNS tools) then the renaming steps will be unnecessary and you can just restore it normally. If that happens someone was really being a goof ball.
  • The AD Recycle Bin can only recover DNS zones that were AD-integrated; if the zones were Standard Primary and stored in the old flat file format, I cannot help you.
  • I have no idea why DNS has this mangling behavior and asking around the Networking team didn’t give me any clues. I suspect it is similar to the reasoning behind the “inProgress” zone renaming that occurs when a zone is converted from standard primary to AD Integrated, in order to somehow make the zone invalid prior to deletion, but… it’s being deleted, so who could care? Meh. If someone really desperately has to know, ping me in Comments and I’ll see about a code review at some point. Maybe.

As always, you can also “just” run an authoritative subtree restore with your backups and ntdsutil.exe also. If you think my steps looked painful, you should see those. KB’s don’t get much longer.

- Ned “let’s go back to WINS” Pyle


Friday Mail Sack: Barbados Edition


Hello world, Ned here again. I’m back to write this week’s mail sack – just in time to be gone for the next two weeks on vacation and work travel. In the meantime Jonathan and Scott will be running the show, so be sure to spam the heck out of them with whatever tickles you. This week we discuss DFSR, Certificates, PKI, PowerShell, Audit, Infrastructure, Kerberos, NTLM, Active Directory Migration Tool, Disaster Recovery, and not-art.

Catluck en ’ dogluck!

image

Question

I need to understand what the difference between the various AD string type attribute syntaxes are. From http://technet.microsoft.com/en-us/library/cc961740.aspx : String(Octet), String(Unicode), Case-Sensitive String, String(Printable), String(IA5) et al. While I understand each type represents a different way to encode the data in the AD database, it isn't clear to me:

  1. Why so many?
  2. What differences are there in managing/using/querying them?
  3. If an application uses LDAP to update/read an attribute of one string type, is it likely to encounter issues if the same routine is used to update/read a different string type?

Answer

Active Directory has to support data-storage needs for multiple computer systems that may use different standards for representing data. Strings are the most variable data to be encoded because one has to account for different languages, scripts, and characters. Some standards limit characters to the ANSI character set (8-bit) while others specify another encoding type altogether (IA5 or PrintableString for X.509, for example).

Since Active Directory needs to store data suitable for all of these various systems, it needs to support multiple encodings for string data.

Management/query/read/write differences will depend very much on how you access the directory. If you use PowerShell or ADSI to access the directory, some level of automation is involved to properly handle the syntax type. PowerShell leverages the System.String class of the .NET Framework which handles, pretty much invisibly, the various string types.

Also, when we are talking about the 255-character extended ANSI character set, which includes the Latin alphabet used in English and most European Languages, then the various encodings are pretty much identical. You really won't encounter much of a problem until you start working in 2-byte character sets like Kanji or other Eastern scripts.

Question

Is it possible / advisable to run the CA service under an account different from SYSTEM with EFS enabled for some files that should not be read by system or would another solution be more appropriate?

Answer

No, running the CA service under any account other than its default of Local System is not supported. Users who are not trusted for Administrator access to the server should not be granted those rights.

[PKI and string type answers courtesy of Jonathan Stephens, the “Blaster” in our symbiotic “Master Blaster” relationship – Ned]

Question

Tons of people asking us about this article http://blogs.technet.com/b/activedirectoryua/archive/2010/08/04/conditions-for-kerberos-to-be-used-over-an-external-trust.aspx and if it is true or false or confused or what.

Answer

It’s complicated and we’re getting this ironed out. Jonathan is going to create a whole blog post on how User Kerberos can function perfectly without a Kerberos Trust, or with an NTLM trust, or with no trust. It’s all smoke and mirrors basically – you don’t need a trust in all circumstances to use User Kerberos. Heck, you don’t even have to use a domain-joined computer. For now, disregard that article please.

Question

I followed the steps outlined in this blog post: http://blogs.msdn.com/b/ericfitz/archive/2005/08/04/447951.aspx. Works like a champ and I see the data correctly in the Event Viewer. But when I try to use PowerShell 2.0 on one of those Win2003 DC’s with this syntax:

Get-EventLog -logname security -Newest 1 -InstanceId 566 | Where-Object { $_.entrytype -match "Success" } | Format-List

A bunch of the outputs are broken and unreadable (they look like un-translated GUID’s and variables). Like Object Type and Object Name, for example:

image

Answer

Ick, I can repro that myself.

This appears to be an issue in the PowerShell 2.0 Get-EventLog cmdlet on Win2003 where an incorrect value is being displayed. The issue doesn’t occur on Win2008/2008 R2 – I verified. Hopefully one of our Premier contract customers will report this issue so we can investigate further and see what the long term fix options are.

In the meantime though, here’s some sample workaround code I banged up using an alternative legacy cmdlet Get-WmiObject to do the same thing (including returning the latest event only, which makes this pretty slow):

Get-WmiObject -query "SELECT * FROM Win32_NTLogEvent Where Logfile = 'Security' and EventCode=566" | sort timewritten -desc | select -first 1

Slower and more CPU intensive, but it works.

image

A better long term solution (for both auditing and PowerShell) is get your DC’s running Win2008 R2.

Question

Do you have suggestions for pros/cons on breaking up a large DFSR replication group? One of our many replication groups has only one replicated folder. Over time that folder has gotten to be a bit large with various folders and shares (hosted as links) nested within. Occasionally there are large changes to the data and the replication backlog obviously impacts the ENTIRE folder. I have thought about breaking the group into several individual replication folders, but then I begin to shudder at the management overhead and monitoring all the various backlogs, etc.

  1. Is there a smooth way to transition an existing replication group with one replicated folder into one with many replicated folders? By "smooth" I mean no disruption to current replication if at all possible, and without re-replicating the data.
  2. What are the major pros/cons on how many replicated folders a given group has configured?

Answer

There’s no real easy answer – any change of membership or replicated folder within an RG means a re-synch of replication; the boundaries are discrete and there’s no migration tool. The fact that a backlog is growing won’t be helped by more or fewer RG/RF combos though, unless the RG/RF’s now involve totally different servers. Since the DFSR service’s inbound/outbound file transfer model is per server, moving things around locally doesn’t change backlogs significantly*.

So:

  1. No way to do this without total replication disruption (as you must rebuild the RG’s/RF’s in DFSR from scratch; the only saving grace here is if you don’t have to move data, you would get some pre-seeding for free).
  2. Since each RF would still have its own staging/ConflictAndDeleted/installing/deleted folders, there’s not much performance reasoning behind rolling a bunch of RF’s into a single RG. And no, you cannot use a shared structure. :) The main benefit of an RG is administrative convenience: delegation is configured at an RG level, for example, so if you had a file server admin that ran all the same servers that were replicating… stuff… it would be easier to organize those all as one RG.

* As a regular reader though, I imagine you’ve already seen this, which has some other ways to speed things up; that may help some of the choke ups:

http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

Question

Is there an Add-QADPermission (from Quest) equivalent command is in AD PowerShell?

Answer

There is not a one-to-one cmdlet. But it can be done:

http://blogs.msdn.com/b/adpowershell/archive/2009/10/13/add-object-specific-aces-using-active-directory-powershell.aspx

It is – to be blunt – a kludge in our current implementation.
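
For the record, the kludge looks something like this (the OU path and group name are placeholders):

# A rough sketch: grant a group full control of an OU via the AD: drive
# that the ActiveDirectory module mounts.
Import-Module ActiveDirectory
$ou = "AD:\OU=Sales,DC=contoso,DC=com"
$sid = (Get-ADGroup "SalesAdmins").SID
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, "GenericAll", "Allow")
$acl = Get-Acl $ou
$acl.AddAccessRule($ace)
Set-Acl -Path $ou -AclObject $acl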

Question

I am working on an inter-forest migration that will involve a transitional forest hop. If I have to move the objects a second time to get them from the transition forest into our forest, will I lose the original SID history that is in the sIDHistory attribute?

Answer

You will end up with multiple SID history entries. It’s not an uncommon scenario to see customers who have been through multiple acquisitions and mergers end up with multiple SID histories. As far as authorization goes, it works fine, and having more than one is fine:

http://msdn.microsoft.com/en-us/library/ms679833(VS.85).aspx

Contains previous SIDs used for the object if the object was moved from another domain. Whenever an object is moved from one domain to another, a new SID is created and that new SID becomes the objectSID. The previous SID is added to the sIDHistory property.

The real issue is user profiles. You have to make sure that ADMT profile translation is performed so that after users and computers are migrated the ProfileList registry entries are updated to use the user’s real current SID info. If you do not do this, when you someday need to use USMT to migrate data it will fail as it does not know or care about old SID history, only the SID in the profile and the current user’s real SID.

And then you will be in a world of ****.

image 
Picture courtesy of the IRS

Question

Do you know if there is any problem with creating a DNS record with the name ldap.contoso.com name? Or maybe there will be some problems with other components of Active Directory if there is a record called “LDAP”?

Answer

Windows certainly will not care and we’ve had plenty of customers use that specific DNS name. We keep a document of reserved names as well, so if you don’t see something in this list, you are usually in good shape from a purely Microsoft perspective:

909264  Naming conventions in Active Directory for computers, domains, sites, and OUs
http://support.microsoft.com/default.aspx?scid=kb;EN-US;909264

This article is also good for winning DNS-related bar bets. If you drink at a pub called “The Geek and Spanner”, I suppose…

image
This is not that pub

Question

I'm currently working on a migration to a Windows Server 2008 R2 AD forest – specifically the Disaster Recovery plan. Is it a good idea to take one of the DCs offline and, after each successful adprep operation, bring it back online? Or, in case something goes bad, use this offline one to recreate the domain?

Answer

The best solution is to put these plans in place:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

That way no matter what happens under any circumstances (not just adprep), you have a way out. You can’t imagine how many customers we deal with every day that have absolutely no AD Disaster Recovery system in place at all.

Question

How did you make this kind of picture in your DFSR server replacement series?

image

[From a number of readers]

Answer

MS Office to the rescue for a non-artist like me. This is a modified version of the “relaxed perspective” picture format preset.

1. Create your picture, then select it and use the Picture Tools Format ribbon tab.

image

2. Use the arrows to see more of the style options, and you’ll see the one called “Relaxed Perspective, White”. Select that and your picture will now look like a three dimensional piece of paper.

image

3. I find that the default has a little too much perspective though, so right-click it and select “Format Picture”.

 image 

4. Use the 3-D Rotation menu to adjust the perspective and Y axis.

image

You can get pretty crazy with Office picture formatting.

image
Why yes sir, we do have plastic duck eight-ball clipart. Just the one today?

See you all in a few weeks,

Ned “please don’t audit me, I was kidding” Pyle

Friday Mail Sack: Cluedo Edition


Hello there folks, it's Ned. I’ve been out of pocket for a few weeks and I am moving to a new role here, plus Scott and Jonathan are busy as #$%#^& too, so that all adds up to the blog suffering a bit and the mail sack being pushed a few times. Never fear, we’re back with some goodness and frankness. Heck, Jonathan answered a bunch of these rather than sipping cognac while wearing a smoking jacket, which is his Friday routine. Today we talk certs, group policy, backups, PowerShell, passwords, Uphclean, RODC+FRS+SYSVOL+DFSR, and blog editing. There were a lot of questions in the past few weeks that required some interesting investigations on our part – keep them coming.

Let us adjourn to the conservatory.

Question

Do you know of a way to set user passwords to expire after 30 days of inactivity?

Answer

There is no automatic method for this, but with a bit of scripting it would be pretty trivial to implement. Run this sample command as an admin user (in your test environment first!!!):

Dsquery.exe user -inactive 4 | dsmod.exe user -mustchpwd yes

Dsquery will find all users in that domain that have not logged in for 4 weeks or longer, then pipe that list of DN’s into the Dsmod command that sets the “must change password at next logon” (pwdlastset) flag on each of those users.

image

You can also use AD PowerShell in Win2008 R2/Windows 7 RSAT to do this.

Search-ADAccount -AccountInactive -TimeSpan 30 -UsersOnly | Set-ADUser -ChangePasswordAtLogon $true

The PowerShell method works a little differently; Dsquery only considers inactive accounts that have logged on at some point. Search-ADAccount also considers users that have never logged on. This means it will find a few “users” that cannot usually have their password change flags enabled, such as Guest, KRBTGT, and TDO accounts that are actually trusts between domains. If someone wants to post a slick example of bypassing those, please send them along (as the clock ran down here).
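
To get the ball rolling, here’s one rough sketch (assumption: skipping anything named Guest or krbtgt, plus trust accounts, whose names end in $, is enough):

# A rough sketch: flag inactive users while skipping Guest, krbtgt, and TDOs.
Search-ADAccount -AccountInactive -TimeSpan 30 -UsersOnly |
    Where-Object { ('Guest','krbtgt') -notcontains $_.SamAccountName -and $_.SamAccountName -notlike '*$' } |
    Set-ADUser -ChangePasswordAtLogon $true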

Question

As it’s stated here: http://technet.microsoft.com/en-us/library/cc753609%28WS.10%29.aspx  

"You are not required to run the ntdsutil snapshot operation to use Dsamain.exe. You can instead use a backup of the AD DS or AD LDS database or another domain controller or AD LDS server. The ntdsutil snapshot operation simply provides a convenient data input for Dsamain.exe."

I should be able to mount snapshot and use dsamain to read AD content, with only full backup of AD. But I can't. Using ntdsutil I can list and mount snapshot from AD, but I can't do "dsamain -dbpath full_path_to_ntds.dit".

Answer

You have to extract the .DIT file from the backup.

1. First run wbadmin get versions. In the output, locate your most recent backup and note the Version identifier:

wbadmin get versions

2. Extract the Active Directory files from the backup. Run:

 wbadmin start recovery -versions:<version identifier> -itemtype:app -items:AD -recoverytarget:<drive>

3. A folder called Active Directory will be created on the recovery drive. Contained therein you'll find the NTDS.DIT file. To mount it, run:

dsamain -dbpath <recovery folder>\ntds.dit -ldapPort 4321

4. The .DIT file will be mounted, and you can use LDP or ADSIEDIT to connect to the directory on port 4321 and browse it.

Question

I have run into the issue described in KB976922 where "Run only specified Windows Applications" or “Run only allowed Windows applications” is blank when you mix Windows XP/Windows Server 2003 and Windows 7/Windows Server 2008 R2 computers. Some forum posts on TechNet state that this was being fixed in Win7 and Win2008 R2, which appears to be untrue. Is this being fixed in SP1 or later or something?

Answer

It’s still broken in Win7/R2 and still broken in SP1. It’s quite likely to remain broken forever, as there are so many workarounds and the technology in question actually dates back to before Group Policy – it was part of Windows 95 (!!!) system policies. Using this policy isn’t very safe; it’s often something that was configured many years ago and lives on through inertia.

Windows 7 and Windows Server 2008 R2 introduced AppLocker to:

  • Help prevent malicious software (malware) and unsupported applications from affecting computers in your environment.
  • Prevent users from installing and using unauthorized applications.
  • Implement application control policy to satisfy security policy or compliance requirements in your organization.

Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 all support Software Restriction Policies (SAFER), which also control applications similarly to AppLocker. Both AppLocker and SAFER replace that legacy policy setting with something less limited and less easily bypassed.

For more information about AppLocker, please review:
http://technet.microsoft.com/en-us/library/dd723678(WS.10).aspx

For more information about SAFER, please review:
http://technet.microsoft.com/en-us/library/bb457006.aspx

I updated the KB to reflect all this too.

Question

Is it possible to store computer certificates in a Trusted Platform Module (TPM) in Windows 7?

Answer

The default Windows Key Storage Provider (KSP) does not use a TPM to store private keys. That doesn't mean that some third party can't provide a KSP that implements the Trusted Computing Group (TCG) 1.2 standard to interact with a TPM and use it to store private keys. It just means that Windows 7 doesn't have such a KSP by default.
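If you want to see which providers are actually installed on a given machine – for example, to check whether a vendor has added a TPM-backed KSP – certutil can enumerate them:

certutil.exe -csplist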

Question

It appears that there is a new version of Uphclean available (http://www.microsoft.com/downloads/en/details.aspx?FamilyId=1B286E6D-8912-4E18-B570-42470E2F3582&displaylang=en). What’s new about this version and is it safe to run on Win2003?

Answer

The new 1.6 version only fixes a security vulnerability and is definitely recommended if you are using older versions. It has no other announced functionality changes. As Robin has said previously, Uphclean is otherwise deceased and 2.0 beta will not be maintained or updated. Uphclean has never been an officially supported MS tool, so use is always at your own risk.

Question

My RODCs are not replicating SYSVOL even though there are multiple inbound AD connections showing when DSSITE.MSC is pointed to an affected RODC. Examining the DFSR event log shows:

Log Name: DFS Replication
Source: DFSR
Date: 5/20/2009 10:54:56 AM
Event ID: 6804
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: 2008r2-04.contoso.com
Description:
The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group.

New RODCs that are promoted work fine. Demoting and promoting an affected RODC fixes the issue.

Answer

Somebody has deleted the automatically generated "RODC Connection (FRS)" objects for these affected RODCs.

  • This may have been done because the customer saw that the connections were named "FRS" and thought that, with DFSR replicating SYSVOL, they were no longer required.
  • Or they may have created manual connection objects per their own processes and deleted these old ones.

RODCs require a special flag on their connection objects for SYSVOL replication to work. If not present, SYSVOL will not work for FRS or DFSR. To fix these servers:

1. Logon to a writable DC in the affected forest as an Enterprise Admin.

2. Run DSSITE.MSC and navigate to an affected RODC within its site, down to the NTDS Settings object. There may be no connections listed here, or there may be manually created connections.

dssitenedpyle1

3. Create a new connection object. Ideally, it will be named the same as the default (ex: "RODC Connection (FRS)").

dssitenedpyle2

4. Edit that connection in ADSIEDIT.MSC or with DSSITE.MSC attribute editor tab. Navigate to the "Options" attribute and add the value of "0x40".

dssitenedpyle3

dssitenedpyle4

5. Create more connections using these steps as necessary.

6. Force AD replication outbound from this DC to the RODCs, or wait for convergence. When the DFSR service on the RODC sees these connections, SYSVOL will begin replicating again.

More info about this 0x40 flag: http://msdn.microsoft.com/en-us/library/dd340911(PROT.10).aspx

RT (NTDSCONN_OPT_RODC_TOPOLOGY, 0x00000040): The NTDSCONN_OPT_RODC_TOPOLOGY bit in the options attribute indicates whether the connection can be used for DRS replication [MS-DRSR]. When set, the connection should be ignored by DRS replication and used only by FRS replication.

Despite the mention only of FRS in this article, the 0x40 value is required for both DFSR and FRS. Other connections for AD replication are still separately required and will exist on the RODC locally.
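If you would rather script step 4 than click through ADSIEDIT, here is a minimal AD PowerShell sketch (the connection DN below is hypothetical – substitute your own site, server, and connection names; 0x40 is the NTDSCONN_OPT_RODC_TOPOLOGY flag described above):

Set-ADObject -Identity "CN=RODC Connection (FRS),CN=NTDS Settings,CN=RODC01,CN=Servers,CN=BranchSite,CN=Sites,CN=Configuration,DC=contoso,DC=com" -Replace @{options=0x40}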

Question

What editor do you use to update and maintain this blog?

Answer

Windows Live Writer 2011 (here). Before this version I was hesitant to recommend it, as the older flavors had idiosyncrasies and were irritating. WLW 2011 is a joy, I highly recommend it. The price is right too: free, with no adware. And it makes adding content easy…

 
Like Peter Elson artwork.

Or the complete 5 minutes and 36 seconds of Lando Calrissian dialog
 
image

Or Ned

image
Or ovine-related emoticons.

 

That’s all for this week.

- Ned “Colonel Mustard” Pyle and Jonathan “Professor Plum” Stephens

Friday Mail Sack: The Gang’s All Here Edition


Hi folks, Ned here again with your questions and our answers. This is a pretty long one; looks like everyone is back from vacation, winter storms, and hiding from the boss. Today we talk Kerberos, KCC, SPNs, PKI, USN journaling, DFSR, auditing, NDES, PowerShell, SIDs, RIDs, DFSN, and other random goo.

Rawk!

Question

Is NIC teaming recommended on domain controllers?

Answer

It’s a sticky question – MS does not make a NIC teaming solution, so you are at the mercy of 3rd-party vendor software, and if there are any issues we cannot help other than to break the team. So the question you need to answer is: “do you trust your NIC vendor’s support?”

Generally speaking, we are not huge fans of NIC teaming, as we see customers having frequent driver issues and because a DC probably doesn’t need it. If clients are completely consuming 1Gbit or 10Gbit network interfaces, the DC is probably being overloaded with requests. Doubling that network would make things worse; it’s better to add more DCs. And if the DC is also running Exchange, file server, SQL, etc. you are probably talking about an environment without many users or clients.

A failover NIC solution is probably a better option if your vendor supports it – meaning that the second NIC is only used if the first one burns out and dies, all on the same network.

Question

We used to manually create SPNs with IP addresses to allow Kerberos without network name resolution. This worked in Windows XP and 2003 but stopped working in later operating systems. Is this expected?

Answer

Yes it is. Starting in Windows Vista and forever more, the OS examines the format of the SPN being requested, and if it is only an IP address, Kerberos is not even attempted. There’s no way to override this behavior. To look at it in practical terms, here I have manually set an IP address SPN:

image

Then I actually try mapping a drive with an IP address (which would have worked in XP in this scenario):

image

No tickets were cached above. And in the network capture below, it’s clear that I am using NTLM:

image

image

This is why in this previous post – see the “I want to create a startup script via GPO” and “NTLM is not allowed for computer-to-computer communication” sections – I highly discouraged customers from this sort of hacking. What I didn’t realize when I wrote the old post was that I now have the power to control the future with my mind.

image
Actual MRI of my head, proving that I have an orange (i.e. “futurasmic”) brain

Question

I see that the DFSR staging folder can be moved, but can the Conflict and Deleted (\dfsrprivate\conflictanddeleted) folder be relocated?  If so, how?

Answer

It cannot be moved or renamed – this was once planned (and there is even an AD attribute that makes one think the location could be specified) but it never happened in the service code. Regardless of what you put in that attribute, DFSR ignores it and creates a C&D folder at the default location.

For example, here I specified a completely different C&D path using ADSIEDIT.MSC before DFSR even created the folder. Once I started the DFSR service, it ignored my setting and created the conflict folder with defaults:

clip_image002

Question

We are trying to find the best way to issue Active Directory "User" certificates to iPhones and iPads, so these users can authenticate to our third party VPN appliance using their "user" certificate. We were thinking that MS NDES could help us with this. Everything I have read says that NDES is used for non-domain "computer or device" enrollment.

Answer

[From Rob Greene, author of previous post iPad / iPhone Certificate Issuance]

Just because the certificate template that is used by NDES must be of type computer does not mean you cannot build a SCEP protocol message to the NDES Server for use by a user account on the iPhone in question.

Keep in mind that the SCEP protocol was designed by Cisco for their network appliances to be able to enroll for certificates online.  Also understand what NDES means - Network Device Enrollment Service.

Realistically there is no reason why you cannot enroll for a certificate via the SCEP interface with NDES and have a user account use the issued certificate. However, NDES is coded to specifically only allow enrollment of computer-based certificate templates. If you put a user-based template name in the registry for it to issue, it will fail with a not-so-easily deciphered message.

That said, keep in mind that the subject or Subject Alternative Name field identifies the user of the certificate, not the template.

So what you could do is:

  1. Duplicate the computer certificate template.
  2. Then change the subject to “Supply in the Request”
  3. Then give the template a unique name.
  4. Make sure that the NDES account and Administrator have security access to the template for Enroll.
  5. Assign the Template to be issued.
  6. Then you need to assign the template to one of the purposes in the NDES registry (You might want to use the one for both signing and encrypting).  See the blog.

Now that you have a certificate with the EKU of Client Authentication and a subject / SAN of the user account, I don’t see why you could not use it for what you need. Not that I have tested this or can test this, mind you…

Question

Is there a “proper” USN journal setting versus replicated data sizes, etc. on the respective volumes housing DFSR data? I've come across USN journal wrap issues (that properly self-heal ... and then occur again a month or so later). I’m hoping to find a happy medium on USN journal sizing versus the size of the volume or data that resides on that volume.

Answer

I did a quick bit of research - in the history of all MS DFSR support cases, it was necessary to increase the USN journal size for five customers – not exactly a constant need. Our recommendation is not to alter it unless you get multiple 2202 events that can’t be fixed any other way:

Event ID=2202
Severity=Warning
The DFS Replication service has detected an NTFS change journal wrap on volume %2.
A journal wrap can occur for the following reasons:
1.The USN journal on the volume has been truncated. Chkdsk can truncate the
journal if it finds corrupt entries at the end of the journal.
2.The DFS Replication service was not running on this computer for an extended
period of time.
3.The DFS Replication service could not keep up with the rate of file changes
on the volume.
The service has automatically initiated the journal wrap recovery process.

Additional Information:
Volume: %1

Since you are getting multiple 2202 occurrences, I would recommend first figuring out why you are getting the journal wraps. The three reasons listed in the event need to be considered – the first two are avoidable (fix your disk or controller and stop turning the service off) and should be handled without a need to alter the USN journal.

The third one may mean you are not using DFSR as recommended, but that may be unavoidable. In that case, set the USN size value to 1GB and validate that the issue stops occurring. We have no real formula here (remember, only five customers ever), but if you cannot spare another 512MB on the drive you have much more important problems to consider around disk capacity. If still not enough, revisit if DFSR is the right solution for you – the amount of changes occurring would have to be so incredibly rapid that I doubt DFSR could ever realistically keep up and converge. And make sure that nothing else is updating all the files outside of the journal on that drive – there is only one journal and it contains entries for all files, even the ones not being replicated!

Just to answer the inevitable question: you use WMI to increase the USN journal size.

On Win2003 R2 only:

1. Determine the volume in question (USN journals are volume specific) and the GUID for that volume by running the following:

WMIC.EXE /namespace:\\root\Microsoftdfs path DfsrVolumeInfo get VolumePath
WMIC.EXE /namespace:\\root\Microsoftdfs path DfsrVolumeInfo get VolumeGUID

This will return (for example:)

VolumePath
\\.\C:
\\.\E:

VolumeGuid
4649C7A1-82D5-11DA-922B-806E6F6E6963
D1EB0B66-9403-11DA-B12E-0003FFD1390B

2a. Raise the USN Journal Size (for one particular volume):

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig.VolumeGuid="%GUID%" set minntfsjournalsizeinmb=%MB SIZE%

where you replace '%GUID%' with the volume GUID and '%MB SIZE%' with a larger USN size in MB. For example:

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig.VolumeGuid="D1EB0B66-9403-11DA-B12E-0003FFD1390B" set minntfsjournalsizeinmb=1024

This will return 'Property Update Successful' for that GUID.

2b. Raise the USN Journal Size (for all volumes)

WMIC /namespace:\\root\microsoftdfs path dfsrvolumeconfig set minntfsjournalsizeinmb=%MB SIZE%

This will return 'Property Update Successful' for ALL the volumes.

3. Restart server for new journal size to take effect in NTFS.
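If you prefer PowerShell to WMIC, here is a rough equivalent of step 2a (same example GUID as above, same DfsrVolumeConfig WMI class; the restart in step 3 still applies):

$vol = Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrVolumeConfig | Where-Object { $_.VolumeGuid -eq "D1EB0B66-9403-11DA-B12E-0003FFD1390B" }
$vol.MinNtfsJournalSizeInMb = 1024
$vol.Put()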

Update 4/15/2011 - On Win2008 or later:

1. Open Windows Explorer.
2. In Tools | Folder Options | View - uncheck 'Hide protected operating system files'.
3. Navigate to each drive's 'system volume information\dfsr\config' folder (you will need to add 'Administrators, Full Control' to prevent access denied error).
4. In Notepad, open the 'Volume_%GUID%.xml' file for each volume you want to increase.
5. There will be a set of tags that look like this:

<MinNtfsJournalSizeInMb>512</MinNtfsJournalSizeInMb>

6. Stop the DFSR service.
7. Change '512' to the new increased value.
8. Close and save that file, and repeat for any other volumes you want to increase the journal size on.
9. Start the DFSR service back up.

Question

There is a list of DFS Namespace events for Server 2000 at http://support.microsoft.com/kb/315919. I was wondering if there is a similar list of Windows 2008 DFS Event Log Messages?

Answer

That event logging system in KB315919 exists only in Win2000 – Win2003 and later OSs don’t have it anymore. That KB is a bit misleading also: these events will never write unless you enable them through registry settings.

Registry Key: HKEY_LOCAL_MACHINE\SOFTWARE\MicroSoft\Windows NT\CurrentVersion\Diagnostics
Value name: RunDiagnosticLoggingDfs 
Value type: REG_DWORD
Value data: 0 (default: no logging), 2 (verbose logging)

Registry Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dfs
Value name: DfsSvcVerbose
Value type: REG_DWORD
Value data: Any one of the below three values:
0 (no debug output)
1 (standard debug output)
0x80000000 (standard debug output plus additional Dfs volume call info)

Value name: IDfsVolInfoLevel
Value type: REG_DWORD
Value data: Any combination of the following 3 flags:
0x00000001 Error
0x00000002 Warning
0x00000004 Trace

Dave and I scratched our heads and in our personal history of supporting DFSN, neither of us recalled ever turning this on or using those events for anything useful. Not that it matters now, Windows 2000 is as dead as fried chicken.

Question

We currently have inherited auditing settings on a lot of files and folders that live on our two main DFSR servers. The short story is that before the migration to DFSR, the audit settings were apparently added by someone to the majority of the files/folders. This was replicated by DFSR and now is set on both servers. Thankfully we do not have any audit policies turned on for those servers currently.

That is where the question comes in: there may be a time in the relatively near future that we will want to enable some auditing for a subset of files/folders. Any suggestions on how we could remove a lot of the audit entries on these servers, without forcing nearly every file to get processed by DFSR?

Answer

Nope, it’s going to cause an unavoidable backlog as DFSR reconciles all the security changes you just made – the audit security is part of the file just like the discretionary security. Don’t do that until you have a nice big change control window open. Maybe just do some folders at a time.

In the future, using Global Object Access Auditing would be an option (if you have Win2008 R2 on all DFSR servers). Since it is all derived by LSA and not directly stamped, DFSR won’t replicate the files – they are never actually changed. It’s slick:

image

image

http://technet.microsoft.com/en-us/library/dd772630(WS.10).aspx

In theory, you could get rid of the auditing currently in place and just use GOAA someday when you need it. It’s the future of file auditing, in my opinion; using direct SACLs on files should be discouraged forever more.

Question

Does the SID for an object have to be unique across the entire forest? It is pretty clear from existing documentation that the SID does have to be unique within a domain because of the way the RID Master distributes RID pools to the DCs. Does the RID Master in the Forest Root domain actually keep track of all the unique base SIDs of all domains to ensure that there is no accidental duplication of the unique base domain SIDs?

Answer

A SID will be unique within a forest, as each domain has a unique base SID that combines with a RID. That’s why there’s a RID master per domain. There is no reasonable way for the domain SIDs to ever be duplicated by Windows, although I have seen some third party products that made it happen. All hell broke loose, we don’t plan for the impossible. :) Even if you use ADMT to migrate users with SID History within a forest, it will not be duplicated as the migration will always destroy the old user when it is “moved”.

The RID Masters don’t talk to each other within the forest (any more than they would between different forests, where a duplicate SID would cause just as many problems when you tried to create a trust). The base SID is a random 48 bit number, so there is no reasonable way it could be duplicated by accident in the same environment. It comes down to us relying on the odds of two domains that know of each other ending up with the same SID through pure random chance – highly unlikely math.
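To make the structure concrete, here is how the well-known administrator SID that shows up in a log excerpt later on this blog breaks down:

S-1-5-21-3366683618-1989269118-3947618792 = the domain’s base SID (randomly generated per domain)
500 = the RID appended to it (here the well-known built-in Administrator RID)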

You’ll also find no mention of inter-RID master communication needs or error messages here:

http://msdn.microsoft.com/en-us/library/cc223751(PROT.13).aspx
http://technet.microsoft.com/en-us/library/cc756394(WS.10).aspx

Question

I have this message in a health report:

“A USN journal loss occurred 2 times in the past 7 days on E:. DFS Replication monitors the USN journal to detect changes made to the replicated folder. Although DFS Replication automatically recovers from this problem, replication stops temporarily for replicated folders stored on this volume. Repeated journal loss usually indicates disk issues. Event ID: 2204”

Is this how the health report indicates a journal wrap or can I take “loss” literally?

Answer

Ouch. That’s not a wrap, the journal was deleted or irrevocably damaged. I have never actually seen that event in the field, only in a test lab where I deleted my journal intentionally (using the nasty command: FSUTIL.EXE USN DELETEJOURNAL). I would suspect either a failing disk or 3rd party disk management software. It’s CHKDSK and disk diagnostic time for you.

The recovery process for event 2204 is similar to a wrap; the journal gets recreated, then repopulated like a wrap recovery (it uses the same code). You get event 2206 to know that it’s fixed.

Question

How come there is no “Set-SPN” cmdlet in AD PowerShell?

Answer

Ahh, but there is… sort of. We hid service principal name maintenance away in the Set-ADUser, Set-ADComputer, and Set-ADServiceAccount cmdlets.

-ServicePrincipalNames <hashtable>
Specifies the service principal names for the account. This parameter sets the ServicePrincipalNames property of the account. The LDAP display name (ldapDisplayName) for this property is servicePrincipalName. This parameter uses the following syntax to add, remove, replace, or clear service principal name values.
    Syntax:
    To add values:
      -ServicePrincipalNames @{Add=value1,value2,...}
    To remove values:
      -ServicePrincipalNames @{Remove=value3,value4,...}
    To replace values:
      -ServicePrincipalNames @{Replace=value1,value2,...}
    To clear all values:
      -ServicePrincipalNames $null

You can specify more than one change by using a list separated by semicolons. For example, use the following syntax to add and remove service principal names.
   @{Add=value1,value2,...};@{Remove=value3,value4,...}

The operators will be applied in the following sequence:
..Remove
..Add
..Replace

The following example shows how to add and remove service principal names.
   -ServicePrincipalNames @{Add="SQLservice\accounting.corp.contoso.com:1456"};@{Remove="SQLservice\finance.corp.contoso.com:1456"}
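So, concretely, adding a single SPN to a computer account might look like this (the computer name and SPN here are made up):

Set-ADComputer -Identity "FILESRV01" -ServicePrincipalNames @{Add="HOST/filesrv01.contoso.com"}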

We do not have any special handling to retrieve SPNs using Get-AdComputer or Get-Aduser (nor any other attributes – they treat all as generic properties). For example:

get-adcomputer name -properties serviceprincipalnames | select-object -expand serviceprincipalnames

image

I used select-object -expand because when you get a really long returned list, PowerShell likes to start truncating the readable output. Also, when I don’t know which cmdlets support which things, I sometimes cheat and use educated guesses:

image

Question

I have posted a TechNet forum question around the frequency of KCC nomination and rebuilding and I was hoping you could reply to it.

“…He had made an update to the Active Directory Schema and as a safety-net had switched off one of our domain controllers whilst he did it. The DC (2008 R2) that was switched off was at the time acting as the automatically determined bridgehead server for the site.

Obviously the next thing that has to happen is for the KCC to run, discover the bridgehead server is still offline and re-nominate. My colleague thinks that this re-nomination should take up to 2 hours to happen. However all the documentation I can find suggests that this should be every 15 minutes. His argument is that it is a process of sampling, that it realises the problem every 15 minutes but can take up to 2 hours to actually action the change of bridgehead.

Can anyone tell me which of us is right please and if we could have a problem?”

Answer

We are running an exchange program between MS Support and MS Premier Field Engineering and our current guest is AD topology guru Keith Brewer. He replied in exhaustive detail here:

http://social.technet.microsoft.com/Forums/en/winserverDS/thread/0d10914f-c44c-425a-8344-3dfbac3ed955

Attaboy Keith, now you’re doing it our way – when in doubt, use overwhelming force.

Other random goo


Unless it doesn’t.


  • Star Wars on Blu-ray coming in September, now up for pre-order. Damn, I guess I have to get Blu-ray. Hopefully Lucas uses the opportunity to remove all midichlorian references.
  • The 6 Most Insane Cities Ever Planned. This is from Cracked, so as usual… somewhat NSFW due to swearing.
  • Not sure which sci-fi apocalypse is right for you? Use this handy chart.
  • It was an interesting week for Artificial Intelligence and gaming, between Starcraft and Jeopardy.

Until next time.

Ned “and return to Han shooting first!” Pyle

Friday Mail Sack: No Redesign Edition


Hello folks, Ned here again. Today we talk PDCs, DFSN, DFSR, AGPM, authentication, PowerShell, Kerberos, event logs, and other random goo. Let’s get to it.

Question

Is the PDC Emulator required for user authentication? How long can a domain operate without a server that is running the PDC Emulator role?

Answer

It’s not required for direct user authentication unless you are using (unsupported) NT and older operating systems or some Samba flavors. I’ve had customers who didn’t notice their PDCE was offline for weeks or months. Plenty of non-fully routed networks exist where many users have no direct access to that server at all.

However!

It is used for a great many other things:

  • With the PDCE offline, users who have recently changed their passwords are more likely to get logon or access errors. They will also be more likely to stay locked out if using Account Lockout policies.
  • Time can more easily get out of sync, leading to Kerberos authentication errors down the road.
  • The PDCE being offline will also prevent the creation of certain well-known security groups and users when you are upgrading forests and domains.
  • The AdminSDHolder process will not occur when the PDCE is offline.
  • You will not be able to administer DFS Namespaces.
  • It is where group policies are edited (by default).
  • Finally - and not documented by us - I have seen various non-MS applications over the years that were written for NT and which would stop working if there is no PDCE. There’s no way to know which they might be – a great many were home-made applications written by the customers themselves – so you will have to determine this through testing.

But don’t just trust me; I am a major plagiarizer!

How Operations Masters Work (see section “Primary Domain Controller (PDC) Emulator”)
http://technet.microsoft.com/en-us/library/cc780487(WS.10).aspx

Question

The DFSR help file recommends a full mesh topology only when there are 10 or fewer members. Could you kindly let me know reasons why? We feel that a full mesh will mean more redundancy.

Answer

It’s just trying to prevent a file server administrator from creating an unnecessarily complex or redundant topology, especially since the vast majority of file server deployments do not follow this physical network topology. The help file also makes certain presumptions about the experience level of the reader.

It’s perfectly ok – from a technical perspective – to make as many connections as you like if using Windows Server 2008 or later. This is not the case with Win2003 R2 (see this old post that applies only to that OS). The main downsides to a lot of connections are:

  • It may lead to replication along slower, non-optimal networks that are already served by other DFSR connections; DFSR does not sense bandwidth or use any site/connection costing. This may itself lead to the networks becoming somewhat slower overall.
  • It will generate slightly more memory and CPU usage on each individual member server (keeping track of all this extra topology is not free).
  • It’s more work to administer. And it’s more complex. And more work + more complex usually = less fun.

Question

I'm trying to set up delegation for Kerberos, but I can't configure it for user or computer accounts using AD Users and Computers (DSA.MSC). I’m logged on as a domain administrator. Every time I try to activate delegation I get this error:

The following Active Directory error occurred: Access is denied.

Answer

It’s possible that someone has removed the user right for your account to delegate. Check your applied domain security policy (using RSOP or GPRESULT or whatever) to see if this has been monkeyed up:

Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment
"Enable computer and user accounts to be trusted for delegation"

The Default Domain Controllers Policy will have the built-in Administrators group set for that user right assignment once you create a domain. The privilege serves no purpose being set on servers other than DCs; they don’t care. Changing the defaults for this assignment isn’t necessary or recommended, for reasons that should now be self-evident.

Question

I want to clear all of my event logs at once on Windows Vista/2008 or later computers. Back in XP/2003 this was pretty easy as there were only 6 logs, but now there are a zillion.

Answer

Your auditors must love you :). Paste this into a batch file and run in an elevated CMD prompt as an administrator:

Wevtutil el > %temp%\eventlistmsft.txt
For /f "delims=;" %%i in (%temp%\eventlistmsft.txt) do wevtutil cl "%%i"

If you run these two commands manually, remember to remove the double percent signs and make them singles; those are being escaped for running in a batch file. I hope you have a systemstate backup, this is forever!
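If you’d rather stay in PowerShell, here is a rough equivalent (the same nuclear warning applies, and some enabled analytic/debug logs may refuse to clear, just like the batch version):

Get-WinEvent -ListLog * -ErrorAction SilentlyContinue | ForEach-Object { wevtutil.exe cl "$($_.LogName)" }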

Question

Can AGPM be installed on any DC? Should it be on all DCs? The PDCE?

Answer

[Answer from AGPM guru Sean Wright]

You can install it on any server as long as it’s part of the domain – so a DC, PDCE, or a regular member server. It just needs to be on one computer.

Question

Is it possible to use the Authentication Mechanism Assurance that is available in Windows Server 2008 R2 with a non-Microsoft PKI implementation? Is it possible to use Authentication Mechanism Assurance with any of the service administration groups, Domain Admins or Enterprise Admins? If that is possible, what would be the consequences for the built-in Administrator account – would this account be exempt from Authentication Mechanism Assurance, so that administrators would have a route to fix issues that occurred in the environment, i.e. a get-out-of-jail card?

Answer

[Answer from security guru Rob Greene]

First, some background:

  1. This only works with Smart Card logon.
  2. This works because the Issuance Policy OID is “added to” msDS-OIDToGroupLink on the OID object in the configuration partition. There is a msDS-OIDToGroupLinkBl (back link) attribute on the group and on the OID object.
  3. The msDS-OIDToGroupLink attribute on the OID object (in the configuration partition) stores the DN of the group that is going to use it.
  4. Not sure why, but the script expects the groups used in this configuration to be Universal groups. As for the administrative groups: none of them are Universal except “Enterprise Admins”.

So here are the answers:

Is it possible to use Authentication Mechanism Assurance that is available in Windows Server 2008 R2 with a non-Microsoft PKI implementation?

Yes, however, you will need to create the Issuance Policies that you plan to use by adding them through the Certificate Template properties as described in the TechNet article.

Is it possible to use Authentication Mechanism Assurance with any of Service Administration groups Domain Admins or Enterprise Admins?

This implementation requires that the group be a universal group in order for it to be used. So the only group of those listed above that is universal is “Enterprise Admins”. In theory this would work; in practice it might not be such a great idea.

If that is possible what would be the consequences for built-in administrator account, would this account be exempt from Authentication Mechanism Assurance?

In most cases the built-in Administrator account is special-cased to allow access to certain things even if its access has somehow been limited. However, this isn’t the best way to design the security of administrative accounts if you are concerned about being locked out of the domain. You would have similar issues if you made these administrative accounts require Smart Cards for logon: if for some reason the CA hierarchy did not publish a new CRL and the CA required a domain-based admin to log on interactively, you would be effectively locked out of your domain as well.

Question

I find references on TechNet to a “rename-computer” PowerShell cmdlet added in Windows 7. But it doesn’t seem to exist.

Answer

Oops. Yeah, it was cut very late but still lives on in some documentation. If you need to rename a computer using PowerShell, the approach I use is:

(get-wmiobject Win32_ComputerSystem).rename("myputer")

That keeps it all on one line without need to specify an instance first or mess around with variables. You need to be in an elevated CMD prompt logged in as an administrator, naturally.

Then you can run restart-computer and you are good to go.

image

There are a zillion other ways to rename on the PowerShell command-line, shelling netdom.exe, wmic.exe, using various WMI syntax, new functions, etc.
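For instance, the netdom flavor looks like this (the new name is made up again, and it will want a reboot afterward):

netdom.exe renamecomputer %computername% /newname:myputer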

Question

Does disabling a DFS Namespace link target still give the referral back to clients, maybe with an “off” flag or something? We’re concerned that you might still accidentally access a disabled link target somehow.

Answer

[Oddly, this was asked by multiple people this week.]

Disable actually removes the target from referral responses and nothing but an administrator’s decision can enable it. To confirm this, connect through that DFS namespace and then run this DFSUTIL command-line (you may have to install the Win2003 Support Tools or RSAT or whatever, depending on where you run this):

DFSUTIL /PKTINFO

It will not list your disabled link targets at all. For example, here I have two link targets – one enabled, one disabled. As far as DFS referral responses are concerned, a disabled link target does not exist at all.

clip_image002

When I enable that link and flush the PKT cache, now I get both targets:

clip_image002[4]

Question

When DFSR staging fills to the high watermark, what happens to inbound and outbound replication threads? Do we stop replicating until staging is cleared?

Answer

Excellent question, Oz dweller.

  • When you hit the staging quota 90% high watermark, further staging will stop.
  • DFSR will try to delete the oldest files to get down to 60% under the quota.
  • Any files that are on the wire right now being transferred will continue to replicate. Could be one file, could be more.
  • If those files on the wire are ones that the staging cleanup is trying to delete, staging cleanup will not complete (and you get warning 4206).
  • No other files will replicate (even if they were not going to be cleaned out due to “newness”).
  • Once those outstanding active file transfers on the wire complete, staging will be cleaned out successfully.
  • Files will begin staging and replicating again (at least until the next time this happens).

So the importance of staging space for very large files remains: ensure that the quota is at least as large as the N largest files that could be simultaneously replicated inbound/outbound, or you will choke yourself out. From the DFSR performance tuning post:

  • Windows Server 2003 R2: 9 largest files
  • Windows Server 2008: 32 largest files (default registry)
  • Windows Server 2008 R2: 32 largest files (default registry)
  • Windows Server 2008 R2 Read-Only: 16 largest files

If you want to find the 32 largest files in a replicated folder, here’s a sample PowerShell command:

Get-ChildItem <replicatedfolderpath> -recurse | Sort-Object length -descending | select-object -first 32 | ft name,length -wrap -auto

Question

If I create a domain-based namespace (\\contoso.com\root) and only have member servers for namespace servers, the share can’t be browsed to in Windows Explorer. It is there, I just can’t browse to it.

But if I add a DC as a namespace server, it immediately appears. If I remove the DC from the namespace, it disappears from view again, but it is still there. Would this be expected behavior? Is this a “supported” way to create a hidden namespace?

Answer

You are seeing some coincidental behavior based on the dual meaning of contoso.com in this scenario:

  • Contoso.com will resolve to a domain controller when using DNS
  • When a DC hosts a namespace share and you are browsing that DC, you are simply seeing all of its shares. One of those shares happens to be a DFS root namespace.
  • When you are browsing a domain-based namespace not hosted on a DC, you are not going to see that share as it doesn’t exist on the DCs.
  • You can see what’s happening here under the covers with a network capture.
  • Users can still access the root and link shares if they type them in, or have them set via logon script, mapped drive, GP Preference item, etc. This is only a browsing issue.

It’s not an “unsupported” way to hide shares, but it’s not necessarily effective in the long term. The way to hide and prevent access to the links and files/folders is through permissions and ABE. This solution is like a share with $ being considered hidden: only as long as people don’t talk about it. :) Not to mention this method is easy for other admins to accidentally “break” through ignorance or reading blog posts that tell them all the advantages of DFS running on a DC.

PS: Using a $ does work – at least on a Win2008 R2 DFS root server in a 2008 domain namespace:

clip_image002[7]

clip_image002[9]

clip_image002[11]

But only until your users talk about it in the break room…

Other Random Goo

  • The Cubs 2011 schedule is up and you can download the calendar file here. You know you wanna.
  • And in a related story, Kerry Wood has come back with a one year deal! Did you watch him strike out 20 as a rookie in 1998? It was insane. The greatest 1-hitter of all time.
  • IO9.com posted their spring sci-fi book wish list. Which means that I now have eight new books in my Amazon wish list. >_<
  • As a side note, does anyone like the new format of the Gawker Media blogs? I cannot get used to them and had to switch back to the classic view. The intarwebs seem to be on my side in this. I find myself visiting less often too, which is a real shame – hopefully for them this isn’t another scenario like Digg.com, redesigning itself into oblivion.
  • Netflix finally gets some serious competition – Amazon Prime now includes free TV and Movie streaming. Free as in $79 a year. Still, very competitive pricing and you know they will rock the selection.
  • I get really mad watching the news as it seems to be staffed primarily by plastic heads reading copy written by people that should be arrested for inciting to riot. So this Cracked article on 5 BS modern myths is helpful to reduce your blood pressure. As always, it is not safe for work and very sweary.

  • But while you’re there anyway (come on, I know you), check out the kick buttitude of Abraham Lincoln.
  • Finally: why are the Finnish so awesomely insane at everything?
And by everything, I mean only this and rally sport.

 

Have a nice weekend folks.

- Ned “simple and readable” Pyle

Friday Mail Sack: I Have No Idea What to Call This Edition


Hiya folks, Ned here with a slightly late Mail Sack coming your way. Today we discuss reading event logs, PowerShell, FSMO, DFSR, DFSN, GCs, virtualization, RDC, LDAP queries, DPM, SYSVOL migration, and Netmon.

Do it.

Question

Logparser.exe doesn’t seem to read the message body when run against Security event logs on Windows Server 2008 R2:

logparser -i:EVT -o:CSV -resolveSIDs:ON "SELECT * INTO goo.csv FROM security"

Security,97760,2011-03-09 07:57:23,2011-03-09 07:57:23,4689,8,Success Audit event,13313,The name for category 13313 in Source "Microsoft-Windows-Security-Auditing" cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer,Microsoft-Windows-Security-Auditing,S-1-5-21-3366683618-1989269118-3947618792-500|administrator|CONTOSO|0x57e6f4|0x0|0xbc8|C:\Windows\System32\mmc.exe,2008r2-01-f.contoso.com,,A process has exited. Subject: Security ID: S-1-5-21-3366683618-1989269118-3947618792-500 Account Name: administrator Account Domain: CONTOSO Logon ID: 0x57e6f4 Process Information: Process ID: 0xbc8 Process Name: C:\Windows\System32\mmc.exe Exit Status: 0x0 ,

Answer

I am able to reproduce this issue. I can also see LogParser failing to parse some other ‘modern’ events in other logs, like the Application event log. Considering the tool was written in 2005 and only lists its support as Win2003 and XP, this looks like expected behavior.

You can do pretty much everything LogParser is doing with the event logs using PowerShell 2 on the later OS though, so you may not care to run this all down:

Get-WinEvent
http://technet.microsoft.com/en-us/library/dd367894.aspx

It is crazy powerful and can do Xpath, structured XML queries, and hash-table queries.

Even WEVTUTIL.EXE can do much of this, although not with as much output formatting control as PowerShell. Leave LogParser to the older OSes.
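For example, here is a rough Get-WinEvent equivalent of that LogParser query (run it elevated to read the Security log; -MaxEvents is only there to keep the sample quick):

Get-WinEvent -LogName Security -MaxEvents 1000 | Select-Object TimeCreated, Id, ProviderName, Message | Export-Csv .\goo.csv -NoTypeInformation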

Question

We’re thinking about virtualizing DFSR and DFSN. Is it supported? Are a lot of customers virtualizing these workloads?

Answer

Totally supported. Like anything virtual though, expect a slight performance hit.

There is a huge amount of virtualization happening – enough that you can just assume anything Windows is being run virtualized somewhere. Maybe not much by percentage, but when your OS install base is in the hundreds of millions…

The main concern we have in this scenario is one we see on physical a lot now also (Warren can attest to this): the use of el cheapo iSCSI solutions rather than fiber-channel and other beefier network fabrics, especially combined with cheap SANs that have poor to non-existent support. You absolutely get what you pay for in this environment. The other thing to keep in mind is that - like all multimaster database systems - you absolutely CANNOT use snapshots with it: http://support.microsoft.com/kb/2517913/ 

Question

Do cross-forest trusts figure into Infrastructure Master FSMO role placement? I.e., can the IM run on a GC if the other forest’s DCs are not all GCs too? I have two single-domain forests with a cross-forest Kerberos trust.

Answer

  • In the single domain forest it doesn’t matter where it goes at all, as the IM has no work to do until you have multiple domains in that forest.
  • If that single domain forest ever adds a domain, each IM will need to run on a non-GC server unless all DCs in that individual domain are also GCs.
  • The IM doesn’t care about the other forest at all. The forest is a boundary of what the IM is tracking, it does not traverse Kerberos trusts to other forests.
  • One more bit of recent weirdness that we don’t mention often: Once you enable the AD Recycle Bin, the Infrastructure Master stops mattering as a FSMO role and each DC takes on the role of updating themselves in regards to cross-domain object references (see http://msdn.microsoft.com/en-us/library/cc223753(PROT.13).aspx)
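As an aside, if you just want to see where the IM and the other FSMO roles currently live, one quick check is:

netdom.exe query fsmo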

Question

When using DFSR, if you rename a file does the whole file get replicated? What about if the same file exists in two different folders: will each one replicate when a user makes copies of files between different folders?

Answer

1. Nope: http://blogs.technet.com/b/askds/archive/2009/04/01/understanding-dfsr-debug-logging-part-9-file-is-renamed-on-windows-server-2003-r2.aspx

2. Not if using at least one server with Enterprise Edition in the replication partnership, so that cross-file similarity can be used:

http://blogs.technet.com/b/askds/archive/2010/08/20/friday-mail-sack-scooter-edition.aspx (see Question “The documentation on DFSR's cross-file RDC is pretty unclear – do I need two Enterprise Edition servers or just one? Also, can you provide a bit more detail on what cross-file RDC does?”)

Proof on this one (as I don’t have an article with debug log example):

Two files in two folders, both identically named, data’ed, secured. They have sequential UID version numbers. Below is the inbound debug log from the server replicating the files (heavily edited for clarity and brevity).

20110308 10:26:38.491 2264 INCO  3282 InConnection::ReceiveUpdates Received: uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe session:8 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csId:{C929D10A-601B-41D8-A620-2D161733473B} csName:badseed ← the first file starts replicating inbound

20110308 10:26:38.491 2592 MEET  1342 Meet::Install Retries:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed updateType:remote

20110308 10:26:38.491 2592 MEET  4228 Meet::ProcessUid Uid related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  5692 Meet::FindNameRelated Access name conflicting file. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  4647 Meet::GetNameRelated Name related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  3346 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:38.491 2592 MEET  1992 Meet::Download Start Download updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B} ← the file starts replicating inbound.

20110308 10:26:38.913 2592 RDCX   769 Rdc::SeedFile::Initialize RDC signatureLevels:1, uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe fileSize(approx):737280 csId:{C929D10A-601B-41D8-A620-2D161733473B} enableSim=1 ← added the file’s signature info to the cross-file RDC similarity table

20110308 10:26:39.131 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloading (refCount==1)

20110308 10:26:39.131 2592 STAG  4107 Staging::OpenForWrite name:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:39.225 2592 INCO  6593 InConnection::LogTransferActivity Received RAWGET uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe connId:{07C54B74-C2FB-4417-8830-3488E368480B} csId:{C929D10A-601B-41D8-A620-2D161733473B} stagedSize:361599 ← file was replicated WITHOUT RDC as we had never seen this file before and had no similar files anywhere

20110308 10:26:39.225 2592 MEET  2163 Meet::Download Done downloading content updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.241 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==1)

20110308 10:26:39.241 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloading (refCount==0) ← done staging file

20110308 10:26:39.241 2592 MEET  2775 Meet::TransferToInstalling Transferring content from staging area into Installing updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  2808 Meet::TransferToInstalling Obtaining fid of the newly installed file updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  2821 Meet::TransferToInstalling Read 733988 bytes, wrote 733988 bytes updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed ← expanded from staging into the Installing folder

20110308 10:26:39.256 2592 MEET  2225 Meet::Download Download Succeeded : true updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:39.256 2592 MEET  4228 Meet::ProcessUid Uid related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  5692 Meet::FindNameRelated Access name conflicting file. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  4647 Meet::GetNameRelated Name related not found. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3346 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3013 Meet::InstallRename Moving contents from Installing to final destination. Attributes:0x20 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3043 Meet::InstallRename File moved. rootVolume:{E6D66386-E6B2-11DF-845F-806E6F6E6963} parentFid:0x2AA00000000E2BD fidInInstalling:0x100000000E2C3 usn:0xb01ec28 updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3143 Meet::InstallRename Update database with new contents updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3234 Meet::InstallRename Updating database. updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 MEET  3244 Meet::InstallRename -> DONE Install-rename completed updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed csId:{C929D10A-601B-41D8-A620-2D161733473B} ← moved the file into the replicated folder, done replicating for all intents and purposes

20110308 10:26:39.256 2592 MEET  1804 Meet::InstallStep Done installing file updateName:samefile.exe uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 connId:{07C54B74-C2FB-4417-8830-3488E368480B} csName:badseed

20110308 10:26:39.256 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==0)

Now I copy the exact same file into another folder on the upstream server, with same security, attributes, data, and name. Just a different path.

 

20110308 10:26:56.497 2592 RDCX  1311 Rdc::SeedFile::UseSimilar similarrelated (SimMatches=16)uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} (related:

uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}) ← the server recognizes that the new file it was told about has an identical copy already replicated to another folder.

20110308 10:26:56.497 2592 STAG  1215 Staging::LockedFiles::Lock Successfully locked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==1)

20110308 10:26:56.497 2592 RDCX  1510 Rdc::SeedFile::UseRelated "SimilarityRelated" file already staged uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} (related: uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}) ← even better, the file is still staged, so we don’t have to go stage a copy

20110308 10:26:56.497 2592 RDCX  3742 Rdc::FrsSignatureIndexFile::Open Opening FrsSignatureIndexFile OK for write Levels=1..1 uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:56.497 2592 RDCX   467 StreamToIndex RDC generate begin: (0..1), uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:56.513 2592 RDCX   509 StreamToIndex RDC generate end: (0..1), uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B}

20110308 10:26:56.513 2592 RDCX  3742 Rdc::FrsSignatureIndexFile::Open Opening FrsSignatureIndexFile OK for read Levels=1..1 uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222

20110308 10:26:56.513 2592 RDCX  2359 Rdc::SeedFile::OpenSeedSigDB Using seed file for uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12223 fileName:samefile.exe csId:{C929D10A-601B-41D8-A620-2D161733473B} seed(type:SimilarityRelated uid:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 gvsn:{0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 fileName:samefile.exe depth=1) ← we then create a new copy of the file using the signature bytes from the old copy. The actual new file is not copied over the wire.

20110308 10:26:56.653 2592 STAG  1263 Staging::LockedFiles::Unlock Unlocked file UID: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 GVSN: {0F26D474-860E-4A5D-9466-19B11C468E26}-v12222 state: Downloaded (refCount==0)

← after this, it will look just like the first file: it gets expanded to Installing and copied to the real RF.

Question

Whenever I use LDIFDE or CSVDE to export just users, I also get computers. How do all these other LDAP apps do it? 

image

There should only be 14 users in this test domain but I get 33 entries that include computers.

Answer

There are a number of ways to skin this cat.

Give this LDAP filter a try:

ldifde -f foo.txt -r "(&(!objectclass=computer)(objectclass=user))"

image

See the difference? It is including any objects that have a class of ‘user’ but excluding (with the “!”) any that are also class of ‘computer’. This is necessary because computers are users. :) See the first few lines of one of the computers returned by the original query:

dn: CN=XP-05,CN=Computers,DC=contoso,DC=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: computer
cn: XP-05
distinguishedName: CN=XP-05,CN=Computers,DC=contoso,DC=com
instanceType: 4
whenCreated: 20101201143854.0Z
<snip>

A good alternative from the Comments: (&(objectCategory=person)(objectClass=user))

And another good one: (sAMAccountType=805306368)

(You guys think about this a lot don't you? :P) 
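If you have AD PowerShell available, the same LDAP filter works there too. A minimal sketch (the output file name is just an example):

# Export only true user objects, excluding computers, with the same LDAP filter
Get-ADUser -LDAPFilter "(&(!objectclass=computer)(objectclass=user))" | Export-Csv -Path .\users.csv -NoTypeInformation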

Question

Are DFSR and DPM compatible?

Answer

Yes, as long as your DFSR servers have this KB977381 version (or newer) of DFSR.EXE/DFSRS.EXE installed, they are compatible. The article doesn’t state it, but the filter driver I/O requests that DFSR didn’t understand were DPM’s.

Question

Is it ok to migrate SYSVOL to DFSR before you have all domains in the forest at a Windows Server 2008 domain functional level, or the whole forest at Windows Server 2008 forest functional level? Do I need to be concerned about site-based policies that might be accessed throughout the forest?

Answer

Per-domain is fine; the individual domains don’t matter to each other at all in regard to SYSVOL migration. GP is completely unaware of the replication type, so site-based policies don’t matter either. The main effect will be that once you have DFSR in use, you will hopefully have fewer GP problems due to replication latency and FRS’ general instability.

Regardless: make sure you are using our latest DFSRS, DFSRMIG and ROBOCOPY hotfixes.

KB972105 All files are conflicted on all domain controllers except the PDC Emulator when a DFSR migration of the SYSVOL share reaches the Redirected state in Windows Server 2008 or in Windows Server 2008 R2 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;972105

KB968429 List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;968429

Netmon Loot

If you use NetMon, make sure you check out all of the sweet experts and parsers that keep coming out of various teams. We don’t advertise these well, but there are some really useful ones these days:

- Ned “Tired” Pyle

Friday Mail Sack: Goat Riding Bambino Edition


Hi folks, Ned here again. I’m trying to get back into the swing of having a mail sack every week but they can be pretty time consuming to write (hey, all this wit comes at a price!) so I am experimenting with making them a little shorter. This week we talk AD PowerShell secrets, USMT and Profile scalability, a little ADUC and DFSR, and some other random awesomeness.

Question

Can you explain how the AD PowerShell cmdlet Get-ADComputer gets IP information? (ex: Get-ADComputer -filter * -Properties IPv4Address). Properties are always AD attributes, but I cannot find that IPv4Address attribute on any computer object, and even after I removed the A records from DNS I still get back the right IP address for each computer.

Answer

That’s an excellent question and you were on the right track. This is what AD PowerShell refers to as an ‘extendedAttribute’ internally, but what a human might call a ‘calculated value’. AD PowerShell special-cases a few useful object properties that don’t exist in AD by using other LDAP attributes that do exist, and then uses that known data to query for the rest. In this case, the dnsHostName attribute is looked up normally, then a DNS request is sent with that entry to get the IP address.
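You can approximate that two-step lookup yourself. A rough sketch of the idea - not the cmdlet’s actual code, and the computer name is just an example:

# Step 1: read the real LDAP attribute from AD
$computer = Get-ADComputer "7-01" -Properties dnsHostName

# Step 2: resolve that name in DNS - this is where the calculated IPv4Address comes from
[System.Net.Dns]::GetHostAddresses($computer.dnsHostName) | Where-Object { $_.AddressFamily -eq "InterNetwork" }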

Even if you removed the A record and restarted DNS, you could still be returning the DNS entry from your own cache. Make sure you flush DNS locally where you are running PowerShell or it will continue to “work”.

To demonstrate, here I run this the first time:

clip_image002

Which queries DNS right after powershell.exe contacts the DC for the other info (all of it buried under SSL here, naturally):

clip_image002[4]

Then I run the identical command again – note that there is no DNS request or response this time as I’m using cached info.

clip_image002[6]

It still tells me the IP address. Now I delete the A record and restart the DNS service, then flush the DNS cache locally where I am running PowerShell, and run the same PowerShell command:

clip_image002[8]

Voila! I have broken it. :)

Question

Is there a limit on the number of profiles that USMT 4.0 can migrate? 3.01 used to have problems with many (20+) profiles, regardless of their size.

Answer

Updated with new facts and fun, Sept 1, 2011

Yes and no. There is no real limit, but depending on the quantity of profiles and their contents, combined with system resources on the destination computer, you can run into issues. If possible you should use hardlink migration, as that is as fast as H… well, it’s really fast.

To demonstrate (and to show erstwhile USMT admins a quick and dirty way to create some stress test profiles):

1. I create 100 test users (steps 1 and 2 are sketched in code after this list):

image

image

2. I log them all on and create/load their profiles, using PSEXEC.EXE:

image

image

3. I copy a few different files into each profile. I suggest using a tool that creates random files with random contents. In my case I added a half dozen 10MB files to each profile’s My Documents folder. You can’t use the same files in each profile, as USMT is smart enough to reuse them and you will not get the real user experience.

4. I run the harshest, slowest possible migration I can, where USMT writes to a compressed store on a remote file share, with AES_256 encryption, from an x86 Windows 7 computer with only 768MB of RAM, while cranking all logging to the max:

image

This (amazingly, if you ever used USMT 3.01) takes only 15 minutes and completes without errors. Scanstate memory and CPU usage isn’t very stressful (in one test, I did this with an XP computer that had only 256MB of RAM, using 3DES encryption).

5. I restore them all to another computer – here’s the key: you need plenty of RAM on your destination Windows 7 computer. If you have 100 profiles that all have different contents, our experience shows that 4GB of RAM is required. Otherwise you can run out of OS resources and receive the error “Close programs to prevent information loss. Your computer is low on memory. Save your files and close your programs: USMT: Loadstate”. More on this later.

image

This takes about 30 minutes and there are no issues as long as you have the RAM.

image

6. I bask in the turbulence of my magnificence.
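For reference, here is the promised sketch of steps 1 and 2. The domain, destination computer name, and password are placeholder assumptions:

# Step 1: create 100 enabled test users in the default Users container
1..100 | ForEach-Object { New-ADUser -Name "testuser$_" -Enabled $true -AccountPassword (ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force) }

# Step 2: log each user on once with PSEXEC so a local profile is created and loaded
# (PSEXEC loads the specified account's profile by default)
1..100 | ForEach-Object { psexec.exe \\win7-01 -u contoso\testuser$_ -p P@ssw0rd cmd.exe /c exit }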

If you do run into memory issues (so far we’ve only seen it with one customer since USMT 4.0 released more than two years ago), you have a few options:

a. Validate your scanstate/loadstate rules – despite what you may think, you might be gathering all profiles and not just fresh ones. Review: http://blogs.technet.com/b/askds/archive/2011/05/05/usmt-and-u-migrating-only-fresh-domain-profiles.aspx. Hopefully that cuts you down to way fewer than 100 per machine. Read that post carefully, as there is a serious gotcha: once you run scanstate on a computer, all profiles are made fresh for any subsequent scanstate runs. The odds that all 100+ profiles are actually active are pretty low.

b. Get rid of old XP profiles with DELPROF before using USMT at all. This is safer than UEL because, as I mentioned, once you run scanstate that’s it – it has to work perfectly on the first try, as all profiles are now “fresh”. (On Vista+ you instead use http://support.microsoft.com/kb/940017, as I’m sure you remember.)

c. Get more RAM.

Question

Is it possible in DSA.MSC to have the Find: Users, Contacts, and Groups dialog default to finding computers, or to include computers with the users, contacts, and groups? Is there a better way to search for computers?

Answer

The Find tool does not provide for user customization – even starting it over without closing DSA.MSC loses your last setting. ADUC is a cruddy old tool, DSAC.EXE is the (much more flexible) replacement and it will do what you want for remembering settings.

There are a few zillion other ways to find computers also. Not knowing what you are trying to do, I can’t recommend one over the other; but there’s DSQUERY.EXE, CSVDE.EXE, many excellent and free 3rd parties, etc.
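For instance, a quick DSQUERY one-liner (the name pattern is just an example):

dsquery computer -name "XP-*"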

Question

If I delete or disable the outbound connection from a writable DFSR replicated folder, I get a warning that the “topology is not fully connected”. Which is good.

image

But if that outbound connection is for a read-only replica, no errors. Is this right?

Answer

It’s an oversight on our part. While technically nothing bad will happen in this case (as read-only servers – of course – do not replicate outbound), you should get this message in all cases (there are also 6020 and 6022 DFSR warning events you can use to track this condition). A read-only can be converted to a read-write, and you will definitely want an outbound connection for that.

We’re looking into this; in the meantime, just don’t do it anywhere. :)

Other Things

Just to make myself feel better: “Little roller up along first. Behind the bag! It gets through Buckner!”

  • If you have parents, siblings, children away at college, nephews, cousins, grandparents, or friends, we have the newest weapon in the war on:
    1. Malware
    2. Your time monopolized as free tech support

Yes, it’s the all new, all web Microsoft Safety Scanner. It even has a gigantic button, so you know it’s gotta be good. Make those noobs mash it and tell you if there are any problems while you go make a sandwich.

  • Finally: thank goodness my wife hasn’t caught this craze yet. She has never met a shoe she didn’t buy.

Have a nice weekend folks.

Ned “86 years between championships? That’s nothing… try 103, you big babies!” Pyle

Friday Mail Sack: Tuesday To You Edition


Hi folks, Ned here again. It’s a long weekend here in the United States, so today I tell myself about a domain join issue one can only see in Win7/R2 or later, what USMT hard link migrations really do, how to poke LDAP in legacy PowerShell, time zone migration, and an emerging issue for which we need your feedback.

Question

None of our Windows Server 2008 R2 or Windows 7 computers can join the domain – they all show error:

“The following error occurred attempting to join the domain "contoso.com": The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.”

image

Windows Vista, Windows Server 2008, and older operating systems join without issue in the exact same domain while using the same user credentials.

Answer

Not a very actionable error – which service do you mean, Windows!? If you look at the System event log there are no errors or mention of broken services. Fortunately, any domain join operations are logged in another spot – %systemroot%\debug\netsetup.log. If you crack open that log and look for references to “service” you find:

05/27/2011 16:00:39:403 Calling NetpQueryService to get Netlogon service state.
05/27/2011 16:00:39:403 NetpJoinDomainLocal: NetpQueryService returned: 0x0.
05/27/2011 16:00:39:434 NetpSetLsaPrimaryDomain: for 'CONTOSO' status: 0x0
05/27/2011 16:00:39:434 NetpJoinDomainLocal: status of setting LSA pri. domain: 0x0
05/27/2011 16:00:39:434 NetpManageLocalGroupsForJoin: Adding groups for new domain, removing groups from old domain, if any.
05/27/2011 16:00:39:434 NetpManageLocalGroups: Populating list of account SIDs.
05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: status of modifying groups related to domain 'CONTOSO' to local groups: 0x0
05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: INFO: No old domain groups to process.
05/27/2011 16:00:39:465 NetpJoinDomainLocal: Status of managing local groups: 0x0
05/27/2011 16:00:39:637 NetpJoinDomainLocal: status of setting ComputerNamePhysicalDnsDomain to 'contoso.com': 0x0
05/27/2011 16:00:39:637 NetpJoinDomainLocal: Controlling services and setting service start type.
05/27/2011 16:00:39:637 NetpControlServices: start service 'NETLOGON' failed: 0x422
05/27/2011 16:00:39:637 NetpJoinDomainLocal: initiating a rollback due to earlier errors

Aha – the Netlogon service. Without that service running, you cannot join a domain. What’s 0x422?

c:\>err.exe 0x422

ERROR_SERVICE_DISABLED winerror.h
# The service cannot be started, either because it is
# disabled or because it has no enabled devices associated
# with it.

Nice, that’s our guy. It appears that the service was disabled and the join process is trying to start it. And it almost worked too – if you run services.msc, it will say that Netlogon is set to “Automatic” (and if you look at another machine you have not yet tried to join, it is set to “Disabled” instead of the default “Manual”). The problem here is that the join code is only setting the start state through direct registry edits instead of using Service Control Manager. This is necessary in Win7/R2 because we now always go through the offline domain join code (even when online) and for reasons that I can’t explain without showing you our source code, we can’t talk to SCM while we’re in the boot path or we can have hung startups. So the offline code set the start type correctly and the next boot up would have joined successfully – but since the service is still disabled according to SCM, you cannot start it. It’s one of those “it hurts if I do this” type issues.

And why did the older operating systems work? They don’t support offline domain join and are allowed to talk to the Service Control Manager whenever they like. So they tell him to set the Netlogon service start type, then tell him to start the service – and he does.

The lesson here is that a service set to Manual by default should not be set to disabled without a good reason. It’s not like it’s going to accidentally start in either case, nor will anyone without permissions be able to start it. You are just putting a second lock on the bank vault. It’s already safe enough.
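If you have already hit this, putting Netlogon back to its default Manual start type before retrying the join should clear it up. For example, from an elevated prompt (note the required space after start=):

sc.exe config netlogon start= demand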

Question

USMT is always going on about hard link migrations. I’ve used them and those migrations are fast… but what the heck is it and why do I care?

Answer

A hard link is simply a way for NTFS to point to the same file from multiple spots, always on the same volume. It has nothing to do with USMT (who is just a customer). Instead of making many copies of a file, you are making copies of how you get to the file. The file itself only exists once. Any changes to the file through one path or another are always reflected on the same physical file on the disk. This means that when USMT is storing a hard link “copy” of a file it is just telling NTFS to make another pointer to the same file data and is not copying anything – which makes it wicked fast.

Let’s say I have a file like so:

c:\hithere\bwaamp.txt

If I open it up I see:

image

Really though, it’s NTFS pointing to some file data with some metadata that tells you the name and path. Now I will use FSUTIL.EXE to create a hard link:

C:\>fsutil.exe hardlink create c:\someotherplace\bwaamp.txt c:\hithere\bwaamp.txt
Hardlink created for c:\someotherplace\bwaamp.txt <<===>> c:\hithere\bwaamp.txt

I can use that other path to open the same data (it helps if you don’t think of these as files):

image

I can even create a hard link where the file name is not the same (remember – we’re pointing to file data and giving the user some friendly metadata):

C:\>fsutil.exe hardlink create c:\yayntfs\sneaky!.txt c:\hithere\bwaamp.txt
Hardlink created for c:\yayntfs\sneaky!.txt <<===>> c:\hithere\bwaamp.txt

And it still goes to the same spot.

image

What if I edit this new “sneaky!.txt” file and then open the original “bwaamp.txt”?

image

Perhaps a terrible Visio diagram will help:

hardlink

When you delete one of these representations of the file, you are actually deleting the hard link. When the last one is deleted, you are deleting the actual file data.

It’s magic, smoke and mirrors, hoodoo. If you want a more disk-oriented (aka: yaaaaaaawwwwnnn) explanation, check out this article. Rob and Joseph have never met a File Record Segment Header they didn’t like. I bet they are a real hit at parties…

Question

How can I use PowerShell to detect if a specific DC is reachable via LDAP? Don’t say AD PowerShell, this environment doesn’t have Windows 7 or 2008 R2 yet! :-)

Answer

One way is going straight to .NET and use the DirectoryServices namespace:

New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=yourdomaindn")

For example:

image
Yay!

image
Boo!

Returning anything but success is a problem you can then evaluate.
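To evaluate that in a script, here is a minimal sketch using PowerShell 2.0’s try/catch; the server name and DN are placeholders:

$de = New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=yourdomaindn")
try
{
    $null = $de.NativeObject  # touching this property forces the LDAP bind
    "DC is reachable via LDAP"
}
catch
{
    "LDAP bind failed: $($_.Exception.Message)"
}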

As always, I welcome more in the Comments. I suspect people have a variety of techniques (third parties, WMI LDAP provider, and so on).

Question

Is USMT supposed to migrate the current time zone selection?

Answer

Nope. Whenever you use timedate.cpl, you are updating this registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation

Windows XP has very different data in that key when compared to Vista and Windows 7:

Windows XP

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="Eastern Daylight Time"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"StandardBias"=dword:00000000
"StandardName"="Eastern Standard Time"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00

Windows 7

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="@tzres.dll,-111"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"DynamicDaylightTimeDisabled"=dword:00000000
"StandardBias"=dword:00000000
"StandardName"="@tzres.dll,-112"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00
"TimeZoneKeyName"="Eastern Standard Time"

The developers from the Time team simply didn’t want USMT to assume anything as they knew there were significant version differences; to do so would have taken an expensive USMT plugin DLL for a task that would likely be redundant to most customer imaging techniques. There are manifests (such as "INTERNATIONAL-TIMEZONES-DL.MAN") that migrate any additional custom time zones to the up-level computers, but again, this does not include the currently specified time zone. Not even when migrating from Win7 to Win7.

But that doesn’t mean that you are out of luck. Come on, this is me! :-)

To migrate the current zone setting from XP to any OS you have the following options:

To migrate the current zone setting from Vista to Vista, Vista to 7, or 7 to 7, you have the following options:

  • Any of the three mentioned above for XP
  • Use this sample USMT custom XML (making sure that nothing else has changed since this blog post and you reading it). Woo, with fancy OS detection code!

<?xml version="1.0" encoding="utf-8" ?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/currenttimezonesample">
  <component type="Application" context="System">
    <displayName>Copy the currently selected timezone as long as Vista or later OS</displayName>
    <role role="Settings">
      <!-- Check, as this is only valid for an up-level OS >= Windows Vista -->
      <detects>
        <detect>
          <condition>MigXmlHelper.IsOSLaterThan("NT", "6.0.0.0")</condition>
        </detect>
      </detects>
      <rules>
        <include>
          <objectSet>
            <pattern type="Registry">HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>

Question for our readers

We’ve had a number of cases come in this week with the logon failure:

Logon Process Initialization Error
Interactive logon process initialization has failed.
Please consult the Event Logs for more details.

You may also find an application event if you connect remotely to the computer (interactive logon is impossible at this point):

ID: 4005
Source: Microsoft-Windows-Winlogon
Version: 6.0
Message: The Windows logon process has unexpectedly terminated.

In the cases we’ve seen this week, the problem appeared after restoring a backup when using a specific third party backup product. The backup was restored to either Hyper-V or VMware guests (but this may be coincidental). After the restore, large portions of the registry were missing and most of our recovery tools (SFC, Recovery Console, diskpart, etc.) would not function. If you have seen this, please email us with the backup product and version you are using. We need to contact this vendor and get this fixed, and your evidence will help. I can’t mention the suspected company name here yet, as if we’re wrong I’d be creating a legal firestorm, but if all the private emails say the same company we’ll have enough justification for them to examine this problem and fix it.

------------

Have a safe weekend, and take a moment to think of what Memorial Day really means besides grilling, racing, and a day off.

Ned “I bet SGrinker has the bratwurst hookup” Pyle


Fun with the AD Administrative Center


Hi folks, Ned here again. We introduced the AD Administrative Center in Windows Server 2008 R2 to much fanfare. Wait, I mean we told no one and for good measure, we left the old AD Users and Computers tool in-place. Then we continued referencing it in all our documentation.

And people say we're a marketing company.

I've talked previously about using ADAC as a replacement for acctinfo.dll. Today I run through some of the hidden coolness that ADAC brings to the table, as well as techniques that make using it easier. If you've never used this utility, make sure you review the requirements, and if you don't have any Windows Server 2008 R2 DCs, install the AD Management Gateway and its updates on at least one of your older DCs in each domain. ADAC is included in RSAT.

I am going to demo as much as possible, so I hope you have some bandwidth this month, oppressed serfs Canucks and Aussies. Since this is me, I'll also show you how to work around some ADAC limitations - this isn’t a sales pitch. To make things interesting, I am using one of my more complex forests where I test the ADRAP tools.

image

Fire up DSAC.EXE and follow along.

ADAC isn't ADUC

The first lesson is "do not fight the interface". Don’t try to make ADAC into AD Users and Computers simply because that's what you’re used to. ADUC wants you to click everywhere, expanding trees of data. It also has short-term memory loss - every time you restart it you have to set it up all over again.

ADAC realizes that you probably stick to a few areas most of the time. So rather than heading to the Tree View tab right away to start drilling down, like this:

image

… instead, consider using navigation nodes to add areas you are frequently accessing. In my case here, the Users container is an obvious choice:

image

image

This pins that container in the navigation pane so that I don’t have to click around next time.

image

It's even more useful if I use many deeply nested OU structures in the domain. For example, rather than clicking all the way into this hierarchy each time:

image

I can instead pin the areas I plan to visit that week for a project:

image

Nice! It even preserves the visual hierarchy for me. Notice another thing here - ADAC keeps the last three areas I visited in the recent view list under that domain. Even if I had not pinned that OU, I'd still get it for free if I kept returning to it:

image

Once you open one of those users, you don't have to dig through a dozen tabs for commonly used attributes. The important stuff is right up front.

image

For a real-world example of how this does not suck, see this article. The old tabs are down there in the extensions section still, if you need them:

image

A lot of people have a lot of domains

One thing AD Users and Computers isn’t very good at is scale: it can only show you one domain at a time, requiring you to open multiple dialogs or create your own custom MMC console.

image

In ADAC, it’s no sweat - just insert any domains you want using Add Navigation Nodes again:

image

I can add other navigation nodes for those domains without adding the domains themselves too. Each domain gets that three-entry "recently used" list too. I'm also free to move the pinned nodes up and down the list with the right-click menu, if I have OCD. For instance, if I want the Users and Computers containers from three domains, it's nothing to have them readily available, in the order I want:

image

image

Come on now, you have to admit that is slick, right?

Always look for the nubbin arrow

Scattered around the UI are little arrows that allow you to hide and expose various data views. For instance, you can give yourself more real estate by hiding the navigation pane:

image

Or see a user's logon information:

image

Or hide a bunch of sections in groups that you don't usually care about, leaving the one you constantly examine:

image

Note: It's not really called the nubbin arrow except by Mike Stephens and me. Join our cool gang!

Views and Search are better than Find

AD Users and Computers is an MMC snap-in: this means a UI designed for NT 4.0. When it lets you search, you are limited to the Find menu, which lets you return data, but not preserve it. After closing each search, ADUC's moron brain forgets what you just asked, like a binary pothead.

ADAC came after the birth of search, at a time when AD is ubiquitous and huge. That means everywhere you go, it wants to help you search rather than browse. Moreover, it wants to remember things you found useful. If I am looking at my Users container, the Filter menu is right there beckoning:

image

It lets me do quick and reasonable searches without a complicated menu system:

image

As well as create complex queries for common attributes:

image

Then save those queries for later, for use within any spot in the forest:

image

I can also use global search. And I do mean global - for example, I can search all my domains at once and not be limited to Global Catalog lookups that are often missing less-travelled attributes:

image

For example here, I use ambiguous name resolution to find all objects called Administrator - note how this automatically wildcards.

image

Not bad, but I want only users that are going to have their passwords expire in the next month. Moreover, I've been trying to improve my LDAP query skills when scripting. No sweat, I can do it the easy way then convert it to LDAP:

image

image

Or maybe I let ADAC do the hard work of things like date range calculation:

image

Then I take that query:

image

And modify it to do what I want. Like only show me groups modified in the past three days:

image

Neato - on demand quasi-auditing.

A few tricks of the trade

Return to defaults

If you want to zero out the ADAC console and get an out of box experience, there's no menu or button. However, if you delete this folder, you delete the whole cache of settings:

%appdata%\IsolatedStorage\StrongName.um0icba0dwq40nfvuftw3i5jvholhn3k

ADAC will be slow to start the next time you run it (just as it was the first time you ever ran it) but it will be quick again after that.

The Management List

Have some really ginormous containers? If you navigate into one using ADAC, you will see an error like this:

image
"The number of items in this container exceeds the maximum number blah blah blah…"

The error tells you what to do - just change the "Management List" options. Right! So… ehhh… where is the management list? You have to hit the ALT key to expose that menu. Argh…

image

Then you can set the returned object count as low as 2000 or as high as 100000. If you have to do this though, you need to work on organizing your objects better.

Just think "Explorer"

In many ways, we designed ADAC like Windows 7's Explorer. It has a breadcrumb bar, a refresh button, and forward/back buttons.

image

It lets you use the address bar to quickly navigate and browse, with minimal real estate usage.

image

The buttons offer a history:

image

It has an obvious and "international" refresh button - very handy. ADUC made you learn weird habits like F5, which may seem natural to you now, but isn't very friendly for new admins.

image

That new Explorer probably took some getting used to, but once you got the hang of it, returning to XP seems like visiting the dusty hometown you left years ago: quaint, inefficient, boring. Having used the new Explorer for a few years now, you should find ADAC that much more intuitive.

Sum Up

I'm not here to argue against AD Users and Computers; it has its advantages (I miss the Copy… menu). And it's certainly familiar after 11 years of use. However, the AD Administrative Center deserves a place at any domain admin's table and can make your life easier once you know where to look. Try it for a week and see for yourself. If you come back to ADUC, it's ok - we already cashed your check.

Until next time.

- Ned "Ok, maybe 'fun' was a stretch" Pyle

Friday Mail Sack: Best Post This Year Edition


Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

So put down that nicotine gum and get to reading!

Question

Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure this would make things more secure or not. Larry Osterman wrote a nice article on its origins but doesn’t give any advice.

Answer

The official stance is from the KB that states how to do it:

Generally, Microsoft recommends that you do not modify these special shared resources.

Even better, here are many things that will break if you do this:

Overview of problems that may occur when administrative shares are missing
http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

That’s not a complete list; it wasn’t updated for Vista/2008 and later. The breakage is so extensive that there’s no point updating it, frankly. Removing these shares does not increase security, as only administrators can use those shares and you cannot prevent administrators from putting them back or creating equivalent custom shares.

This is one of those “don’t do it just because you can” customizations.

Question

The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but not much actual attribute data from them. The examples on TechNet are not great. How do I get it to return useful info?

Answer

You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching; the Get-ADComputer cmdlet does not accept pipeline input from Get-ADDomainController. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example (this is all one command, wrapped):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

When you run this, PowerShell first processes the commands within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the property of “Name” returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it’s 1-2-3.


Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

clip_image002[6]

Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

image

Get-ADDomainController can find all sorts of interesting things via its –service argument:

PrimaryDC
GlobalCatalog
KDC
TimeService
ReliableTimeService
ADWS

The Get-ADDomain cmdlet can also find FSMO role holders and other big picture domain stuff. For example, the RID Master you need to monitor.
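For example, a quick sketch:

# FSMO role holders straight off the domain object
$domain = Get-ADDomain
$domain.RIDMaster
$domain.PDCEmulator
$domain.InfrastructureMaster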

Question

I know about Kerberos “token bloat” with user accounts that are a member of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

Answer

Yes, things will break. To demonstrate, I used PowerShell to create 2000 groups in my domain and added a computer named “7-01” to them:

image
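The loop looks something like this - a sketch, with the OU path as an assumption:

1..2000 | ForEach-Object {
    New-ADGroup -Name "TestGroup$_" -GroupScope Global -Path "OU=TestGroups,DC=contoso,DC=com"
    Add-ADGroupMember -Identity "TestGroup$_" -Members (Get-ADComputer "7-01")
}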

I then restart the 7-01 computer. Uh oh, the System event log is un-pleased. At this point, 7-01 is no longer applying computer group policy, getting startup scripts, or allowing any of its services to log on remotely to DCs:

image 

Oh, and check out this gem:

image

I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

“The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

Question

We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C and the user profiles are on D. We’ll use that D partition to hold the USMT store. After migration, we’ll remove the second partition and expand the first partition to use the space freed up by the second.

When restoring via loadstate, will the user profiles end up on C or on D? If the profiles end up on D, we will not be able to delete the second partition obviously, and we want to stop doing that regardless.

Answer

You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

clip_image001
XP, before migrating

clip_image001[5]
Win7, after migrating

If users have any non-profile folders on D, that will require a custom rerouting xml to ensure they are moved to C during loadstate and not obliterated when D is deleted later. Or just add a MOVE line to whatever DISKPART script you are using to expand the partition.

Question

Should we stop the DFSR service before performing a backup or restore?

Answer

Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there’s no reason to stop the service or need to manually change replication:

Event ID=1102
Severity=Informational
The DFS Replication service has temporarily stopped replication because another
application is performing a backup or restore operation. Replication will resume
after the backup or restore operation has finished.

Event ID=1104
Severity=Informational
The DFS Replication service successfully restarted replication after a backup
or restore operation.

Another bit of implied evidence – Windows Server Backup does not stop the service.

Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone/something thinks that the service being stopped is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

Question

In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously if this setup process makes such edits on the live GPO in SYSVOL the changes will happen, only to have those critical edits lost and overwritten the next time an admin re-deploys with AGPM.

Answer

[via Fabian “Wunderbar” Müller  – Ned]

From my point of view:

1. The Default Domain and Default Domain Controller Policies should be edited very rarely. Manual changes as well as automated changes (e.g. by the mentioned Exchange setup) should be well known and therefore the workaround in 2) should be feasible.

2. After those planned changes are performed, you have to use “import from production” to bring the production GPO into the AGPM archive, so that the production change is reflected in AGPM. Another way could be to periodically “import from production” the default policies, or to implement a manual/human process that requires an “import from production” before any change to these policies is made using AGPM.

Not a perfect answer, but manageable.

Question

In testing the rerouting of folders, I took this example from TechNet and placed it in a separate custom.xml. When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there’s also a copy at C:\EngineeringDrafts on the destination computer.

I assume this is not expected behavior.  Is there something I’m missing?

Answer

Expected behavior, pretty well hidden though:

http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule and it will be migrated based on the <locationModify> rule

That original rerouting article could state this more plainly, I think. Hardly anyone does this relativemove operation; it’s very expensive for disk space – one of those “you can, but you shouldn’t” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user’s” on line 12, position 91 – argh!).

Don’t just comment out those areas in migdocs though; you are then turning off most of the data migration. Instead, create a copy of the migdocs.xml and modify it to include your rerouting exceptions, then use that as your custom XML and stop including the factory migdocs.xml.

There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

image

image

Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.

Question

I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

Answer

[via Jonathan “scissor fingers” Stephens  – Ned]

You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.
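So a correct CAPolicy.inf fragment looks backwards but is right; for example:

[certsrv_server]
; Counterintuitive but documented: *Period takes the unit word,
; *PeriodUnits takes the integer
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20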

Question

Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

Answer

Nope, sorry. It’s all static defined:

  • Severity = 5
  • Max log messages per log = 10000
  • Max number of log files = 999

Question

In your previous article you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online) instead of a direct service restart. However, the official whitepaper (on page 16) says that the CA service should be restarted by using "net stop certsvc && net start certsvc".

Also, I want to clarify something about a clustered CA database backup/restore. Say the DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or stop the CA service directly (net stop certsvc)?

Answer

[via Rob “there's a Squatch in These Woods” Greene  – Ned]

The CertSvc service has no idea that it belongs to a cluster. That’s why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive within Cluster Administrator.

When you update the registry keys on the active CA cluster node, the Cluster service is monitoring the registry key changes. When the resource is taken offline, the Cluster service makes a new copy of the registry keys so that the other node gets the update. When you stop and start the CA service directly, the Cluster service has no idea why the service was stopped and started, since it was done outside of the cluster, and those registry key settings are never updated on the stand-by node. General guidance around clusters is to manage the resource state (stop/start) within Cluster Administrator and not do this through Services.msc, NET STOP, SC, etc.

As far as the CA Database restore: just logon to the Active CA node and run the certutil or CA MMC to perform the operation. There’s no need to touch the service manually.

Other stuff

The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

Welcome to your nightmare (Thanks Mark!)

Totally immature and therefore funny. Doubles as a gender test.

Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


Indy whipped first!

I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

   
The mail sack becomes meta of meta of meta

Like Legos? Love Simon Pegg? This is for you.

Best sci-fi books of 2011, according to IO9.

What’s your New Year’s resolution? Mine is to stop swearing so much.

 

Until next time,

- Ned “$#%^&@!%^#$%^” Pyle

Friday Mail Sack: It’s a Dog’s Life Edition


Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk some:

Fetch!

Question

We use third party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting some of those partitions and replaced them with Windows Server 2008 R2 that never had Windows DNS. Are we getting ourselves in trouble or making this environment unsupported?

Answer

You are supported. Don’t interpret the KB too narrowly; there’s a difference between deletion of partitions used by DNS and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, there’s nothing in Windows that should care about them, and we are not aware of any problems.

This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

Question

When I run DCDIAG it returns all warning events from the System event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

Answer

DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

image

Since I don’t care about these entries, I can use pipelined FIND (with /v to drop those lines) and narrow down the returned data. I probably don’t care about the time generated since DCDIAG only shows the last 60 minutes, nor the event string lines either. So with that, I can use this single wrapped line in a batch file:

dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

clip_image002
Whoops, I need to fix that user’s group memberships!

Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

See, I don’t always make you use Windows PowerShell for your pipelines. ツ

Question

If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

Answer

Just run this DFSRMIG command:

dfsrmig.exe /getglobalstate

That tells you the current state of the SYSVOL DFSR topology and migration.

If it says:

  • “Eliminated”

… they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

If it says:

  • “Prepared”
  • “Redirected”

… they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

If it says:

  • “Start”
  • “DFSR migration has not yet initialized”
  • “Current domain functional level is not Windows Server 2008 or above”

… they are using FRS for SYSVOL.

Question

When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

Answer

DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

image

image
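For reference, the WMI side of that check looks roughly like this (a sketch using the DfsrVolumeConfig class's VolumeGuid and VolumePath properties):

Get-WmiObject -Namespace "root\microsoftdfs" -Class dfsrvolumeconfig | Format-List VolumeGuid,VolumePath

# then compare the GUID against the mount point list
mountvol.exe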

Question

I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days. I suspect that this lack of precision is related to this older AskDS post where it is mentioned that the LastLogonTimeStamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and if my only real recourse for precise auditing of inactive user accounts is parsing the Security logs of my DCs for user logon events.

Answer

Your supposition about DSQUERY is right. What's worse, that tool's queries do not even include users that have never logged on in its inactive search. So it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, so it's at least catching everyone (note that your lastlogontimestamp UTC value would be different):

(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))

You can lower the msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly if this is a large environment with a lot of logon activity. Warren's blog post you mentioned describes how to do this. I’ve seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could be easily adapted into native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you find scripts like this, make sure the author clearly understood Warren’s rules.

There is also the option - if you just care about users' interactive or runas logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That is replicated normally and could be used as an LDAP query option.

Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate” that is the friendly date time info, converted from the gnarly UTC. That might help you in your scripting efforts.
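For example, a sketch that flags enabled users with no logon recorded in the last 90 days (keep the lastLogonTimestamp slop in mind when reading the results):

$cutoff = (Get-Date).AddDays(-90)
Get-ADUser -Filter {Enabled -eq $true} -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt $cutoff -or $_.LastLogonDate -eq $null } |  # the $null check catches never-logged-on accounts
    Select-Object Name,LastLogonDate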

After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

Question

I was reading your older blog post about setting legal notice text and had a few questions:

  1. Has Windows 7 changed to make this any easier or better?
  2. Any way to change the font or its size?
  3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

Answer

[Courtesy of that post’s author, Mike “DiNozzo” Stephens]

  1. No
  2. No
  3. No

:)

#3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!

image

 [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

Question

I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

It seems that my users are able to log on using smart card logon even if the certificate on the user’s smart card was revoked.
Here are the tests we've performed:

  1. Verified that the CRL is accessible
  2. Smartcard logon with the working certificate
  3. Revoked the certificate + waited for the next CRL publish
  4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
  5. Tested smartcard logon with the revoked certificate

We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

None of them were found.

Answer

First, there is an overlap built into CRL publishing. The old CRL remains valid for a time after the new CRL is published to allow clients/servers a window to download the new CRL before the old one becomes invalid. If the old CRL is still valid then it is probably being used by the DC to verify the smart card certificate.

Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent the user from logging on with the smart card then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate, even assuming the smart card certificate verified successfully.

If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff here is the additional management of updating user accounts before their smart cards can be used for logon.

Question

When using the SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with error “Destination auditing must be enabled”. I verified that Account Management auditing is on as required, but then I also found that the newer Advanced Audit policy version of that setting is also on. It seems like the DSAddSIDHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

Answer

It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management is enabled. You will either need to:

  • Remove the Advanced Auditing policy and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to disabled.
  • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function.

We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.

Question

Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the common criteria (CC) definitions.  But there doesn't seem to be any way to define the roles that you want to enforce.

CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

Answer

Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options:

  1. Enable Role Separation and use different user accounts for each role.
  2. Do not enable Role Separation, turn on CA Auditing to monitor actions taken on the CA.

[Now back to Ned for the idiotic finish!]

Other Stuff

My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


Boing boing boing

And this:


Wait for the pit!

Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

At last, Ninja-related sticky notes.

For all the geek parents out there. My favorite is:

adorbz-ewok
For once, an Ewok does not enrage me

It was inevitable.

 

Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping, that’s why we’re on Michigan avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

Have a nice weekend folks,

Ned “my dogs are not quite as athletic” Pyle

Friday Mail Sack: Carl Sandburg Edition


Hi folks, Jonathan again. Ned is taking some time off visiting his old stomping grounds – the land of Mother-in-Laws and heart-breaking baseball. Or, as Sandburg put it:

“Hog Butcher for the World,
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation's Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders”

Cool, huh?

Anyway, today we talk about:

And awayyy we go!

Question

When thousands of clients are rebooted for Windows Update or other scheduled tasks, my domain controllers log many KDC 7 System event errors:

Log Name: System
Source: Microsoft-Windows-Kerberos-Key-Distribution-Center
Event ID: 7
Level: Error
Description:

The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field.

Error 170000C0

I’m trying to figure out if this is a performance issue, if the mass reboots are related, if my DCs are over-utilized, or something else.

Answer

That extended error is NTSTATUS 0xC0000017 - the event's data field just displays the DWORD byte-swapped, which is why it reads 170000C0:

C0000017 = STATUS_NO_MEMORY - {Not Enough Quota} - Not enough virtual memory or paging file quota is available to complete the specified operation.

The DCs are being pressured with so many requests that they are running out of Kernel memory. We see this very occasionally with applications that make heavy use of the older SAMR protocol for lookups (instead of say, LDAP). In some cases we could change the client application's behavior. In others, the customer just had to add more capacity. The mass reboots alone are not the problem here - it's the software that runs at boot up on each client that is then creating what amounts to a denial of service attack against the domain controllers.

Examine one of the client computers mentioned in the event for all non-Windows-provided services, scheduled tasks that run at startup, SCCM/SMS at boot jobs, computer startup scripts, or anything else that runs when the computer is restarted. Then get promiscuous network captures of that computer starting (any time, not en masse) while also running Process Monitor in boot mode, and you'll probably see some very likely candidates. You can also use SPA or AD Data Collector sets (http://blogs.technet.com/b/askds/archive/2010/06/08/son-of-spa-ad-data-collector-sets-in-win2008-and-beyond.aspx) in combination with network captures to see exactly what protocol is being used to overwhelm the DC, if you want to troubleshoot the issue as it happens. Probably at 3AM, that sounds sucky.
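If you want that boot-time network capture without hauling out a hub and a second machine, Windows 7 and Windows Server 2008 R2 can capture across a reboot with the in-box netsh trace (the file name and size below are just examples):

netsh trace start capture=yes persistent=yes maxsize=512 tracefile=c:\temp\boot.etl

shutdown /r /t 0

netsh trace stop

The persistent=yes argument is what keeps the capture running through the restart; stop the trace once the startup software has done its thing, then crack the resulting ETL file open in Network Monitor.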

Ultimately, the application causing the issue must be stopped, reconfigured, or removed - the only alternative is to add more DCs as a capacity Band-Aid or stagger your mass reboots.

Question

Is it possible to have 2003 and 2008 servers co-exist in the same DFS namespace? I don’t see it documented either “for” or “against” on the blog anywhere.

Answer

It's totally OK to mix OSes in the DFSN namespace, as long as you don't use Windows Server 2008 ("V2 mode") namespaces, which won't allow any Win2003 servers. If you are using DFSR to replicate the data, make sure all servers have the latest DFSR hotfixes (here and here), as there are incompatibilities in DFSR that these hotfixes resolve.

Question

Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

Answer

DFSN management tools do not ordinarily allow you to create DFSN roots and links under mount points, and once you do through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the fact that the management tools block you means that it is not supported.

There is no real value in placing the DFSN special folders under mount points anyway - they consume no space, contain no files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are created on the root of the C: drive in a folder called c:\dfsroots, which ensures that they are available when the OS boots. If clustering, you'd create them on one of your drive-lettered shared disks.
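If someone has already jammed a root under a mount point through said hax0r means and needs to clean it up, FSUTIL is the escape hatch (the path below is a made-up example; make very sure you are pointing at the DFSN reparse point and not at a volume mount point you care about):

fsutil reparsepoint query C:\mountpoint\dfsroot

fsutil reparsepoint delete C:\mountpoint\dfsroot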

Question

How do you back up the Themes folder using USMT4 in Windows 7?

Answer

The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere on the user’s source profile and that those are being copied by the migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration; the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

Note here how after scanstate, my USMT store’s Themes folder is empty:

clip_image001

After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

clip_image002

However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

<?xml version="1.0" encoding="UTF-8"?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">
  <component type="Documents" context="User">
    <!-- sample theme folder migrator -->
    <displayName>ThemeFolderMigSample</displayName>
    <role role="Data">
      <rules>
        <include filter='MigXmlHelper.IgnoreIrrelevantLinks()'>
          <objectSet>
            <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>
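Assuming you saved that sample as themefolder.xml (the file name is made up), you would just include it alongside your other migration XML on both ends:

scanstate c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml

loadstate c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml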

And here it is in action:

clip_image004

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

The only hard and fast rule is that the forward link (flink) must be an even number and the backward link (blink) must be the flink's ID plus one. In your case, if the flink is -912314984, then the blink had better be -912314983, which I assume is the case since things are working. But we were curious when you posted the linkID documentation from MSDN, so we dug a little deeper.

The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, you're all good.

Question

I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs since they're the team that swaps out the local accounts SQL is installed under to the domain service accounts we create to run SQL.

Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

Answer

So you will want to delegate a specific group of users -- your DBA team -- permissions to modify the SPN attribute of a specific set of objects -- computer accounts for servers running SQL server and user accounts used as service accounts under which SQL Server can run.

The easiest way to accomplish this is to put all such accounts in one OU, i.e., OU=SQL Server Accounts, and run the following commands:

Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

Your admins should then be able to use setspn.exe to modify that property on the designated accounts.

But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

  1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
  2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

    for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

Neither of these is really a great option, however, because you’re essentially giving a group of non-AD administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby the DBAs submit a request to a real administrator who already has permissions to make the required changes, as well as the experience to verify that such a change won’t cause any problems.

Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

Question

What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled? Is Enabled an attribute, a flag, a bit, a setting? What, if anything, would that setting show up as in something like ADSIEdit.msc?

I get that stuff like samAccountName, sn, telephoneNumber, etc. are attributes, but what the heck is Enabled?

Answer

All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is a property ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn’t very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer’s or user’s account state (enabled or disabled) is stored in Active Directory.

Whether or not an account is disabled is reflected in the appropriate bit being set on the object’s userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You’ll find that the second least significant bit of the userAccountControl bitmask (0x2) is called ACCOUNTDISABLE, and it reflects the appropriate state; 1 is disabled and 0 is enabled.

If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

(UserAccountControl:1.2.840.113556.1.4.803:=2)
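To round that out, here are both approaches in AD PowerShell - the raw bitwise LDAP filter, and the Search-ADAccount cmdlet that hides the bit math for you:

# All disabled users, via the bitwise LDAP matching rule
Get-ADUser -LDAPFilter "(userAccountControl:1.2.840.113556.1.4.803:=2)"

# Same result, no bit twiddling required
Search-ADAccount -AccountDisabled -UsersOnly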

Other stuff

I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

And just for Ned (because he REALLY loves this stuff!): Kittens!

No need to rush back, dude.

Jonathan “Payback is a %#*@&!” Stephens

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of South Pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

clip_image002

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.

Answer

Totally possible for Hyper-V virtual machines: use the WMI class Win32_ComputerSystem, where the Model property is “Virtual Machine” and the Manufacturer property is “Microsoft Corporation”. You can also use the class Win32_BaseBoard, whose Product property will be “Virtual Machine” and whose Manufacturer property will be “Microsoft Corporation”.
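As a sketch, the WQL for such a WMI filter would look like this (the Win32_BaseBoard variant is equivalent - pick one):

SELECT * FROM Win32_ComputerSystem WHERE Model = "Virtual Machine" AND Manufacturer = "Microsoft Corporation"

You can eyeball what a given guest actually reports before committing to the filter:

Get-WmiObject Win32_ComputerSystem | Select-Object Manufacturer, Model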

image

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to check, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMware, Citrix Xen, KVM, Oracle VirtualBox, etc., you’ll have to see what shows up in Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!

Which reminds me - Tad is back:

image

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?

Answer

The output of DNS Server: No is a cosmetic issue with -whatif. It should say Yes, but doesn't unless you specifically pass -InstallDns:$true. You don't have to specify -InstallDns; the cmdlet will automatically* install the DNS server role unless you specify -InstallDns:$false.

* If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if that domain already hosts the DNS, all subsequent DCs will also host the DNS by default. So to be very specific:

1. New forest: always install DNS
2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
3. Replica: if the current domain hosts DNS, install DNS
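So for a replica in a domain that already hosts DNS, the following two commands do the same thing; being explicit also makes the -whatif output tell the truth (sketched against the corp.contoso.com domain from the question):

Install-ADDSDomainController -DomainName corp.contoso.com -WhatIf

Install-ADDSDomainController -DomainName corp.contoso.com -InstallDns:$true -WhatIf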

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all your DCs in the forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled yes -s %i

For instance:

image

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

image

If you weren't strictly opposed to AD replication (short-circuiting it like this isn't going to stop eventual replication traffic anyway), you could always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta Sync-ADObject cmdlet.
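As a sketch of that single-object route (the DC names and user are invented, and Sync-ADObject is the Windows Server "8" Beta cmdlet mentioned above):

# Disable the account against one DC...
Disable-ADAccount -Identity nedpyle -Server 2008r2-01

# ...then push just that object to another DC
Sync-ADObject -Object "CN=Ned Pyle,OU=Staff,DC=corp,DC=contoso,DC=com" -Source 2008r2-01 -Destination 2008r2-02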

image

 The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - where you modify perms on lower level folders first - in some sort of staged fashion to minimize the amount of replication that has to occur, but it just sounds like a recipe to get things wrong or end up replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and next time, treat this as a lesson learned and plan your security design better so that all of your user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

image

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

image

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts do have a finite, hardcoded lifetime, and you will get an error if they expire. You might see this error when executing searches that crawl a huge quantity of data using limited indexed attributes and return a small result set. If you hit a DC that is not very busy, the query runs faster and may have enough time to complete, even for a big dataset like this one; server hardware is also a factor here. You can try searching from a deeper starting point in the tree, or tweaking the indexes - although obviously not in this case.

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, it is simply replicated without staging, and that operation of making a copy and sending it into RPC is so fast that there’s no reasonable way for anyone to ever see any issues there. I certainly never have, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by-design limitation of USMT, not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and give permissions to another group like - Authenticated users for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of:

image

]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Replacing Everyone with Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow is the ability to create a file and to read and write data. The permissions do not allow any process except System to launch any executable code.

Question

If I specify a USMT 4.0 config.xml child node to prevent migration, I am still seeing the settings migrate. But if I set the parent node, those settings do not migrate. The consequence being that no child nodes migrate, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup. On Win7, I want the Dot3Svc service set to Automatic startup. If I use this config.xml on the loadstate, the service is set to Manual like on the XP machine and my "no" setting is ignored:

<component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">
  <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>
</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It’s a best practice to use a config.xml in scanstate as described in http://support.microsoft.com/kb/2481190 if going from x86 to x64; without it, you end up with damaged COM settings. Otherwise, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications, or if there is no config.xml at all.

Besides being required for XP to block settings, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If using Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with the scanstate; it’s typically better to block migration from adding things to the store, as it will be faster and leaner.
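In practice, that means generating the config.xml on the oldest source OS and then feeding it to scanstate (the store path and XML list here are examples):

scanstate /i:migdocs.xml /i:migapp.xml /genconfig:config.xml

scanstate c:\store /i:migdocs.xml /i:migapp.xml /config:config.xml

Edit the generated file to flip migrate="yes" to migrate="no" on whatever you want blocked before running the real scanstate.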

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

clip_image002[6]clip_image004clip_image006
Sigh - there is never going to be another Firefly

And finally…

I started re-reading Terry Pratchett, picking up from where I left off as a kid. Hooked again. Damn you, English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle
