Windows 7 Networking Information

Someone asked me today about some information and links about the Windows 7 networking stack especially regarding IPv6. I’m going to cache my response here for future reference and updating:

Generally speaking, Windows 7 shares the same networking stack architecture as Vista, plus the following additions:

  • DirectAccess
    • IPv6 transition technology improvements
    • IPTLS transport
  • BranchCache
  • Network Tracing and Diagnostics
  • Better Firewall Multihoming behavior

Random Stuff to go read:


Getting the ical/ics feed from a King County Library

I’ve been working on being an event calendar curator à la Jon Udell’s system, and was stuck getting a good calendar feed from the King County Library System for my local Snoqualmie library. As in most projects, you start with some HTML page of calendar entries. In this case a search for Snoqualmie Library leads to this URL:

That page is missing an ICS feed. So the next step in the process was to try fusecal, which had done the trick many times before. This time, however, the page formatting prevented fusecal from extracting useful event titles.

Today I came back and looked at the problem afresh. My first thought was that I could have something scrape the VCS links on the page and build an iCal feed, but I really didn’t want to own any automation on my own servers (and I wasn’t ready yet to write a service on something like Azure). So I poked into the link to the software maker of this calendar: Evanced. Fortunately, there I discovered that an iCal feature had been added to an eventsxml.asp page. Putting that on the KCLS URL I got:

Yay, I can see one event in XML. This is progress. Some sample code for client-side rendering of eventsxml.asp gave me a couple of parameters to try:

The next step was to filter to my specific library. Back on the HTML page there was one URL parameter that seemed relevant: ln=36. On eventsxml.asp the same parameter doesn’t work, but lib=all was a tempting place to put my magic number “36”:

Bingo! Now I’ve got a nice XML feed for my local library. However, I need ICS, and trying dm=ics didn’t work. Remembering that the feature list called it iCal, I tried:

That feed, along with the ones for North Bend, Carnation, Fall City, and Duvall, is now in my delicious events curator list for the Snoqualmie valley.
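The parameter experiments above amount to building a query string. Here is a small sketch of the idea; note the host/path and the helper name are placeholders, and lib=/dm= are just the parameters guessed at above, not a documented Evanced API:

```python
from urllib.parse import urlencode

def build_feed_url(base, lib_id, fmt="ical"):
    # Hypothetical helper: 'lib' appears to select the branch and 'dm'
    # the output format, based purely on the trial and error above.
    return f"{base}?{urlencode({'lib': lib_id, 'dm': fmt})}"

# Placeholder host/path, not the real KCLS URL.
url = build_feed_url("http://example.org/evanced/lib/eventsxml.asp", 36)
```

Pointing a feed reader at a URL built this way is what finally produced the ICS output.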

All New Code?

Why is it that people believe that every release of Windows is entirely new code? I’ve never seen anyone from Microsoft claim any such thing, but every release I see people talking about the claim. That said, in every OS release almost every component gets touched, if only to fix potential security vulnerabilities found by automated tools. That’s the advantage of a full OS release: you get the most complete testing cycle Microsoft can manage (internally and externally). Let’s see if I can introduce a lexicon for people to talk about OS release changes. Here are some categories to count and measure:

  1. Absolute Development Time – Each release only has so many developer resources for a period of time, so whether it’s just cleaning up almost-invisible implementation issues or building major new features, there is an absolute amount of effort put into each OS release. While people talk about Vista in terms of five years since XP, the reality is that most of the Windows organization for much of that time was focused on the first and especially the second XP service pack.
  2. Subsystem Replacements – Instead of incremental changes to a couple of components, this implies major rewrites and replacements. Windows ME to XP involved replacing the Windows 9x OS with the Windows 2000/NT codebase, especially at the lower levels of the OS. Much of that code had been shipped and tested as Windows NT and Windows 2000, so for the development team this was incremental work, but for consumer OS customers it was a new code base with all the pain involved. IIRC a decent amount of Windows ME was about getting the driver ecosystem compatible with the Windows 2000 codebase so that Windows XP wouldn’t be as painful a switchover. (There is a lesson here: you have to ship an OS that will get a negative reputation to move the market whenever making major changes that affect drivers; 64-bit Vista is playing that role right now for future 64-bit Windows versions.) In Vista, there were at least three major subsystem replacements: the video, audio, and networking stacks each got rewrite/replacement-level changes. The primary motivation for a subsystem replacement is to provide a better foundation for new features, but it often pulls in a couple of new features too (like IPv6 getting all the features the IPv4 stack had). This type of change is the most exciting and also the most likely to break existing drivers and applications.
  3. Architectural Rewiring – This is where we restructure existing code for modularity and potentially new release possibilities. Server Core and MinWin fall into this category of changes. To the upper layers of the OS (applications) it looks like nothing has changed, but you now have the ability to more easily release a super stripped-down version of the OS, or let different parts of the OS evolve independently. One of the sins of Windows was the circular dependencies between some components, and we are in the middle of multi-release work to clean them up. A focus of Vista was to map out the system and put in controls to make sure we never introduce more. As an OS geek, this is exciting stuff; as an OS user, this is something that sucks up development dollars without apparent effect.
  4. UI Changes – For a user of the OS, this is typically what they use to judge how much an OS has changed. Sometimes this implies a lot of work; sometimes it isn’t much work at all. Because of the attention, every product typically has some UI change for the sake of change alone, and that change is usually one of the most protected secrets about the OS. There is a balancing act between holding these changes secret and testing the OS as a final product. Often an ugly theme that exercises the same features as the final theme/UI is introduced to help mitigate the risk. (Therefore pre-release builds shouldn’t be judged on aesthetics.)
  5. New Features/Components – These are the functional improvements in the products. I think people have a pretty good grasp of this type of change.
  6. Changing Defaults – Relatively simple code/setting changes might make drastic changes to the user experience. Turning off old protocols, making new users non admin by default, etc.
  7. Bake Time/Cleanup – This is the relatively boring but critical process of fixing bugs, incremental performance tuning, and general "make things better" work that takes up the majority of a development cycle and extends post-release into service packs and the next release. It’s healthy to occasionally have a release where the majority of the work is in this category, specifically targeting the things that were too risky for a service pack but aren’t really new features. Unfortunately, this type of change tends not to sell new copies of the OS. This kind of next-release time is getting institutionalized at Microsoft in the form of a Quality Milestone done during product planning, when the development team doesn’t have much to do yet.
  8. Platform Development – This is work that might be in the OS but doesn’t really have any exposure or use until a corresponding server release or other product takes advantage of it. For example: Windows XP had a feature for restoring automatic backups of previous file versions that only showed up when attached to a server that supported it. Vista (and XP via a separate download) has amazing new GUI support for applications called the Windows Presentation Foundation, but nothing in the OS itself takes advantage of it. It usually takes a while before application developers get used to the new libraries and choose to develop for them (normally a developer doesn’t want to target an OS version that users aren’t running in bulk).

Looking forward, we already know that some Architectural Rewiring is happening in the next Windows release with MinWin, and given the major Subsystem Replacements in Vista and the compressed schedule for the next release, I can’t imagine too many Subsystem Replacements happening, but I guess we’ll have to wait and see.

Thoughts on Monoculture

I was reading the comments on a post by Schneier today regarding some office in the military thinking about Apple servers to avoid the attacks made against Windows, and kept seeing the meme of "monoculture == bad". This is not a new debate, but I want to jot down some thoughts.

  • Having a diversity of systems with equal access to a resource just means that you’ve extended your attack surface for all of the STRIDE classifications except Denial of Service.
  • Having a diversity of systems each serving a different partition of data/context does result in higher security if you are looking at all the information together. If an attacker only cares about the data of one partition, then diversity doesn’t help.
  • A diversity of systems that need to interoperate often use older and less secure protocols.
  • The primary argument (that I’ve seen) against monoculture is that a system in the monoculture is weaker than it otherwise would have been, because there are more attackers on that system and these attackers benefit from network effects in data sharing.
  • At a certain threshold, defenders start getting network effect benefits too.
    • First, security vulns in a monoculture tend not to remain private (a disadvantage of the network effects), allowing defenders to deal with not only specific issues but to learn to defend against classes of issues.
    • Security mitigation techniques are easier to deploy across like machines than across diverse machines.
  • The monoculture reference to biology is actually harmful to understanding security. Biological systems care most about survivability (Denial of Service?). Other aspects of security might get some defense as a side effect, but there really isn’t much of a notion of defending against information disclosure.

Finally, some good arguments against OpenXML

Stéphane Rodriguez has an article about issues one hits when trying to implement or use OpenXML. It doesn’t have the idiotic and artificial type of arguments that lists like Groklaw’s have created, but some of his examples feel a bit stretched to make a good story.

Let’s see what the summary of his issues is, with my bottom-line comments. Also note I’m no expert at this stuff; I’m a geek, not a word-processing file format geek, and I certainly don’t speak for Microsoft on these issues.

  1. Self-exploding spreadsheets
    • Removing formulas from a spreadsheet is non-trivial because there are other files with references to the formula that need updating, such as the calculation chain
    • You can’t rebuild the calculation chain without going through the whole document.
    • While the calculation chain can be excluded, it is non-optimal to do so because someone who does need to understand the whole spreadsheet will have to recalculate it.
    • Some ZIP libraries don’t deal efficiently with the sort of operations needed to manipulate these ZIP-based document structures
    • Bottom Line 1: Invalidating the calculation chain should be automatic, so that simple manipulation tools work better
    • Bottom Line 2: Classic engineering tradeoff: you can precalculate stuff if you want, but then you have to keep some sort of invalidation state.
  2. Entered versus stored values
    • The intuition that what you type in Excel is what is stored is incorrect. Excel does magic to make things more user friendly, like automatically adjusting to local convention (e.g., "," instead of "." in number formatting) and auto-converting to a type instead of treating everything as a string or forcing the user to be explicit
    • The stored number values are affected by IEEE rounding rules
    • Stored values are not locale-dependent (is this a bad thing?)
    • Bottom Line: It’s not clear how this affects the usability or usefulness of the format to me. Maybe a different example where values that aren’t in this format (generated by a third party tool) fail in excel?
  3. Optimization artefacts become a feature instead of an embarrassment
    • Worksheet shared formulas are listed as “copy from cell X” instead of using a neutral, non-cell reference that everything shares
    • This leads to a lot more work to change a formula in one place if others reference it.
    • Bottom Line: Sounds like a valid complaint to me
  4. VML isn’t XML
    • VML is supposed to be deprecated but gets used in some places like comments
    • A 10-year-old memo from Gates that has little to no bearing on the world or Microsoft today
    • Bottom Line: I’m not familiar enough with the spec to know if this is an issue or not, but it sounds like comments in Excel are hard to work with, and that’s bad.
  5. Open packaging parts minefield
    • You can’t delete a part and know what relies on it without parsing through everything in the file
    • Bottom Line: sounds sucky
  6. International, but US English first and foremost
    • The functional parts of the format for Excel are in English (like the SUM() function)
    • VML and DrawingML have a number of encoding notes to help with localization, which aren’t documented well
    • Applications on top of OpenXML have to localize everything themselves
    • Bottom Line: Maybe I’m missing it, but this seems like a feature: my spreadsheet manipulator doesn’t have to be aware of all the possible language encodings of the word “SUM”
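Point 2 above (entered versus stored values) is easy to demonstrate outside of Excel, since any IEEE-754 consumer shows the same entered-versus-stored drift. A quick Python illustration:

```python
# What you type is not what gets stored: spreadsheet values are IEEE-754
# doubles, so an entered "0.1" becomes the nearest representable binary
# fraction, not exactly one tenth.
entered = "0.1"
stored = float(entered)
as_text = f"{stored:.20f}"      # reveals the digits past what was typed
drift = (0.1 + 0.2 == 0.3)      # False: the classic accumulation artifact
```

So a format that records the stored double rather than the typed text is arguably being honest about what the application actually holds.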

I’m going to cut off this post here for now (my wife wants my attention 🙂) and maybe continue it another day.

Major themes from the list so far:

  • The Excel format seems to be poorly designed for targeted modification of existing files. You have to load and understand the whole thing and then write it all back out again. (Unless you are using the custom schema stuff, but that is out of scope.)
  • VML interacts with parts of OpenXML in ways that are not well described
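The targeted-modification theme is easy to see in code: an .xlsx is just a ZIP of parts, and while stripping out xl/calcChain.xml is mechanically simple, nothing in the package tells you what else referenced it. A minimal sketch in Python; the part names mirror real SpreadsheetML, but the toy contents are placeholders, and a real edit would also have to fix up [Content_Types].xml and any relationship parts:

```python
import io
import zipfile

def drop_part(package_bytes, part_name):
    # Copy a ZIP-based package, omitting one part. Sketch only: this does
    # not repair content types or relationships that pointed at the part.
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as zin, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename != part_name:
                zout.writestr(item, zin.read(item.filename))
    return out.getvalue()

# Build a toy package standing in for an .xlsx (contents are placeholders).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("xl/worksheets/sheet1.xml", "<worksheet/>")
    z.writestr("xl/calcChain.xml", "<calcChain/>")

trimmed = drop_part(buf.getvalue(), "xl/calcChain.xml")
names = zipfile.ZipFile(io.BytesIO(trimmed)).namelist()
```

Dropping the calculation chain like this is the legal-but-suboptimal escape hatch from point 1: the next full-featured consumer just has to rebuild it.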

— Ari

Windows Security Boundaries

I was reading Raymond’s post on Escalation of Privilege bugs that don’t actually escalate your privilege, and then quickly read the earlier episode of the series. There I saw a lot of commenters rebelling against the concept of the post by drawing new security boundaries which the hypothetical exploit would cross. This crystallized a concept for me: there are certain security boundaries in Windows that are harder than others, and there is much confusion in this area. Since I haven’t seen this information in one place anywhere, I’ll try to consolidate my understanding of it here.

Security Boundaries control the flow of information and execution between two distinct environments. We consider a boundary breached when arbitrary data or execution is no longer prevented from occurring. Most of the time we consider one of the environments a superset of the other, for example, going from executing as a single user to controlling the entire Operating System. However, any attack that gives you more privileges than you currently have can be considered an escalation of privilege.

  • Primary Security Boundaries
    1. The Remote Boundary (is there a better name?)
      • This boundary separates things executing off your computer from things executing on your computer. When an attacker can remotely make your computer do arbitrary things in some security context, that is crossing the remote/machine boundary.
    2. The User Principal Boundary
      • This refers to the security boundary created by executing code under a security principal, and the ACLs that detail which user has access to which resources. This is what keeps one user from snooping on another user’s files. If untrusted code manages to run in your user account, it’s not really your user account any more. This can also refer to non-user accounts such as services.
    3. The Administrator/Kernel vs. Not Boundary
      • This is the boundary between a normal user and running as administrator or executing code in the kernel. Once untrusted code is running in either administrator or in the kernel, it is not your box anymore.
    4. Privileges
      • These carve out boundaries like ACLs.
    5. The Operating System Boundary
      • This boundary refers to the ability to read files and execute code outside the context of the operating system that is normally in control of the resources. If the OS isn’t running, it can’t protect secrets. Technologies like BitLocker and the one-way encryption of passwords are attempts to deal with breaches of this boundary. Virtualization is making this area more interesting. This is also the point of Immutable Law #3.
    6. Managed Code (CLR/Java) sandboxing
  • Mitigation Boundaries (These are bypassable, have uses, and may be put together to make something stronger, but alone do not form a primary security boundary; see Mark’s blog)
    1. Power User/Administrator/Kernel/System
      • You can switch between these without much difficulty.
    2. Vista Admin account UAC
      • The split token helps but doesn’t make a full boundary
    3. Session boundaries
      • Different user sessions have different named object namespaces ACL’d to them; however, one user could reach over and mess with the session of another instance of the same user.
    4. Restricted Tokens
    5. IL Levels
    6. Software Restriction Policies
    7. UAC elevated processes in a user session
    8. Kernel Driver Signing
    9. NATs/Most Firewalls
    10. Kiosk style, certain applications only hacks/setting changes
    11. System File Protection
    12. Windows Data Protection – DPAPI
    13. Code Signing

Much of the confusion comes from “breaching” a Mitigation Boundary instead of one of the Primary Security Boundaries. Aside from some nice new Mitigation Boundaries, the main thing Vista does is move most users from the Administrator/Kernel side to the other side of primary boundary #3, and that is a big deal.
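The User Principal boundary above rests on access checks against ACLs. As a toy illustration only (the real Windows AccessCheck walks a token's SIDs against an ordered DACL of allow and deny ACEs, none of which this models):

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    # Toy ACL: map of principal name -> set of granted rights.
    acl: dict = field(default_factory=dict)

def access_check(principal: str, resource: Resource, right: str) -> bool:
    # Default deny: access is granted only if the right is explicitly
    # listed for this principal on the resource's ACL.
    return right in resource.acl.get(principal, set())

secrets = Resource(acl={"alice": {"read", "write"}, "bob": {"read"}})
```

The boundary is breached when untrusted code runs *as* alice: at that point the check is honest but useless, which is the "it's not really your user account any more" point above.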

Safari on Windows: Seeing the ugly beast

My first reaction to the news was: ah, so that’s how they will allow people to develop and test their apps for the iPhone. Then we loaded it up on a test box and I had three reactions. First: why does the window frame look like crap? Second: why is all their web page text so fuzzy, to the point I felt sick? Third: how the heck does one open a new tab? It seems to be the pattern that whenever Apple ships software for Windows, it looks much uglier than a default hello-world message box type app. Hopefully they will someday improve their porting kit and make something that doesn’t look so awful. I can also understand Apple’s hostility to Windows; if I had to use/test apps that looked like that all day, I would be hostile too. 🙂
Oh and a couple more quick usage notes:

  • The back button on my mouse doesn’t do anything in Safari
  • Not having an edge of the window to use for resizing is pretty annoying
  • I can’t find any way to add Wikipedia to the search box
  • If you don’t have any binary legacy support to worry about, why go 32-bit only? Get the extension market used to 64-bit now, before it becomes a legacy hassle.
  • Drag and drop customization of the UI elements is pretty cool
  • CFNetwork.dll? This could be fun to play with…

Overall, this has a serious case of portcitus: when your app looks or acts lame because you are more focused on a compatible source tree and exact rendering with the other platforms than on taking advantage of the platform you are porting to.

Update: Oh yeah… and do some security testing 🙂

Joel praises the Windows Branching and Quality gate model

Joel writes:

Of all the things broken at Microsoft, the way they use source control on the Windows team is not one of them.

When you’re working with source control on a huge team, the best way to organize things is to create branches and sub-branches that correspond to your individual feature teams, down to a high level of granularity. If your tools support it, you can even have private branches for every developer. So they can check in as often as they want, only merging up when they feel that their code is stable. Your QA department owns the “junction points” above each merge. That is, as soon as a developer merges their private branch with their team branch, QA gets to look at it and they only merge it up if it meets their quality bar.


So where does the branching model have issues in Windows?

First, we haven’t gone to a branch (or branches) per developer, so there are semi-redundant tools for managing checked-in code and tools for managing potential changes not yet checked in. This causes friction in building and testing such changes. Also, a branch implies a path for a change to get to some main place or product, and managing that path can be annoying. You get emails like, “The old path is getting shut down, migrate your code to the new path.” At times there is no place to do your work and check it in. Another set of problems comes via the quality gates on RIs (reverse integrations). Constraints around how many branches can be built in a night, and the velocity of change to the overall code base, result in a need to meet the quality gates quickly and in an automated way. You see, if you take too long to RI, your test results may not be valid anymore because the OS has changed enough from other teams’ work.

A lot of this system came as a result of the famed Longhorn Reset, and there were growing pains in such a huge change, so it’ll be interesting to see what system we come up with for the next release.

Inline Search for IE (including 7 on Vista)

Dare was kind enough to point out that there is a nice non-modal inline search for IE, available right now. It’s been a long time since I’ve told so many people that a piece of software is a must-have install.