ETW: Event Tracing for Windows, part 3: architecture

(See all of my ETW posts)


It’s time for an ETW architecture discussion. There is a reasonably good official description of the architecture, but it’s a bit long and hard to follow. Here’s my short and hard to follow version:

The Windows kernel, many Windows components, and a scattering of 3rd party libraries and applications all have a mechanism for reporting various events of interest. Heck, they use the same mechanism to report a ton of events of no interest at all. They are all said to serve as event providers. There are a couple of different types of providers: providers for performance counters (eg number of page faults), event traces, alerts, and configuration settings. I will ignore everything but event traces until the end of this article. Controllers enable and disable providers, and also associate one or more providers with each event trace session. Each session is associated with a logging engine. Finally, event consumers hook up to one or more event trace sessions to receive the events. And even more finally, you may end up with event trace logs (*.etl files) that store an archive of a set of events from some trace session. Sometimes these will be produced directly by a logging engine, and sometimes by an event consumer that may just write out the currently buffered events from a trace session, or might even merge together multiple sources of information to produce the final file.
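The relationships in that paragraph can be sketched as a toy object model. To be clear, this is plain Python illustrating the data model only, not the real Win32 ETW API; all the class and instance names are made up:

```python
# Toy model of the ETW data model: providers emit events, a controller
# wires providers into sessions, and consumers receive events from
# sessions. Illustrative only -- not real ETW calls.

class Provider:
    def __init__(self, name):
        self.name = name
        self.sessions = []          # sessions currently receiving our events

    def emit(self, event):
        # An event goes to every session this provider is enabled for.
        for session in self.sessions:
            session.buffer.append((self.name, event))

class Session:
    def __init__(self, name):
        self.name = name
        self.buffer = []            # stands in for the logging engine's buffers
        self.consumers = []

    def flush(self):
        # Deliver buffered events to consumers (a "real-time" session);
        # a file-mode session would write an .etl file here instead.
        for consumer in self.consumers:
            consumer.extend(self.buffer)
        self.buffer.clear()

class Controller:
    def enable(self, provider, session):
        provider.sessions.append(session)

# Wire it up: one provider feeding one session with one consumer.
ctl = Controller()
prov = Provider("MyComponent")
sess = Session("MySession")
received = []
sess.consumers.append(received)
ctl.enable(prov, sess)
prov.emit("something happened")
sess.flush()
print(received)   # [('MyComponent', 'something happened')]
```

The point of the sketch is just the shape of the graph: controllers do the wiring, providers never talk to consumers directly, and the session (with its logging engine) sits in the middle.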

Note that while this is a nice happy high-level data model, reality is more constrained. For example, while the data model says that the same provider can provide event streams to multiple trace sessions, this is only true when using newer “manifest-based” providers (which I may or may not blog about in the future). “Classic” providers will only provide data to a single session, and the kernel provider is a totally special case: it can only be hooked up to two specific predefined sessions, and you can’t mix any other providers with it at all. If you try to associate a classic provider with more than one session, bad stuff will happen. My various sources are inconsistent about what exactly happens: one says that the newer session will win and the older session will stop receiving events, another says that the activation will fail. Also, no more than eight sessions are allowed to receive data from a single manifest-based provider.
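Those fan-out rules can be modeled in a few lines. Again this is an illustration, not real ETW (the real enforcement happens in the kernel), and since my sources disagree on what actually happens for classic providers, this sketch arbitrarily models the over-limit case as a failure:

```python
# Toy sketch of the provider/session fan-out constraints:
# - a classic provider feeds at most 1 session
# - a manifest-based provider feeds at most 8 sessions
# Illustrative only; real ETW enforces these rules itself.

def enable(provider, session):
    limit = 8 if provider["manifest_based"] else 1
    if len(provider["sessions"]) >= limit:
        raise RuntimeError(f"{provider['name']}: session limit {limit} reached")
    provider["sessions"].append(session)

classic = {"name": "OldProvider", "manifest_based": False, "sessions": []}
enable(classic, "SessionA")
try:
    enable(classic, "SessionB")   # second session: not allowed for classic
except RuntimeError as e:
    print(e)                      # OldProvider: session limit 1 reached
```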

Controllers can activate providers at any time, even while the providing application is running. Controllers can also dynamically change the set of providers feeding an event session (so theoretically you could wait for a particular event to happen or a threshold to be reached, then add in events from the kernel provider for a while, then turn it off again later). The ETW overhead for a provider application is low when no active loggers are requesting information (not as low as eg dtrace, which compiles down to a no-op when not in use, but still only a handful of instructions). The standard built-in controller includes scheduling facilities, so eg you can configure a session to start up at 3am and gather events for an hour every day. I believe the command-line tools logman and xperf are both controllers, as is the perfmon GUI tool (also known, I think, as RPM, for Reliability and Performance Monitor.) logman and perfmon both seem to be old-school tools. xperf is newer (meaning manifest-aware and part of the Crimson aka Unified Windows Eventing world).

Logging engines are configurable with things like the number and size of memory buffers used, whether to be “real-time” (deliver events immediately to consumers) or log to a file, what type of memory to use for buffering, and more esoteric options. According to the Ntdebugging blog, kernel tracing can only use two specific loggers: a circular logger for detailed info on the last few seconds, and a regular logger that is used for longer-term monitoring.
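The circular-versus-regular distinction is easy to see with a toy buffer (illustrative Python, not the actual logging-engine implementation):

```python
from collections import deque

# Toy contrast of the two kernel logger modes described above:
# a circular logger keeps only the most recent N events (detail on
# the last few seconds), while a regular sequential logger keeps
# everything for longer-term monitoring. Illustrative only.

circular = deque(maxlen=3)   # keeps only the last few events
sequential = []

for i in range(5):
    event = f"event-{i}"
    circular.append(event)
    sequential.append(event)

print(list(circular))   # ['event-2', 'event-3', 'event-4']
print(len(sequential))  # 5
```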

Providers can produce a variety of different events, so there are mechanisms for grouping and categorizing them. Each type of event produced is associated with zero or more flags (confusingly also referred to as keywords). When starting up or updating a trace session, you can specify which flags you are interested in, and only matching events will be grabbed from the provider. (The filtering seems to happen at both ends — the provider will suppress events that nobody wants, and the session will suppress events that it doesn’t want.) In addition, there are some predefined groups of flags that can be used for convenience.
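The flag matching amounts to bitmask intersection. In real ETW the keywords are 64-bit bitmasks and the session supplies a match mask when enabling a provider; here is a toy version with made-up keyword names:

```python
# Toy sketch of keyword/flag filtering. The keyword constants and
# event names are invented for illustration; real ETW keywords are
# 64-bit bitmasks supplied when a session enables a provider.

KW_PROCESS = 0x1
KW_THREAD  = 0x2
KW_IO      = 0x4

def wants(session_mask, event_keywords):
    # An event passes if it shares at least one keyword bit with the
    # session's mask; this check can be applied at either end.
    return (session_mask & event_keywords) != 0

session_mask = KW_PROCESS | KW_IO
events = [("proc-start", KW_PROCESS), ("thread-start", KW_THREAD),
          ("read", KW_IO), ("io-on-thread", KW_THREAD | KW_IO)]

delivered = [name for name, kw in events if wants(session_mask, kw)]
print(delivered)   # ['proc-start', 'read', 'io-on-thread']
```

Note that an event with several keyword bits set (like the last one) is delivered if *any* of its bits match, which is why a predefined group of flags is just a convenience OR of individual ones.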

Consumers eat trace data and do something with it. A couple of consumers are built into Windows, to provide standard features: eg, the Event Viewer and the Performance Monitor. tracerpt and tracefmt are command-line tools for generating raw dumps of events from an .etl file. XPerfView is a graphical tool for displaying performance information from a trace. This article is not going to discuss event consumers, because they’re the most interesting part and I’d rather make you suffer.

Ok, there. That’s it. That’s my architectural description, omitting lots and lots of important details.


The terminology used in the actual tools is inconsistent between the old and new, and even within individual tools. “Logger” and “Data Collector Set” particularly seem to mean multiple things. Also, some things seem to not quite fit the data model. There is a provider named “Circular Kernel Session Provider”. Huh? “Circular” would seem to be a description of a logging engine or logging engine parameter, and “Session” makes the name sound like it’s the provider for a kernel session, which is redundant since all providers are providers for sessions. If it were just “Kernel”, it would make sense.

But maybe that name is just fallout from the kernel provider’s special (limited) status.


If all this seems too abstract, I’ll walk through examples of playing with this stuff in my next article.
