Thoughts on using MFormats SDK instead of DirectShow

It's been a while since we released MFormats SDK, our flexible video development framework and codec library. Some of our customers have built their solutions on top of MFormats from scratch, while others have successfully migrated their applications from DirectShow. You can read about some of these cases here.

I thought it would be a good idea to discuss the risks and challenges that we've heard about from customers who are evaluating the idea of using MFormats instead of DirectShow in their apps.

First of all, here are some of the things that make MFormats better than DirectShow:

      • In MFormats, the relevant audio data is always attached to each video frame, so audio and video are always in sync. We've heard repeatedly that this is often the main challenge people face with DirectShow.
      • Enjoy full transparency every step of the way: at any point in time you know exactly where each frame is. One of our customers told us that with DirectShow he could never tell whether an issue was a DirectShow problem or a video source problem; with MFormats you'll know exactly what's going on. This alone dramatically reduces development time.
      • Use it with the coding technology of your choice: C++, C#, Delphi or VB.NET.
      • Make use of the built-in codecs – no more pain of hunting for specific filters to render your media, write to disk or stream out.
      • Connect to the best professional I/O boards: AJA, Blackmagic, Bluefish, DekTec, Deltacast, Stream Labs or Magewell. Instead of using the DirectShow filters that almost every vendor provides, we use their low-level SDKs to connect with lower latency and finer quality control.
      • Do things that are extremely complicated in DirectShow - such as synchronized playback of several media files or synchronized capture of several live sources. And things like reverse playback and variable speed playback become extremely easy as well.

      DirectShow has a huge learning curve. I wonder if MFormats has the same problem?

      If what our own customers are telling us is true, then – no, MFormats is dramatically easier to learn and use than DirectShow. See for yourself: here are some code snippets that show how to implement transitions between sources in MFormats SDK.

      We run some filters that we've built in-house, plus a few third-party ones. Can we still use them in MFormats?

      The first question is whether you really need those filters. You no longer need any filters that help with audio/video sync. Most third-party splitters, muxers and codecs are now obsolete as well. MFormats also handles video mixing and basic text/graphics overlay, so many of these areas are already covered.

      Hey, I work with a lot of live sources that only have DirectShow filters. How do I use those?

      Just plug your devices in and give it a go: MFormats supports DirectShow sources out of the box. We even provide methods to set the properties of such source filters from your MFormats code.

      DirectShow was an open eco-system. Your framework is closed. That sucks!

      Well, you almost got us there. MFormats, indeed, is a closed, proprietary product. But so is any third-party DirectShow filter – and often they are important.

      However, the major advantage of our product compared to DirectShow is in the way you build your code. In DirectShow, you had to deal with two kinds of black boxes: the filters and the DirectShow pipeline. Both were mostly out of your control: you had to follow a set of rules to make sure everything you did fit the pipeline. Is that the kind of open you really need?

      In MFormats you are free to design your own pipeline, where you are the boss. The natural outcome of this is that you build exactly what the end user needs, without development overhead. This is simply a leaner way to develop software. Read this post about the basic ideas behind MFormats.

      So, whenever you need to insert that custom in-house filter of yours, all you do is remove the DirectShow wrapping and expose the same processing code as an object you call from your own code.

      DirectShow supports multi-threading out of the box. Will I have to handle this myself now?

      Hell, yes. It isn't a problem, but it's not an easy one to explain in detail. The main ideas, however, are:

      • It's quite useful to have a basic understanding of threads. We highly recommend running your GUI and your MFormats-based code in separate threads. Most of our samples are built that way, and we believe it's quite simple – though some very simple apps can be an exception to this rule of thumb.
      • Most decoders and encoders are multi-threaded by default, and they usually use extra buffering as well.
      • Most operations that may affect performance are in fact asynchronous (they make use of buffers). Seeking is an exception here – but just give it a try: seeking in MFormats is a major improvement compared to DirectShow.

      Things kind of work here somehow. I don't really have the time to port my product to a different framework.

      If your product is nearing the end of its life cycle, then we urge you: don't, really. But if you do have a roadmap of new features, or ideas for new products coming up, then you might want to consider a modern and powerful framework rather than simply sticking with what you currently have.

      To give you an example, a lot of the interest in the framework comes from our work with WebRTC, a low-latency way to transfer video over the Internet. Another example is the GPU pipeline – a dramatic performance improvement that we are currently working on. This pipeline is not – and never will be – available in DirectShow.

      Hope this helps. Download MFormats SDK and give it a try: