What's Wrong with MIDI?

How MPE and MIDI 2.0 Overcome MIDI's Shortcomings

Evan Shamoon · 02/28/24

Back in the late 1970s, when the creation of electronic musical instruments was starting to gain momentum and wider popularity, there was no standardized way for these devices to communicate with one another. With instruments coming from a range of different manufacturers, musicians had to contend with a slew of proprietary protocols, custom interfaces, and connector types…which limited the ways in which these instruments could be integrated. Aside from rudimentary control voltage I/O, even several seminal synthesizers — including the likes of the Sequential Circuits Prophet-5, Roland SH-101, and Korg MS-20 — simply had no shared format for exchanging data with one another.

In 1981, however, a consortium of industry pioneers and electronic music enthusiasts came together to create a new communication standard—which would eventually be called MIDI, or Musical Instrument Digital Interface. The initial group of developers that established the specification included some heavy hitters: leadership and engineers from Roland, Yamaha, and Korg, as well as Sequential Circuits founder Dave Smith, to name a few.

After a couple of years of intensive development, the MIDI protocol was introduced in 1983, and it was nothing short of a game changer for the world of electronic music. It used a simple and extremely efficient data format to transmit musical information—note-on/off commands, pitch, velocity, control changes and the like—and allowed synthesizers, drum machines, and other MIDI-enabled devices to sync their clocks and control one another’s parameters. MIDI was an immediate and powerful shared language that allowed these instruments to talk to each other, regardless of the manufacturer, and has proven to be one of the most durable and long-lasting formats in the history of electronics. (Consider, for instance, how many variations of USB, HDMI, and countless other digital protocols we’ve churned through over the years.) For decades, MIDI has continued to hold the electronic music world together.

But…it's not perfect. In this article, we're looking at some of MIDI's pitfalls and the most common complaints about this still-ubiquitous protocol. Additionally, we'll look at how new extensions of the MIDI protocol aim to circumvent these problems or approach them from new angles. So let's buckle in and address the big question: what is wrong with MIDI?

So...What's Wrong with MIDI?

Over time, MIDI has begun to show its age. For one, there are limits to its speed and resolution: MIDI was initially designed to work with hardware from the 1980s, and as a result, has a relatively low data transfer rate (just 31.25 kbps over the classic 5-pin DIN connection). This means that there can be a noticeable delay (known as MIDI latency) when transmitting MIDI messages between devices. Likewise, MIDI’s resolution for certain parameters—particularly those which are most logically addressed with CC (control change) values—is limited to 7 bits, or just 128 discrete values, which creates audible “steps” in some musical expressions where more resolution would create smoother changes over time.
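To make the resolution issue concrete, here's a minimal Python sketch of a MIDI 1.0 control change (CC) message. The three-byte format is standard MIDI; the function name and the filter-sweep scenario are just our own illustration.

```python
# A MIDI 1.0 Control Change message is three bytes:
# status (0xB0 | channel), controller number, value -- each data byte 7 bits.

def cc_message(channel: int, controller: int, value: float) -> bytes:
    """Build a CC message from a normalized 0.0-1.0 value."""
    quantized = min(127, int(value * 128))  # crushed down to 128 levels
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, quantized])

# Sweeping a filter cutoff (CC 74, "brightness" by convention) through 1000
# increments still produces only 128 distinct messages -- the source of the
# audible "stepping" described above.
sweep = {cc_message(0, 74, i / 999) for i in range(1000)}
print(len(sweep))  # 128
```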

Perhaps more significant, at least from a musical perspective, is the way in which MIDI handles polyphony. When dealing with polyphonic instruments, the process of mapping individual MIDI messages to control each note independently can be an arduously complex and resource-heavy affair. Mapping the pitch and timing of events isn't necessarily so complex—but per-note changes to timbre (common in acoustic instruments) are difficult to manage in typical MIDI implementations.

That is to say, when you use the pitch or modulation wheels on a MIDI keyboard, your gestures usually affect all notes playing at once. That’s because MIDI’s expression information isn’t tied to individual notes, but rather to the entire instrument channel as a whole. This makes it quite difficult to emulate the expressive nuance of something like a stringed instrument—where you’d be able to apply vibrato to an individual string or several strings, rather than all of the strings together.
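Here's a short sketch of why this happens, again in plain Python byte-building (the function names are our own): a pitch bend message carries a channel but no note number, so there's simply no way to address one note of a held chord.

```python
# MIDI 1.0 pitch bend: status (0xE0 | channel) plus a 14-bit amount sent
# LSB-first, where 8192 means "no bend." Note the absence of a note number.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def pitch_bend(channel: int, amount: int) -> bytes:
    return bytes([0xE0 | (channel & 0x0F), amount & 0x7F, (amount >> 7) & 0x7F])

# Hold a C major triad on channel 1 (zero-indexed as 0)...
chord = note_on(0, 60, 100) + note_on(0, 64, 100) + note_on(0, 67, 100)
# ...then push the pitch wheel. There's no way to say "only bend the E":
bend = pitch_bend(0, 12288)  # all three notes rise together
```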

With the more sophisticated DAWs and virtual instruments we use today, MIDI's bandwidth limitations are also much more apparent than they were in the early 1980s, and as such, it can become a bottleneck for transmitting the large amounts of data needed for dense, complex compositions. Finally, MIDI is also primarily designed around note-based and discrete event-based music, which makes it extremely well-suited for keyboard instruments…but perhaps less intuitive for other types of musical expression that conceptualize music as continuously evolving sound.

Enter MPE and MIDI 2.0—each of which provides its own solutions to MIDI's biggest pitfalls. Let's take a look at each and explain what they offer at a high level.

MPE: MIDI Polyphonic Expression

MPE is a big topic which we've addressed in a dedicated article, What is MPE? But let's recap here. MPE is a specific (somewhat idiosyncratic) use of the original MIDI protocol which allows for more expressive control over individual notes in a polyphonic performance. As mentioned, in standard MIDI implementations there’s only a single set of control data for all notes, making it challenging to apply different expressive techniques such as vibrato and pitch bend to the individual notes in a chord. MPE solves this problem by assigning each note its own MIDI channel (and therefore its own set of MIDI messages), enabling independent control over parameters for each note. Per-note controls include pitch bend, pressure (aftertouch), and other continuous controller (MIDI CC) data for each simultaneously played note. MPE also enables seamless sliding between notes without affecting other notes, creating smoother and more expressive transitions.
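Here's a simplified sketch of that channel-per-note idea, assuming the common MPE "lower zone" layout (channel 1 as the zone master, channels 2–16 as per-note member channels). The allocator class is our own illustration, not anything defined by the spec.

```python
# Rotate each new note onto its own MPE "member" channel so that per-channel
# messages (pitch bend, aftertouch, CC 74) effectively become per-note.

MEMBER_CHANNELS = list(range(1, 16))  # channels 2-16, zero-indexed in bytes

class MPEAllocator:
    def __init__(self):
        self.counter = 0
        self.note_channel = {}

    def note_on(self, note: int) -> bytes:
        """Give this note a private channel, then send its note-on there."""
        ch = MEMBER_CHANNELS[self.counter % len(MEMBER_CHANNELS)]
        self.counter += 1
        self.note_channel[note] = ch
        return bytes([0x90 | ch, note & 0x7F, 100])

    def bend(self, note: int, amount: int) -> bytes:
        """Bend only this note by addressing its private channel."""
        ch = self.note_channel[note]
        return bytes([0xE0 | ch, amount & 0x7F, (amount >> 7) & 0x7F])

mpe = MPEAllocator()
for n in (60, 64, 67):          # the same C major triad as before...
    mpe.note_on(n)
vibrato = mpe.bend(64, 9000)    # ...but now the E can bend on its own
```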

Playing an MPE device can surprise even the most experienced keyboard player, thanks mainly to its continuous pressure functionality: that is, the instant pressure response when triggering a note. Though it may require some adaptation, the sensitivity of this response can foster a more immediate and natural interaction with an MPE surface.

[Above: the Roger Linn Designs LinnStrument, the first and perhaps most popular MPE MIDI controller.]

One prevalent misconception surrounding MPE is that it is solely geared towards emulating traditional instruments. But MPE's versatility extends far beyond that: it can effectively control a diverse range of devices, from granular synthesizers to modular systems and even lighting and visual rigs. This opens up a wide range of possibilities for sound designers, who can explore entirely new dimensions in their creative process. Several Eurorack modules from different manufacturers even offer the capability to convert MPE MIDI streams into gates and CVs. (Notable examples include the Endorphin.es Shuttle Control and the Expert Sleepers FH-2.) These modules serve as valuable tools for seamlessly integrating MPE-compatible instruments and controllers with modular synthesizers, allowing musicians to explore the rich world of modular synthesis with the expressive capabilities of MPE MIDI.

MPE is now a standard part of the MIDI specification. Not every MIDI controller can produce MPE data, and not every MIDI-controllable instrument can respond to MPE data—but because the data formatting is standardized, it is increasingly common for MPE to be a feature of new controllers and instruments. By all accounts, it seems that it is here to stay, offering a satisfying solution to many of MIDI's shortcomings.

MIDI 2.0 and You

In recent years, though, the electronic music world has been abuzz with discussion about the next step in MIDI's evolution: MIDI 2.0. MIDI 2.0 has been in development for years, and we're now starting to see some of the first MIDI 2.0-compatible software, synthesizers, and controllers find their way into the marketplace. So, what's so special about MIDI 2.0?

Here are a few important high-level notes: MIDI 2.0 is backward compatible with MIDI 1.0 hardware. MIDI 2.0 also supports MPE. Beyond that, MIDI 2.0 is all about expanding options for connectivity, resolution, and complex inter-device communication.

MIDI 2.0 also offers higher resolution and precision for controls like pitch bend, aftertouch and other MIDI CC data. The 2.0 update significantly increases the resolution of the original MIDI spec’s messages from 7 bits to 32 bits, allowing for 4.29 billion possible values(!!). This massive increase in resolution allows for smoother transitions and more precise control, resulting in more natural and nuanced performances. (That said, it’s worth noting that the MIDI protocol still operates at a relatively low data rate compared to audio. MIDI messages consist of digital control information, and even with the 2.0 protocol are not intended for transmitting actual audio signals or audio rate modulation.)
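To get a feel for what that jump in resolution means, here's a small illustrative sketch that stretches a 7-bit value across a 32-bit range using simple bit repetition. (The actual MIDI 2.0 spec defines its own translation rules, so treat this as a back-of-the-envelope demonstration rather than a spec-accurate converter.)

```python
# 7 bits vs. 32 bits, by the numbers -- plus a simple bit-repetition upscaler.

def upscale_7_to_32(value: int) -> int:
    """Stretch a 7-bit value (0-127) across the full 32-bit range by
    repeating its bits, so 0 maps to 0 and 127 maps to full scale."""
    assert 0 <= value <= 127
    result, shift = 0, 32
    while shift > 0:
        shift -= 7
        result |= (value << shift) if shift >= 0 else (value >> -shift)
    return result

print(2 ** 7)                # 128 values in MIDI 1.0
print(2 ** 32)               # 4,294,967,296 values in MIDI 2.0
print(upscale_7_to_32(127))  # 4294967295 -- full scale still maps to full scale
```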

Addressing the original’s limitations with regard to microtonal music, MIDI 2.0 supports the MIDI Tuning Standard (MTS), a feature that allows for precise and flexible tuning control. Unlike MIDI 1.0, which primarily focused on the Western equal-tempered scale, MIDI 2.0 more easily embraces the world of microtonal music, where tunings and scales with intervals smaller than the standard 12-tone equal temperament are integrated directly. It enables the transmission of microtonal pitch information, including pitch bend ranges and tuning program changes, allowing musicians the freedom to experiment with custom tunings and scales.
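As a rough illustration of the kind of data involved, here's a sketch of the MIDI Tuning Standard's three-byte pitch format: a base note number plus a 14-bit fraction of a semitone (so each unit is about 0.006 cents). The function name and the 19-tone equal temperament example are our own.

```python
import math

def freq_to_mts_bytes(freq_hz: float) -> bytes:
    """Encode a frequency as an MTS base note plus a 14-bit semitone fraction."""
    semitones = 69 + 12 * math.log2(freq_hz / 440.0)  # distance from A440
    base = int(semitones)                             # equal-tempered note below
    fraction = round((semitones - base) * 16384)      # 1/16384ths of a semitone
    return bytes([base & 0x7F, (fraction >> 7) & 0x7F, fraction & 0x7F])

# The third step of 19-tone equal temperament above middle C (~261.63 Hz):
print(freq_to_mts_bytes(261.63 * 2 ** (3 / 19)).hex(" "))
```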

[Above: the Korg Keystage, one of the first commercial MIDI controllers to offer MIDI 2.0 property exchange support.]

While the resolution alone is a notable improvement, there are a host of “quality of life” improvements included as well. One such convenience is Property Exchange, which allows a synthesizer to upload a complete list of preset names to a controller, along with information about each parameter. This means that a MIDI 2.0 controller will be able to display preset names, parameter names, and other information specific to the instrument it’s controlling — a small detail, perhaps, but one that most electronic musicians will recognize as incredibly useful.
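For a sense of what that exchange might look like: Property Exchange moves JSON between devices, so a reply could carry something like the simplified structure below. The resource and field names here are invented for illustration; the actual spec defines its own schemas.

```python
# A hypothetical, simplified Property Exchange-style payload: the synth
# describes its presets as JSON, and the controller displays real names.

preset_reply = {
    "resource": "PresetList",  # invented resource name, for illustration
    "presets": [
        {"bank": 0, "index": 0, "name": "Warm Analog Brass"},
        {"bank": 0, "index": 1, "name": "Glass Bells"},
        {"bank": 0, "index": 2, "name": "FM Clav"},
    ],
}

# Instead of "Program 000", the controller's screen can now show:
for p in preset_reply["presets"]:
    print(f'{p["bank"]:02d}:{p["index"]:03d}  {p["name"]}')
```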

MIDI 2.0 also comes with a range of more utilitarian, practical affordances. Its bidirectional communication between devices makes it easier for instruments and software to exchange and display data about control parameters. The concept of “profiles” that define specific sets of parameters and capabilities for different types of instruments is also new to MIDI 2.0, making it easier for devices to understand each other's capabilities. MIDI 2.0 even supports bidirectional communication for firmware updates, which means that devices can automatically receive firmware updates from a computer when they’re available, ensuring that they remain up-to-date with the latest features and bug fixes.

Finally, it’s also worth noting that MIDI 2.0 is designed to be backward compatible with MIDI 1.0, ensuring that legacy MIDI devices can still communicate with newer MIDI 2.0 devices. This means that the same cables you’ve been using—MIDI 5-pin DIN, MIDI TRS, and of course USB MIDI—will remain compatible with MIDI 2.0.

Fully-featured MIDI 2.0 hardware is still uncommon, but it's gradually emerging. Korg's Keystage 61 and Keystage 49 controllers are some of the first noteworthy MIDI 2.0-capable controllers on the mass market; similarly, several of their new generation of MkII digital synthesizers (the Modwave, Opsix, and Wavestate) offer MIDI 2.0 Property Exchange and polyphonic aftertouch support. Now, it's not entirely clear how deep these implementations are just yet—but perhaps soon we'll see Korg branching beyond Property Exchange and into more peculiar experimental applications of the MIDI 2.0 protocol.

Other Workarounds + Looking Forward

While MIDI 2.0 is still coming online, a number of creative workarounds for sidestepping some of the original MIDI protocol’s inefficiencies have been developed over the years. Perhaps the most significant relates to the reemergence of control voltage as a means of driving modular and semi-modular devices.

Using MIDI-to-CV/Gate converters, musicians can control analog synthesizers or modular systems with MIDI messages. These converters effectively bridge the gap between MIDI and analog systems, translating MIDI messages into the analog control voltages that analog synthesizers understand. Significantly, these converters can be built with higher resolution to achieve more precise and detailed control over parameters. While it’s not quite the 32-bit resolution of MIDI 2.0, many MIDI-to-CV converters provide a resolution of 12 bits (i.e. 4,096 steps) or more, significantly surpassing the 7-bit (128-value) resolution of standard MIDI. Naturally, once in the analog domain, the control signals can be altered any way that you please—leading to a tremendously open-ended approach to control signal modification.
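Here's a minimal sketch of what happens inside such a converter, assuming the common 1 volt-per-octave standard and a hypothetical 12-bit DAC with a 0–10 V output range (real modules differ in range, calibration, and scaling):

```python
# Map a MIDI note to a 12-bit DAC code at 1 V/octave.
# DAC width and voltage range are assumptions for this sketch.

DAC_BITS = 12
DAC_MAX = (1 << DAC_BITS) - 1   # 4095 -- the "4,096 steps" mentioned above
V_RANGE = 10.0                  # assumed full-scale output voltage

def note_to_dac_code(note: int, reference_note: int = 0) -> int:
    """Convert a MIDI note number into a DAC code: 1 semitone = 1/12 V."""
    volts = (note - reference_note) / 12.0
    code = round(volts / V_RANGE * DAC_MAX)
    return max(0, min(DAC_MAX, code))

print(note_to_dac_code(60))  # middle C -> 5 V -> code 2048 (approximately)
```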

There are also plenty of ways to send MIDI messages that go well beyond a human’s ability to do so in realtime. By recording and editing MIDI automation data in a sequencer or digital audio workstation (DAW), for instance, users can control various parameters of MIDI instruments over time with extreme precision that would be impossible to achieve manually. The same goes for devices that can do parameter locking, such as the Elektron machines: sending finely-tuned MIDI data on a per-trig basis can be an amazing way to control an external instrument, and goes a long way in making up for the speed and resolution limitations that exist in the original MIDI protocol.

Finally, some synthesizers and virtual instruments allow for multitimbral capabilities, where different MIDI channels can trigger different sounds simultaneously. With this functionality, each MIDI channel can be assigned a different sound or timbre; musicians can thereby layer various instruments on separate MIDI channels to build complex sounds and textures. This makes it possible to craft more expressive, orchestral-like arrangements, even with a single synthesizer or virtual instrument. Clever musicians will take note that the option for independent sounds on different voices internal to a single instrument can have far-reaching consequences for expression and sound design.
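Mechanically, this is just the same note-on message pointed at different channels; the synth decides which patch answers on each one. A tiny sketch (the patch assignments are hypothetical):

```python
# Multitimbral layering: one MIDI stream, three channels, three timbres.
# Assume the synth maps ch. 1 -> bass, ch. 2 -> pad, ch. 3 -> lead.

def note_on(channel: int, note: int, velocity: int = 100) -> bytes:
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# A single "hit" that layers all three sounds at once
# (channels are zero-indexed inside the status byte):
layered_hit = note_on(0, 36) + note_on(1, 60) + note_on(2, 72)
```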

Experimentation has always been at the heart of electronic music. Over the past 40 years, the ubiquity and sheer longevity of MIDI have led to a vast range of MIDI-based resources, making it an essential tool for nearly all musicians, producers, and composers who work with electronic instruments. Both MIDI 2.0 and MPE represent significant advancements in the way these devices interface with one another, addressing the limitations of the original protocol and providing new ways for musicians to interact with their instruments. Only time will tell whether MIDI will last another 40 years—but in the meantime, we're sure that there will be no shortage of great music made with these new tools and technologies.