The nuts and bolts of SE2: infrastructure and architecture

This one's for the nerds.

Suborbital Extension Engine (SE2) is a platform designed to run untrusted third-party plugins that integrate into your SaaS application. To do this, we've built a system that runs a highly available cluster of WebAssembly runtimes with a simple API for executing user plugins. Let's dive into how it works.

High-level overview

At the highest level, SE2 has four main components: the data plane, the control plane, the code editor, and the builder service. The data plane is responsible for loading and executing WebAssembly plugins via an 'execution API'. The control plane provides the metadata and Wasm modules the data plane needs to operate. The code editor is a web application that lets end-users write JavaScript, TypeScript, Rust, or Go code to create plugins, and the builder service takes that code and builds it into a WebAssembly module that can be executed.

Drop-in code editor and builder service

Starting with what end-users interact with directly, our code editor is a Next.js web app designed to make writing plugins easy. It includes a text editor, console, and controls for building and deploying plugins, and it can be embedded into any web app via an iframe or our new React component. It also allows testing new plugins: test payloads are sent to the plugin, and its output is returned to the console. When the user runs a test, the builder service spins up an ephemeral Wasm runtime instance to execute it.

[Code editor screenshot]
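
To make the plugin model concrete, here's a minimal sketch in Go of the shape a plugin takes. The Plugin interface and EchoGreeting type are hypothetical stand-ins for illustration, not the actual SE2 SDK (each supported language has its own SDK), but conceptually a plugin is just a function from input bytes to output bytes.

```go
package main

import "fmt"

// Plugin is a hypothetical stand-in for the per-language SE2 SDK interfaces:
// conceptually, a plugin receives input bytes and returns output bytes.
type Plugin interface {
	Run(input []byte) ([]byte, error)
}

// EchoGreeting is a toy plugin that greets whoever is named in the input.
type EchoGreeting struct{}

var _ Plugin = EchoGreeting{}

func (EchoGreeting) Run(input []byte) ([]byte, error) {
	return []byte(fmt.Sprintf("Hello, %s!", string(input))), nil
}

func main() {
	// The editor's 'test' feature is conceptually similar: it sends a test
	// payload to the plugin and prints the result to the console.
	out, err := EchoGreeting{}.Run([]byte("world"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // Hello, world!
}
```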

When the 'build' step is triggered, the editor connects to the builder service to compile the plugin. The builder service exposes APIs to take source code, build it into a Wasm module, and then store it. Everything is versioned using content-addressable hashes, so specific versions of plugins can be referenced in the execution API. The builder service includes the toolchains needed to build JS, TS, Go, and Rust code, so end-users don't need to concern themselves with 'how' everything is built, just the logic of their plugin. The builder service shares its code-building logic with our CLI, Subo, so your Wasm modules behave identically whether you build them locally or via SE2.
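
As a rough sketch of that flow, the snippet below sends source code to a build endpoint and content-addresses the returned module with SHA-256. The URL, route, and response shape are assumptions made for illustration, not the builder service's actual API; the content-addressing idea is the part that matters.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	src, err := os.ReadFile("plugin.go")
	if err != nil {
		panic(err)
	}

	// Hypothetical build endpoint: the real builder API paths and parameters differ.
	resp, err := http.Post("https://builder.example.com/build/tinygo", "text/plain", bytes.NewReader(src))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response body stands in for the compiled .wasm module.
	module, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Content addressing: the module's hash doubles as its version identifier,
	// so a specific build can be referenced unambiguously in the execution API.
	hash := sha256.Sum256(module)
	fmt.Println("module ref:", hex.EncodeToString(hash[:]))
}
```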

Our upcoming hosted SE2 offering runs the builder service (configured for multi-tenant builds) in a Kubernetes cluster within Google Cloud Platform. This Kubernetes cluster is spread across multiple highly available regions. All of our infrastructure is configured via Terraform, and the Kubernetes cluster is kept up to date by Flux CD.

Central control plane

The SE2 control plane is responsible for ensuring that the data plane stays up to date with the latest configuration and metadata, and it lets the data plane download (and cache) the Wasm modules. Since every module is content-addressable via its hash, caching is straightforward. The entire SE2 universe (what we call the 'system') is versioned: any time a change is made, the system version is incremented to inform the data plane that something has changed. The control plane then exposes APIs that describe exactly what changed, which allows the data plane to intelligently pull down only what it needs to update its configuration and cache.
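
Here's a minimal sketch of that sync pattern, assuming a hypothetical client interface (the real control plane API is different): check the system version, and only when it has moved, ask for the delta and fetch any modules not already cached.

```go
package main

import "fmt"

// ControlPlane is a hypothetical client interface; the real control plane API differs.
type ControlPlane interface {
	SystemVersion() (int64, error)           // current version of the whole SE2 'system'
	ChangesSince(v int64) ([]string, error)  // hashes of modules changed since version v
	FetchModule(hash string) ([]byte, error) // download one Wasm module by its hash
}

// syncOnce brings the local module cache up to date if the system version has moved.
// The data plane would run this periodically, or whenever the control plane signals a change.
func syncOnce(cp ControlPlane, known int64, cache map[string][]byte) (int64, error) {
	v, err := cp.SystemVersion()
	if err != nil || v <= known {
		// Nothing new, or the control plane is unreachable: keep serving from the cache.
		return known, err
	}
	hashes, err := cp.ChangesSince(known)
	if err != nil {
		return known, err
	}
	for _, h := range hashes {
		if _, ok := cache[h]; ok {
			continue // content-addressed: a matching hash means the cached copy is already correct
		}
		module, err := cp.FetchModule(h)
		if err != nil {
			return known, err
		}
		cache[h] = module
	}
	return v, nil
}

// fakeControlPlane is an in-memory stand-in used only to exercise syncOnce in this sketch.
type fakeControlPlane struct{}

func (fakeControlPlane) SystemVersion() (int64, error)        { return 2, nil }
func (fakeControlPlane) ChangesSince(int64) ([]string, error) { return []string{"abc123"}, nil }
func (fakeControlPlane) FetchModule(string) ([]byte, error)   { return []byte("\x00asm"), nil }

func main() {
	cache := map[string][]byte{}
	version, err := syncOnce(fakeControlPlane{}, 1, cache)
	fmt.Println("synced to version", version, "- modules cached:", len(cache), "- err:", err)
}
```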

In the hosted SE2 infrastructure, the control plane server lives next to the builder in the main Kubernetes cluster, alongside our API service, which manages accounts, authentication, and authorization for the multi-tenant hosted environment. For customers who host SE2 themselves, the control plane and builder service run in single-tenant mode and don't need the API service except to send telemetry data. Our Kubernetes cluster is also home to the control servers for HashiCorp Nomad, which orchestrates our data plane regions. Our Nomad installation is managed via Waypoint, which performs similar duties to Flux CD.

The metadata for our hosted SE2 API, control plane, and builder services is stored in a PlanetScale database, and our main storage for artifacts is Google Cloud Storage. Since the hosted infrastructure was designed to be multi-tenant from the get-go, everything is segmented into accounts. Each account has its own segmented storage within the cloud bucket, and is authenticated and authorized by Auth0 and an access control policy system. Resources such as accounts, tenants, builds, and artifacts are all accessed by presenting a properly scoped bearer token, and the API service ensures that all access control rules are satisfied before granting access to a resource.
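
The sketch below shows the general shape of that kind of check: a token scoped to an account and a set of actions is compared against the resource being requested before access is granted. The claim names and scope strings are hypothetical; the real API service relies on Auth0-issued tokens and a richer policy system.

```go
package main

import (
	"errors"
	"fmt"
)

// Claims are hypothetical token claims used for illustration only.
type Claims struct {
	Account string   // the account this token is scoped to
	Scopes  []string // the actions this token may perform
}

// authorize checks that the presented claims cover the requested account and action.
func authorize(c Claims, account, action string) error {
	if c.Account != account {
		return errors.New("token is scoped to a different account")
	}
	for _, s := range c.Scopes {
		if s == action {
			return nil
		}
	}
	return fmt.Errorf("token lacks the %q scope", action)
}

func main() {
	token := Claims{Account: "acct_123", Scopes: []string{"builds:read", "artifacts:read"}}

	fmt.Println(authorize(token, "acct_123", "artifacts:read")) // <nil>
	fmt.Println(authorize(token, "acct_456", "artifacts:read")) // token is scoped to a different account
	fmt.Println(authorize(token, "acct_123", "builds:write"))   // token lacks the "builds:write" scope
}
```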

Global data plane

The data plane for SE2, whether self-hosted or Suborbital-hosted, is our open source E2 Core plugin server. It connects to the control plane to fetch configuration and Wasm modules, and maintains a local cache. E2 Core starts up as a proxy server and receives requests to execute plugins identified by name or by content-addressable hash. As E2 Core receives metadata from the control plane, it downloads, stores, and then launches each Wasm module in its own process, and Wasm runtime instances are autoscaled within each process to handle executions of that plugin. These 'satellite' processes connect to the main proxy process over a local network mesh, using websockets to receive requests and respond with their results. Every satellite process is connected to every other satellite process belonging to the same account, so E2 Core workflows (chains of plugins strung together) can be run by passing data directly between satellite processes, which is extremely efficient.
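
From an application server's point of view, calling the data plane is just an HTTP request: the request body is the plugin's input, and the response body is its output. The snippet below is a sketch under that assumption; the host and route shown are placeholders, not E2 Core's documented endpoints.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Input for the plugin goes in the request body; its output comes back in the
	// response body. The route below is illustrative only, not E2 Core's exact path,
	// and a plugin could equally be addressed by its content hash instead of its name.
	payload := bytes.NewBufferString(`{"name": "world"}`)

	resp, err := http.Post("http://localhost:8080/plugin/hello", "application/json", payload)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```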

In SE2's hosted infrastructure, the data plane is deployed to several globally distributed edge regions. Each region is identical and uses Nomad to orchestrate its services, running an autoscaled group of E2 Core worker nodes and an OpenTelemetry collector. These nodes all currently run on the new GCP ARM VMs (which are in beta, but have been extremely reliable for us). E2 Core reports data about its activity (e.g. which plugins are executed, along with details such as execution time and memory use) to the OpenTelemetry collector, which ships this data to an InfluxDB instance. InfluxDB is a time series database; we use it to collect and query usage information, which powers the metrics in the admin dashboard as well as billing. We'll show off more about the admin dashboard soon!
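
To give a feel for what that telemetry looks like, here's a small sketch using the OpenTelemetry Go SDK: each plugin execution is wrapped in a span carrying attributes like the plugin's hash, and the span's start/end timestamps capture execution time. This is illustrative of the kind of data reported, not E2 Core's actual instrumentation, and in a real deployment an OTLP exporter would be configured to ship spans to the collector.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// executePlugin wraps a (pretend) plugin execution in an OpenTelemetry span,
// attaching the kind of attributes described above: which plugin ran, and how big its input was.
func executePlugin(ctx context.Context, pluginHash string, input []byte) ([]byte, error) {
	tracer := otel.Tracer("e2core-sketch")
	_, span := tracer.Start(ctx, "plugin.exec")
	defer span.End() // span duration records the execution time

	span.SetAttributes(
		attribute.String("plugin.hash", pluginHash),
		attribute.Int("input.bytes", len(input)),
	)

	time.Sleep(5 * time.Millisecond) // stand-in for actually running the Wasm module
	return []byte(`{"greeting":"hello"}`), nil
}

func main() {
	// In the hosted deployment, an OTLP exporter would send these spans to the region's
	// OpenTelemetry collector, which forwards the data to InfluxDB. Without an exporter
	// configured, the global tracer provider makes the spans no-ops, so this still runs.
	out, _ := executePlugin(context.Background(), "abc123", []byte(`{"name":"world"}`))
	fmt.Println(string(out))
}
```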

All of the edge regions sit behind an anycast GCP global load balancer, which ensures that any request made by your application servers is automatically routed to the nearest instance of the data plane. Since all of the data plane regions are identical, any region can serve any request. Additionally, each region contains an SSD-based replica of all the Wasm modules it needs to operate, along with a continually updated cache of all the metadata it needs for routing. This means that if the control plane were to go offline, or an edge region were to lose communication with it, the region could continue serving requests unhindered.

A note about edge networks

I've referred to our data plane as an 'edge' several times in this post, and I want to clarify exactly what that means. Edge networks and edge computing are still somewhat fuzzily defined concepts; in our case, 'edge' simply means running servers close to the servers sending us requests. Providers such as Cloudflare, Netlify, and Fastly have designed their edge networks to place servers near people, ensuring that requests to websites and APIs are served as quickly as possible. For SE2, the important metric is latency to our customers' servers, not latency to real humans' devices, which is why we're designing our edge to place data plane instances close to the most popular cloud regions.

Varied deployment options

Since the data plane is our open source E2 Core server (unmodified), it is entirely possible (and officially supported) for our customers to host it in their own infrastructure. Because SE2 plugins are designed to be extremely efficient, network latency can easily dominate total execution time, so it makes sense for some customers to host the data plane themselves to get the absolute lowest possible latency. The currently released version of SE2 has both the control plane and the data plane hosted by our customers. We will continue to support this configuration until the hosted SE2 infrastructure reaches General Availability, after which we will begin transitioning all of our customers' control planes into the hosted infrastructure. The data plane will remain self-hostable if you prefer, and we are happy to work with your team to find the best configuration and deployment for your application's needs.

A look at what's next

As we open up the hosted version of SE2, we'll be gathering feedback from early access users and deploying more data plane regions. We'll also be adding new capabilities for the plugins running in our customers' and our hosted infrastructure, such as access to cache, storage, and more. It's imperative that the security and performance of SE2 are top-notch, regardless of the deployment model, so we are putting special care and research into those two areas to provide the best and most secure platform for making your application programmable.

I hope you'll sign up for early access to hosted SE2, as we'll be giving access to the first batch of customers in a few short weeks. You should also keep an eye on our Twitter account, as we'll be posting a demo video and deployment overview video next week! We also love to answer questions about what we do, so don't hesitate to reach out.