DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model #377

Open

JimBobSquarePants wants to merge 138 commits into main from js/canvas-api

Conversation


@JimBobSquarePants JimBobSquarePants commented Mar 1, 2026

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my changes follow the existing coding patterns and practices as demonstrated in the repository. These follow strict StyleCop rules 👮.
  • I have provided test coverage for my change (where applicable)

Breaking Changes: DrawingCanvas API

Fix #106
Fix #244
Fix #344
Fix #367

This is a major breaking change. The library's public API has been completely redesigned around a canvas-based drawing model, replacing the previous collection of imperative extension methods.

What changed

The old API surface — dozens of IImageProcessingContext extension methods like DrawLine(), DrawPolygon(), FillPolygon(), DrawBeziers(), DrawImage(), DrawText(), etc. — has been removed entirely. These methods were individually simple but suffered from several architectural limitations:

  • Each call was an independent image processor that rasterized and composited in isolation, making it impossible to batch or reorder operations.
  • State (blending mode, clip paths, transforms) had to be passed to every single call.
  • There was no way for an alternate rendering backend to intercept or accelerate a sequence of draw calls.

The new model: DrawingCanvas

All drawing now goes through IDrawingCanvas / DrawingCanvas<TPixel>, a stateful canvas that queues draw commands and flushes them as a batch.

Via Image.Mutate() (most common)

```csharp
using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;

image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    // Fill a path
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(200, 200, 100));

    // Stroke a path
    canvas.Draw(Pens.Solid(Color.Blue, 3), new RectangularPolygon(50, 50, 200, 100));

    // Draw a polyline
    canvas.DrawLine(Pens.Solid(Color.Green, 2), new PointF(0, 0), new PointF(100, 100));

    // Draw text
    canvas.DrawText(
        new RichTextOptions(font) { Origin = new PointF(10, 10) },
        "Hello, World!",
        brush: Brushes.Solid(Color.Black),
        pen: null);

    // Draw an image
    canvas.DrawImage(sourceImage, sourceRect, destinationRect);

    // Save/Restore state (options, clip paths)
    canvas.Save(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.5f }
    });
    canvas.Fill(brush, path);
    canvas.Restore();

    // Apply arbitrary image processing to a path region
    canvas.Process(path, inner => inner.Brightness(0.5f));

    // Commands are flushed on Dispose (or call canvas.Flush() explicitly)
}));
```

Standalone usage (without Image.Mutate)

DrawingCanvas<TPixel> can be constructed directly against an image frame:

```csharp
// Against the root frame of an image:
using var canvas = DrawingCanvas<Rgba32>.FromRootFrame(image, new DrawingOptions());

canvas.Fill(brush, path);
canvas.Draw(pen, path);
canvas.Flush();
```

Alternative factory methods target a specific frame:

```csharp
// By frame index:
using var canvas = DrawingCanvas<Rgba32>.FromImage(image, frameIndex: 0, new DrawingOptions());
// ...
```

```csharp
// Directly against a frame instance:
using var canvas = DrawingCanvas<Rgba32>.FromFrame(frame, new DrawingOptions());
// ...
```

Canvas state management

The canvas supports a save/restore stack (similar to HTML Canvas or SkCanvas):

```csharp
int saveCount = canvas.Save();              // push current state
canvas.Save(options, clipPath1, clipPath2); // push and replace state

canvas.Restore();              // pop one level
canvas.RestoreTo(saveCount);   // pop to a specific level
```

State includes DrawingOptions (graphics options, shape options, transform) and clip paths. SaveLayer creates an offscreen layer that composites back on Restore.
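To illustrate `SaveLayer`, here is a hedged sketch. Only `SaveLayer`, `Restore`, and `DrawingOptions` are named in this description; the exact argument list, plus `brush` and `glowPath`, are illustrative assumptions:

```csharp
// Sketch only: SaveLayer's exact signature is not shown above, so the
// argument here is an assumption; brush and glowPath are placeholders.
canvas.SaveLayer(new DrawingOptions
{
    GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.25f }
});

// Everything drawn here lands on the offscreen layer frame...
canvas.Fill(brush, glowPath);

// ...and the whole layer is composited back onto the parent on Restore.
canvas.Restore();
```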

IDrawingBackend — bring your own renderer

The library's rasterization and composition pipeline is abstracted behind IDrawingBackend. This interface has the following methods:

| Method | Purpose |
| --- | --- |
| `FlushCompositions<TPixel>` | Flushes queued composition operations for the target. |
| `TryReadRegion<TPixel>` | Reads pixels back from the target (needed for `Process()` and `DrawImage()`). |
| `ComposeLayer<TPixel>` | Composites a layer surface onto a destination frame. |
| `CreateLayerFrame<TPixel>` | Creates an offscreen layer frame for `SaveLayer`. |
| `ReleaseFrameResources<TPixel>` | Releases any backend resources cached against the specified target frame. |

The library ships with DefaultDrawingBackend (CPU, tiled fixed-point rasterizer). An experimental WebGPU compute-shader backend (ImageSharp.Drawing.WebGPU) is also available, demonstrating how alternate backends plug in. Users can provide their own implementations — for example, GPU-accelerated backends, SVG emitters, or recording/replay layers.

Backends are registered on Configuration:

```csharp
configuration.SetDrawingBackend(myCustomBackend);
```
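For example, wiring up a hypothetical custom backend might look like this. Only `SetDrawingBackend` is taken from this PR; `MyRecordingBackend` is a placeholder name, and `Configuration.Default.Clone()` is the stock ImageSharp way to avoid mutating the shared default configuration:

```csharp
// MyRecordingBackend is a hypothetical IDrawingBackend implementation.
Configuration configuration = Configuration.Default.Clone();
configuration.SetDrawingBackend(new MyRecordingBackend());

// Images created with this configuration route drawing through the backend.
using var image = new Image<Rgba32>(configuration, 256, 256);
image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
    canvas.Fill(Brushes.Solid(Color.Red), new RectangularPolygon(0, 0, 128, 128))));
```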

Migration guide

| Old API | New API |
| --- | --- |
| `ctx.Fill(color, path)` | `ctx.ProcessWithCanvas(c => c.Fill(Brushes.Solid(color), path))` |
| `ctx.Fill(brush, path)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, path))` |
| `ctx.Draw(pen, path)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, path))` |
| `ctx.DrawLine(pen, points)` | `ctx.ProcessWithCanvas(c => c.DrawLine(pen, points))` |
| `ctx.DrawPolygon(pen, points)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, new Polygon(new LinearLineSegment(points))))` |
| `ctx.FillPolygon(brush, points)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, new Polygon(new LinearLineSegment(points))))` |
| `ctx.DrawText(text, font, color, origin)` | `ctx.ProcessWithCanvas(c => c.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null))` |
| `ctx.DrawImage(overlay, opacity)` | `ctx.ProcessWithCanvas(c => c.DrawImage(overlay, sourceRect, destRect))` |
| Multiple independent draw calls | A single `ProcessWithCanvas` block; commands are batched and flushed together |

Other breaking changes in this PR

  • AntialiasSubpixelDepth removed — The rasterizer now uses a fixed 256-step (8-bit) subpixel depth. The old AntialiasSubpixelDepth property (default: 16) controlled how many vertical subpixel steps the rasterizer used per pixel row. The new fixed-point scanline rasterizer integrates area/cover analytically per cell rather than sampling at discrete subpixel rows, so the "depth" is a property of the coordinate precision (24.8 fixed-point), not a tunable sample count. 256 steps gives ~0.4% coverage granularity — more than sufficient for all practical use cases. The old default of 16 (~6.25% granularity) could produce visible banding on gentle slopes.
  • GraphicsOptions.Antialias — now controls RasterizationMode (antialiased vs aliased). When false, coverage is snapped to binary using AntialiasThreshold.
  • GraphicsOptions.AntialiasThreshold — new property (0–1, default 0.5) controlling the coverage cutoff in aliased mode. Pixels with coverage at or above this value become fully opaque; pixels below are discarded.
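Taken together, aliased rendering with the new threshold looks roughly like this (a sketch based on the property names above; `image` and the shape are placeholders):

```csharp
var aliased = new DrawingOptions
{
    GraphicsOptions = new GraphicsOptions
    {
        Antialias = false,        // aliased mode: coverage snapped to binary
        AntialiasThreshold = 0.5f // >= 50% coverage becomes fully opaque
    }
};

image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    canvas.Save(aliased);
    canvas.Fill(Brushes.Solid(Color.Black), new EllipsePolygon(100, 100, 60));
    canvas.Restore();
}));
```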

Benchmarks

The DrawPolygonAll benchmark renders a 7200x4800px path of the state of Mississippi with a 2px stroke.

Due to the fused design of our rasterizer, we're absolutely dominating. 🚀🚀🚀🚀🚀

```text
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.26200
Unknown processor
.NET SDK=10.0.103
  [Host] : .NET 8.0.24 (8.0.2426.7010), X64 RyuJIT

Toolchain=InProcessEmitToolchain  InvocationCount=1  IterationCount=40
LaunchCount=3  UnrollFactor=1  WarmupCount=40
```

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD |
| --- | --- | --- | --- | --- | --- | --- |
| SkiaSharp | 37.30 ms | 0.487 ms | 1.510 ms | 37.49 ms | 1.00 | 0.00 |
| SystemDrawing | 44.03 ms | 0.599 ms | 1.935 ms | 44.60 ms | 1.19 | 0.08 |
| ImageSharp | 11.61 ms | 0.231 ms | 0.731 ms | 11.56 ms | 0.31 | 0.02 |
| ImageSharpWebGPUNativeSurface | 16.72 ms | 0.283 ms | 0.912 ms | 16.63 ms | 0.45 | 0.03 |

@JimBobSquarePants JimBobSquarePants marked this pull request as ready for review March 10, 2026 13:32
@JimBobSquarePants JimBobSquarePants changed the title WIP. DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model Mar 10, 2026

@antonfirsov antonfirsov left a comment


This is massive. Before I jump into the code I need some help building up my understanding, ideally a better architectural doc.

AI cutting the time needed to introduce features like this is a big opportunity, but the implementation is likely far from perfect, so humans need to be in the loop, and the sheer size of the change makes it quite difficult for a reviewer to get started. Better, more human-written docs are my best idea for addressing this problem.

Comment on lines +65 to +66
ELSE IF ProcessorCount >= 2
-> Parallel.For across tiles (band-sorted edges)

@antonfirsov antonfirsov Mar 14, 2026


For high-load server workloads it is very likely not good to have parallelism on by default, since requests are already being processed in parallel and will saturate the CPUs anyway. No matter how good the algorithm is, parallelization carries some overhead from CPU cache contention. With highly parallel processing at the request/image level, all that Parallel.For adds is that contention overhead.

We need to see benchmark results demonstrating Parallel Efficiency:

This means comparing single-threaded execution time to the parallel one. In an ideal world the parallel run would be MaxDegreeOfParallelism times faster; in reality it won't be. We need to see how SingleThreadedTime compares to ParallelTime * MaxDegreeOfParallelism.
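The requested metric can be written as a simple ratio (a sketch; the inputs are placeholder names, not fields of any benchmark harness):

```csharp
// Parallel efficiency: 1.0 means perfect linear scaling; the contention
// overhead described above shows up as values well below 1.0.
static double ParallelEfficiency(double singleThreadedMs, double parallelMs, int maxDegreeOfParallelism)
    => singleThreadedMs / (parallelMs * maxDegreeOfParallelism);
```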

Comment on lines +7 to +22
`DrawingCanvas<TPixel>` is the high-level drawing API. It manages a state stack, command batching, layer compositing, and delegates rasterization to an `IDrawingBackend`. It implements a deferred command model—draw calls queue `CompositionCommand` objects in a batcher, which are flushed to the backend on `Flush()` or `Dispose()`.

## Class Structure

```text
DrawingCanvas<TPixel> : IDrawingCanvas, IDisposable
Fields:
configuration : Configuration
backend : IDrawingBackend
targetFrame : ICanvasFrame<TPixel> (root frame, immutable)
batcher : DrawingCanvasBatcher<TPixel> (reassigned on SaveLayer/Restore)
savedStates : Stack<DrawingCanvasState> (min depth 1)
layerDataStack : Stack<LayerData<TPixel>> (one per active SaveLayer)
pendingImageResources : List<Image<TPixel>> (temp images awaiting flush)
isDisposed : bool
```

@antonfirsov antonfirsov Mar 15, 2026


It would be extremely helpful for the review to see the big picture before jumping into the code, and ideally to do so without reverse-engineering everything. I hoped this document would help, but unfortunately it is not useful as a starting point for me in its current state.

After mentioning command batching as the architectural basis, it jumps straight into all kinds of details (class structure, methods, properties, a bunch of new terms, etc.) that cross-reference each other and are not (yet) relevant to the big picture. I'm missing the part that would explain the architecture.

That part should be either human-written or written with strong human guidance, and it should try really hard to introduce the thing to a newcomer:

  • What are the most important problems that unifying the various backends (CPU, GPU, vector files) brings with it?
  • What are the key ideas behind the solution?
  • How do those ideas translate into architectural terms, and what exactly do those terms mean? (E.g. layers and canvas frames: for me these could have several meanings, and I really don't understand what they mean or what purpose they serve in this particular architecture.)

namespace SixLabors.ImageSharp.Drawing.Processing.Backends;

/// <summary>
/// One normalized composition command queued by <see cref="DrawingCanvasBatcher{TPixel}"/>.


Suggested change
/// One normalized composition command queued by <see cref="DrawingCanvasBatcher{TPixel}"/>.
/// A normalized composition command queued by <see cref="DrawingCanvasBatcher{TPixel}"/>.

Is there any way to efficiently improve the LLM-generated docs?
What is the normalization of a command? Such concepts should be explained either in the architectural docs or around the types where they are introduced.

/// <summary>
/// One normalized composition command queued by <see cref="DrawingCanvasBatcher{TPixel}"/>.
/// </summary>
public readonly struct CompositionCommand


I don't see how the command defines what it will be drawing. Or is it more complicated than that?

{
private const int WindowWidth = 800;
private const int WindowHeight = 600;
private const int BallCount = 50;

@antonfirsov antonfirsov Mar 15, 2026


Pushing this up to 1000 brings FPS down to 20-40, depending on whether there is text in the background. HTML canvas still runs smoothly with that many balls on my machine (plus much more text and a gradient fill).

We need to understand where the bottleneck is.


antonfirsov commented Mar 15, 2026

Btw, once I get going, my strategy would be to implement a backend on top of a modern 2D renderer like Vello or Skia Graphite. I don't trust LLM output for relatively new tech like WebGPU; I think there's insufficient training material out there.
