Literal and Optional Parameters Make Code Hard to Read

I want to convince you that this function’s parameters will cause problems:

function findThingsByTypeId(
  typeId: string,
  limit: number,
  withArchived: boolean = false,
): Promise<Array<Thing>> {}

Why? When we look at the function declaration, nothing is obviously terrible; we have reasonable types for a limited number of arguments. The problem arises because the function can be called in unclear ways:

findThingsByTypeId(typeId, 20);
findThingsByTypeId(typeId, isExport ? 0 : 10, isExport);

By reading the function calls and not looking at the function declaration, will you be able to tell exactly what’s going on? You might be able to guess that 20 is a limit, but will you be able to tell what’s happening with isExport? Or know that the first call isn’t returning archived elements? I doubt it.

One way of addressing this problem is to rewrite the code to use explicit named variables:

/* findThingsByTypeId(typeId, 20); */
const limit = 20;
const withArchived = false;
findThingsByTypeId(typeId, limit, withArchived);

/* findThingsByTypeId(typeId, isExport ? 0 : 10, isExport); */
// 0 means unlimited
const limit = isExport ? 0 : 10;
// when running an export job, we include _all_ things, even if they're archived
const withArchived = isExport;
findThingsByTypeId(typeId, limit, withArchived);

This code is much more verbose, but reading it gives you a clearer picture of what’s happening. You don’t need to guess that 0 is a limit or know that the third (now visible!) parameter controls whether or not that function includes archived elements.

Another tool that can help is TypeScript Parameter Name Inlay Hints. Here’s what VSCode looks like with parameter inlay hints turned on and off for the original version of the code:

[Screenshot: the same code with parameter inlay hints on and off. With hints enabled, the editor shows “limit:” and “withArchived:” before the literal arguments.]

💡

To turn on TypeScript parameter inlay hints in VSCode, check that editor.inlayHints.enabled is set to on, and then set typescript.inlayHints.parameterNames.enabled to "all". (There are a decent number of configuration options to make inlay hints behave the way that you’d like, including setting them up to only handle literal values.)
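For reference, those two settings in VSCode’s settings.json look something like this (setting names current as of recent VSCode versions; double-check against your editor’s settings UI):

```jsonc
{
  // Master switch for inlay hints in the editor.
  "editor.inlayHints.enabled": "on",
  // Show parameter names at call sites: "none" | "literals" | "all".
  "typescript.inlayHints.parameterNames.enabled": "all"
}
```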

Inlay hints are better than nothing, but they only work in the editor, and they can’t show you when an optional parameter wasn’t specified at all. I spend a lot of time looking at code in spots without inlay hints—GitHub, delta’s diff viewer, Claude Code—and none of these tools support inlay hints. Letting your editor display the names of bare literal parameters is useful, especially when you don’t control a function’s parameters, but when it’s possible, I’d prefer to avoid bare literals and optional parameters altogether.

Switching from bare literals to an options object means we have no choice but to write code in a way where someone reading the code can tell more of what’s happening:

function findThingsByTypeId(
  typeId: string,
  options: { limit: number; withArchived: boolean },
): Promise<Array<Thing>> {}

findThingsByTypeId(typeId, { limit: 20, withArchived: false });
findThingsByTypeId(
  typeId,
  isExport
    ? { limit: 0, withArchived: true }
    : { limit: 20, withArchived: false },
);

My favorite part of this change is making withArchived required. Optional options aren’t the root of all evil, but they’re the cause of a surprising number of bugs and wasted time. Future engineers won’t necessarily remember or check that findThingsByTypeId is filtering out archived items, or they might have different assumptions about what the defaults should be.
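As a sketch of the failure mode (the types and data here are made up for illustration): with an optional flag, a caller can forget the parameter even exists, while a required option forces every call site to state its intent:

```typescript
// Hypothetical Thing records; "b" is archived.
type Thing = { id: string; archived: boolean };

const things: Thing[] = [
  { id: "a", archived: false },
  { id: "b", archived: true },
];

// Optional flag: callers can forget the parameter exists.
function findOptional(withArchived: boolean = false): Thing[] {
  return things.filter((t) => withArchived || !t.archived);
}

// Required option: every call site has to state its intent.
function findRequired(options: { withArchived: boolean }): Thing[] {
  return things.filter((t) => options.withArchived || !t.archived);
}

findOptional(); // silently drops "b" -- did the caller know that?
findRequired({ withArchived: true }); // the filtering decision is visible
```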

Things are better. { limit: number, withArchived: boolean } leads to more readable code, but I still don’t love the types:

If we wanted, we could use a branded type for limit, which leads to a pattern of parsing any limit used in our system to see if it’s a legitimate value.1

type Id = string & { __brand: "id" };
type Limit = number & { __brand: "limit" };

type FindThingsByIdOptions =
  | { limit: Limit; withArchived: boolean }
  | { noLimit: true; withArchived: boolean };
function findThingsByTypeId(
  typeId: Id,
  options: FindThingsByIdOptions,
): Promise<Array<Thing>> {}

findThingsByTypeId(typeId, { limit: validatedLimit, withArchived: false });
findThingsByTypeId(
  typeId,
  isExport
    ? { noLimit: true, withArchived: true }
    : { limit: NON_EXPORT_LIMIT, withArchived: false },
);

This code is more verbose than the initial example, but I think the increase in clarity is worth it.

Plus, some of the terseness of the original example is an illusion. When we have limit: number, we’re relying on every resource that lets clients provide a limit to validate that those limits are non-zero and within acceptable ranges! If we’re doing it that way, we’ll also want tests that check that that validation is taking place and that the resource returns an appropriate status code when a client provides a limit value outside the range we support. Our code is more “terse” because we’re requiring invisible logic elsewhere. By constraining limit with our type system, we guide developers towards a single safe way of parsing limits from clients.
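To sketch what that single safe path might look like, here’s a condensed version of footnote 1’s asLimit used at a resource boundary (the handler plumbing around it is hypothetical):

```typescript
type Limit = number & { __brand: "limit" };

// Condensed from footnote 1: the one place limits get validated.
function asLimit(limit: number, { max }: { max: number }): Limit | null {
  if (limit !== Math.floor(limit)) return null; // also rejects NaN
  if (limit <= 0) return null; // 0 often means "unlimited" downstream
  if (limit > max) return null; // user input always needs a ceiling
  return limit as Limit;
}

// At the resource boundary, a raw query value either parses into a
// Limit or the request is rejected; unchecked numbers never get further.
const limit = asLimit(Number("25"), { max: 100 });
if (limit === null) {
  // respond with 400 Bad Request instead of passing the value along
}
```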

💬

Even after all of these parameter changes, I’m still unhappy with this function. If this were code in a production system, I’d want findThingsByTypeIdForExport to be its own function. We’ll want to label export queries differently for metrics and logs, we will always want them to go to a read-replica, we may want the ability to turn them off under heavy load, we may want to stagger their requests or iterate through in batches, and those queries have different latency requirements and timeouts than loading things for a regular client. Finding things for an export won’t actually share much in common with findThingsByTypeId except for the superficial query shape.
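A sketch of what that split could look like (the names, options, and stubbed bodies below are hypothetical, not a prescription):

```typescript
type Thing = { id: string; archived: boolean };
type Id = string & { __brand: "id" };
type Limit = number & { __brand: "limit" };

// Regular client reads: bounded, with the archived decision explicit.
async function findThingsByTypeId(
  typeId: Id,
  options: { limit: Limit; withArchived: boolean },
): Promise<Array<Thing>> {
  return []; // stub
}

// Export reads get their own entry point: always include archived,
// route to a read-replica, label for metrics, and stream in batches.
async function* findThingsByTypeIdForExport(
  typeId: Id,
  options: { batchSize: number },
): AsyncIterable<Array<Thing>> {
  yield []; // stub
}
```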

Increased clarity from named parameters isn’t limited to literal values. It’s less likely that someone will confuse non-literal values, but it can happen. I recently saw a bug where the arguments to a function designed to read and compare values from an old data source and a new data source were swapped.2

function doubleRead<T>(
  getOldData: () => Promise<Array<T>>,
  getNewData: () => Promise<Array<T>>,
): Promise<Array<T>> {}

doubleRead(getNewData, getOldData);

// vs.

doubleRead({ getNewData, getOldData });

Changing the definition of doubleRead to take { getOldData, getNewData } makes it much harder to get wrong. Getting any individual call wrong might be unlikely, but in a large-enough codebase, any pattern that allows the possibility of a problem will have those problems occur.
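Here’s one way the options-object version of doubleRead might look, with a stubbed comparison (the mismatch handling is illustrative):

```typescript
// Stubbed double-read: fetch both sources, compare, and serve the old data.
async function doubleRead<T>({
  getOldData,
  getNewData,
}: {
  getOldData: () => Promise<Array<T>>;
  getNewData: () => Promise<Array<T>>;
}): Promise<Array<T>> {
  const [oldData, newData] = await Promise.all([getOldData(), getNewData()]);
  if (JSON.stringify(oldData) !== JSON.stringify(newData)) {
    // In a real system: log the divergence or emit a metric here.
  }
  // Keep serving the old source until the new one is trusted.
  return oldData;
}

// Keys, not positions, carry the meaning, so this order is still correct:
const getOldData = async () => ["from-old"];
const getNewData = async () => ["from-old"];
doubleRead({ getNewData, getOldData });
```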

In general, I want to understand code without hovering over type definitions or jumping around an editor. Literal function parameters and optional ones both make that harder, so I try to avoid them as much as I can. Incremental improvements to the readability of code lead to large improvements in the maintainability of a system.


  1. If we have a branded Limit type, it encourages us to create a parsing function of some sort:

    type Limit = number & { __brand: "limit" };
    function asLimit(limit: number, { max }: { max: number }): Limit | null {
      if (limit !== Math.floor(limit)) return null; // floats probably indicate a bug _somewhere_
      if (isNaN(limit)) return null; // ditto for this
      if (limit === 0) return null; // limit = 0 is going to be interpreted as Infinity by a lot of systems.
      if (limit < 0) return null; // probably a bug
    
      // if we're ever parsing a user-provided input
      // there's always going to be a maximum that we want to allow
      if (limit > max) return null;
      return limit as Limit;
    }
    

    You could imagine a similar validation function for Ids or any other value in our system.

    Parse, don’t validate is a great read about using the type system to constrain what states are possible for our system. Haskell is a whole heck of a lot more powerful than TS for this, but some of the ideas are still applicable! ↩︎

  2. This bug would have led to an outage if another bug hadn’t meant that readOldData and readNewData were both actually reading from the old data source. Sometimes two bugs really do make a right. ↩︎